[GitHub] [flink] flinkbot edited a comment on issue #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9366: [FLINK-13359][docs] Add documentation 
for DDL introduction
URL: https://github.com/apache/flink/pull/9366#issuecomment-518523202
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 6e8dbc06ec17458f96c429e1c01a06afdf916c94 (Thu Aug 08 
05:54:58 UTC 2019)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9366: [FLINK-13359][docs] Add 
documentation for DDL introduction
URL: https://github.com/apache/flink/pull/9366#discussion_r311865117
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -656,9 +656,11 @@ connector:
 
 The file system connector itself is included in Flink and does not require an 
additional dependency. A corresponding format needs to be specified for reading 
and writing rows from and to a file system.
 
+The file system connector can also be defined with a *CREATE TABLE DDL* statement; please see the dedicated [DDL](ddl.html) page for examples.
 
 Review comment:
   I mean we need to describe all the supported property keys and values for each connector. And I think `connect.md` is a good place for that, instead of listing all the connectors again in `ddl.md`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9366: [FLINK-13359][docs] Add 
documentation for DDL introduction
URL: https://github.com/apache/flink/pull/9366#discussion_r311865117
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -656,9 +656,11 @@ connector:
 
 The file system connector itself is included in Flink and does not require an 
additional dependency. A corresponding format needs to be specified for reading 
and writing rows from and to a file system.
 
+The file system connector can also be defined with a *CREATE TABLE DDL* statement; please see the dedicated [DDL](ddl.html) page for examples.
 
 Review comment:
   I mean we need to describe all the supported property keys and values for the different connectors. And I think `connect.md` is a good place for that, instead of listing all the connectors again in `ddl.md`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13405) Translate "Basic API Concepts" page into Chinese

2019-08-07 Thread Xingcan Cui (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902700#comment-16902700
 ] 

Xingcan Cui commented on FLINK-13405:
-

[~WangHW], personally, I'd like to translate it as "数据汇" ("data sink"), which corresponds to source ("数据源", "data source"). However, as [~jark] suggested, you can choose not to translate it.

> Translate "Basic API Concepts" page into Chinese
> 
>
> Key: FLINK-13405
> URL: https://issues.apache.org/jira/browse/FLINK-13405
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: WangHengWei
>Assignee: WangHengWei
>Priority: Major
>  Labels: documentation, pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The page url is 
> [https://github.com/apache/flink/blob/master/docs/dev/api_concepts.zh.md]
> The markdown file is located in flink/docs/dev/api_concepts.zh.md



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (FLINK-13644) Translate "State Backends" page into Chinese

2019-08-07 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-13644:
---

Assignee: fanrui

> Translate "State Backends" page into Chinese
> 
>
> Key: FLINK-13644
> URL: https://issues.apache.org/jira/browse/FLINK-13644
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: fanrui
>Assignee: fanrui
>Priority: Major
> Fix For: 1.10.0
>
>
> 1、The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/stream/state/state_backends.html]
> The markdown file is located in "docs/dev/stream/state/state_backends.zh.md"
> 2、The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/state/state_backends.html]
> The markdown file is located in "docs/ops/state/state_backends.zh.md"



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13225) Introduce type inference for hive functions in blink

2019-08-07 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902696#comment-16902696
 ] 

Kurt Young commented on FLINK-13225:


[~twalthr] Yes, but I think the original plan for 1.9 was to include this temporary solution. The code and PR were prepared a month ago but have gone stale for lack of review. I'm not sure we should consider this a new feature.

> Introduce type inference for hive functions in blink
> -
>
> Key: FLINK-13225
> URL: https://issues.apache.org/jira/browse/FLINK-13225
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> See some conversation in [https://github.com/apache/flink/pull/8920]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9390: [FLINK-13534][hive] Unable to query Hive table with decimal column

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9390: [FLINK-13534][hive] Unable to query 
Hive table with decimal column
URL: https://github.com/apache/flink/pull/9390#issuecomment-519355742
 
 
   ## CI report:
   
   * b2432f2b0e522f84bf3029dc80cf46a00d13ea54 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122376827)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9389: [FLINK-13645][table-planner] Error in code-gen when using blink planner in scala shell

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9389: [FLINK-13645][table-planner] Error in 
code-gen when using blink planner in scala shell
URL: https://github.com/apache/flink/pull/9389#issuecomment-519351757
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1e89a949f197d2feb5c56f737edfe67c250b346b (Thu Aug 08 
05:34:35 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13645).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #9389: [FLINK-13645][table-planner] Error in code-gen when using blink planner in scala shell

2019-08-07 Thread GitBox
danny0405 commented on a change in pull request #9389: 
[FLINK-13645][table-planner] Error in code-gen when using blink planner in 
scala shell
URL: https://github.com/apache/flink/pull/9389#discussion_r311861291
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/CodeGeneratorContext.scala
 ##
 @@ -603,7 +603,7 @@ class CodeGeneratorContext(val tableConfig: TableConfig) {
 val byteArray = InstantiationUtil.serializeObject(obj)
 val objCopy: AnyRef = InstantiationUtil.deserializeObject(
   byteArray,
-  obj.getClass.getClassLoader)
+  Thread.currentThread().getContextClassLoader)
 references += objCopy
 
 Review comment:
   Can we add a test case to prove that `Thread.currentThread().getContextClassLoader` is the right one to choose?
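
   For context, a minimal, hedged Scala sketch of the round-trip under discussion (the helper name and the `obj` value are placeholders; `InstantiationUtil` is the same utility used in the diff above):

```scala
import org.apache.flink.util.InstantiationUtil

// Sketch only: serialize an object and deserialize it with an explicit class loader.
// In the scala-shell scenario, classes compiled by the shell are visible to the
// thread's context class loader but not necessarily to the loader that loaded the
// planner classes, which is why the choice of class loader matters here.
def roundTrip(obj: AnyRef, classLoader: ClassLoader): AnyRef = {
  val bytes = InstantiationUtil.serializeObject(obj)
  InstantiationUtil.deserializeObject[AnyRef](bytes, classLoader)
}

// A test along these lines could exercise both class loader choices, e.g.:
//   roundTrip(obj, Thread.currentThread().getContextClassLoader)
//   roundTrip(obj, obj.getClass.getClassLoader)
// and assert that classes defined in the shell are still resolvable.
```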


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file replication config for yarn configuration

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file 
replication config for yarn configuration
URL: https://github.com/apache/flink/pull/8303#issuecomment-511684151
 
 
   ## CI report:
   
   * 6a7ca58b4a04f6dce250045e021702e67e82b893 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119421914)
   * 4d38a8df0d59734c4b2386689a2f17b9f2b44b12 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441376)
   * 9c14836f8639e98d58cf7bb32e38b938b3843994 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119577044)
   * 76186776c5620598a19234245bbd05dfdfb1c62c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120113740)
   * 628ca7b316ad3968c90192a47a84dd01f26e2578 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122381349)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9366: [FLINK-13359][docs] Add documentation 
for DDL introduction
URL: https://github.com/apache/flink/pull/9366#issuecomment-518523202
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 6e8dbc06ec17458f96c429e1c01a06afdf916c94 (Thu Aug 08 
05:27:26 UTC 2019)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add 
documentation for DDL introduction
URL: https://github.com/apache/flink/pull/9366#discussion_r311860029
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -656,9 +656,11 @@ connector:
 
 The file system connector itself is included in Flink and does not require an 
additional dependency. A corresponding format needs to be specified for reading 
and writing rows from and to a file system.
 
+The file system connector can also be defined with a *CREATE TABLE DDL* statement; please see the dedicated [DDL](ddl.html) page for examples.
 
 Review comment:
   Actually we still execute DDLs with Java, Scala or Python. I suggest moving it to the DDL page and adding a link here.
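
   For reference, a minimal, hedged Scala sketch of executing such a DDL from code (assuming the 1.9 `TableEnvironment.sqlUpdate()` accepts DDL statements; the table name, path and property set are invented and a real table may need additional format keys as documented in `connect.md`):

```scala
import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

// Sketch only: create a table environment and register a filesystem table via DDL.
val settings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
val tableEnv = TableEnvironment.create(settings)

tableEnv.sqlUpdate(
  """CREATE TABLE csv_source (
    |  f0 INT,
    |  f1 BIGINT,
    |  f2 STRING
    |) WITH (
    |  'connector.type' = 'filesystem',
    |  'connector.path' = '/tmp/csv_source',
    |  'format.type' = 'csv'
    |)""".stripMargin)

// The registered table can then be queried like any other table.
val result = tableEnv.sqlQuery("SELECT f0, f2 FROM csv_source")
```

   This mirrors the point above: the DDL text itself is language-agnostic, but it is still submitted through the Java, Scala or Python Table API.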


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13405) Translate "Basic API Concepts" page into Chinese

2019-08-07 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902690#comment-16902690
 ] 

Jark Wu commented on FLINK-13405:
-

I would like to not translate it.

> Translate "Basic API Concepts" page into Chinese
> 
>
> Key: FLINK-13405
> URL: https://issues.apache.org/jira/browse/FLINK-13405
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: WangHengWei
>Assignee: WangHengWei
>Priority: Major
>  Labels: documentation, pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The page url is 
> [https://github.com/apache/flink/blob/master/docs/dev/api_concepts.zh.md]
> The markdown file is located in flink/docs/dev/api_concepts.zh.md



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (FLINK-13636) Translate "Flink DataStream API Programming Guide" page into Chinese

2019-08-07 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-13636:
---

Assignee: WangHengWei

> Translate "Flink DataStream API Programming Guide" page into Chinese
> 
>
> Key: FLINK-13636
> URL: https://issues.apache.org/jira/browse/FLINK-13636
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: WangHengWei
>Assignee: WangHengWei
>Priority: Major
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/datastream_api.html]
> The markdown file is located in 
> "flink/docs/dev/[datastream_api.zh.md|https://github.com/apache/flink/blob/master/docs/dev/datastream_api.zh.md];



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] 
Verify and correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#issuecomment-517546275
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7d17148e43ebb9a62f4c6c383516c9fbb2094876 (Thu Aug 08 
05:15:13 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9331: 
[FLINK-13523][table-planner-blink] Verify and correct arithmetic function's 
semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#discussion_r311858049
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/SplitAggregateRule.scala
 ##
 @@ -304,19 +302,22 @@ class SplitAggregateRule extends RelOptRule(
 aggGroupCount + index + avgAggCount + 1,
 finalAggregate.getRowType)
   avgAggCount += 1
-  // TODO
+  // Make a guarantee that the final aggregation returns NULL if the underlying count is ZERO.
+  // We use SUM0 for the underlying sum, which may run into ZERO / ZERO
+  // and raise a division-by-zero exception.
+  // @see Glossary#SQL2011 SQL:2011 Part 2 Section 6.27
   val equals = relBuilder.call(
 FlinkSqlOperatorTable.EQUALS,
 countInputRef,
 relBuilder.getRexBuilder.makeBigintLiteral(JBigDecimal.valueOf(0)))
-  val falseT = relBuilder.call(FlinkSqlOperatorTable.DIVIDE, 
sumInputRef, countInputRef)
-  val trueT = relBuilder.cast(
+  val ifTrue = relBuilder.cast(
 relBuilder.getRexBuilder.constantNull(), 
aggCall.`type`.getSqlTypeName)
+  val ifFalse = relBuilder.call(FlinkSqlOperatorTable.DIVIDE, 
sumInputRef, countInputRef)
 
 Review comment:
   Do we need a cast here?  
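
   For readers following along, a hedged illustration (not the rule's literal output) of the semantics described in the comment above, expressed as the equivalent SQL inside a Scala string; `sum0_v` and `cnt` are invented names for the partial SUM0 and COUNT results:

```scala
// SUM0 yields 0 (not NULL) on empty input, so the final projection has to map the
// 0 / 0 case back to NULL explicitly instead of dividing unconditionally.
val splitAvgSketch: String =
  """CASE
    |  WHEN cnt = 0 THEN CAST(NULL AS DOUBLE)  -- what the branch above builds via relBuilder
    |  ELSE sum0_v / cnt
    |END""".stripMargin
```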


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9366: [FLINK-13359][docs] Add documentation 
for DDL introduction
URL: https://github.com/apache/flink/pull/9366#issuecomment-518524777
 
 
   ## CI report:
   
   * f99e66ffb4356f8132b48d352b27686a6ad958f5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122058269)
   * 6e8dbc06ec17458f96c429e1c01a06afdf916c94 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122379544)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] 
Verify and correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#issuecomment-517548502
 
 
   ## CI report:
   
   * 5e757e180999aea0a3346ba841d9d48e456cdc0c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121703207)
   * 6069d5191f51436b98882198354a683003d49481 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122046271)
   * 56dc76d98ac3e666ec722443b8d62486c7adaf4c : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122046565)
   * 49f315b9eebe2b2bc15719d54f1eb9f91d4638e1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122046887)
   * 33059bd1fb73b47343dbbc33e0e05ecb56a3e6a2 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122060322)
   * 25b39b0b5b351574581c546902aef007497d3ee3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122070149)
   * 48c66a9d7f5b1903fa3271fcfc2ce048ac25a45d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122209945)
   * 7d17148e43ebb9a62f4c6c383516c9fbb2094876 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122378482)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9366: [FLINK-13359][docs] Add documentation 
for DDL introduction
URL: https://github.com/apache/flink/pull/9366#issuecomment-518523202
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 6e8dbc06ec17458f96c429e1c01a06afdf916c94 (Thu Aug 08 
04:56:54 UTC 2019)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9366: [FLINK-13359][docs] Add documentation 
for DDL introduction
URL: https://github.com/apache/flink/pull/9366#issuecomment-518523202
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit f99e66ffb4356f8132b48d352b27686a6ad958f5 (Thu Aug 08 
04:47:44 UTC 2019)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add 
documentation for DDL introduction
URL: https://github.com/apache/flink/pull/9366#discussion_r311852454
 
 

 ##
 File path: docs/dev/table/ddl.md
 ##
 @@ -0,0 +1,230 @@
+---
+title: "DDL"
+nav-parent_id: tableapi
+nav-pos: 0
+---
+
+
+The Table API and SQL are integrated in a joint API. The central concept of this API is a `Table`, which serves as the input and output of queries. This document shows all the DDL grammar that Flink supports, how to register a `Table` (or view) through DDL, and how to drop a `Table` (or view) through DDL.
+
+* This will be replaced by the TOC
+{:toc}
+
+Create Table
+---
+{% highlight sql %}
+CREATE [OR REPLACE] TABLE [catalog_name.][db_name.]table_name
+  [(col_name1 col_type1 [COMMENT col_comment1], ...)]
+  [COMMENT table_comment]
+  [PARTITIONED BY (col_name1, col_name2, ...)]
+  [WITH (key1=val1, key2=val2, ...)]
+{% endhighlight %}
+
+Create a table with the given table properties. If a table with the same name already exists in the database, an exception is thrown unless *IF NOT EXISTS* is declared.
+
+**OR REPLACE**
+
+If a table with the same name already exists in the database, replace it if 
this is declared. **Notes:** The OR REPLACE option is always false now.
+
+**PARTITIONED BY**
+
+Partition the created table by the specified columns. A directory is created 
for each partition.
+
+**WITH OPTIONS**
+
+Table properties used to create a table source/sink. The properties are 
usually used to find and create the underlying connector. **Notes:** the key 
and value of expression `key1=val1` should both be string literal.
+
+**Examples**:
+{% highlight sql %}
+-- CREATE a partitioned CSV table using the CREATE TABLE syntax.
+create table csv_table (
+  f0 int,
+  f1 bigint,
+  f2 string
+) 
+COMMENT 'This is a csv table.' 
+PARTITIONED BY(f0)
+WITH (
+  'connector.type' = 'filesystem',
+  'format.type' = 'csv',
+  'connector.path' = 'path1'
 
 Review comment:
   Yep, thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add 
documentation for DDL introduction
URL: https://github.com/apache/flink/pull/9366#discussion_r311851845
 
 

 ##
 File path: docs/dev/table/ddl.md
 ##
 @@ -0,0 +1,230 @@
+---
+title: "DDL"
+nav-parent_id: tableapi
+nav-pos: 0
+---
+
+
+The Table API and SQL are integrated in a joint API. The central concept of this API is a `Table`, which serves as the input and output of queries. This document shows all the DDL grammar that Flink supports, how to register a `Table` (or view) through DDL, and how to drop a `Table` (or view) through DDL.
+
+* This will be replaced by the TOC
+{:toc}
+
+Create Table
+---
+{% highlight sql %}
+CREATE [OR REPLACE] TABLE [catalog_name.][db_name.]table_name
+  [(col_name1 col_type1 [COMMENT col_comment1], ...)]
+  [COMMENT table_comment]
+  [PARTITIONED BY (col_name1, col_name2, ...)]
+  [WITH (key1=val1, key2=val2, ...)]
+{% endhighlight %}
+
+Create a table with the given table properties. If a table with the same name already exists in the database, an exception is thrown unless *IF NOT EXISTS* is declared.
+
+**OR REPLACE**
+
+If a table with the same name already exists in the database, replace it if 
this is declared. **Notes:** The OR REPLACE option is always false now.
+
+**PARTITIONED BY**
+
+Partition the created table by the specified columns. A directory is created 
for each partition.
 
 Review comment:
   Updated.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add 
documentation for DDL introduction
URL: https://github.com/apache/flink/pull/9366#discussion_r311852423
 
 

 ##
 File path: docs/dev/table/ddl.md
 ##
 @@ -0,0 +1,230 @@
+---
+title: "DDL"
+nav-parent_id: tableapi
+nav-pos: 0
+---
+
+
+The Table API and SQL are integrated in a joint API. The central concept of this API is a `Table`, which serves as the input and output of queries. This document shows all the DDL grammar that Flink supports, how to register a `Table` (or view) through DDL, and how to drop a `Table` (or view) through DDL.
+
+* This will be replaced by the TOC
+{:toc}
+
+Create Table
+---
+{% highlight sql %}
+CREATE [OR REPLACE] TABLE [catalog_name.][db_name.]table_name
+  [(col_name1 col_type1 [COMMENT col_comment1], ...)]
+  [COMMENT table_comment]
+  [PARTITIONED BY (col_name1, col_name2, ...)]
+  [WITH (key1=val1, key2=val2, ...)]
+{% endhighlight %}
+
+Create a table with the given table properties. If a table with the same name already exists in the database, an exception is thrown unless *IF NOT EXISTS* is declared.
+
+**OR REPLACE**
+
+If a table with the same name already exists in the database, replace it if 
this is declared. **Notes:** The OR REPLACE option is always false now.
+
+**PARTITIONED BY**
+
+Partition the created table by the specified columns. A directory is created 
for each partition.
+
+**WITH OPTIONS**
+
+Table properties used to create a table source/sink. The properties are 
usually used to find and create the underlying connector. **Notes:** the key 
and value of expression `key1=val1` should both be string literal.
+
+**Examples**:
 
 Review comment:
   I have tried and updated all the DDLs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add 
documentation for DDL introduction
URL: https://github.com/apache/flink/pull/9366#discussion_r311834429
 
 

 ##
 File path: docs/dev/table/ddl.md
 ##
 @@ -0,0 +1,230 @@
+---
+title: "DDL"
+nav-parent_id: tableapi
+nav-pos: 0
+---
+
+
+The Table API and SQL are integrated in a joint API. The central concept of this API is a `Table`, which serves as the input and output of queries. This document shows all the DDL grammar that Flink supports, how to register a `Table` (or view) through DDL, and how to drop a `Table` (or view) through DDL.
+
+* This will be replaced by the TOC
+{:toc}
+
+Create Table
+---
+{% highlight sql %}
+CREATE [OR REPLACE] TABLE [catalog_name.][db_name.]table_name
+  [(col_name1 col_type1 [COMMENT col_comment1], ...)]
 
 Review comment:
   We actually support omitting column definitions now, just like Spark and Hive; it would produce a table with an empty row type. I have referenced the DataTypes page at the bottom.
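
   For illustration, a hedged Scala sketch of such a column-less CREATE TABLE (the method, table name and properties are invented; whether a given connector/format works without explicit columns depends on the connector):

```scala
import org.apache.flink.table.api.TableEnvironment

// Hypothetical sketch only: register a table with no column definitions, which,
// per the comment above, should produce a table with an empty row type.
def registerColumnlessTable(tableEnv: TableEnvironment): Unit =
  tableEnv.sqlUpdate(
    """CREATE TABLE no_columns_table WITH (
      |  'connector.type' = 'filesystem',
      |  'connector.path' = '/tmp/data',
      |  'format.type' = 'csv'
      |)""".stripMargin)
```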


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add 
documentation for DDL introduction
URL: https://github.com/apache/flink/pull/9366#discussion_r311835677
 
 

 ##
 File path: docs/dev/table/ddl.md
 ##
 @@ -0,0 +1,230 @@
+---
+title: "DDL"
 
 Review comment:
   It is a dedicated page which belongs under the `SQL` page, just like the `Data Types` page; I have added the link for it.
   I put it on a separate page because the `DDL` grammar is very different from the query grammar, and we need to describe almost every grammar block (optional or required).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add 
documentation for DDL introduction
URL: https://github.com/apache/flink/pull/9366#discussion_r311852353
 
 

 ##
 File path: docs/dev/table/ddl.md
 ##
 @@ -0,0 +1,230 @@
+---
+title: "DDL"
+nav-parent_id: tableapi
+nav-pos: 0
+---
+
+
+The Table API and SQL are integrated in a joint API. The central concept of this API is a `Table`, which serves as the input and output of queries. This document shows all the DDL grammar that Flink supports, how to register a `Table` (or view) through DDL, and how to drop a `Table` (or view) through DDL.
+
+* This will be replaced by the TOC
+{:toc}
+
+Create Table
+---
+{% highlight sql %}
+CREATE [OR REPLACE] TABLE [catalog_name.][db_name.]table_name
+  [(col_name1 col_type1 [COMMENT col_comment1], ...)]
+  [COMMENT table_comment]
+  [PARTITIONED BY (col_name1, col_name2, ...)]
+  [WITH (key1=val1, key2=val2, ...)]
+{% endhighlight %}
+
+Create a table with the given table properties. If a table with the same name already exists in the database, an exception is thrown unless *IF NOT EXISTS* is declared.
+
+**OR REPLACE**
+
+If a table with the same name already exists in the database, replace it if 
this is declared. **Notes:** The OR REPLACE option is always false now.
+
+**PARTITIONED BY**
+
+Partition the created table by the specified columns. A directory is created 
for each partition.
+
+**WITH OPTIONS**
+
+Table properties used to create a table source/sink. The properties are 
usually used to find and create the underlying connector. **Notes:** the key 
and value of expression `key1=val1` should both be string literal.
 
 Review comment:
   Agree, thanks. I have added the properties link.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add 
documentation for DDL introduction
URL: https://github.com/apache/flink/pull/9366#discussion_r311835114
 
 

 ##
 File path: docs/dev/table/ddl.md
 ##
 @@ -0,0 +1,230 @@
+---
+title: "DDL"
+nav-parent_id: tableapi
+nav-pos: 0
+---
+
+
+The Table API and SQL are integrated in a joint API. The central concept of this API is a `Table`, which serves as the input and output of queries. This document shows all the DDL grammar that Flink supports, how to register a `Table` (or view) through DDL, and how to drop a `Table` (or view) through DDL.
+
+* This will be replaced by the TOC
+{:toc}
+
+Create Table
+---
+{% highlight sql %}
+CREATE [OR REPLACE] TABLE [catalog_name.][db_name.]table_name
+  [(col_name1 col_type1 [COMMENT col_comment1], ...)]
+  [COMMENT table_comment]
+  [PARTITIONED BY (col_name1, col_name2, ...)]
+  [WITH (key1=val1, key2=val2, ...)]
+{% endhighlight %}
+
+Create a table with the given table properties. If a table with the same name already exists in the database, an exception is thrown unless *IF NOT EXISTS* is declared.
+
+**OR REPLACE**
+
+If a table with the same name already exists in the database, replace it if 
this is declared. **Notes:** The OR REPLACE option is always false now.
+
+**PARTITIONED BY**
+
+Partition the created table by the specified columns. A directory is created 
for each partition.
+
+**WITH OPTIONS**
+
+Table properties used to create a table source/sink. The properties are 
usually used to find and create the underlying connector. **Notes:** the key 
and value of expression `key1=val1` should both be string literal.
+
+**Examples**:
+{% highlight sql %}
+-- CREATE a partitioned CSV table using the CREATE TABLE syntax.
+create table csv_table (
+  f0 int,
+  f1 bigint,
+  f2 string
+) 
+COMMENT 'This is a csv table.' 
+PARTITIONED BY(f0)
+WITH (
+  'connector.type' = 'filesystem',
+  'format.type' = 'csv',
+  'connector.path' = 'path1'
+  'format.fields.0.name' = 'f0',
+  'format.fields.0.type' = 'INT',
+  'format.fields.1.name' = 'f1',
+  'format.fields.1.type' = 'BIGINT',
+  'format.fields.2.name' = 'f2',
+  'format.fields.2.type' = 'STRING',
+);
+
+-- CREATE a Kafka table start from the earliest offset(as table source) and 
append mode(as table sink).
+create table kafka_table (
+  f0 int,
+  f1 bigint,
+  f2 string
+) with (
+  'connector.type' = 'kafka',
+  'update-mode' = 'append',
+  'connector.topic' = 'topic_name',
+  'connector.startup-mode' = 'earliest-offset',
+  'connector.properties.0.key' = 'props-key0',
+  'connector.properties.0.value' = 'props-val0',
+  'format.fields.0.name' = 'f0',
+  'format.fields.0.type' = 'INT',
+  'format.fields.1.name' = 'f1',
+  'format.fields.1.type' = 'BIGINT',
+  'format.fields.2.name' = 'f2',
+  'format.fields.2.type' = 'STRING'
+);
+
+-- CREATE a Elasticsearch table.
+create table kafka_table (
+  f0 int,
+  f1 bigint,
+  f2 string
+) with (
+  'connector.type' = 'elasticsearch',
+  'update-mode' = 'append',
+  'connector.hosts.0.hostname' = 'host_name',
+  'connector.hosts.0.port' = '9092',
+  'connector.hosts.0.protocal' = 'http',
+  'connector.index' = 'index_name',
+  'connector.document-type' = 'type_name',
+  'format.fields.0.name' = 'f0',
+  'format.fields.0.type' = 'INT',
+  'format.fields.1.name' = 'f1',
+  'format.fields.1.type' = 'BIGINT',
+  'format.fields.2.name' = 'f2',
+  'format.fields.2.type' = 'STRING'
+);
+{% endhighlight %}
+
+{% top %}
+
+Drop Table
+---
+{% highlight sql %}
+DROP TABLE [IF EXISTS] [catalog_name.][db_name.]table_name
+{% endhighlight %}
+
+Drop a table with the given table name. If the table to drop does not exist, 
an exception is thrown.
+
+**IF EXISTS**
+
+If the table does not exist, nothing happens.
+
+{% top %}
+
+Create View
+---
+{% highlight sql %}
+CREATE [OR REPLACE] VIEW [catalog_name.][db_name.]view_name
+[COMMENT view_comment]
+AS
+select_statement
+{% endhighlight %}
+
+Define a logical view on a sql query which may be from multiple tables or 
views.
+
+**OR REPLACE**
+
+If the view does not exist, CREATE OR REPLACE VIEW is equivalent to CREATE 
VIEW. If the view does exist, CREATE OR REPLACE VIEW is equivalent to ALTER 
VIEW. **Notes:** The OR REPLACE option is always false now.
+
+**AS select_statement**
+
+A SELECT statement that defines the view. The statement can select from base 
tables or the other views.
+
+**Examples**:
+{% highlight sql %}
+-- Create a view view_deptDetails in database1. The view definition is 
recorded in the specified catalog and database.
+CREATE VIEW catalog1.database1.view1
+  AS SELECT * FROM company JOIN dept ON company.dept_id = dept.id;
+
+-- Create or replace a view from a persistent view with an extra filter
+CREATE OR REPLACE VIEW view2
+  AS SELECT * FROM catalog1.database1.view1 WHERE loc = 'Shanghai';
+
+-- Access the base 

[GitHub] [flink] danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-07 Thread GitBox
danny0405 commented on a change in pull request #9366: [FLINK-13359][docs] Add 
documentation for DDL introduction
URL: https://github.com/apache/flink/pull/9366#discussion_r311834574
 
 

 ##
 File path: docs/dev/table/ddl.md
 ##
 @@ -0,0 +1,230 @@
+---
+title: "DDL"
+nav-parent_id: tableapi
+nav-pos: 0
+---
+
+
+The Table API and SQL are integrated in a joint API. The central concept of this API is a `Table`, which serves as the input and output of queries. This document shows all the DDL grammar that Flink supports, how to register a `Table` (or view) through DDL, and how to drop a `Table` (or view) through DDL.
+
+* This will be replaced by the TOC
+{:toc}
+
+Create Table
+---
+{% highlight sql %}
+CREATE [OR REPLACE] TABLE [catalog_name.][db_name.]table_name
+  [(col_name1 col_type1 [COMMENT col_comment1], ...)]
+  [COMMENT table_comment]
+  [PARTITIONED BY (col_name1, col_name2, ...)]
+  [WITH (key1=val1, key2=val2, ...)]
+{% endhighlight %}
+
+Create a table with the given table properties. If a table with the same name already exists in the database, an exception is thrown unless *IF NOT EXISTS* is declared.
+
+**OR REPLACE**
 
 Review comment:
   I suggest listing the grammar we support and giving the right notes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] 
Verify and correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#issuecomment-517548502
 
 
   ## CI report:
   
   * 5e757e180999aea0a3346ba841d9d48e456cdc0c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121703207)
   * 6069d5191f51436b98882198354a683003d49481 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122046271)
   * 56dc76d98ac3e666ec722443b8d62486c7adaf4c : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122046565)
   * 49f315b9eebe2b2bc15719d54f1eb9f91d4638e1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122046887)
   * 33059bd1fb73b47343dbbc33e0e05ecb56a3e6a2 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122060322)
   * 25b39b0b5b351574581c546902aef007497d3ee3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122070149)
   * 48c66a9d7f5b1903fa3271fcfc2ce048ac25a45d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122209945)
   * 7d17148e43ebb9a62f4c6c383516c9fbb2094876 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122378482)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support 
priority of the Flink YARN application
URL: https://github.com/apache/flink/pull/9336#issuecomment-517608148
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit eb8a3c29bbb32a99d5d8e5e8baaabc390f123663 (Thu Aug 08 
04:39:35 UTC 2019)
   
   **Warnings:**
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13548).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wzhero1 commented on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application

2019-08-07 Thread GitBox
wzhero1 commented on issue #9336: [FLINK-13548][Deployment/YARN]Support 
priority of the Flink YARN application
URL: https://github.com/apache/flink/pull/9336#issuecomment-519360009
 
 
   @walterddr Thanks for your review. I have modified the YARN config description according to the comments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support 
priority of the Flink YARN application
URL: https://github.com/apache/flink/pull/9336#issuecomment-517610510
 
 
   ## CI report:
   
   * 4fe9e1ba5707fb4d208290116bc172142e6be08a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121719017)
   * 346ed33756127b27aed16fc91d8ce81048186c06 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121827648)
   * d9b31af0157fe9b2adf080575272502b6f2e0cb5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122217463)
   * eb8a3c29bbb32a99d5d8e5e8baaabc390f123663 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122378181)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] 
Verify and correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#issuecomment-517546275
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7d17148e43ebb9a62f4c6c383516c9fbb2094876 (Thu Aug 08 
04:38:34 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support 
priority of the Flink YARN application
URL: https://github.com/apache/flink/pull/9336#issuecomment-517608148
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit eb8a3c29bbb32a99d5d8e5e8baaabc390f123663 (Thu Aug 08 
04:37:32 UTC 2019)
   
   **Warnings:**
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13548).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] wzhero1 commented on a change in pull request #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application

2019-08-07 Thread GitBox
wzhero1 commented on a change in pull request #9336: 
[FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application
URL: https://github.com/apache/flink/pull/9336#discussion_r311852609
 
 

 ##
 File path: docs/_includes/generated/yarn_config_configuration.html
 ##
 @@ -22,6 +22,11 @@
 "0"
 With this configuration option, users can specify a port, a 
range of ports or a list of ports for the Application Master (and JobManager) 
RPC port. By default we recommend using the default value (0) to let the 
operating system choose an appropriate port. In particular when multiple AMs 
are running on the same physical host, fixed port assignments prevent the AM 
from starting. For example when running Flink on YARN on an environment with a 
restrictive firewall, this option allows specifying a range of allowed 
ports.
 
+
 
 Review comment:
   OK, I have added the instructions in yarn-setup.md and yarn-setup.zh.md (the latter in English for now).




[GitHub] [flink] wzhero1 commented on a change in pull request #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application

2019-08-07 Thread GitBox
wzhero1 commented on a change in pull request #9336: 
[FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application
URL: https://github.com/apache/flink/pull/9336#discussion_r311852509
 
 

 ##
 File path: docs/_includes/generated/yarn_config_configuration.html
 ##
 @@ -22,6 +22,11 @@
 "0"
 With this configuration option, users can specify a port, a 
range of ports or a list of ports for the Application Master (and JobManager) 
RPC port. By default we recommend using the default value (0) to let the 
operating system choose an appropriate port. In particular when multiple AMs 
are running on the same physical host, fixed port assignments prevent the AM 
from starting. For example when running Flink on YARN on an environment with a 
restrictive firewall, this option allows specifying a range of allowed 
ports.
 
+
+yarn.application.priority
+-1
+The priority with which Flink submits the YARN application. The priority must be non-negative; the larger the number, the higher the priority. The default is -1; when the priority is negative, the default YARN queue priority is used.
 
 Review comment:
   It is very useful; I have changed the option description accordingly.
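
   For readers of the archive, a minimal example of how the new option would be set in flink-conf.yaml (the value 10 is purely illustrative):

```yaml
# Illustrative value: any non-negative number raises the priority of the
# submitted Flink YARN application; -1 (the default) keeps YARN's default
# queue priority.
yarn.application.priority: 10
```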




[GitHub] [flink] flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support 
priority of the Flink YARN application
URL: https://github.com/apache/flink/pull/9336#issuecomment-517608148
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit eb8a3c29bbb32a99d5d8e5e8baaabc390f123663 (Thu Aug 08 
04:33:28 UTC 2019)
   
   **Warnings:**
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13548).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] flinkbot edited a comment on issue #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9342: [FLINK-13438][hive] Fix 
DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
URL: https://github.com/apache/flink/pull/9342#issuecomment-517768053
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b2d4875b20874041f90db3473010cf454a2cba66 (Thu Aug 08 
04:31:26 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] xuefuz commented on a change in pull request #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors

2019-08-07 Thread GitBox
xuefuz commented on a change in pull request #9342: [FLINK-13438][hive] Fix 
DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
URL: https://github.com/apache/flink/pull/9342#discussion_r311851697
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/util/HiveTypeUtil.java
 ##
 @@ -256,9 +258,9 @@ private static DataType 
toFlinkPrimitiveType(PrimitiveTypeInfo hiveType) {
case DOUBLE:
return DataTypes.DOUBLE();
case DATE:
-   return DataTypes.DATE();
+   return DataTypes.DATE().bridgedTo(Date.class);
case TIMESTAMP:
-   return DataTypes.TIMESTAMP();
+   return 
DataTypes.TIMESTAMP(3).bridgedTo(Timestamp.class);
 
 Review comment:
   @twalthr Do you have any comment on this loss of precision in conversion? 
Thanks.
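
   For context, a small sketch of what the bridging in the hunk above expresses; only the `DataTypes` calls come from the diff, the class name is made up for illustration:

```java
import java.sql.Date;
import java.sql.Timestamp;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

public class HiveTypeMappingSketch {

    // DATE bridged to java.sql.Date, as in the diff above.
    static final DataType DATE_TYPE = DataTypes.DATE().bridgedTo(Date.class);

    // TIMESTAMP(3) keeps only millisecond precision; Hive timestamps can carry
    // nanoseconds, which is where the loss-of-precision question above comes from.
    static final DataType TIMESTAMP_TYPE =
            DataTypes.TIMESTAMP(3).bridgedTo(Timestamp.class);
}
```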




[GitHub] [flink] flinkbot commented on issue #9390: [FLINK-13534][hive] Unable to query Hive table with decimal column

2019-08-07 Thread GitBox
flinkbot commented on issue #9390: [FLINK-13534][hive] Unable to query Hive 
table with decimal column
URL: https://github.com/apache/flink/pull/9390#issuecomment-519355742
 
 
   ## CI report:
   
   * b2432f2b0e522f84bf3029dc80cf46a00d13ea54 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122376827)
   




[GitHub] [flink] flinkbot commented on issue #9390: [FLINK-13534][hive] Unable to query Hive table with decimal column

2019-08-07 Thread GitBox
flinkbot commented on issue #9390: [FLINK-13534][hive] Unable to query Hive 
table with decimal column
URL: https://github.com/apache/flink/pull/9390#issuecomment-519354933
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b2432f2b0e522f84bf3029dc80cf46a00d13ea54 (Thu Aug 08 
04:10:33 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13534).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] flinkbot edited a comment on issue #9389: [FLINK-13645][table-planner] Error in code-gen when using blink planner in scala shell

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9389: [FLINK-13645][table-planner] Error in 
code-gen when using blink planner in scala shell
URL: https://github.com/apache/flink/pull/9389#issuecomment-519353026
 
 
   ## CI report:
   
   * 1e89a949f197d2feb5c56f737edfe67c250b346b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122376001)
   




[jira] [Updated] (FLINK-13534) Unable to query Hive table with decimal column

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13534:
---
Labels: pull-request-available  (was: )

> Unable to query Hive table with decimal column
> --
>
> Key: FLINK-13534
> URL: https://issues.apache.org/jira/browse/FLINK-13534
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
>  Labels: pull-request-available
>
> Hit the following exception when access a Hive table with decimal column:
> {noformat}
> Caused by: org.apache.flink.table.api.TableException: TableSource of type 
> org.apache.flink.batch.connectors.hive.HiveTableSource returned a DataSet of 
> data type ROW<`x` LEGACY(BigDecimal)> that does not match with the data type 
> ROW<`x` DECIMAL(10, 0)> declared by the TableSource.getProducedDataType() 
> method. Please validate the implementation of the TableSource.
>  at 
> org.apache.flink.table.plan.nodes.dataset.BatchTableSourceScan.translateToPlan(BatchTableSourceScan.scala:118)
>  at 
> org.apache.flink.table.api.internal.BatchTableEnvImpl.translate(BatchTableEnvImpl.scala:303)
>  at 
> org.apache.flink.table.api.internal.BatchTableEnvImpl.translate(BatchTableEnvImpl.scala:281)
>  at 
> org.apache.flink.table.api.internal.BatchTableEnvImpl.writeToSink(BatchTableEnvImpl.scala:117)
>  at 
> org.apache.flink.table.api.internal.TableEnvImpl.insertInto(TableEnvImpl.scala:564)
>  at 
> org.apache.flink.table.api.internal.TableEnvImpl.insertInto(TableEnvImpl.scala:516)
>  at 
> org.apache.flink.table.api.internal.BatchTableEnvImpl.insertInto(BatchTableEnvImpl.scala:59)
>  at 
> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428)
> {noformat}





[GitHub] [flink] lirui-apache opened a new pull request #9390: [FLINK-13534][hive] Unable to query Hive table with decimal column

2019-08-07 Thread GitBox
lirui-apache opened a new pull request #9390: [FLINK-13534][hive] Unable to 
query Hive table with decimal column
URL: https://github.com/apache/flink/pull/9390
 
 
   
   
   ## What is the purpose of the change
   
   Fix the issue that Flink cannot access Hive tables with decimal columns.
   
   
   ## Brief change log
   
 - Avoid conversion between `DataType` and `TypeInformation` in several places, because such conversions can lose type parameters (see the sketch after this list).
 - Add a test for decimal type.
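
   As a rough illustration of the first point (the class name is made up; only the type APIs are real):

```java
import java.math.BigDecimal;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

public class DecimalPrecisionSketch {

    public static void main(String[] args) {
        // A DataType keeps the type parameters of the declared column ...
        DataType declared = DataTypes.DECIMAL(10, 0);

        // ... while the legacy TypeInformation for decimals carries no precision or
        // scale at all, so a DataType -> TypeInformation -> DataType round trip cannot
        // restore DECIMAL(10, 0). That is the LEGACY(BigDecimal) vs DECIMAL(10, 0)
        // mismatch reported in FLINK-13534.
        TypeInformation<BigDecimal> legacy = Types.BIG_DEC;

        System.out.println(declared + " vs " + legacy);
    }
}
```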
   
   
   ## Verifying this change
   
   New test case.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? NA
   




[GitHub] [flink] flinkbot commented on issue #9389: [FLINK-13645][table-planner] Error in code-gen when using blink planner in scala shell

2019-08-07 Thread GitBox
flinkbot commented on issue #9389: [FLINK-13645][table-planner] Error in 
code-gen when using blink planner in scala shell
URL: https://github.com/apache/flink/pull/9389#issuecomment-519353026
 
 
   ## CI report:
   
   * 1e89a949f197d2feb5c56f737edfe67c250b346b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122376001)
   




[jira] [Comment Edited] (FLINK-5726) Add the RocketMQ plugin for the Apache Flink

2019-08-07 Thread miki.huang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-5726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902648#comment-16902648
 ] 

miki.huang edited comment on FLINK-5726 at 8/8/19 3:55 AM:
---

Hi all, I would like to join this issue and let's make it happen :)


was (Author: mikiaichiyu):
Hi all, I would like to join this issue and let make it happen :)

> Add the RocketMQ plugin for the Apache Flink
> 
>
> Key: FLINK-5726
> URL: https://issues.apache.org/jira/browse/FLINK-5726
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Longda Feng
>Assignee: yuemeng
>Priority: Minor
>
> Apache RocketMQ® is an open source distributed messaging and streaming data 
> platform. It has been used in a lot of companies. Please refer to 
> http://rocketmq.incubator.apache.org/ for more details.
> Since the Apache RocketMq 4.0 will be released in the next few days, we can 
> start the job of adding the RocketMq plugin for the Apache Flink.





[jira] [Commented] (FLINK-13225) Introduce type inference for hive functions in blink

2019-08-07 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902649#comment-16902649
 ] 

Timo Walther commented on FLINK-13225:
--

[~ykt836] The feature freeze was over a month ago, and this is definitely a new feature as it adds support for Hive functions. If we had waited, we could have merged a clean, long-term solution in 1.10. This introduces a lot of hacks and temporary code.

> Introduce type inference for hive functions in blink
> -
>
> Key: FLINK-13225
> URL: https://issues.apache.org/jira/browse/FLINK-13225
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> See some conversation in [https://github.com/apache/flink/pull/8920]





[jira] [Commented] (FLINK-5726) Add the RocketMQ plugin for the Apache Flink

2019-08-07 Thread miki.huang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-5726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902648#comment-16902648
 ] 

miki.huang commented on FLINK-5726:
---

Hi all, I would like to join this issue and let make it happen :)

> Add the RocketMQ plugin for the Apache Flink
> 
>
> Key: FLINK-5726
> URL: https://issues.apache.org/jira/browse/FLINK-5726
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Longda Feng
>Assignee: yuemeng
>Priority: Minor
>
> Apache RocketMQ® is an open source distributed messaging and streaming data 
> platform. It has been used in a lot of companies. Please refer to 
> http://rocketmq.incubator.apache.org/ for more details.
> Since the Apache RocketMq 4.0 will be released in the next few days, we can 
> start the job of adding the RocketMq plugin for the Apache Flink.





[GitHub] [flink] flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9331: [FLINK-13523][table-planner-blink] 
Verify and correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#issuecomment-517546275
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 48c66a9d7f5b1903fa3271fcfc2ce048ac25a45d (Thu Aug 08 
03:51:40 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] docete commented on a change in pull request #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
docete commented on a change in pull request #9331: 
[FLINK-13523][table-planner-blink] Verify and correct arithmetic function's 
semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#discussion_r311846370
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/agg/DeclarativeAggCodeGen.scala
 ##
 @@ -204,8 +204,13 @@ class DeclarativeAggCodeGen(
   }
 
   def getValue(generator: ExprCodeGenerator): GeneratedExpression = {
-val resolvedGetValueExpression = function.getValueExpression
+val expr = function.getValueExpression
   .accept(ResolveReference())
+val resolvedGetValueExpression = ApiExpressionUtils.unresolvedCall(
 
 Review comment:
   Will try to fix this in AvgAggFunction




[GitHub] [flink] flinkbot commented on issue #9389: [FLINK-13645][table-planner] Error in code-gen when using blink planner in scala shell

2019-08-07 Thread GitBox
flinkbot commented on issue #9389: [FLINK-13645][table-planner] Error in 
code-gen when using blink planner in scala shell
URL: https://github.com/apache/flink/pull/9389#issuecomment-519351757
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit cb59801438a77fc35f1b1f822948fd52dcb08d9f (Thu Aug 08 
03:49:50 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13645).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-13645) Error in code-gen when using blink planner in scala shell

2019-08-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13645:
---
Labels: pull-request-available  (was: )

> Error in code-gen when using blink planner in scala shell
> -
>
> Key: FLINK-13645
> URL: https://issues.apache.org/jira/browse/FLINK-13645
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Jeff Zhang
>Priority: Blocker
>  Labels: pull-request-available
> Attachments: image-2019-08-08-11-43-08-741.png
>
>
>  !image-2019-08-08-11-43-08-741.png! 





[GitHub] [flink] zjffdu opened a new pull request #9389: [FLINK-13645][planner] Error in code-gen when using blink planner in scala shell

2019-08-07 Thread GitBox
zjffdu opened a new pull request #9389: [FLINK-13645][planner] Error in 
code-gen when using blink planner in scala shell
URL: https://github.com/apache/flink/pull/9389
 
 
   
   ## What is the purpose of the change
   
   This is a trivial PR to fix an issue when using the blink planner in the Scala shell. The root cause is that we didn't use the right ClassLoader. This PR just fixes it.
   
   ## Brief change log
   
   Set the ClassLoader properly.
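
   As a generic sketch of the context-class-loader pattern involved here (plain JDK code, not the actual patch):

```java
public final class ClassLoaderSketch {

    /** Runs an action with the given class loader as the thread context class loader. */
    public static void runWithClassLoader(ClassLoader userCodeClassLoader, Runnable action) {
        ClassLoader previous = Thread.currentThread().getContextClassLoader();
        try {
            // Generated code must be resolved against the user-code class loader,
            // not whatever class loader happens to be current in the Scala shell.
            Thread.currentThread().setContextClassLoader(userCodeClassLoader);
            action.run();
        } finally {
            Thread.currentThread().setContextClassLoader(previous);
        }
    }
}
```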
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage. I verified it manually in Apache Zeppelin, which uses the Scala shell code.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): ( no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: ( no )
 - The runtime per-record code paths (performance sensitive): ( no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no )
 - The S3 file system connector: ( no )
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   




[GitHub] [flink] flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation of Hive source/sink

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation 
of Hive source/sink
URL: https://github.com/apache/flink/pull/9217#issuecomment-514589043
 
 
   ## CI report:
   
   * 516e655f7f0853d6585ae5de2fbecc438d57e474 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120432519)
   * fee6f2df235f113b7757ce436ee127711b0094e6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121184693)
   * 61c360e0902ded2939ba3c8b9662a1b58074e4d1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121348454)
   * 7dafc731904fb3ae9dcee24f851803fddf87b551 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122371437)
   




[GitHub] [flink] wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add contract in `LookupableTableSource` to specify the behavior when lookupKeys contains null.

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add 
contract in `LookupableTableSource` to specify the behavior when lookupKeys 
contains null.
URL: https://github.com/apache/flink/pull/9335#discussion_r311844679
 
 

 ##
 File path: 
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseLookupFunction.java
 ##
 @@ -34,7 +35,9 @@
 import org.apache.hadoop.hbase.TableNotFoundException;
 import org.apache.hadoop.hbase.client.Connection;
 import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Mutation;
 
 Review comment:
   useless import




[GitHub] [flink] wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add contract in `LookupableTableSource` to specify the behavior when lookupKeys contains null.

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add 
contract in `LookupableTableSource` to specify the behavior when lookupKeys 
contains null.
URL: https://github.com/apache/flink/pull/9335#discussion_r310451676
 
 

 ##
 File path: 
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseLookupFunction.java
 ##
 @@ -72,7 +75,15 @@ public HBaseLookupFunction(
 */
public void eval(Object rowKey) throws IOException {
// fetch result
-   Result result = table.get(readHelper.createGet(rowKey));
+   byte[] row = readHelper.serialize(rowKey);
+   Get get;
+   try {
+   get = readHelper.createGet(row);
+   } catch (IllegalArgumentException e) {
 
 Review comment:
   We shouldn't use try/catch for this in performance-critical code. We can simply return if the length of `row` is zero.
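
   A minimal sketch of the suggested early-return shape, reusing `readHelper` and `table` from the diff above; the rest of the method is assumed unchanged:

```java
public void eval(Object rowKey) throws IOException {
	byte[] row = readHelper.serialize(rowKey);
	if (row.length == 0) {
		// HBase has no null row keys, so discard the lookup request instead of
		// paying for an exception on this per-record code path.
		return;
	}
	Result result = table.get(readHelper.createGet(row));
	// ... emit the parsed result exactly as before ...
}
```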




[GitHub] [flink] wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add contract in `LookupableTableSource` to specify the behavior when lookupKeys contains null.

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add 
contract in `LookupableTableSource` to specify the behavior when lookupKeys 
contains null.
URL: https://github.com/apache/flink/pull/9335#discussion_r311845039
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCLookupFunction.java
 ##
 @@ -141,7 +169,12 @@ public void eval(Object... keys) {
try {
statement.clearParameters();
for (int i = 0; i < keys.length; i++) {
-   JDBCUtils.setField(statement, 
keySqlTypes[i], keys[i], i);
+   if (containsNull) {
+   JDBCUtils.setField(statement, 
keySqlTypes[i], keys[i], 2 * i);
+   JDBCUtils.setField(statement, 
keySqlTypes[i], keys[i], 2 * i + 1);
 
 Review comment:
   We can keep this special logic for now, but please open a JIRA to improve it. We could introduce a custom `NamedPreparedStatement` to pass each field only once; see https://www.javaworld.com/article/2077706/named-parameters-for-preparedstatement.html
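
   For illustration, a hedged sketch of why every key has to be bound twice in the null-safe variant (table, column, and class names are made up):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class NullSafeLookupSketch {

    // Null-safe lookup on a single key column `age`; the real query is produced
    // by the JDBC dialect.
    private static final String NULL_SAFE_QUERY =
        "SELECT id, age, name FROM Student WHERE (age = ? OR (age IS NULL AND ? IS NULL))";

    static PreparedStatement prepareLookup(Connection conn, Integer ageKey) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(NULL_SAFE_QUERY);
        // The same lookup key is bound to both placeholders, which is why the loop
        // in the diff above sets positions 2 * i and 2 * i + 1 for every key field.
        stmt.setObject(1, ageKey);
        stmt.setObject(2, ageKey);
        return stmt;
    }
}
```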




[GitHub] [flink] wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add contract in `LookupableTableSource` to specify the behavior when lookupKeys contains null.

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add 
contract in `LookupableTableSource` to specify the behavior when lookupKeys 
contains null.
URL: https://github.com/apache/flink/pull/9335#discussion_r311845349
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCLookupFunction.java
 ##
 @@ -101,7 +113,8 @@ public JDBCLookupFunction(
this.maxRetryTimes = lookupOptions.getMaxRetryTimes();
this.keySqlTypes = 
Arrays.stream(keyTypes).mapToInt(JDBCTypeUtil::typeInformationToSqlType).toArray();
this.outputSqlTypes = 
Arrays.stream(fieldTypes).mapToInt(JDBCTypeUtil::typeInformationToSqlType).toArray();
-   this.query = options.getDialect().getSelectFromStatement(
+   this.nonNullableQuery = 
options.getDialect().getSelectFromStatement(options.getTableName(), fieldNames, 
keyNames);
+   this.nullableQuery = 
options.getDialect().getSelectNotDistinctFromStatement(
 
 Review comment:
   If we reach consensus on the new methods of `SqlDialect`, we can call the field `nullSafeQuery` and rename `nonNullableQuery` to `nullUnsafeQuery`.




[GitHub] [flink] wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add contract in `LookupableTableSource` to specify the behavior when lookupKeys contains null.

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add 
contract in `LookupableTableSource` to specify the behavior when lookupKeys 
contains null.
URL: https://github.com/apache/flink/pull/9335#discussion_r310451076
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/sources/LookupableTableSource.java
 ##
 @@ -33,13 +33,33 @@
 public interface LookupableTableSource extends TableSource {
 
/**
-* Gets the {@link TableFunction} which supports lookup one key at a 
time.
+* Gets the {@link TableFunction} which supports lookup one key at a 
time. Calling `eval`
+* method in the returned {@link TableFunction} means send a lookup 
request to the TableSource.
+*
+* IMPORTANT:
+* Lookup keys in a request may contain null value. When it happens, it 
expects to lookup
+* records with null value on the lookup key field.
+* E.g., for a MySQL table with the following schema, send a lookup 
request with null value
+* on `age` field means to find students whose age are unknown 
(CAUTION: It is equivalent to filter condition:
+* `WHERE age IS NULL` instead of `WHERE age = null`).
+*
+* -
+*  Table : Student
+* -
+* id|   LONG
+* age   |   INT
+* name  |   STRING
+* -
+* For the external system which does not support null value (E.g, 
HBase does not support null value on rowKey),
+* it could throw an exception or discard the request when receiving a 
request with null value on lookup key.
+*
 * @param lookupKeys the chosen field names as lookup keys, it is in 
the defined order
 */
TableFunction getLookupFunction(String[] lookupKeys);
 
/**
 * Gets the {@link AsyncTableFunction} which supports async lookup one 
key at a time.
+*
 
 Review comment:
   Please update javadoc of this method too.




[GitHub] [flink] wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add contract in `LookupableTableSource` to specify the behavior when lookupKeys contains null.

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add 
contract in `LookupableTableSource` to specify the behavior when lookupKeys 
contains null.
URL: https://github.com/apache/flink/pull/9335#discussion_r311844694
 
 

 ##
 File path: 
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseLookupFunction.java
 ##
 @@ -20,6 +20,7 @@
 
 import org.apache.flink.addons.hbase.util.HBaseConfigurationUtil;
 import org.apache.flink.addons.hbase.util.HBaseReadWriteHelper;
+import org.apache.flink.addons.hbase.util.HBaseTypeUtils;
 
 Review comment:
   useless import




[GitHub] [flink] wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add contract in `LookupableTableSource` to specify the behavior when lookupKeys contains null.

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add 
contract in `LookupableTableSource` to specify the behavior when lookupKeys 
contains null.
URL: https://github.com/apache/flink/pull/9335#discussion_r310451737
 
 

 ##
 File path: 
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/util/HBaseReadWriteHelper.java
 ##
 @@ -86,17 +86,27 @@ public HBaseReadWriteHelper(HBaseTableSchema 
hbaseTableSchema) {
this.resultRow = new Row(fieldLength);
}
 
+   /**
+* Serializes a rowkey object into byte array.
+* @param rowKey rowkey object to serialize
+*
+* @return serialize bytes.
+*/
+   public byte[] serialize(Object rowKey) {
+   byte[] key = HBaseTypeUtils.serializeFromObject(
 
 Review comment:
   `return HBaseTypeUtils.serializeFromObject(...`




[GitHub] [flink] wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add contract in `LookupableTableSource` to specify the behavior when lookupKeys contains null.

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add 
contract in `LookupableTableSource` to specify the behavior when lookupKeys 
contains null.
URL: https://github.com/apache/flink/pull/9335#discussion_r311845514
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCLookupFunction.java
 ##
 @@ -76,7 +87,8 @@
private final int maxRetryTimes;
 
private transient Connection dbConn;
-   private transient PreparedStatement statement;
+   private transient PreparedStatement fastStatement;
+   private transient PreparedStatement slowStatement;
 
 Review comment:
   If we reach consensus on the new methods of `SqlDialect`, we can rename these fields to `nullSafeStatement` and `nullUnsafeStatement`, and add a comment on the fields noting that `nullUnsafeStatement` should be used as much as possible because it is faster.




[GitHub] [flink] wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add contract in `LookupableTableSource` to specify the behavior when lookupKeys contains null.

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add 
contract in `LookupableTableSource` to specify the behavior when lookupKeys 
contains null.
URL: https://github.com/apache/flink/pull/9335#discussion_r311844392
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/dialect/JDBCDialect.java
 ##
 @@ -115,6 +115,20 @@ default String getDeleteStatement(String tableName, 
String[] conditionFields) {
return "DELETE FROM " + quoteIdentifier(tableName) + " WHERE " 
+ conditionClause;
}
 
+   /**
+* Get select fields statement by `is not distinct from` condition 
fields. Default use SELECT.
+*/
+   default String getSelectNotDistinctFromStatement(String tableName, 
String[] selectFields, String[] conditionFields) {
 
 Review comment:
   I would redesign the method names a bit:
   
   - Add a new method `getSelectStatement(String tableName, String[] selectFields)` which doesn't contain a `WHERE` clause.
   - Rename `getSelectFromStatement` to `getFilterStatement(String tableName, String[] selectFields, String[] conditionFields)` and require that `conditionFields` is not empty.
   - Rename `getSelectNotDistinctFromStatement` to `getNullSafeFilterStatement(...)`, because the default implementation is not `IS NOT DISTINCT FROM` but a null-safe comparison, different from `getFilterStatement`. We should also outline this in the method Javadoc (a rough sketch follows below).
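
   A rough sketch of what the proposed `getNullSafeFilterStatement` default could look like (the interface name and the exact SQL are assumptions; `quoteIdentifier` stands in for the existing dialect helper):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

interface JDBCDialectSketch {

    String quoteIdentifier(String identifier);

    default String getNullSafeFilterStatement(String tableName, String[] selectFields, String[] conditionFields) {
        String selectExpressions = Arrays.stream(selectFields)
            .map(this::quoteIdentifier)
            .collect(Collectors.joining(", "));
        // A null-safe comparison: matches when both sides are equal or both are NULL.
        String fieldExpressions = Arrays.stream(conditionFields)
            .map(f -> String.format("(%s = ? OR (%s IS NULL AND ? IS NULL))",
                quoteIdentifier(f), quoteIdentifier(f)))
            .collect(Collectors.joining(" AND "));
        return "SELECT " + selectExpressions + " FROM " + quoteIdentifier(tableName)
            + " WHERE " + fieldExpressions;
    }
}
```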
   
   
   




[GitHub] [flink] wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add contract in `LookupableTableSource` to specify the behavior when lookupKeys contains null.

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add 
contract in `LookupableTableSource` to specify the behavior when lookupKeys 
contains null.
URL: https://github.com/apache/flink/pull/9335#discussion_r311842402
 
 

 ##
 File path: 
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/util/HBaseReadWriteHelper.java
 ##
 @@ -86,17 +86,27 @@ public HBaseReadWriteHelper(HBaseTableSchema 
hbaseTableSchema) {
this.resultRow = new Row(fieldLength);
}
 
+   /**
+* Serializes a rowkey object into byte array.
+* @param rowKey rowkey object to serialize
+*
+* @return serialize bytes.
+*/
+   public byte[] serialize(Object rowKey) {
+   byte[] key = HBaseTypeUtils.serializeFromObject(
+   rowKey,
+   rowKeyType,
+   charset);
+   return key;
+   }
+
/**
 * Returns an instance of Get that retrieves the matches records from 
the HBase table.
 *
 * @return The appropriate instance of Get for this use case.
 */
-   public Get createGet(Object rowKey) {
-   byte[] rowkey = HBaseTypeUtils.serializeFromObject(
-   rowKey,
-   rowKeyType,
-   charset);
-   Get get = new Get(rowkey);
+   public Get createGet(byte[] row) {
 
 Review comment:
   `row` -> `rowkey`




[GitHub] [flink] wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add contract in `LookupableTableSource` to specify the behavior when lookupKeys contains null.

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9335: [FLINK-13503][API] Add 
contract in `LookupableTableSource` to specify the behavior when lookupKeys 
contains null.
URL: https://github.com/apache/flink/pull/9335#discussion_r310451016
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/sources/LookupableTableSource.java
 ##
 @@ -33,13 +33,33 @@
 public interface LookupableTableSource extends TableSource {
 
/**
-* Gets the {@link TableFunction} which supports lookup one key at a 
time.
+* Gets the {@link TableFunction} which supports lookup one key at a 
time. Calling `eval`
+* method in the returned {@link TableFunction} means send a lookup 
request to the TableSource.
+*
+* IMPORTANT:
+* Lookup keys in a request may contain null value. When it happens, it 
expects to lookup
+* records with null value on the lookup key field.
+* E.g., for a MySQL table with the following schema, send a lookup 
request with null value
+* on `age` field means to find students whose age are unknown 
(CAUTION: It is equivalent to filter condition:
+* `WHERE age IS NULL` instead of `WHERE age = null`).
+*
+* -
+*  Table : Student
+* -
+* id|   LONG
+* age   |   INT
+* name  |   STRING
+* -
+* For the external system which does not support null value (E.g, 
HBase does not support null value on rowKey),
+* it could throw an exception or discard the request when receiving a 
request with null value on lookup key.
 
 Review comment:
   Please never throw an exception. We should discard the request because HBase doesn't have null row keys.




[jira] [Created] (FLINK-13645) Error in code-gen when using blink planner in scala shell

2019-08-07 Thread Jeff Zhang (JIRA)
Jeff Zhang created FLINK-13645:
--

 Summary: Error in code-gen when using blink planner in scala shell
 Key: FLINK-13645
 URL: https://issues.apache.org/jira/browse/FLINK-13645
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.9.0
Reporter: Jeff Zhang
 Attachments: image-2019-08-08-11-43-08-741.png

 !image-2019-08-08-11-43-08-741.png! 





[GitHub] [flink] flinkbot edited a comment on issue #9377: [FLINK-13561][table-planner-blink] Verify and correct time function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9377: [FLINK-13561][table-planner-blink] 
Verify and correct time function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9377#issuecomment-518975063
 
 
   ## CI report:
   
   * 84be6c933af6b8a960df17f6767d620db7f3a59f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/16159)
   * 1680bb5323d636c2593dbfaf9fc3350b333ed018 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122373926)
   




[GitHub] [flink] flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] Fix some operator names are not set in blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] 
Fix some operator names are not set in blink planner
URL: https://github.com/apache/flink/pull/9363#issuecomment-518265997
 
 
   ## CI report:
   
   * 1fe6c332279c34546ec3db24a574dfd53500d20b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121971735)
   * fa3e7406f9664a59efcb448748511b656474e74c : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122089384)
   * 28175449cb1d5eb8f318359090ea87e5b2af42d1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122102858)
   * 81593a4dcb3573843c1c02cba0cb17abe1693065 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122290876)
   * 829830f8df1eb814ac44716f230c6aedcfaa5128 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122369808)
   




[GitHub] [flink] flinkbot edited a comment on issue #9377: [FLINK-13561][table-planner-blink] Verify and correct time function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9377: [FLINK-13561][table-planner-blink] 
Verify and correct time function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9377#issuecomment-518975063
 
 
   ## CI report:
   
   * 84be6c933af6b8a960df17f6767d620db7f3a59f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/16159)
   * 1680bb5323d636c2593dbfaf9fc3350b333ed018 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122373926)
   




[GitHub] [flink] flinkbot edited a comment on issue #9377: [FLINK-13561][table-planner-blink] Verify and correct time function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9377: [FLINK-13561][table-planner-blink] 
Verify and correct time function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9377#issuecomment-518973162
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1680bb5323d636c2593dbfaf9fc3350b333ed018 (Thu Aug 08 
03:11:56 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] wuchong commented on issue #9377: [FLINK-13561][table-planner-blink] Verify and correct time function's semantic for Blink planner

2019-08-07 Thread GitBox
wuchong commented on issue #9377: [FLINK-13561][table-planner-blink] Verify and 
correct time function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9377#issuecomment-519345696
 
 
   Thanks @JingsongLi for the review. I have updated the PR; I think the commit message explains the changes. Please have another look when you are free. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9178: Typo in `scala_api_quickstart.md`

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9178: Typo in `scala_api_quickstart.md`
URL: https://github.com/apache/flink/pull/9178#issuecomment-513163943
 
 
   ## CI report:
   
   * 6de11ffcff3e65f5c44a7365e0cd716405242c94 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119766079)
   * 75d450b0e04e8f9b0290497c1804d082cdcd4446 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122373016)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9377: [FLINK-13561][table-planner-blink] Verify and correct time function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9377: [FLINK-13561][table-planner-blink] 
Verify and correct time function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9377#issuecomment-518973162
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 84be6c933af6b8a960df17f6767d620db7f3a59f (Thu Aug 08 
03:04:50 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9377: [FLINK-13561][table-planner-blink] Verify and correct time function's semantic for Blink planner

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9377: 
[FLINK-13561][table-planner-blink] Verify and correct time function's semantic 
for Blink planner
URL: https://github.com/apache/flink/pull/9377#discussion_r311839539
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/CodeGeneratorContext.scala
 ##
 @@ -518,17 +518,6 @@ class CodeGeneratorContext(val tableConfig: TableConfig) {
 DEFAULT_TIMEZONE_TERM
   }
 
-  /**
-* Adds a reusable Time ZoneId to the member area of the generated class.
-*/
-  def addReusableTimeZoneID(): String = {
 
 Review comment:
   I would like to remove the dead code and reintroduce it when we actually need it. Furthermore, this code doesn't work: for example, it uses the same member field name as timeZone and doesn't add the code to the member area. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9377: [FLINK-13561][table-planner-blink] Verify and correct time function's semantic for Blink planner

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9377: 
[FLINK-13561][table-planner-blink] Verify and correct time function's semantic 
for Blink planner
URL: https://github.com/apache/flink/pull/9377#discussion_r311839539
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/CodeGeneratorContext.scala
 ##
 @@ -518,17 +518,6 @@ class CodeGeneratorContext(val tableConfig: TableConfig) {
 DEFAULT_TIMEZONE_TERM
   }
 
-  /**
-* Adds a reusable Time ZoneId to the member area of the generated class.
-*/
-  def addReusableTimeZoneID(): String = {
 
 Review comment:
   I would like to remove the dead code and reintroduce it when we actually need it. Furthermore, this code doesn't work: for example, it uses the same member field name as timeZone and doesn't add the code to the member area. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-08-07 Thread Akshay Iyangar (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902614#comment-16902614
 ] 

Akshay Iyangar commented on FLINK-7289:
---

[~mikekap] - We too have the same issue when running Flink in Kubernetes. We tried the fix you have for restricting the WriteBufferManager size, but even then we see the memory used by RocksDB continuously increasing. 
{code:java}
Options.write_buffer_size: 67108864
Options.max_write_buffer_number: 2
Options.max_open_files: -1
{code}
We set the WriteBufferManager size to 8 GB.

We use 10 GB as the heap size per TM.

Each node/TM we use is a 32 GB box, but as time proceeds our memory keeps increasing. Our JVM heap usage is fairly constant (we verified this with a profiler), but for RocksDB I'm not sure what the best way is to find out what is consuming all the memory.

I would appreciate it if you could point me in the right direction to see what more can be tuned with respect to RocksDB.

Attaching the complete conf for rocks

[^completeRocksdbConfig.txt]
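
For reference, a minimal sketch of the WriteBufferManager approach discussed above, assuming the Flink 1.9-era {{OptionsFactory}} hook on {{RocksDBStateBackend}} and the {{WriteBufferManager}}/{{LRUCache}} classes from the bundled RocksDB JNI bindings; the class name and limits below are illustrative, not from this thread:

{code:java}
import org.apache.flink.contrib.streaming.state.OptionsFactory;

import org.rocksdb.Cache;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;
import org.rocksdb.LRUCache;
import org.rocksdb.WriteBufferManager;

// Sketch: cap the total memtable memory of all RocksDB instances in one
// TaskManager by charging every memtable against a shared WriteBufferManager.
public class CappedMemoryOptionsFactory implements OptionsFactory {

    private static final long WRITE_BUFFER_LIMIT = 8L * 1024 * 1024 * 1024; // 8 GB, as above

    // Shared per JVM (i.e. per TaskManager); static so all backends reuse it.
    private static final Cache SHARED_CACHE = new LRUCache(WRITE_BUFFER_LIMIT);
    private static final WriteBufferManager WRITE_BUFFER_MANAGER =
            new WriteBufferManager(WRITE_BUFFER_LIMIT, SHARED_CACHE);

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions) {
        // Route memtable accounting through the shared manager.
        return currentOptions.setWriteBufferManager(WRITE_BUFFER_MANAGER);
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
        // Keep per-column-family buffers small so the global cap dominates,
        // matching the 64 MB / 2-buffer settings in the attached config.
        return currentOptions
                .setWriteBufferSize(64 * 1024 * 1024)
                .setMaxWriteBufferNumber(2);
    }
}
{code}

The factory can then be registered via {{RocksDBStateBackend#setOptions(...)}}. Note that a WriteBufferManager only bounds memtable memory; block cache, index/filter blocks and pinned iterators still add to the native footprint.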

 

 

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Stefan Richter
>Priority: Major
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-08-07 Thread Akshay Iyangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akshay Iyangar updated FLINK-7289:
--
Attachment: completeRocksdbConfig.txt

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Stefan Richter
>Priority: Major
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9178: Typo in `scala_api_quickstart.md`

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9178: Typo in `scala_api_quickstart.md`
URL: https://github.com/apache/flink/pull/9178#issuecomment-513163943
 
 
   ## CI report:
   
   * 6de11ffcff3e65f5c44a7365e0cd716405242c94 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119766079)
   * 75d450b0e04e8f9b0290497c1804d082cdcd4446 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122373016)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-08-07 Thread Akshay Iyangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akshay Iyangar updated FLINK-7289:
--
Attachment: (was: completeRocksdbConfig.txt)

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Stefan Richter
>Priority: Major
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-08-07 Thread Akshay Iyangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akshay Iyangar updated FLINK-7289:
--
Attachment: completeRocksdbConfig.txt

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Stefan Richter
>Priority: Major
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-13644) Translate "State Backends" page into Chinese

2019-08-07 Thread fanrui (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902609#comment-16902609
 ] 

fanrui edited comment on FLINK-13644 at 8/8/19 2:56 AM:


Hi [~jark], can you assign this to me? I will complete it next week. Thanks.


was (Author: fanrui):
Hi,[~jark], Can you  assigned to me? I will completed in next week,

> Translate "State Backends" page into Chinese
> 
>
> Key: FLINK-13644
> URL: https://issues.apache.org/jira/browse/FLINK-13644
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: fanrui
>Priority: Major
> Fix For: 1.10.0
>
>
> 1、The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/stream/state/state_backends.html]
> The markdown file is located in "docs/dev/stream/state/state_backends.zh.md"
> 2、The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/state/state_backends.html]
> The markdown file is located in "docs/ops/state/state_backends.zh.md"



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13644) Translate "State Backends" page into Chinese

2019-08-07 Thread fanrui (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902609#comment-16902609
 ] 

fanrui commented on FLINK-13644:


Hi [~jark], can you assign this to me? I will complete it next week.

> Translate "State Backends" page into Chinese
> 
>
> Key: FLINK-13644
> URL: https://issues.apache.org/jira/browse/FLINK-13644
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: fanrui
>Priority: Major
> Fix For: 1.10.0
>
>
> 1、The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/stream/state/state_backends.html]
> The markdown file is located in "docs/dev/stream/state/state_backends.zh.md"
> 2、The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/state/state_backends.html]
> The markdown file is located in "docs/ops/state/state_backends.zh.md"



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13639) Consider refactoring of IntermediateResultPartitionID to consist of IntermediateDataSetID and partitionIndex

2019-08-07 Thread Zhu Zhu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902606#comment-16902606
 ] 

Zhu Zhu commented on FLINK-13639:
-

+1 for this proposal. 

A structured ID system would be helpful for debugging and further development (e.g. fast lookup).

Maybe later we should even refactor the ExecutionAttemptID as well, to make it 
(ExecutionVertexID, attemptNumber) ? (ExecutionVertexID consists of JobVertexID 
and subtaskIndex).
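
For illustration, a rough sketch (purely hypothetical, not from this issue; the class name is made up) of what a structured partition ID could look like if its identity were derived from the dataset ID plus a partition index:

{code:java}
import java.io.Serializable;
import java.util.Objects;

import org.apache.flink.runtime.jobgraph.IntermediateDataSetID;

// Hypothetical shape of a structured ID: identity is (datasetID, partitionIndex)
// rather than a random UUID, which makes lookups and debugging straightforward.
public final class StructuredResultPartitionID implements Serializable {

    private final IntermediateDataSetID intermediateDataSetID;
    private final int partitionIndex;

    public StructuredResultPartitionID(IntermediateDataSetID intermediateDataSetID, int partitionIndex) {
        this.intermediateDataSetID = Objects.requireNonNull(intermediateDataSetID);
        this.partitionIndex = partitionIndex;
    }

    public IntermediateDataSetID getIntermediateDataSetID() {
        return intermediateDataSetID;
    }

    public int getPartitionIndex() {
        return partitionIndex;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof StructuredResultPartitionID)) {
            return false;
        }
        StructuredResultPartitionID that = (StructuredResultPartitionID) o;
        return partitionIndex == that.partitionIndex
                && intermediateDataSetID.equals(that.intermediateDataSetID);
    }

    @Override
    public int hashCode() {
        return Objects.hash(intermediateDataSetID, partitionIndex);
    }

    @Override
    public String toString() {
        return intermediateDataSetID + "#" + partitionIndex;
    }
}
{code}

An ExecutionAttemptID could be structured the same way, e.g. as (ExecutionVertexID, attemptNumber), as suggested above.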

> Consider refactoring of IntermediateResultPartitionID to consist of 
> IntermediateDataSetID and partitionIndex
> 
>
> Key: FLINK-13639
> URL: https://issues.apache.org/jira/browse/FLINK-13639
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Andrey Zagrebin
>Priority: Minor
>
> suggested in [https://github.com/apache/flink/pull/8362#discussion_r285519348]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-13639) Consider refactoring of IntermediateResultPartitionID to consist of IntermediateDataSetID and partitionIndex

2019-08-07 Thread Zhu Zhu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902606#comment-16902606
 ] 

Zhu Zhu edited comment on FLINK-13639 at 8/8/19 2:52 AM:
-

+1 for this proposal. 

A structured ID system would be helpful for debugging and future development (e.g. fast lookup).

Maybe later we should even refactor the ExecutionAttemptID as well, to make it 
(ExecutionVertexID, attemptNumber) ? (ExecutionVertexID consists of JobVertexID 
and subtaskIndex).


was (Author: zhuzh):
+1 for this proposal. 

A structured ID system would be helpful for debugging and further 
development(e.g. fast lookup).

Maybe later we should even refactor the ExecutionAttemptID as well, to make it 
(ExecutionVertexID, attemptNumber) ? (ExecutionVertexID consists of JobVertexID 
and subtaskIndex).

> Consider refactoring of IntermediateResultPartitionID to consist of 
> IntermediateDataSetID and partitionIndex
> 
>
> Key: FLINK-13639
> URL: https://issues.apache.org/jira/browse/FLINK-13639
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Andrey Zagrebin
>Priority: Minor
>
> suggested in [https://github.com/apache/flink/pull/8362#discussion_r285519348]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9178: Typo in `scala_api_quickstart.md`

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9178: Typo in `scala_api_quickstart.md`
URL: https://github.com/apache/flink/pull/9178#issuecomment-513124417
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 75d450b0e04e8f9b0290497c1804d082cdcd4446 (Thu Aug 08 
02:51:39 UTC 2019)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-13644) Translate "State Backends" page into Chinese

2019-08-07 Thread fanrui (JIRA)
fanrui created FLINK-13644:
--

 Summary: Translate "State Backends" page into Chinese
 Key: FLINK-13644
 URL: https://issues.apache.org/jira/browse/FLINK-13644
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Affects Versions: 1.10.0
Reporter: fanrui
 Fix For: 1.10.0


1、The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/stream/state/state_backends.html]

The markdown file is located in "docs/dev/stream/state/state_backends.zh.md"

2、The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/state/state_backends.html]

The markdown file is located in "docs/ops/state/state_backends.zh.md"



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9308: [FLINK-13517][docs][hive] Restructure Hive Catalog documentation

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9308: [FLINK-13517][docs][hive] Restructure 
Hive Catalog documentation
URL: https://github.com/apache/flink/pull/9308#issuecomment-517022941
 
 
   ## CI report:
   
   * 59516a085b93d9ae505a251c12a8dbaccce0af1d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121484849)
   * 25d16b29b1c85bcd39f2135a74e83c9f5d9e1fce : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122334363)
   * 31a265d217a975c881e6a3d05aeb3754796fa90a : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122335185)
   * 95995351452af848c38d308479d160f416537018 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122336803)
   * 2a1f9b7f3ff704373fd1a649235333caa5643f9d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122366890)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation of Hive source/sink

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation 
of Hive source/sink
URL: https://github.com/apache/flink/pull/9217#issuecomment-514589043
 
 
   ## CI report:
   
   * 516e655f7f0853d6585ae5de2fbecc438d57e474 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120432519)
   * fee6f2df235f113b7757ce436ee127711b0094e6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121184693)
   * 61c360e0902ded2939ba3c8b9662a1b58074e4d1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121348454)
   * 7dafc731904fb3ae9dcee24f851803fddf87b551 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122371437)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9281: [hotfix][table]Rename TableAggFunctionCallVisitor to TableAggFunctionCallResolver

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9281: [hotfix][table]Rename 
TableAggFunctionCallVisitor to TableAggFunctionCallResolver
URL: https://github.com/apache/flink/pull/9281#issuecomment-516440424
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit d1d56e2c854c5d1324877e14edd4fb887278e090 (Thu Aug 08 
02:13:00 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @hequn8128
   * ✅ 2. There is [consensus] that the contribution should go into Flink.
   - Approved by @hequn8128
   * ❓ 3. Needs [attention] from.
   * ✅ 4. The change fits into the overall [architecture].
   - Approved by @hequn8128
   * ✅ 5. Overall code [quality] is good.
   - Approved by @hequn8128
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 commented on issue #9281: [hotfix][table]Rename TableAggFunctionCallVisitor to TableAggFunctionCallResolver

2019-08-07 Thread GitBox
hequn8128 commented on issue #9281: [hotfix][table]Rename 
TableAggFunctionCallVisitor to TableAggFunctionCallResolver
URL: https://github.com/apache/flink/pull/9281#issuecomment-519334844
 
 
   @flinkbot approve all


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9384: [FLINK-13637][docs] Fix problems of anchors in document(building.md, common.md, queryable_state.md)

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9384: [FLINK-13637][docs] Fix problems of 
anchors in document(building.md, common.md, queryable_state.md)
URL: https://github.com/apache/flink/pull/9384#issuecomment-519142646
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 5aa026a528486c425684aa633cbfe8e99fe01581 (Thu Aug 08 
02:08:57 UTC 2019)
   
 ✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @hequn8128
   * ✅ 2. There is [consensus] that the contribution should go into Flink.
   - Approved by @hequn8128
   * ❓ 3. Needs [attention] from.
   * ✅ 4. The change fits into the overall [architecture].
   - Approved by @hequn8128
   * ✅ 5. Overall code [quality] is good.
   - Approved by @hequn8128
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9384: [FLINK-13637][docs] Fix problems of anchors in document(building.md, common.md, queryable_state.md)

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9384: [FLINK-13637][docs] Fix problems of 
anchors in document(building.md, common.md, queryable_state.md)
URL: https://github.com/apache/flink/pull/9384#issuecomment-519142646
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 5aa026a528486c425684aa633cbfe8e99fe01581 (Thu Aug 08 
02:07:56 UTC 2019)
   
 ✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @hequn8128
   * ✅ 2. There is [consensus] that the contribution should go into Flink.
   - Approved by @hequn8128
   * ❓ 3. Needs [attention] from.
   * ✅ 4. The change fits into the overall [architecture].
   - Approved by @hequn8128
   * ✅ 5. Overall code [quality] is good.
   - Approved by @hequn8128
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 commented on issue #9384: [FLINK-13637][docs] Fix problems of anchors in document(building.md, common.md, queryable_state.md)

2019-08-07 Thread GitBox
hequn8128 commented on issue #9384: [FLINK-13637][docs] Fix problems of anchors 
in document(building.md, common.md, queryable_state.md)
URL: https://github.com/apache/flink/pull/9384#issuecomment-519334166
 
 
   @dianfu Thank you. I'll merge this...
   @flinkbot approve all


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] docete commented on a change in pull request #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-07 Thread GitBox
docete commented on a change in pull request #9331: 
[FLINK-13523][table-planner-blink] Verify and correct arithmetic function's 
semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#discussion_r311830391
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/SplitAggregateRule.scala
 ##
 @@ -303,7 +304,19 @@ class SplitAggregateRule extends RelOptRule(
 aggGroupCount + index + avgAggCount + 1,
 finalAggregate.getRowType)
   avgAggCount += 1
-  relBuilder.call(FlinkSqlOperatorTable.DIVIDE, sumInputRef, 
countInputRef)
+  // TODO
+  val equals = relBuilder.call(
+FlinkSqlOperatorTable.EQUALS,
+countInputRef,
+relBuilder.getRexBuilder.makeBigintLiteral(JBigDecimal.valueOf(0)))
+  val falseT = relBuilder.call(FlinkSqlOperatorTable.DIVIDE, 
sumInputRef, countInputRef)
+  val trueT = relBuilder.cast(
+relBuilder.getRexBuilder.constantNull(), 
aggCall.`type`.getSqlTypeName)
+  relBuilder.call(
+FlinkSqlOperatorTable.IF,
 
 Review comment:
   In SQL 2013 Part 2 Section 6.27:
   The dyadic arithmetic operators <plus sign>, <minus sign>, <asterisk>, and <solidus> (+, –, *, and /, respectively) specify addition, subtraction, multiplication, and division, respectively. **If the value of a divisor is zero, then an exception condition is raised: data exception — division by zero.**
   
   And I think it is the SplitAggregateRule that owns the responsibility of returning null (not throwing an exception). 
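   
   To make the intended behaviour concrete, here is a tiny hypothetical Java helper (illustrative only, not Flink code) with the semantics that the rewritten `IF(count = 0, NULL, sum / count)` expression is meant to encode:
   
   ```java
   // Illustrative only: when AVG(x) is split into SUM(x) and COUNT(x),
   // an empty group (count = 0) must yield SQL NULL instead of raising
   // a division-by-zero error.
   public final class SplitAvgSemantics {

       static Double splitAvg(Long sum, long count) {
           if (count == 0L) {
               return null; // empty group: AVG is NULL, no exception
           }
           return sum.doubleValue() / count;
       }
   }
   ```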


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] dianfu commented on issue #9384: [FLINK-13637][docs] Fix problems of anchors in document(building.md, common.md, queryable_state.md)

2019-08-07 Thread GitBox
dianfu commented on issue #9384: [FLINK-13637][docs] Fix problems of anchors in 
document(building.md, common.md, queryable_state.md)
URL: https://github.com/apache/flink/pull/9384#issuecomment-519333866
 
 
   Good catch! +1 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] Fix some operator names are not set in blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] 
Fix some operator names are not set in blink planner
URL: https://github.com/apache/flink/pull/9363#issuecomment-518265997
 
 
   ## CI report:
   
   * 1fe6c332279c34546ec3db24a574dfd53500d20b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121971735)
   * fa3e7406f9664a59efcb448748511b656474e74c : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122089384)
   * 28175449cb1d5eb8f318359090ea87e5b2af42d1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122102858)
   * 81593a4dcb3573843c1c02cba0cb17abe1693065 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122290876)
   * 829830f8df1eb814ac44716f230c6aedcfaa5128 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122369808)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9342: [FLINK-13438][hive] Fix 
DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
URL: https://github.com/apache/flink/pull/9342#issuecomment-517770642
 
 
   ## CI report:
   
   * 76704f271662b57cbe36679d3d249bcdd7fdf66a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121784366)
   * 7b4a9226cfffc1ea505c8d20b5b5f9ce8c5d2113 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122239651)
   * ec81369c4e332d9290a2b42e386f9be724d8e2ad : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122307651)
   * b2d4875b20874041f90db3473010cf454a2cba66 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122365586)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9377: [FLINK-13561][table-planner-blink] Verify and correct time function's semantic for Blink planner

2019-08-07 Thread GitBox
flinkbot edited a comment on issue #9377: [FLINK-13561][table-planner-blink] 
Verify and correct time function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9377#issuecomment-518973162
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 84be6c933af6b8a960df17f6767d620db7f3a59f (Thu Aug 08 
01:54:42 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9377: [FLINK-13561][table-planner-blink] Verify and correct time function's semantic for Blink planner

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9377: 
[FLINK-13561][table-planner-blink] Verify and correct time function's semantic 
for Blink planner
URL: https://github.com/apache/flink/pull/9377#discussion_r311827891
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
 ##
 @@ -168,8 +168,15 @@ object ScalarOperatorGens {
 generateBinaryArithmeticOperator(ctx, op, left.resultType, left, right)
 
   case (DATE, INTERVAL_DAY_TIME) =>
-generateOperatorIfNotNull(ctx, new DateType(), left, right) {
-  (l, r) => s"$l $op ((int) ($r / ${MILLIS_PER_DAY}L))"
+resultType.getTypeRoot match {
 
 Review comment:
   It is there to fix `timestampadd(hour, 1, date '2019-08-08')`, which returned `2019-08-08` before the change.
   
   I'm sorry I mixed it into the `to_timestamp` commit; I should have split it out. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9377: [FLINK-13561][table-planner-blink] Verify and correct time function's semantic for Blink planner

2019-08-07 Thread GitBox
wuchong commented on a change in pull request #9377: 
[FLINK-13561][table-planner-blink] Verify and correct time function's semantic 
for Blink planner
URL: https://github.com/apache/flink/pull/9377#discussion_r311827891
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
 ##
 @@ -168,8 +168,15 @@ object ScalarOperatorGens {
 generateBinaryArithmeticOperator(ctx, op, left.resultType, left, right)
 
   case (DATE, INTERVAL_DAY_TIME) =>
-generateOperatorIfNotNull(ctx, new DateType(), left, right) {
-  (l, r) => s"$l $op ((int) ($r / ${MILLIS_PER_DAY}L))"
+resultType.getTypeRoot match {
 
 Review comment:
   It is there to fix `timestampadd(hour, 1, date '2019-08-08')`, which would return `2019-08-08` as the result.
   
   I'm sorry I mixed it into the `to_timestamp` commit; I should have split it out. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

