[GitHub] [flink] flinkbot edited a comment on issue #8906: [FLINK-13008]fix the findbugs warning in AggregationsFunctio.scala

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #8906: [FLINK-13008]fix the findbugs warning 
in AggregationsFunctio.scala
URL: https://github.com/apache/flink/pull/8906#issuecomment-519474213
 
 
   ## CI report:
   
   * bea9f0b86c27d30c763e8578edc5ef46a7d76e9e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122421031)
   * a31d440b058e9fbe2d0a908c4a4fb63750685340 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122532295)
   * d9fdee4191a92ea21dd1eb5d400ed54a627750b3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122535437)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9349: [FLINK-13564] [table-planner-blink] throw exception if constant with YEAR TO MONTH resolution was used for group windows

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9349: [FLINK-13564] [table-planner-blink] 
throw exception if constant with YEAR TO MONTH resolution was used for group 
windows
URL: https://github.com/apache/flink/pull/9349#issuecomment-517912973
 
 
   ## CI report:
   
   * 6b1ab58d9d153f44f5cbdaee4804bcd5e27544db : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121837224)
   * 6bc0849d18697d9a6e4e2899d05ab6e2e160df46 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122534498)
   




[GitHub] [flink] lirui-apache commented on a change in pull request #9399: [FLINK-13526][sql-client] Switching to a non existing catalog or data…

2019-08-08 Thread GitBox
lirui-apache commented on a change in pull request #9399: 
[FLINK-13526][sql-client] Switching to a non existing catalog or data…
URL: https://github.com/apache/flink/pull/9399#discussion_r312335044
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/gateway/local/LocalExecutorITCase.java
 ##
 @@ -506,6 +506,30 @@ public void testUseCatalogAndUseDatabase() throws Exception {
 		}
 	}
 
+	@Test
+	public void testUseNonExistingDatabase() throws Exception {
+		final Executor executor = createDefaultExecutor(clusterClient);
+		final SessionContext session = new SessionContext("test-session", new Environment());
+
+		try {
+			executor.useDatabase(session, "nonexistingdb");
+		} catch (SqlExecutionException e) {
 
 Review comment:
   It's following the pattern in `testValidateSession`. But I guess it's also 
fine to use `ExpectedException`.




[GitHub] [flink] flinkbot edited a comment on issue #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9342: [FLINK-13438][hive] Fix 
DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
URL: https://github.com/apache/flink/pull/9342#issuecomment-517770642
 
 
   ## CI report:
   
   * 76704f271662b57cbe36679d3d249bcdd7fdf66a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121784366)
   * 7b4a9226cfffc1ea505c8d20b5b5f9ce8c5d2113 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122239651)
   * ec81369c4e332d9290a2b42e386f9be724d8e2ad : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122307651)
   * b2d4875b20874041f90db3473010cf454a2cba66 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122365586)
   * 83860f4cb617d777093dce251e4145c5e8f79e7f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122423996)
   * 8e2e5fee6859c7cfa1e3cfcc6f5d3bfe0dd8edbc : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122540373)
   




[jira] [Commented] (FLINK-6962) Add a create table SQL DDL

2019-08-08 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903577#comment-16903577
 ] 

Jark Wu commented on FLINK-6962:


Yes [~twalthr], I just created an issue to track this effort: FLINK-13661

> Add a create table SQL DDL
> --
>
> Key: FLINK-6962
> URL: https://issues.apache.org/jira/browse/FLINK-6962
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Shaoxuan Wang
>Assignee: Danny Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This Jira adds support to allow users to define the DDL for source and sink 
> tables, including the watermark (on the source table) and emit SLA (on the 
> result table). The detailed design doc will be attached soon.
> This issue covered adding batch DDL support. Streaming-specific DDL support 
> will be added later.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (FLINK-13661) Add a stream specific CREATE TABLE SQL DDL

2019-08-08 Thread Jark Wu (JIRA)
Jark Wu created FLINK-13661:
---

 Summary: Add a stream specific CREATE TABLE SQL DDL
 Key: FLINK-13661
 URL: https://issues.apache.org/jira/browse/FLINK-13661
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Jark Wu
 Fix For: 1.10.0


FLINK-6962 introduced a basic SQL DDL to create a table. However, it doesn't 
support stream-specific features, for example watermark definitions, change-flag 
definitions, computed columns, primary keys, and so on.

We started a FLIP design doc [1] to discuss the concepts of source and sink to 
help us arrive at a well-defined DDL. Once the FLIP is accepted, we can start 
the work.

[1]: 
https://docs.google.com/document/d/1yrKXEIRATfxHJJ0K3t6wUgXAtZq8D-XgvEnvl2uUcr0/edit#heading=h.c05t427gfgxa
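
For illustration only, the kind of stream-specific clauses this issue asks for might take roughly the following shape. This is a hypothetical sketch (table name, connector option, and clause spellings are invented here), not the syntax being decided in the FLIP:

```sql
-- Hypothetical sketch: watermark, computed column, and primary key clauses
-- of the kind FLINK-13661 describes; not the final FLIP syntax.
CREATE TABLE orders (
  order_id   BIGINT,
  user_id    BIGINT,
  order_time TIMESTAMP(3),
  proc_time AS PROCTIME(),               -- computed column
  PRIMARY KEY (order_id) NOT ENFORCED,   -- primary key
  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND  -- watermark
) WITH (
  'connector' = '...'
);
```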





[GitHub] [flink] TsReaper commented on a change in pull request #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors

2019-08-08 Thread GitBox
TsReaper commented on a change in pull request #9342: [FLINK-13438][hive] Fix 
DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
URL: https://github.com/apache/flink/pull/9342#discussion_r312330806
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/util/HiveTypeUtil.java
 ##
 @@ -256,9 +258,9 @@ private static DataType toFlinkPrimitiveType(PrimitiveTypeInfo hiveType) {
 			case DOUBLE:
 				return DataTypes.DOUBLE();
 			case DATE:
-				return DataTypes.DATE();
+				return DataTypes.DATE().bridgedTo(Date.class);
 			case TIMESTAMP:
-				return DataTypes.TIMESTAMP();
+				return DataTypes.TIMESTAMP(3).bridgedTo(Timestamp.class);
 
 Review comment:
   Thanks for the discussion on the precision issue. Since Hive's default 
timestamp precision is 9, the connector would be effectively unusable if we 
threw an exception outright. So I think it is better to log a warning message 
and note in the connector's documentation that timestamp precision may be lost.
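
A dependency-free sketch of the warn-and-truncate behavior proposed here; the class and method names below are illustrative stand-ins, not Flink's or the connector's actual API:

```java
import java.sql.Timestamp;
import java.util.logging.Logger;

public class TimestampPrecisionSketch {
    private static final Logger LOG =
            Logger.getLogger(TimestampPrecisionSketch.class.getName());

    // Truncates a nanosecond-precision (Hive-style, precision 9) timestamp to
    // millisecond precision (TIMESTAMP(3)); logs a warning when digits are lost.
    static Timestamp truncateToMillis(Timestamp ts) {
        int nanos = ts.getNanos();
        int subMillis = nanos % 1_000_000; // digits beyond precision 3
        if (subMillis != 0) {
            LOG.warning("Timestamp precision exceeds 3; dropping sub-millisecond part: "
                    + subMillis + "ns");
        }
        Timestamp truncated = new Timestamp(ts.getTime());
        truncated.setNanos(nanos - subMillis);
        return truncated;
    }

    public static void main(String[] args) {
        Timestamp hiveTs = Timestamp.valueOf("2019-08-08 12:00:00.123456789");
        System.out.println(truncateToMillis(hiveTs)); // 2019-08-08 12:00:00.123
    }
}
```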




[GitHub] [flink] flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] Fix some operator names are not set in blink planner

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] 
Fix some operator names are not set in blink planner
URL: https://github.com/apache/flink/pull/9363#issuecomment-518265997
 
 
   ## CI report:
   
   * 1fe6c332279c34546ec3db24a574dfd53500d20b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121971735)
   * fa3e7406f9664a59efcb448748511b656474e74c : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122089384)
   * 28175449cb1d5eb8f318359090ea87e5b2af42d1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122102858)
   * 81593a4dcb3573843c1c02cba0cb17abe1693065 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122290876)
   * 829830f8df1eb814ac44716f230c6aedcfaa5128 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122369808)
   * 0dcba7f669a5702e5730aac0273eb4a58fce2be9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122460702)
   * a5052f45f08b0760cdd4a163a87c0be7d6383c42 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122479843)
   * e8a680f4d832f794eb80d6e569d323babc4da518 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122530453)
   * bf37130d34d762d4ebbfc594ad1a52e63feae71a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122534827)
   




[GitHub] [flink] bowenli86 commented on a change in pull request #9399: [FLINK-13526][sql-client] Switching to a non existing catalog or data…

2019-08-08 Thread GitBox
bowenli86 commented on a change in pull request #9399: 
[FLINK-13526][sql-client] Switching to a non existing catalog or data…
URL: https://github.com/apache/flink/pull/9399#discussion_r312327916
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/gateway/local/LocalExecutorITCase.java
 ##
 @@ -506,6 +506,30 @@ public void testUseCatalogAndUseDatabase() throws Exception {
 		}
 	}
 
+	@Test
+	public void testUseNonExistingDatabase() throws Exception {
+		final Executor executor = createDefaultExecutor(clusterClient);
+		final SessionContext session = new SessionContext("test-session", new Environment());
+
+		try {
+			executor.useDatabase(session, "nonexistingdb");
+		} catch (SqlExecutionException e) {
+			// expected
+		}
+	}
+
+	@Test
+	public void testUseNonExistingCatalog() throws Exception {
+		final Executor executor = createDefaultExecutor(clusterClient);
+		final SessionContext session = new SessionContext("test-session", new Environment());
+
+		try {
+			executor.useCatalog(session, "nonexistingcatalog");
+		} catch (SqlExecutionException e) {
 
 Review comment:
   shall we just assert the exception is expected instead of try-catch-ignore?
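
JUnit 4's `ExpectedException` rule (or `Assert.assertThrows` in JUnit 4.13+) expresses this directly. As a dependency-free sketch of why a bare try-catch-ignore is weaker, the names below are illustrative stand-ins, not the SQL client's real API:

```java
public class AssertThrowsSketch {
    // Stand-in for the SQL client's exception type; name is illustrative.
    static class SqlExecutionException extends RuntimeException {
        SqlExecutionException(String msg) { super(msg); }
    }

    // Stand-in for Executor#useCatalog failing on an unknown catalog.
    static void useCatalog(String name) {
        throw new SqlExecutionException("Catalog does not exist: " + name);
    }

    // Returns normally only if the expected exception was thrown; otherwise
    // fails. A bare try-catch-ignore would pass silently when no exception
    // is thrown at all.
    static void assertThrowsSqlExecutionException(Runnable action) {
        try {
            action.run();
        } catch (SqlExecutionException e) {
            return; // expected
        }
        throw new AssertionError("expected SqlExecutionException, but nothing was thrown");
    }

    public static void main(String[] args) {
        assertThrowsSqlExecutionException(() -> useCatalog("nonexistingcatalog"));
        System.out.println("expected exception was thrown");
    }
}
```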




[GitHub] [flink] bowenli86 commented on a change in pull request #9399: [FLINK-13526][sql-client] Switching to a non existing catalog or data…

2019-08-08 Thread GitBox
bowenli86 commented on a change in pull request #9399: 
[FLINK-13526][sql-client] Switching to a non existing catalog or data…
URL: https://github.com/apache/flink/pull/9399#discussion_r312327896
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/gateway/local/LocalExecutorITCase.java
 ##
 @@ -506,6 +506,30 @@ public void testUseCatalogAndUseDatabase() throws Exception {
 		}
 	}
 
+	@Test
+	public void testUseNonExistingDatabase() throws Exception {
+		final Executor executor = createDefaultExecutor(clusterClient);
+		final SessionContext session = new SessionContext("test-session", new Environment());
+
+		try {
+			executor.useDatabase(session, "nonexistingdb");
+		} catch (SqlExecutionException e) {
 
 Review comment:
   shall we just assert the exception is expected instead of try-catch-ignore?




[jira] [Commented] (FLINK-13405) Translate "Basic API Concepts" page into Chinese

2019-08-08 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903554#comment-16903554
 ] 

Jark Wu commented on FLINK-13405:
-

Did you send an email to dev-subsr...@flink.apache.org to subscribe to the 
mailing list? You could also check whether the replies landed in your spam folder.


> Translate "Basic API Concepts" page into Chinese
> 
>
> Key: FLINK-13405
> URL: https://issues.apache.org/jira/browse/FLINK-13405
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: WangHengWei
>Assignee: WangHengWei
>Priority: Major
>  Labels: documentation, pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The page url is 
> [https://github.com/apache/flink/blob/master/docs/dev/api_concepts.zh.md]
> The markdown file is located in flink/docs/dev/api_concepts.zh.md





[GitHub] [flink] flinkbot commented on issue #9400: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner (release-1.9)

2019-08-08 Thread GitBox
flinkbot commented on issue #9400: [FLINK-13523][table-planner-blink] Verify 
and correct arithmetic function's semantic for Blink planner (release-1.9)
URL: https://github.com/apache/flink/pull/9400#issuecomment-519772402
 
 
   ## CI report:
   
   * 2fef590f19082d27f35d60a1dc87e802f673d777 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122537159)
   




[GitHub] [flink] flinkbot commented on issue #9400: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner (release-1.9)

2019-08-08 Thread GitBox
flinkbot commented on issue #9400: [FLINK-13523][table-planner-blink] Verify 
and correct arithmetic function's semantic for Blink planner (release-1.9)
URL: https://github.com/apache/flink/pull/9400#issuecomment-519771498
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2fef590f19082d27f35d60a1dc87e802f673d777 (Fri Aug 09 
04:21:05 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] wuchong opened a new pull request #9400: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner (release-1.9)

2019-08-08 Thread GitBox
wuchong opened a new pull request #9400: [FLINK-13523][table-planner-blink] 
Verify and correct arithmetic function's semantic for Blink planner 
(release-1.9)
URL: https://github.com/apache/flink/pull/9400
 
 
   This is a cherry-pick from master to run Travis. 




[GitHub] [flink] flinkbot edited a comment on issue #9394: [FLINK-13547][table-planner-blink] Verify and correct string function's semantic for Blink planner

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9394: [FLINK-13547][table-planner-blink] 
Verify and correct string function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9394#issuecomment-519467299
 
 
   ## CI report:
   
   * 861d120d83964d267c95d024993b82186b1e6e7a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122417819)
   * a423b9b46d10f61091bd70aa0ea1bd2367fd25a0 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122531452)
   




[jira] [Updated] (FLINK-13473) Add GroupWindowed FlatAggregate support to stream Table API(blink planner), i.e, align with flink planner

2019-08-08 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-13473:

Component/s: (was: Table SQL / API)
 Table SQL / Planner

> Add GroupWindowed FlatAggregate support to stream Table API(blink planner), 
> i.e, align with flink planner
> -
>
> Key: FLINK-13473
> URL: https://issues.apache.org/jira/browse/FLINK-13473
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add GroupWindowed FlatAggregate support to stream Table API(blink planner), 
> i.e, align with flink planner.
> The API looks like:
> {code}
>  TableAggregateFunction tableAggFunc = new MyTableAggregateFunction();
>  tableEnv.registerFunction("tableAggFunc", tableAggFunc);
>  windowGroupedTable
>   .flatAggregate("tableAggFunc(a, b) as (x, y, z)")
>   .select("key, window.start, x, y, z")
> {code}
> The detail can be found in 
> [Flip-29|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739]





[GitHub] [flink] wuchong commented on issue #9396: [FLINK-13473][table] Add stream Windowed FlatAggregate support for blink planner

2019-08-08 Thread GitBox
wuchong commented on issue #9396: [FLINK-13473][table] Add stream Windowed 
FlatAggregate support for blink planner
URL: https://github.com/apache/flink/pull/9396#issuecomment-519769654
 
 
   I would suggest modifying the component names in the commit messages a bit. 
How about:
   
   [FLINK-13473][table-planner-blink] Support windowed TableAggregate in some 
MetadataHandle
   [FLINK-13473][table-runtime-blink] Add tests for window operator
   [FLINK-13473][table-blink] Add runtime support for windowed flatAggregate on 
blink planner
   [FLINK-13473][table-planner-blink] Add plan support for windowed 
flatAggregate on blink planner
   
   And for the pull request title, I would suggest using "[table-blink]" 
because it doesn't contain API changes. 
   
   What do you think?




[jira] [Commented] (FLINK-12481) Make processing time timer trigger run via the mailbox

2019-08-08 Thread Biao Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903548#comment-16903548
 ] 

Biao Liu commented on FLINK-12481:
--

Hi  [~1u0], thanks for feedback. :)

> Make processing time timer trigger run via the mailbox
> --
>
> Key: FLINK-12481
> URL: https://issues.apache.org/jira/browse/FLINK-12481
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Stefan Richter
>Assignee: Alex
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This sub-task integrates the mailbox with processing time timer triggering. 
> Those triggers should now be enqueued as mailbox events and picked up by the 
> stream task's main thread for processing.





[GitHub] [flink] flinkbot commented on issue #9399: [FLINK-13526][sql-client] Switching to a non existing catalog or data…

2019-08-08 Thread GitBox
flinkbot commented on issue #9399: [FLINK-13526][sql-client] Switching to a non 
existing catalog or data…
URL: https://github.com/apache/flink/pull/9399#issuecomment-519768509
 
 
   ## CI report:
   
   * 3fbe05bd67589e1f46d05df3e6db7cc79281b450 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122536079)
   




[GitHub] [flink] wuchong commented on a change in pull request #9349: [FLINK-13564] [table-planner-blink] throw exception if constant with YEAR TO MONTH resolution was used for group windows

2019-08-08 Thread GitBox
wuchong commented on a change in pull request #9349: [FLINK-13564] 
[table-planner-blink] throw exception if constant with YEAR TO MONTH resolution 
was used for group windows
URL: https://github.com/apache/flink/pull/9349#discussion_r312321622
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/BatchLogicalWindowAggregateRule.scala
 ##
 @@ -73,6 +76,12 @@ class BatchLogicalWindowAggregateRule
           ref.getIndex)
       }
   }
+
+  def getOperandAsLong(call: RexCall, idx: Int): Long =
+    call.getOperands.get(idx) match {
+      case v: RexLiteral => v.getValue.asInstanceOf[JBigDecimal].longValue()
+      case _ => throw new TableException("Only constant window descriptors are supported")
 
 Review comment:
   Should we also update the exception message to align with Stream Window 
Aggregate? 




[GitHub] [flink] lirui-apache commented on a change in pull request #9390: [FLINK-13534][hive] Unable to query Hive table with decimal column

2019-08-08 Thread GitBox
lirui-apache commented on a change in pull request #9390: [FLINK-13534][hive] 
Unable to query Hive table with decimal column
URL: https://github.com/apache/flink/pull/9390#discussion_r312321615
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/TableEnvHiveConnectorTest.java
 ##
 @@ -138,6 +138,34 @@ private void readWriteFormat(String format) throws Exception {
 		hiveShell.execute("drop database db1 cascade");
 	}
 
+	@Test
+	public void testDecimal() throws Exception {
+		hiveShell.execute("create database db1");
+		try {
+			// Hive's default decimal is decimal(10, 0)
+			hiveShell.execute("create table db1.src1 (x decimal)");
+			hiveShell.execute("create table db1.src2 (x decimal)");
+			hiveShell.execute("create table db1.dest (x decimal)");
+			// populate src1 from Hive
+			hiveShell.execute("insert into db1.src1 values (1),(2.0),(5.4),(5.5),(123456789123)");
+
+			TableEnvironment tableEnv = getTableEnvWithHiveCatalog();
+			// populate src2 with same data from Flink
+			tableEnv.sqlUpdate("insert into db1.src2 values (cast(1 as decimal(10,0))), (cast(2.0 as decimal(10,0))), " +
 
 Review comment:
   Do you mean inserting some non-zero-scale decimals into a zero-scale decimal 
column? That's not allowed by the planner.
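
The planner restriction mentioned here mirrors how `java.math.BigDecimal` refuses to narrow scale silently: dropping fractional digits requires an explicit rounding mode, which is why an explicit cast is needed. A small illustration (standard-library only, not Flink code):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalScaleSketch {
    // Narrowing a decimal to scale 0 without a rounding mode is rejected
    // outright; returns true if BigDecimal refuses the silent narrowing.
    static boolean rejectsSilentNarrowing(String value) {
        try {
            new BigDecimal(value).setScale(0);
            return false;
        } catch (ArithmeticException e) {
            return true; // "Rounding necessary"
        }
    }

    // An explicit cast must choose a rounding mode, e.g. HALF_UP.
    static BigDecimal castToScaleZero(String value) {
        return new BigDecimal(value).setScale(0, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        System.out.println(rejectsSilentNarrowing("5.5")); // true
        System.out.println(castToScaleZero("5.5"));        // 6
    }
}
```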




[GitHub] [flink] flinkbot edited a comment on issue #9347: [FLINK-13563] [table-planner-blink] TumblingGroupWindow should implement toString method to explain more info

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9347: [FLINK-13563] [table-planner-blink] 
TumblingGroupWindow should implement toString method to explain more info
URL: https://github.com/apache/flink/pull/9347#issuecomment-517895998
 
 
   ## CI report:
   
   * 212cd6c24fdb696aa13bed1cfff875f9bfc01d09 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121831822)
   * fa92c963b1f41d75b1399d41708d88cd47a632e9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122530462)
   




[GitHub] [flink] wuchong commented on a change in pull request #9394: [FLINK-13547][table-planner-blink] Verify and correct string function's semantic for Blink planner

2019-08-08 Thread GitBox
wuchong commented on a change in pull request #9394: 
[FLINK-13547][table-planner-blink] Verify and correct string function's 
semantic for Blink planner
URL: https://github.com/apache/flink/pull/9394#discussion_r312319794
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/functions/sql/FlinkSqlOperatorTable.java
 ##
 @@ -502,69 +482,56 @@ public void lookupOperatorOverloads(
SqlKind.OTHER_FUNCTION,
VARCHAR_2000_NULLABLE,
null,
-   OperandTypes.or(
-   OperandTypes.family(SqlTypeFamily.STRING),
-   OperandTypes.family(SqlTypeFamily.STRING, 
SqlTypeFamily.STRING)),
+   OperandTypes.family(SqlTypeFamily.STRING),
SqlFunctionCategory.STRING);
 
public static final SqlFunction SHA1 = new SqlFunction(
"SHA1",
SqlKind.OTHER_FUNCTION,
VARCHAR_2000_NULLABLE,
null,
-   OperandTypes.or(
-   OperandTypes.family(SqlTypeFamily.STRING),
-   OperandTypes.family(SqlTypeFamily.STRING, 
SqlTypeFamily.STRING)),
+   OperandTypes.family(SqlTypeFamily.STRING),
SqlFunctionCategory.STRING);
 
public static final SqlFunction SHA224 = new SqlFunction(
"SHA224",
SqlKind.OTHER_FUNCTION,
VARCHAR_2000_NULLABLE,
null,
-   OperandTypes.or(
-   OperandTypes.family(SqlTypeFamily.STRING),
-   OperandTypes.family(SqlTypeFamily.STRING, 
SqlTypeFamily.STRING)),
+   OperandTypes.family(SqlTypeFamily.STRING),
SqlFunctionCategory.STRING);
 
public static final SqlFunction SHA256 = new SqlFunction(
"SHA256",
SqlKind.OTHER_FUNCTION,
VARCHAR_2000_NULLABLE,
null,
-   OperandTypes.or(
-   OperandTypes.family(SqlTypeFamily.STRING),
-   OperandTypes.family(SqlTypeFamily.STRING, 
SqlTypeFamily.STRING)),
+   OperandTypes.family(SqlTypeFamily.STRING),
SqlFunctionCategory.STRING);
 
public static final SqlFunction SHA384 = new SqlFunction(
"SHA384",
SqlKind.OTHER_FUNCTION,
VARCHAR_2000_NULLABLE,
null,
-   OperandTypes.or(
-   OperandTypes.family(SqlTypeFamily.STRING),
-   OperandTypes.family(SqlTypeFamily.STRING, 
SqlTypeFamily.STRING)),
+   OperandTypes.family(SqlTypeFamily.STRING),
SqlFunctionCategory.STRING);
 
public static final SqlFunction SHA512 = new SqlFunction(
"SHA512",
SqlKind.OTHER_FUNCTION,
VARCHAR_2000_NULLABLE,
null,
-   OperandTypes.or(
-   OperandTypes.family(SqlTypeFamily.STRING),
-   OperandTypes.family(SqlTypeFamily.STRING, 
SqlTypeFamily.STRING)),
+   OperandTypes.family(SqlTypeFamily.STRING),
SqlFunctionCategory.STRING);
 
public static final SqlFunction SHA2 = new SqlFunction(
"SHA2",
SqlKind.OTHER_FUNCTION,
VARCHAR_2000_NULLABLE,
null,
-   OperandTypes.or(
-   OperandTypes.family(SqlTypeFamily.STRING, 
SqlTypeFamily.INTEGER),
-   OperandTypes.family(SqlTypeFamily.STRING, 
SqlTypeFamily.STRING, SqlTypeFamily.INTEGER)),
+   OperandTypes.sequence("'(DATA, HASH_LENGTH)'",
 
 Review comment:
   ```suggestion
		OperandTypes.sequence("'SHA2(<DATA>, <HASH_LENGTH>)'",
   ```




[GitHub] [flink] wuchong commented on a change in pull request #9394: [FLINK-13547][table-planner-blink] Verify and correct string function's semantic for Blink planner

2019-08-08 Thread GitBox
wuchong commented on a change in pull request #9394: 
[FLINK-13547][table-planner-blink] Verify and correct string function's 
semantic for Blink planner
URL: https://github.com/apache/flink/pull/9394#discussion_r312318835
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/BinaryStringUtil.java
 ##
 @@ -749,21 +749,21 @@ public static BinaryString concat(Iterable<BinaryString> inputs) {
 	}
 
 	/**
-	 * Concatenates input strings together into a single string using the separator.
-	 * A null input is skipped. For example, concat(",", "a", null, "c") would yield "a,c".
+	 * Concatenates input strings together into a single string using the separator.
+	 * Returns NULL if the separator is NULL.
+	 *
+	 * Note: CONCAT_WS() does not skip any empty strings, however it does skip any NULL values after
+	 * the separator. For example, concat(",", "a", null, "c") would yield "a,c".
 
 Review comment:
   ```suggestion
 * the separator. For example, concat_ws(",", "a", null, "c") would yield "a,c".
   ```
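
For readers outside the PR, the CONCAT_WS semantics being documented in this diff — a NULL separator yields NULL, NULL arguments are skipped, empty strings are kept — can be sketched as a small stand-alone Java method. This is an illustrative stand-in, not Flink's actual `BinaryStringUtil` implementation:

```java
import java.util.StringJoiner;

public class ConcatWsDemo {
    // Stand-in for the CONCAT_WS semantics under review: a NULL separator
    // yields NULL; NULL arguments after the separator are skipped, while
    // empty strings are kept.
    static String concatWs(String sep, String... inputs) {
        if (sep == null) {
            return null;               // NULL separator -> NULL result
        }
        StringJoiner joiner = new StringJoiner(sep);
        for (String s : inputs) {
            if (s != null) {           // skip NULLs, keep empty strings
                joiner.add(s);
            }
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        System.out.println(concatWs(",", "a", null, "c")); // a,c
        System.out.println(concatWs(",", "a", "", "c"));   // a,,c
        System.out.println(concatWs(null, "a", "b"));      // null
    }
}
```

Note how the empty string in the second call still contributes a separator, while the NULL in the first call contributes nothing — the distinction the Javadoc fix is drawing.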




[GitHub] [flink] lirui-apache commented on issue #9399: [FLINK-13526][sql-client] Switching to a non existing catalog or data…

2019-08-08 Thread GitBox
lirui-apache commented on issue #9399: [FLINK-13526][sql-client] Switching to a 
non existing catalog or data…
URL: https://github.com/apache/flink/pull/9399#issuecomment-519767241
 
 
   @xuefuz @bowenli86 @zjuwangg please have a look. I suppose it's a trivial 
change.




[GitHub] [flink] flinkbot commented on issue #9399: [FLINK-13526][sql-client] Switching to a non existing catalog or data…

2019-08-08 Thread GitBox
flinkbot commented on issue #9399: [FLINK-13526][sql-client] Switching to a non 
existing catalog or data…
URL: https://github.com/apache/flink/pull/9399#issuecomment-519767122
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 3fbe05bd67589e1f46d05df3e6db7cc79281b450 (Fri Aug 09 
03:52:17 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13526).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Created] (FLINK-13660) Cannot submit job on Flink session cluster on kubernetes with multiple JM pods (zk HA) through web frontend

2019-08-08 Thread MalcolmSanders (JIRA)
MalcolmSanders created FLINK-13660:
--

 Summary: Cannot submit job on Flink session cluster on kubernetes 
with multiple JM pods (zk HA) through web frontend
 Key: FLINK-13660
 URL: https://issues.apache.org/jira/browse/FLINK-13660
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination, Runtime / Web Frontend
Affects Versions: 1.9.0
Reporter: MalcolmSanders


Hi, all,

Previously I was testing the HighAvailabilityService of Flink 1.9 on k8s. When 
testing a Flink session cluster with 3 JM pods deployed on k8s, I found that 
the jar I had uploaded through the web frontend would repeatedly disappear 
from the "Uploaded Jars" web page. As a result, it's hard to submit the job.

After investigation, I found that it has to do with (1) the implementation of 
the "handleRequest" method of the "JarListHandler" and "JarUploadHandler" 
RestHandlers, along with (2) the routing mechanism of the k8s service.

(1) It seems to me that the "handleRequest" method should dispatch the message 
through the "DispatcherGateway gateway" to the leader JM. However, the two 
RestHandlers don't use the gateway and just do things locally. That is to say, 
if an "upload jar" request or a "list jars" request is sent to any of the 3 
JMs, the web frontend will only store or fetch jars from that JM's local 
directory.

(2) I use a k8s service to open the Flink web page; the URL pattern is (PS: 
after starting "kubectl proxy" locally): 
http://127.0.0.1:8001/api/v1/namespaces/${my_ns}/services/${my_session_cluster_service}:ui/proxy/#/submit
Since there are 3 endpoints (3 JMs) behind this k8s service, the k8s routing 
mechanism will randomly choose which endpoint (JM) a REST message is sent to.

As a result of these two factors, a Flink session cluster currently cannot be 
deployed with multiple JMs using the HighAvailabilityService on k8s.

Proposals:
(1) redirect jar-related REST messages to the leader JM
(2) (along with proposal (1)) synchronize jar files to the standby JMs in case 
a standby JM takes over the leadership
(3) support uploading jars to a global filesystem (e.g. DFS)
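
As a very rough, illustrative-only sketch of proposal (1) — every type below is a stand-in invented for this example, not Flink's actual `DispatcherGateway`/`RestHandler` API — a jar handler would consult leadership and act on the leader's store instead of its own local directory, so it no longer matters which replica k8s routes the request to:

```java
import java.util.ArrayList;
import java.util.List;

public class LeaderForwardingDemo {
    // Stand-in for a JM's local jar directory.
    interface JarStore { List<String> listJars(); void addJar(String name); }

    static class LocalJarStore implements JarStore {
        private final List<String> jars = new ArrayList<>();
        public List<String> listJars() { return jars; }
        public void addJar(String name) { jars.add(name); }
    }

    // A JM replica: only the leader's store should ever be touched.
    static class JobManager {
        final JarStore store = new LocalJarStore();
        final boolean isLeader;
        JobManager(boolean isLeader) { this.isLeader = isLeader; }
    }

    // Handler that forwards to the leader rather than acting locally.
    static void uploadJar(JobManager receiver, JobManager leader, String jar) {
        JarStore target = receiver.isLeader ? receiver.store : leader.store;
        target.addJar(jar);
    }

    public static void main(String[] args) {
        JobManager leader = new JobManager(true);
        JobManager standby = new JobManager(false);
        uploadJar(standby, leader, "job.jar");        // request hit a standby
        System.out.println(leader.store.listJars());  // [job.jar]
        System.out.println(standby.store.listJars()); // []
    }
}
```

The same forwarding would apply to the "list jars" path, so both handlers observe one consistent jar set regardless of routing.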



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] lirui-apache opened a new pull request #9399: [FLINK-13526][sql-client] Switching to a non existing catalog or data…

2019-08-08 Thread GitBox
lirui-apache opened a new pull request #9399: [FLINK-13526][sql-client] 
Switching to a non existing catalog or data…
URL: https://github.com/apache/flink/pull/9399
 
 
   …base crashes sql-client
   
   
   
   ## What is the purpose of the change
   
   Avoid crashing sql-client when switching to non-existing catalog or database.
   
   
   ## Brief change log
   
 - Catch `CatalogException` thrown by TableEnvironment and throw a 
`SqlExecutionException` instead.
 - Add test cases.
   
   
   ## Verifying this change
   
   New test cases.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? NA
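
The catch-and-rethrow described in the change log can be sketched as follows. This is a minimal stand-alone sketch with simplified stand-in exception and environment classes, not the actual Flink sql-client code:

```java
public class UseDatabaseDemo {
    // Simplified stand-ins for the real Flink exception types.
    static class CatalogException extends RuntimeException {
        CatalogException(String msg) { super(msg); }
    }
    static class SqlExecutionException extends RuntimeException {
        SqlExecutionException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Pretends to be TableEnvironment.useDatabase: only "default" exists.
    static void useDatabase(String name) {
        if (!"default".equals(name)) {
            throw new CatalogException(
                "A database with name [" + name + "] does not exist");
        }
    }

    // The executor catches the CatalogException and rethrows it as a
    // SqlExecutionException, which the CLI reports as an error message
    // instead of crashing the whole client.
    static void switchDatabase(String name) {
        try {
            useDatabase(name);
        } catch (CatalogException e) {
            throw new SqlExecutionException("Failed to switch to database: " + name, e);
        }
    }

    public static void main(String[] args) {
        switchDatabase("default");           // succeeds silently
        try {
            switchDatabase("foo");
        } catch (SqlExecutionException e) {
            System.out.println("Recoverable: " + e.getMessage());
        }
    }
}
```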
   




[jira] [Updated] (FLINK-13526) Switching to a non existing catalog or database crashes sql-client

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13526:
---
Labels: pull-request-available  (was: )

> Switching to a non existing catalog or database crashes sql-client
> --
>
> Key: FLINK-13526
> URL: https://issues.apache.org/jira/browse/FLINK-13526
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Rui Li
>Priority: Major
>  Labels: pull-request-available
>
> sql-client crashes if user tries to switch to a non-existing DB:
> {noformat}
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>   at org.apache.flink.table.client.SqlClient.main(SqlClient.java:206)
> Caused by: org.apache.flink.table.catalog.exceptions.CatalogException: A 
> database with name [foo] does not exist in the catalog: [myhive].
>   at 
> org.apache.flink.table.catalog.CatalogManager.setCurrentDatabase(CatalogManager.java:286)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.useDatabase(TableEnvironmentImpl.java:398)
>   at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$useDatabase$5(LocalExecutor.java:258)
>   at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:216)
>   at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.useDatabase(LocalExecutor.java:256)
>   at 
> org.apache.flink.table.client.cli.CliClient.callUseDatabase(CliClient.java:434)
>   at 
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:282)
>   at java.util.Optional.ifPresent(Optional.java:159)
>   at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:200)
>   at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:123)
>   at org.apache.flink.table.client.SqlClient.start(SqlClient.java:105)
>   at org.apache.flink.table.client.SqlClient.main(SqlClient.java:194)
> {noformat}





[GitHub] [flink] flinkbot edited a comment on issue #8906: [FLINK-13008]fix the findbugs warning in AggregationsFunctio.scala

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #8906: [FLINK-13008]fix the findbugs warning 
in AggregationsFunctio.scala
URL: https://github.com/apache/flink/pull/8906#issuecomment-519474213
 
 
   ## CI report:
   
   * bea9f0b86c27d30c763e8578edc5ef46a7d76e9e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122421031)
   * a31d440b058e9fbe2d0a908c4a4fb63750685340 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122532295)
   * d9fdee4191a92ea21dd1eb5d400ed54a627750b3 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122535437)
   




[GitHub] [flink] flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] Fix some operator names are not set in blink planner

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] 
Fix some operator names are not set in blink planner
URL: https://github.com/apache/flink/pull/9363#issuecomment-518265997
 
 
   ## CI report:
   
   * 1fe6c332279c34546ec3db24a574dfd53500d20b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121971735)
   * fa3e7406f9664a59efcb448748511b656474e74c : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122089384)
   * 28175449cb1d5eb8f318359090ea87e5b2af42d1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122102858)
   * 81593a4dcb3573843c1c02cba0cb17abe1693065 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122290876)
   * 829830f8df1eb814ac44716f230c6aedcfaa5128 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122369808)
   * 0dcba7f669a5702e5730aac0273eb4a58fce2be9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122460702)
   * a5052f45f08b0760cdd4a163a87c0be7d6383c42 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122479843)
   * e8a680f4d832f794eb80d6e569d323babc4da518 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122530453)
   * bf37130d34d762d4ebbfc594ad1a52e63feae71a : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122534827)
   




[jira] [Updated] (FLINK-13526) Switching to a non existing catalog or database crashes sql-client

2019-08-08 Thread Rui Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-13526:
---
Summary: Switching to a non existing catalog or database crashes sql-client 
 (was: Switching to a non existing database crashes sql-client)

> Switching to a non existing catalog or database crashes sql-client
> --
>
> Key: FLINK-13526
> URL: https://issues.apache.org/jira/browse/FLINK-13526
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Rui Li
>Priority: Major
>
> sql-client crashes if user tries to switch to a non-existing DB:
> {noformat}
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>   at org.apache.flink.table.client.SqlClient.main(SqlClient.java:206)
> Caused by: org.apache.flink.table.catalog.exceptions.CatalogException: A 
> database with name [foo] does not exist in the catalog: [myhive].
>   at 
> org.apache.flink.table.catalog.CatalogManager.setCurrentDatabase(CatalogManager.java:286)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.useDatabase(TableEnvironmentImpl.java:398)
>   at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$useDatabase$5(LocalExecutor.java:258)
>   at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:216)
>   at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.useDatabase(LocalExecutor.java:256)
>   at 
> org.apache.flink.table.client.cli.CliClient.callUseDatabase(CliClient.java:434)
>   at 
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:282)
>   at java.util.Optional.ifPresent(Optional.java:159)
>   at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:200)
>   at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:123)
>   at org.apache.flink.table.client.SqlClient.start(SqlClient.java:105)
>   at org.apache.flink.table.client.SqlClient.main(SqlClient.java:194)
> {noformat}





[GitHub] [flink] flinkbot edited a comment on issue #9349: [FLINK-13564] [table-planner-blink] throw exception if constant with YEAR TO MONTH resolution was used for group windows

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9349: [FLINK-13564] [table-planner-blink] 
throw exception if constant with YEAR TO MONTH resolution was used for group 
windows
URL: https://github.com/apache/flink/pull/9349#issuecomment-517912973
 
 
   ## CI report:
   
   * 6b1ab58d9d153f44f5cbdaee4804bcd5e27544db : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121837224)
   * 6bc0849d18697d9a6e4e2899d05ab6e2e160df46 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122534498)
   




[jira] [Created] (FLINK-13659) Add method listDatabases(catalog) and listTables(catalog, database) in TableEnvironment

2019-08-08 Thread Jeff Zhang (JIRA)
Jeff Zhang created FLINK-13659:
--

 Summary: Add method listDatabases(catalog) and listTables(catalog, 
database) in TableEnvironment
 Key: FLINK-13659
 URL: https://issues.apache.org/jira/browse/FLINK-13659
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.9.0
Reporter: Jeff Zhang


It would be nice to add the methods listDatabases(catalog) and 
listTables(catalog, database) to TableEnvironment, so that I can list the 
databases of a specified catalog, and the tables of a specified catalog and 
database. Otherwise I always need to call useCatalog and useDatabase 
beforehand. 
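
A hypothetical sketch of what the proposed methods could look like — the method names come from this issue, while the stand-in catalog registry and the delegation below are assumptions for illustration, not an existing Flink API:

```java
import java.util.List;
import java.util.Map;

public class ListWithCatalogDemo {
    // Stand-in catalog registry: catalog -> database -> tables.
    static final Map<String, Map<String, List<String>>> CATALOGS = Map.of(
        "myhive", Map.of("default", List.of("foo", "bar")),
        "memory", Map.of("db1", List.of("t1")));

    // Proposed: list databases of an explicit catalog,
    // without calling useCatalog() first.
    static List<String> listDatabases(String catalog) {
        return List.copyOf(CATALOGS.get(catalog).keySet());
    }

    // Proposed: list tables of an explicit catalog and database,
    // without calling useCatalog()/useDatabase() first.
    static List<String> listTables(String catalog, String database) {
        return CATALOGS.get(catalog).get(database);
    }

    public static void main(String[] args) {
        System.out.println(listDatabases("memory"));         // [db1]
        System.out.println(listTables("myhive", "default")); // [foo, bar]
    }
}
```

The point of the proposal is exactly this call shape: the catalog and database are parameters of one call, so the environment's "current catalog/database" state never has to change.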





[jira] [Commented] (FLINK-13523) Verify and correct arithmetic function's semantic for Blink planner

2019-08-08 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903530#comment-16903530
 ] 

Jark Wu commented on FLINK-13523:
-

[table-planner-blink] Refactor AVG aggregate function to keep it compatible 
with old planner
 - master: 421f0a559a3038d2f9f56ba2cbab8e6d1832812a
 - 1.9: 
 
[table-planner-blink] Remove non-standard bitwise scalar function and DIV(), 
DIV_INT() function from blink planner
 - master: fbd3225e9d63a10eb6e55fca136d2acfcc777250
 - 1.9: 
 
[table-planner-blink] Refactor DIVIDE function to keep it compatible with old 
planner
 - master: 4d560df50d08f22b68e8b8c9a0e2086f45b5f4b4
 - 1.9: 
 

> Verify and correct arithmetic function's semantic for Blink planner
> ---
>
> Key: FLINK-13523
> URL: https://issues.apache.org/jira/browse/FLINK-13523
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[GitHub] [flink] flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] Fix some operator names are not set in blink planner

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] 
Fix some operator names are not set in blink planner
URL: https://github.com/apache/flink/pull/9363#issuecomment-518265997
 
 
   ## CI report:
   
   * 1fe6c332279c34546ec3db24a574dfd53500d20b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121971735)
   * fa3e7406f9664a59efcb448748511b656474e74c : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122089384)
   * 28175449cb1d5eb8f318359090ea87e5b2af42d1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122102858)
   * 81593a4dcb3573843c1c02cba0cb17abe1693065 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122290876)
   * 829830f8df1eb814ac44716f230c6aedcfaa5128 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122369808)
   * 0dcba7f669a5702e5730aac0273eb4a58fce2be9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122460702)
   * a5052f45f08b0760cdd4a163a87c0be7d6383c42 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122479843)
   * e8a680f4d832f794eb80d6e569d323babc4da518 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122530453)
   




[GitHub] [flink] wuchong commented on issue #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-08 Thread GitBox
wuchong commented on issue #9331: [FLINK-13523][table-planner-blink] Verify and 
correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331#issuecomment-519762758
 
 
   Merged with
   
   4d560df50d08f22b68e8b8c9a0e2086f45b5f4b4
   fbd3225e9d63a10eb6e55fca136d2acfcc777250
   421f0a559a3038d2f9f56ba2cbab8e6d1832812a
 




[GitHub] [flink] wuchong closed pull request #9331: [FLINK-13523][table-planner-blink] Verify and correct arithmetic function's semantic for Blink planner

2019-08-08 Thread GitBox
wuchong closed pull request #9331: [FLINK-13523][table-planner-blink] Verify 
and correct arithmetic function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9331
 
 
   




[GitHub] [flink] godfreyhe commented on issue #9349: [FLINK-13564] [table-planner-blink] throw exception if constant with YEAR TO MONTH resolution was used for group windows

2019-08-08 Thread GitBox
godfreyhe commented on issue #9349: [FLINK-13564] [table-planner-blink] throw 
exception if constant with YEAR TO MONTH resolution was used for group windows
URL: https://github.com/apache/flink/pull/9349#issuecomment-519762649
 
 
   thanks @wuchong, i have updated




[jira] [Commented] (FLINK-13653) ResultStore should avoid using RowTypeInfo when creating a result

2019-08-08 Thread Rui Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903527#comment-16903527
 ] 

Rui Li commented on FLINK-13653:


I thought we should just use the new type system for {{CollectBatchTableSink}} 
and {{CollectStreamTableSink}}. However, according to the JavaDoc of 
{{TableSink.getOutputType()}} and {{TableSource.getReturnType()}}, user should 
"_use either the old or the new type system consistently to avoid unintended 
behavior_". And if the table sinks created in SQL client need to support table 
sources that may use new or old type systems, I'm not sure whether we have to 
create different sinks for new and old type systems respectively?

> ResultStore should avoid using RowTypeInfo when creating a result
> -
>
> Key: FLINK-13653
> URL: https://issues.apache.org/jira/browse/FLINK-13653
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Rui Li
>Priority: Major
>
> Creating a RowTypeInfo from a TableSchema can lose type parameters. As a 
> result, querying a Hive table with decimal column from SQL CLI will hit the 
> following exception:
> {noformat}
> Caused by: org.apache.flink.table.api.ValidationException: Field types of 
> query result and registered TableSink [default_catalog, default_database, 
> default: select * from foo] do not match.
> Query result schema: [x: BigDecimal]
> TableSink schema:[x: BigDecimal]
> at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateSink(TableSinkUtils.scala:69)
> at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:179)
> at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:178)
> at scala.Option.map(Option.scala:146)
> at 
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:178)
> at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:146)
> at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:146)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:146)
> at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:439)
> at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:327)
> at 
> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428)
> at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryInternal$10(LocalExecutor.java:477)
> at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:216)
> at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:475)
> ... 8 more
> {noformat}
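
The root cause stated above — a RowTypeInfo-style conversion dropping a decimal's precision and scale — can be mimicked with stand-in classes (not Flink's actual types). It also explains why the error message shows two seemingly identical "BigDecimal" schemas that nevertheless fail validation:

```java
public class TypeErasureDemo {
    // Precise logical type, as the planner sees it.
    static final class DecimalType {
        final int precision;
        final int scale;
        DecimalType(int precision, int scale) {
            this.precision = precision;
            this.scale = scale;
        }
        @Override public boolean equals(Object o) {
            return o instanceof DecimalType
                && ((DecimalType) o).precision == precision
                && ((DecimalType) o).scale == scale;
        }
        @Override public int hashCode() { return 31 * precision + scale; }
    }

    // Coarse runtime type info, as a RowTypeInfo-style conversion yields:
    // the type parameters (precision, scale) are simply dropped.
    static String toRuntimeTypeInfo(DecimalType t) {
        return "BigDecimal";
    }

    public static void main(String[] args) {
        DecimalType query = new DecimalType(10, 2);  // query result type
        DecimalType sink = new DecimalType(38, 18);  // registered sink type
        System.out.println(query.equals(sink));      // false: logical types differ
        // ...yet both render as the same runtime type, so the exception
        // message prints two identical-looking "[x: BigDecimal]" schemas.
        System.out.println(
            toRuntimeTypeInfo(query).equals(toRuntimeTypeInfo(sink))); // true
    }
}
```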





[GitHub] [flink] godfreyhe commented on a change in pull request #9349: [FLINK-13564] [table-planner-blink] throw exception if constant with YEAR TO MONTH resolution was used for group windows

2019-08-08 Thread GitBox
godfreyhe commented on a change in pull request #9349: [FLINK-13564] 
[table-planner-blink] throw exception if constant with YEAR TO MONTH resolution 
was used for group windows
URL: https://github.com/apache/flink/pull/9349#discussion_r312316102
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/stream/sql/agg/WindowAggregateTest.scala
 ##
 @@ -85,6 +85,18 @@ class WindowAggregateTest extends TableTestBase {
 util.verifyPlanNotExpected(sql, "TUMBLE(rowtime")
   }
 
+  @Test
+  def testWindowWrongWindowParameter(): Unit = {
+expectedException.expect(classOf[TableException])
+expectedException.expectMessage(
+  "Only constant window intervals with millisecond resolution are supported")
+
+val sqlQuery =
  "SELECT COUNT(*) FROM MyTable GROUP BY TUMBLE(proctime, INTERVAL '2-10' YEAR TO MONTH)"
 
 Review comment:
   > Add one more test `INTERVAL '35' DAYS` which should work?
   
   yes, it's valid
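
The distinction being tested — day-time intervals such as `INTERVAL '35' DAY` have a fixed millisecond resolution while `YEAR TO MONTH` intervals do not — can be illustrated with plain `java.time` (a sketch, not Flink planner code):

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class IntervalDemo {
    // A day-time interval (e.g. INTERVAL '35' DAY) maps to a fixed number
    // of milliseconds, so it can size a tumbling window.
    static long dayIntervalMillis(long days) {
        return days * 24 * 60 * 60 * 1000;
    }

    // A year-month interval does not: adding "2 months" covers a different
    // number of days depending on the start date, so there is no single
    // millisecond window size to derive.
    static long daysCoveredByTwoMonths(LocalDate start) {
        return ChronoUnit.DAYS.between(start, start.plusMonths(2));
    }

    public static void main(String[] args) {
        System.out.println(dayIntervalMillis(35));                            // 3024000000
        System.out.println(daysCoveredByTwoMonths(LocalDate.of(2019, 1, 1))); // 59
        System.out.println(daysCoveredByTwoMonths(LocalDate.of(2019, 3, 1))); // 61
    }
}
```

Since the "same" year-month interval yields different physical durations, the planner rejects it for group windows, which is exactly what the new test asserts.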




[jira] [Commented] (FLINK-13405) Translate "Basic API Concepts" page into Chinese

2019-08-08 Thread WangHengWei (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903515#comment-16903515
 ] 

WangHengWei commented on FLINK-13405:
-

[~jark] [~xccui], thank you very much. I think "数据汇" is a good translation, but 
since it's not a general term I chose not to translate it.

BTW, my qq email cannot receive any messages from Jira or 
d...@flink.apache.org, do you know what the problem is and how to fix it?

> Translate "Basic API Concepts" page into Chinese
> 
>
> Key: FLINK-13405
> URL: https://issues.apache.org/jira/browse/FLINK-13405
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: WangHengWei
>Assignee: WangHengWei
>Priority: Major
>  Labels: documentation, pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The page url is 
> [https://github.com/apache/flink/blob/master/docs/dev/api_concepts.zh.md]
> The markdown file is located in flink/docs/dev/api_concepts.zh.md





[jira] [Updated] (FLINK-13655) Caused by: java.io.IOException: Thread 'SortMerger spilling thread' terminated due to an exception

2019-08-08 Thread LiJun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiJun updated FLINK-13655:
--
Labels: KryoSerializer  (was: )

> Caused by: java.io.IOException: Thread 'SortMerger spilling thread' 
> terminated due to an exception
> --
>
> Key: FLINK-13655
> URL: https://issues.apache.org/jira/browse/FLINK-13655
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.6.3
> Environment: pom.xml snippet (the XML tag names were lost in extraction; 
> only the values remain): UTF-8, 1.5.6, 1.7.7, 1.2.17, 2.11, 2.11.12
> parameters.setBoolean("recursive.file.enumeration", true)
>Reporter: LiJun
>Priority: Minor
>  Labels: KryoSerializer
>
> Symptom:
> The Flink program can successfully read and process a single ORC file from 
> HDFS, whether the given read path is the file's parent folder or the specific 
> file path. However, when I put the files together in the same folder and the 
> program reads that folder, the following error always occurs.
> val configHadoop = new org.apache.hadoop.conf.Configuration()
> configHadoop.set("HADOOP_USER_NAME", "user")
> configHadoop.set("fs.defaultFS", "xx.xxx.xx.xx")
> val env = ExecutionEnvironment.getExecutionEnvironment
> val bTableEnv = TableEnvironment.getTableEnvironment(env)
> val orcTableSource = OrcTableSource.builder()
>   // path to ORC file(s). NOTE: By default, directories are recursively scanned.
>   .path(inPath)
>   // schema of ORC files
>   .forOrcSchema("struct")
>   // Hadoop configuration
>   .withConfiguration(configHadoop)
>   // build OrcTableSource
>   .build()
>  
> The following is the stack info:
> Root exception
>  Timestamp: 2019-08-08, 20:15:05
>  java.lang.Exception: The data preparation for task 'CHAIN GroupReduce 
> (GroupReduce at 
> com.jd.risk.flink.analysis.framework.core.EventOfflineServiceFrameWork.startService(EventOfflineServiceFrameWork.scala:41))
>  -> Map (Key Extractor)' , caused an error: Error obtaining the sorted input: 
> Thread 'SortMerger spilling thread' terminated due to an exception: Index: 
> 97, Size: 17
>   at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:479)
>   at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:368)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:712)
>   at java.lang.Thread.run(Thread.java:748)
>  Caused by: java.lang.RuntimeException: Error obtaining the sorted input: 
> Thread 'SortMerger spilling thread' terminated due to an exception: Index: 
> 97, Size: 17
>   at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger.getIterator(UnilateralSortMerger.java:650)
>   at 
> org.apache.flink.runtime.operators.BatchTask.getInput(BatchTask.java:1108)
>   at 
> org.apache.flink.runtime.operators.GroupReduceDriver.prepare(GroupReduceDriver.java:99)
>   at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:473)
>   ... 3 more
>  Caused by: java.io.IOException: Thread 'SortMerger spilling thread' 
> terminated due to an exception: Index: 97, Size: 17
>   at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:831)
>  Caused by: java.lang.IndexOutOfBoundsException: Index: 97, Size: 17
>   at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>   at java.util.ArrayList.get(ArrayList.java:429)
>   at 
> com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:42)
>   at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:805)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:759)
>   at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:315)
>   at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:335)
>   at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.copy(KryoSerializer.java:350)
>   at 
> org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.copy(TupleSerializerBase.java:99)
>   at 
> 

[jira] [Updated] (FLINK-13655) Caused by: java.io.IOException: Thread 'SortMerger spilling thread' terminated due to an exception

2019-08-08 Thread LiJun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiJun updated FLINK-13655:
--
Description: 
Symptom:

The Flink program can successfully read and process a single ORC file from 
HDFS, whether the given read path is the file's parent folder or the specific 
file path. However, when I put several ORC files together in the same folder 
and the program reads that folder, the following error always occurs.

val configHadoop = new org.apache.hadoop.conf.Configuration()
configHadoop.set("HADOOP_USER_NAME", "user")
configHadoop.set("fs.defaultFS", "xx.xxx.xx.xx")
val env = ExecutionEnvironment.getExecutionEnvironment
val bTableEnv = TableEnvironment.getTableEnvironment(env)

val orcTableSource = OrcTableSource.builder()
  // path to ORC file(s). NOTE: By default, directories are recursively scanned.
  .path(inPath)
  // schema of ORC files
  .forOrcSchema("struct")
  // Hadoop configuration
  .withConfiguration(configHadoop)
  // build OrcTableSource
  .build()

 

The following is the stack trace:

Root exception
 Timestamp: 2019-08-08, 20:15:05
 java.lang.Exception: The data preparation for task 'CHAIN GroupReduce 
(GroupReduce at 
com.jd.risk.flink.analysis.framework.core.EventOfflineServiceFrameWork.startService(EventOfflineServiceFrameWork.scala:41))
 -> Map (Key Extractor)' , caused an error: Error obtaining the sorted input: 
Thread 'SortMerger spilling thread' terminated due to an exception: Index: 97, 
Size: 17
  at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:479)
  at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:368)
  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:712)
  at java.lang.Thread.run(Thread.java:748)
 Caused by: java.lang.RuntimeException: Error obtaining the sorted input: 
Thread 'SortMerger spilling thread' terminated due to an exception: Index: 97, 
Size: 17
  at 
org.apache.flink.runtime.operators.sort.UnilateralSortMerger.getIterator(UnilateralSortMerger.java:650)
  at org.apache.flink.runtime.operators.BatchTask.getInput(BatchTask.java:1108)
  at 
org.apache.flink.runtime.operators.GroupReduceDriver.prepare(GroupReduceDriver.java:99)
  at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:473)
  ... 3 more
 Caused by: java.io.IOException: Thread 'SortMerger spilling thread' terminated 
due to an exception: Index: 97, Size: 17
  at 
org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:831)
 Caused by: java.lang.IndexOutOfBoundsException: Index: 97, Size: 17
  at java.util.ArrayList.rangeCheck(ArrayList.java:653)
  at java.util.ArrayList.get(ArrayList.java:429)
  at 
com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:42)
  at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:805)
  at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:759)
  at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:315)
  at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:335)
  at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.copy(KryoSerializer.java:350)
  at 
org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.copy(TupleSerializerBase.java:99)
  at 
org.apache.flink.api.scala.typeutils.TraversableSerializer.copy(TraversableSerializer.scala:84)
  at 
org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.copy(TupleSerializerBase.java:99)
  at 
org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.copy(TupleSerializerBase.java:99)
  at 
org.apache.flink.runtime.operators.sort.NormalizedKeySorter.writeToOutput(NormalizedKeySorter.java:519)
  at 
org.apache.flink.runtime.operators.sort.UnilateralSortMerger$SpillingThread.go(UnilateralSortMerger.java:1375)
  at 
org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:827)
 CHAIN GroupReduce (GroupReduce at 
com.jd.risk.flink.analysis.framework.core.EventOfflineServiceFrameWork.startService(EventOfflineServiceFrameWork.scala:41))
 -> Map (Key Extractor) (305/480)
 Timestamp: 2019-08-08, 20:15:05 Location: 
LF-BCC-POD0-172-21-60-234.hadoop.jd.local:15837
 java.lang.Exception: The data preparation for task 'CHAIN GroupReduce 
(GroupReduce at 
com.jd.risk.flink.analysis.framework.core.EventOfflineServiceFrameWork.startService(EventOfflineServiceFrameWork.scala:41))
 -> Map (Key Extractor)' , caused an error: Error obtaining the sorted input: 
Thread 'SortMerger 


[GitHub] [flink] flinkbot edited a comment on issue #8906: [FLINK-13008]fix the findbugs warning in AggregationsFunctio.scala

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #8906: [FLINK-13008]fix the findbugs warning 
in AggregationsFunctio.scala
URL: https://github.com/apache/flink/pull/8906#issuecomment-519474213
 
 
   ## CI report:
   
   * bea9f0b86c27d30c763e8578edc5ef46a7d76e9e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122421031)
   * a31d440b058e9fbe2d0a908c4a4fb63750685340 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122532295)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wangpeibin713 removed a comment on issue #8906: [FLINK-13008]fix the findbugs warning in AggregationsFunctio.scala

2019-08-08 Thread GitBox
wangpeibin713 removed a comment on issue #8906: [FLINK-13008]fix the findbugs 
warning in AggregationsFunctio.scala
URL: https://github.com/apache/flink/pull/8906#issuecomment-519753292
 
 
   @flinkbot restart travis-ci


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wangpeibin713 commented on issue #8906: [FLINK-13008]fix the findbugs warning in AggregationsFunctio.scala

2019-08-08 Thread GitBox
wangpeibin713 commented on issue #8906: [FLINK-13008]fix the findbugs warning 
in AggregationsFunctio.scala
URL: https://github.com/apache/flink/pull/8906#issuecomment-519753292
 
 
   @flinkbot restart travis-ci


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13510) Show fail attemp for subtask in timelime

2019-08-08 Thread lining (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903507#comment-16903507
 ] 

lining commented on FLINK-13510:


[~till.rohrmann], currently a failover cancels all tasks. After the failover, the 
cancelled attempts of a task no longer appear in the schedule timeline. If a failover 
takes a long time, it is difficult to find the reason. If all attempts of a task were 
shown in the timeline, we could analyze which attempts took a long time to cancel.

> Show fail attemp for subtask in timelime
> 
>
> Key: FLINK-13510
> URL: https://issues.apache.org/jira/browse/FLINK-13510
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: lining
>Priority: Major
>
> Currently, a user can only see a subtask's current attempt in the timeline. After a 
> job failover, the timelines of already-cancelled tasks can no longer be seen.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9394: [FLINK-13547][table-planner-blink] Verify and correct string function's semantic for Blink planner

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9394: [FLINK-13547][table-planner-blink] 
Verify and correct string function's semantic for Blink planner
URL: https://github.com/apache/flink/pull/9394#issuecomment-519467299
 
 
   ## CI report:
   
   * 861d120d83964d267c95d024993b82186b1e6e7a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122417819)
   * a423b9b46d10f61091bd70aa0ea1bd2367fd25a0 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122531452)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wangpeibin713 opened a new pull request #8906: [FLINK-13008]fix the findbugs warning in AggregationsFunctio.scala

2019-08-08 Thread GitBox
wangpeibin713 opened a new pull request #8906: [FLINK-13008]fix the findbugs 
warning in AggregationsFunctio.scala
URL: https://github.com/apache/flink/pull/8906
 
 
   
   
   ## What is the purpose of the change
   
   - The goal is to fix the findbugs warning in AggregationsFunctio.scala
   https://issues.apache.org/jira/browse/FLINK-13008
   
   ## Brief change log
   
 - *throw the unsupportedOperationException rather than return it*
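
   The change described in the log above can be sketched as follows; this is a 
   minimal illustration of the throw-vs-return pattern with hypothetical class 
   and method names, not the actual code from AggregationsFunctio.scala:

```java
// Minimal sketch of the fix pattern (hypothetical names, not Flink's code).
// Returning an exception object compiles, but the caller never observes a
// failure; throwing it actually signals the unsupported operation.
public final class AggregationSketch {

    // Before: the exception is created and returned as a value --
    // the pattern FindBugs flags.
    static Object mergeBefore() {
        return new UnsupportedOperationException("merge is not supported");
    }

    // After: the exception is thrown, so callers fail fast.
    static Object mergeAfter() {
        throw new UnsupportedOperationException("merge is not supported");
    }
}
```

   The behavioral difference is the whole point of the warning: the "before" 
   variant silently hands back an exception object that is usually discarded.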

   ## Verifying this change
   
   * This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wangpeibin713 closed pull request #8906: [FLINK-13008]fix the findbugs warning in AggregationsFunctio.scala

2019-08-08 Thread GitBox
wangpeibin713 closed pull request #8906: [FLINK-13008]fix the findbugs warning 
in AggregationsFunctio.scala
URL: https://github.com/apache/flink/pull/8906
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9347: [FLINK-13563] [table-planner-blink] TumblingGroupWindow should implement toString method to explain more info

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9347: [FLINK-13563] [table-planner-blink] 
TumblingGroupWindow should implement toString method to explain more info
URL: https://github.com/apache/flink/pull/9347#issuecomment-517895998
 
 
   ## CI report:
   
   * 212cd6c24fdb696aa13bed1cfff875f9bfc01d09 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121831822)
   * fa92c963b1f41d75b1399d41708d88cd47a632e9 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122530462)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9364: [FLINK-13593][checkpointing] Prevent failing the wrong execution attempt in CheckpointFailureManager

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9364: [FLINK-13593][checkpointing] Prevent 
failing the wrong execution attempt in CheckpointFailureManager
URL: https://github.com/apache/flink/pull/9364#issuecomment-518519629
 
 
   ## CI report:
   
   * db9e30b162a1f8f7958a3e1a5ff61214f205de1c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122057050)
   * df05d6d8946ec764655082180d893918bfc2a290 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122180572)
   * d35d5392868ee76b9869a2f62fb23ddc443498fa : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122472526)
   * efbf1541fd5e495af8316cd19af24d7e93be78ac : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122525773)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13340) Add more Kafka topic option of flink-connector-kafka

2019-08-08 Thread DuBin (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903496#comment-16903496
 ] 

DuBin commented on FLINK-13340:
---

hi [~twalthr], can you please help review this PR?

> Add more Kafka topic option of flink-connector-kafka
> 
>
> Key: FLINK-13340
> URL: https://issues.apache.org/jira/browse/FLINK-13340
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.8.1
>Reporter: DuBin
>Assignee: DuBin
>Priority: Major
>  Labels: features, pull-request-available
>   Original Estimate: 48h
>  Time Spent: 10m
>  Remaining Estimate: 47h 50m
>
> Currently, only the 'topic' option is implemented in the Kafka connector 
> descriptor, so we can only use it like:
> {code:java}
> val env = StreamExecutionEnvironment.getExecutionEnvironment
> val tableEnv = StreamTableEnvironment.create(env)
> tableEnv
>   .connect(
> new Kafka()
>   .version("0.11")
>   .topic("test-flink-1")
>   .startFromEarliest()
>   .property("zookeeper.connect", "localhost:2181")
>   .property("bootstrap.servers", "localhost:9092"))
>   .withFormat(
> new Json()
>   .deriveSchema()
>   )
>   .withSchema(
> new Schema()
>   .field("name", Types.STRING)
>   .field("age", Types.STRING)
>   ){code}
> but we cannot consume from multiple topics or subscribe to a topic regex pattern. 
> Here are my thoughts:
> {code:java}
>   .topic("test-flink-1")
>   // .topics("test-flink-1,test-flink-2") or topics(List<String> topics)
>   // .subscriptionPattern("test-flink-.*") or subscriptionPattern(Pattern pattern)
> {code}
> I have already implemented the code in my local environment with the help of 
> FlinkKafkaConsumer, and it works.
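
The proposal quoted above can be sketched as a small, self-contained model of the 
extended descriptor. Class and method names here are hypothetical (the real 
connector would delegate to FlinkKafkaConsumer's existing multi-topic and 
pattern support), so treat this as an API sketch, not the actual Flink code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical model of the proposed descriptor extension: a subscription is
// either a fixed topic list or a regex pattern, and the two are mutually
// exclusive, matching the either/or choice in the proposal above.
public final class KafkaDescriptorSketch {
    private List<String> topics;   // set for fixed-topic subscriptions
    private Pattern pattern;       // set for pattern subscriptions

    // A single topic stays the one-element case of the topic list.
    public KafkaDescriptorSketch topic(String name) {
        return topics(Arrays.asList(name));
    }

    public KafkaDescriptorSketch topics(List<String> names) {
        this.topics = names;
        this.pattern = null;
        return this;
    }

    // e.g. subscriptionPattern("test-flink-.*")
    public KafkaDescriptorSketch subscriptionPattern(String regex) {
        this.pattern = Pattern.compile(regex); // fail fast on a bad regex
        this.topics = null;
        return this;
    }

    public List<String> getTopics() { return topics; }
    public Pattern getPattern() { return pattern; }
}
```

Compiling the pattern eagerly in the builder surfaces a malformed regex at 
table-definition time rather than at job start, which seems preferable for a 
descriptor-style API.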



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] Fix some operator names are not set in blink planner

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] 
Fix some operator names are not set in blink planner
URL: https://github.com/apache/flink/pull/9363#issuecomment-518265997
 
 
   ## CI report:
   
   * 1fe6c332279c34546ec3db24a574dfd53500d20b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121971735)
   * fa3e7406f9664a59efcb448748511b656474e74c : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122089384)
   * 28175449cb1d5eb8f318359090ea87e5b2af42d1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122102858)
   * 81593a4dcb3573843c1c02cba0cb17abe1693065 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122290876)
   * 829830f8df1eb814ac44716f230c6aedcfaa5128 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122369808)
   * 0dcba7f669a5702e5730aac0273eb4a58fce2be9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122460702)
   * a5052f45f08b0760cdd4a163a87c0be7d6383c42 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122479843)
   * e8a680f4d832f794eb80d6e569d323babc4da518 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122530453)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] godfreyhe commented on a change in pull request #9346: [FLINK-13562] [table-planner-blink] fix incorrect input type for local stream group aggregate in FlinkRelMdColumnInterval

2019-08-08 Thread GitBox
godfreyhe commented on a change in pull request #9346: [FLINK-13562] 
[table-planner-blink] fix incorrect input type for local stream group aggregate 
in FlinkRelMdColumnInterval
URL: https://github.com/apache/flink/pull/9346#discussion_r312306197
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/stream/sql/agg/DistinctAggregateTest.scala
 ##
 @@ -117,6 +117,11 @@ class DistinctAggregateTest(
 util.verifyPlan(sqlQuery)
   }
 
+  @Test
+  def testTwoDistinctAggregateWithNonDistinctAgg(): Unit = {
+util.verifyPlan("SELECT c, SUM(DISTINCT a), SUM(a), COUNT(DISTINCT b) FROM 
MyTable GROUP BY c")
 
 Review comment:
   yes, to verify the bug


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9346: [FLINK-13562] [table-planner-blink] fix incorrect input type for local stream group aggregate in FlinkRelMdColumnInterval

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9346: [FLINK-13562] [table-planner-blink] 
fix incorrect input type for local stream group aggregate in 
FlinkRelMdColumnInterval
URL: https://github.com/apache/flink/pull/9346#issuecomment-517895087
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 255f2f78ad478b7e3cfb13a17af43872d6ad658f (Fri Aug 09 
02:01:15 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13562).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] godfreyhe commented on issue #9347: [FLINK-13563] [table-planner-blink] TumblingGroupWindow should implement toString method to explain more info

2019-08-08 Thread GitBox
godfreyhe commented on issue #9347: [FLINK-13563] [table-planner-blink] 
TumblingGroupWindow should implement toString method to explain more info
URL: https://github.com/apache/flink/pull/9347#issuecomment-519748929
 
 
   thanks for reminding @wuchong. rebased


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] Fix some operator names are not set in blink planner

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] 
Fix some operator names are not set in blink planner
URL: https://github.com/apache/flink/pull/9363#issuecomment-518263446
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e8a680f4d832f794eb80d6e569d323babc4da518 (Fri Aug 09 
01:57:13 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9389: [FLINK-13645][table-planner] Error in code-gen when using blink planner in scala shell

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9389: [FLINK-13645][table-planner] Error in 
code-gen when using blink planner in scala shell
URL: https://github.com/apache/flink/pull/9389#issuecomment-519351757
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1e89a949f197d2feb5c56f737edfe67c250b346b (Fri Aug 09 
01:54:08 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13645).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] zjffdu commented on issue #9389: [FLINK-13645][table-planner] Error in code-gen when using blink planner in scala shell

2019-08-08 Thread GitBox
zjffdu commented on issue #9389: [FLINK-13645][table-planner] Error in code-gen 
when using blink planner in scala shell
URL: https://github.com/apache/flink/pull/9389#issuecomment-519747709
 
 
   @wuchong Could you help take a look at it ? Thanks




[GitHub] [flink] flinkbot edited a comment on issue #9389: [FLINK-13645][table-planner] Error in code-gen when using blink planner in scala shell

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9389: [FLINK-13645][table-planner] Error in 
code-gen when using blink planner in scala shell
URL: https://github.com/apache/flink/pull/9389#issuecomment-519351757
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1e89a949f197d2feb5c56f737edfe67c250b346b (Fri Aug 09 
01:51:05 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13645).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] danny0405 commented on a change in pull request #9389: [FLINK-13645][table-planner] Error in code-gen when using blink planner in scala shell

2019-08-08 Thread GitBox
danny0405 commented on a change in pull request #9389: 
[FLINK-13645][table-planner] Error in code-gen when using blink planner in 
scala shell
URL: https://github.com/apache/flink/pull/9389#discussion_r312304638
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/CodeGeneratorContext.scala
 ##
 @@ -603,7 +603,7 @@ class CodeGeneratorContext(val tableConfig: TableConfig) {
 val byteArray = InstantiationUtil.serializeObject(obj)
 val objCopy: AnyRef = InstantiationUtil.deserializeObject(
   byteArray,
-  obj.getClass.getClassLoader)
+  Thread.currentThread().getContextClassLoader)
 references += objCopy
 
 Review comment:
   I'm fine with this, but there seems to be a compile error.
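The one-line change under review swaps the classloader used during deserialization. As a rough, self-contained Java sketch (this is not Flink's `InstantiationUtil`; the class and method names here are illustrative), the difference is which loader resolves the serialized class names:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamClass;

// Standalone sketch of why the deserializing classloader matters. In a REPL
// such as the Scala shell, user-defined classes live in the shell's own
// classloader; resolving them via obj.getClass().getClassLoader() can fail,
// while the thread context classloader (as in the diff) can see them.
public class ClassLoaderDemo {

    // Serialize an object to bytes with plain Java serialization.
    static byte[] serialize(Object obj) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Deserialize bytes, resolving class names through the given classloader.
    static Object deserialize(byte[] bytes, ClassLoader cl) throws Exception {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes)) {
            @Override
            protected Class<?> resolveClass(ObjectStreamClass desc) throws ClassNotFoundException {
                return Class.forName(desc.getName(), false, cl);
            }
        }) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = serialize("hello");
        // Deserialize through the context classloader, as the patched code does.
        Object copy = deserialize(bytes, Thread.currentThread().getContextClassLoader());
        System.out.println(copy); // prints "hello"
    }
}
```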




[jira] [Updated] (FLINK-13655) Caused by: java.io.IOException: Thread 'SortMerger spilling thread' terminated due to an exception

2019-08-08 Thread LiJun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiJun updated FLINK-13655:
--
Description: 
Symptom:

There are two files (about 600 MB) in ORC format in HDFS. My Flink program can 
read and process a single file successfully, whether the read path is the file's 
parent folder or the file's full path. However, when I put them together in the 
same folder and the program reads that folder, the following error always occurs.

val configHadoop = new org.apache.hadoop.conf.Configuration()
configHadoop.set("HADOOP_USER_NAME", "user")
configHadoop.set("fs.defaultFS", "xx.xxx.xx.xx")
val env = ExecutionEnvironment.getExecutionEnvironment
val bTableEnv = TableEnvironment.getTableEnvironment(env)

val orcTableSource = OrcTableSource.builder()
  // path to ORC file(s). NOTE: By default, directories are recursively scanned.
  .path(inPath)
  // schema of ORC files
  .forOrcSchema("struct")
  // Hadoop configuration
  .withConfiguration(configHadoop)
  // build OrcTableSource
  .build()
 

The following is the stack trace:

Root exception
 Timestamp: 2019-08-08, 20:15:05
 java.lang.Exception: The data preparation for task 'CHAIN GroupReduce 
(GroupReduce at 
com.jd.risk.flink.analysis.framework.core.EventOfflineServiceFrameWork.startService(EventOfflineServiceFrameWork.scala:41))
 -> Map (Key Extractor)' , caused an error: Error obtaining the sorted input: 
Thread 'SortMerger spilling thread' terminated due to an exception: Index: 97, 
Size: 17
  at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:479)
  at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:368)
  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:712)
  at java.lang.Thread.run(Thread.java:748)
 Caused by: java.lang.RuntimeException: Error obtaining the sorted input: 
Thread 'SortMerger spilling thread' terminated due to an exception: Index: 97, 
Size: 17
  at 
org.apache.flink.runtime.operators.sort.UnilateralSortMerger.getIterator(UnilateralSortMerger.java:650)
  at org.apache.flink.runtime.operators.BatchTask.getInput(BatchTask.java:1108)
  at 
org.apache.flink.runtime.operators.GroupReduceDriver.prepare(GroupReduceDriver.java:99)
  at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:473)
  ... 3 more
 Caused by: java.io.IOException: Thread 'SortMerger spilling thread' terminated 
due to an exception: Index: 97, Size: 17
  at 
org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:831)
 Caused by: java.lang.IndexOutOfBoundsException: Index: 97, Size: 17
  at java.util.ArrayList.rangeCheck(ArrayList.java:653)
  at java.util.ArrayList.get(ArrayList.java:429)
  at 
com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:42)
  at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:805)
  at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:759)
  at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:315)
  at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:335)
  at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.copy(KryoSerializer.java:350)
  at 
org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.copy(TupleSerializerBase.java:99)
  at 
org.apache.flink.api.scala.typeutils.TraversableSerializer.copy(TraversableSerializer.scala:84)
  at 
org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.copy(TupleSerializerBase.java:99)
  at 
org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.copy(TupleSerializerBase.java:99)
  at 
org.apache.flink.runtime.operators.sort.NormalizedKeySorter.writeToOutput(NormalizedKeySorter.java:519)
  at 
org.apache.flink.runtime.operators.sort.UnilateralSortMerger$SpillingThread.go(UnilateralSortMerger.java:1375)
  at 
org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:827)
 CHAIN GroupReduce (GroupReduce at 
com.jd.risk.flink.analysis.framework.core.EventOfflineServiceFrameWork.startService(EventOfflineServiceFrameWork.scala:41))
 -> Map (Key Extractor) (305/480)
 Timestamp: 2019-08-08, 20:15:05 Location: 
LF-BCC-POD0-172-21-60-234.hadoop.jd.local:15837
 java.lang.Exception: The data preparation for task 'CHAIN GroupReduce 
(GroupReduce at 
com.jd.risk.flink.analysis.framework.core.EventOfflineServiceFrameWork.startService(EventOfflineServiceFrameWork.scala:41))
 -> Map (Key Extractor)' , caused an error: 

[jira] [Commented] (FLINK-13603) Flink Table ApI not working with nested Json schema starting From 1.6.x

2019-08-08 Thread Rong Rong (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903441#comment-16903441
 ] 

Rong Rong commented on FLINK-13603:
---

To answer your questions [~jacky.du0...@gmail.com]: any critical bug fixes 
should be backported to older release branches (at least the last two, if I am 
not mistaken).

> Flink Table ApI not working with nested Json schema starting From 1.6.x
> ---
>
> Key: FLINK-13603
> URL: https://issues.apache.org/jira/browse/FLINK-13603
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.6.4, 1.7.2, 1.8.1
>Reporter: Yu Du
>Priority: Major
>  Labels: bug
> Attachments: FlinkTableBugCode, jsonSchema.json, jsonSchema2.json, 
> schema_mapping_error_screenshot .png
>
>
> Starting from Flink 1.6.2, some schemas do not work when they contain a nested 
> object. The issue looks like: Caused by: 
> org.apache.calcite.sql.validate.SqlValidatorException: Column 
> 'data.interaction.action_type' not found in table 
> even though we can see that column in the Table Schema.
> The same schema and query work on 1.5.2, but not on 1.6.x, 1.7.x, and 1.8.x.
>  
> I tried to dive into the bug and found that the root cause is that the Calcite 
> library doesn't map the column name to the correct Row type. 
> I checked that Flink 1.6 uses the same version of Calcite as Flink 1.5, so I am 
> not sure whether Calcite is the root cause of this issue.
> Attached are the code sample and the two problematic JSON schemas; both 
> examples give a column-not-found exception.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13603) Flink Table ApI not working with nested Json schema starting From 1.6.x

2019-08-08 Thread Rong Rong (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903440#comment-16903440
 ] 

Rong Rong commented on FLINK-13603:
---

Based on what I saw, FLINK-12848 is labeled as an improvement, so I am not sure 
whether it can make it into any branch older than 1.9. My suggestion is to fix 
this here and let FLINK-12848 continue as an improvement. Does anyone have any 
suggestions on this? CC [~twalthr], the original owner of FLINK-9444.

> Flink Table ApI not working with nested Json schema starting From 1.6.x
> ---
>
> Key: FLINK-13603
> URL: https://issues.apache.org/jira/browse/FLINK-13603
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.6.4, 1.7.2, 1.8.1
>Reporter: Yu Du
>Priority: Major
>  Labels: bug
> Attachments: FlinkTableBugCode, jsonSchema.json, jsonSchema2.json, 
> schema_mapping_error_screenshot .png
>
>
> Starting from Flink 1.6.2, some schemas do not work when they contain a nested 
> object. The issue looks like: Caused by: 
> org.apache.calcite.sql.validate.SqlValidatorException: Column 
> 'data.interaction.action_type' not found in table 
> even though we can see that column in the Table Schema.
> The same schema and query work on 1.5.2, but not on 1.6.x, 1.7.x, and 1.8.x.
>  
> I tried to dive into the bug and found that the root cause is that the Calcite 
> library doesn't map the column name to the correct Row type. 
> I checked that Flink 1.6 uses the same version of Calcite as Flink 1.5, so I am 
> not sure whether Calcite is the root cause of this issue.
> Attached are the code sample and the two problematic JSON schemas; both 
> examples give a column-not-found exception.
>  





[GitHub] [flink] flinkbot edited a comment on issue #9364: [FLINK-13593][checkpointing] Prevent failing the wrong execution attempt in CheckpointFailureManager

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9364: [FLINK-13593][checkpointing] Prevent 
failing the wrong execution attempt in CheckpointFailureManager
URL: https://github.com/apache/flink/pull/9364#issuecomment-518519629
 
 
   ## CI report:
   
   * db9e30b162a1f8f7958a3e1a5ff61214f205de1c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122057050)
   * df05d6d8946ec764655082180d893918bfc2a290 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122180572)
   * d35d5392868ee76b9869a2f62fb23ddc443498fa : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122472526)
   * efbf1541fd5e495af8316cd19af24d7e93be78ac : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122525773)
   




[GitHub] [flink] flinkbot edited a comment on issue #9364: [FLINK-13593][checkpointing] Prevent failing the wrong execution attempt in CheckpointFailureManager

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9364: [FLINK-13593][checkpointing] Prevent 
failing the wrong execution attempt in CheckpointFailureManager
URL: https://github.com/apache/flink/pull/9364#issuecomment-518518670
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit efbf1541fd5e495af8316cd19af24d7e93be78ac (Fri Aug 09 
00:25:36 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Commented] (FLINK-13603) Flink Table ApI not working with nested Json schema starting From 1.6.x

2019-08-08 Thread Yu Du (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903432#comment-16903432
 ] 

Yu Du commented on FLINK-13603:
---

hi, [~walterddr]

Thanks for reply .

Yes, I did a local flink-core build, and changing hashCode() does fix my issue. 
I think those two issues are pretty similar, as FlinkTypeFactory caches the 
TypeInfo and compares the hashCode() to check whether the TypeInfo is already in 
the cache. If this change doesn't have side effects on other Flink components, I 
will be happy to see it fixed in the next release.

Will this change also be applied to Flink 1.6.x and 1.7.x?

 

> Flink Table ApI not working with nested Json schema starting From 1.6.x
> ---
>
> Key: FLINK-13603
> URL: https://issues.apache.org/jira/browse/FLINK-13603
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.6.4, 1.7.2, 1.8.1
>Reporter: Yu Du
>Priority: Major
>  Labels: bug
> Attachments: FlinkTableBugCode, jsonSchema.json, jsonSchema2.json, 
> schema_mapping_error_screenshot .png
>
>
> Starting from Flink 1.6.2, some schemas do not work when they contain a nested 
> object. The issue looks like: Caused by: 
> org.apache.calcite.sql.validate.SqlValidatorException: Column 
> 'data.interaction.action_type' not found in table 
> even though we can see that column in the Table Schema.
> The same schema and query work on 1.5.2, but not on 1.6.x, 1.7.x, and 1.8.x.
>  
> I tried to dive into the bug and found that the root cause is that the Calcite 
> library doesn't map the column name to the correct Row type. 
> I checked that Flink 1.6 uses the same version of Calcite as Flink 1.5, so I am 
> not sure whether Calcite is the root cause of this issue.
> Attached are the code sample and the two problematic JSON schemas; both 
> examples give a column-not-found exception.
>  





[GitHub] [flink] flinkbot commented on issue #9398: [FLINK-13479] [flink-cassandra-connector] Fix for Deterministic ordering for prepared statement

2019-08-08 Thread GitBox
flinkbot commented on issue #9398: [FLINK-13479] [flink-cassandra-connector] 
Fix for Deterministic ordering for prepared statement
URL: https://github.com/apache/flink/pull/9398#issuecomment-519727123
 
 
   ## CI report:
   
   * 08e406e0565f2f945437c4baeebbf0a6c4489ff3 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122523237)
   




[jira] [Commented] (FLINK-13479) Cassandra POJO Sink - Prepared Statement query does not have deterministic ordering of columns - causing prepared statement cache overflow

2019-08-08 Thread Ronak Thakrar (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903417#comment-16903417
 ] 

Ronak Thakrar commented on FLINK-13479:
---

[~aljoscha] - I have created the pull request - can you please review the 
request and plan for merging them?

[https://github.com/apache/flink/pull/9398]

 

> Cassandra POJO Sink - Prepared Statement query does not have deterministic 
> ordering of columns - causing prepared statement cache overflow
> --
>
> Key: FLINK-13479
> URL: https://issues.apache.org/jira/browse/FLINK-13479
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Cassandra
>Affects Versions: 1.7.2
>Reporter: Ronak Thakrar
>Assignee: Ronak Thakrar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While using the Cassandra POJO sink as part of Flink jobs, the 
> prepared-statement query string that is automatically generated while inserting 
> data (via the Mapper.saveQuery method) has no deterministic column ordering 
> enforced on the Cassandra entity, so every time a column's position changes, a 
> new prepared statement is generated and used. As a result, the 
> prepared-statement query cache overflows, because each generated insert 
> statement lists the columns in a random order. 
> Following is the detailed explanation for what happens inside the Datastax 
> java driver([https://datastax-oss.atlassian.net/browse/JAVA-1587]):
> The current Mapper uses random ordering of columns when it creates prepared 
> queries. This is fine when only 1 java client is accessing a cluster (and 
> assuming the application developer does the correct thing by re-using a 
> Mapper), since each Mapper will reuse its prepared statements. However, when you 
> have many java clients accessing a cluster, they will each create their own 
> permutations of column ordering, and can thrash the prepared statement cache 
> on the cluster.
> I propose that the Mapper uses a TreeMap instead of a HashMap when it builds 
> its set of AliasedMappedProperty - sorted by the column name 
> (col.mappedProperty.getMappedName()). This would create a deterministic 
> ordering of columns, and all java processes accessing the same cluster would 
> end up with the same prepared queries for the same entities.
> This issue is already fixed in the Datastax java driver update version(3.3.1) 
> which is not used by Flink Cassandra connector (using 3.0.0).
> I upgraded the driver version to 3.3.1 locally in Flink Cassandra connector 
> and tested, it stopped creating new prepared statements with different 
> ordering of column for the same entity. I have the fix for this issue and 
> would like to contribute the change and will raise the PR request for the 
> same. 
> Flink Cassandra Connector Version: flink-connector-cassandra_2.11
> Flink Version: 1.7.1
> I am creating a PR for this, which can be merged accordingly and re-released in 
> a new minor or patch release as required.
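The TreeMap-vs-HashMap proposal described in this issue can be illustrated with a small, hypothetical Java sketch (`insertQuery` is an invented helper, not Datastax mapper code):

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of the deterministic-ordering idea. A HashMap gives no
// ordering guarantee, so different JVMs can emit different permutations of the
// same INSERT and thrash the server-side prepared-statement cache; a TreeMap
// iterates keys in sorted order, so every client builds the same string.
public class StatementOrdering {

    // Build a toy INSERT column list from whatever order the map iterates in.
    static String insertQuery(String table, Map<String, Object> columns) {
        return "INSERT INTO " + table + " (" + String.join(", ", columns.keySet())
                + ") VALUES (...)";
    }

    public static void main(String[] args) {
        Map<String, Object> cols = new TreeMap<>();
        cols.put("user_id", 1);
        cols.put("created_at", 2);
        cols.put("amount", 3);
        // TreeMap sorts keys, so the column order is always the same:
        System.out.println(insertQuery("events", cols));
        // -> INSERT INTO events (amount, created_at, user_id) VALUES (...)
    }
}
```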





[GitHub] [flink] flinkbot commented on issue #9398: [FLINK-13479] [flink-cassandra-connector] Fix for Deterministic ordering for prepared statement

2019-08-08 Thread GitBox
flinkbot commented on issue #9398: [FLINK-13479] [flink-cassandra-connector] 
Fix for Deterministic ordering for prepared statement
URL: https://github.com/apache/flink/pull/9398#issuecomment-519726006
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 08e406e0565f2f945437c4baeebbf0a6c4489ff3 (Thu Aug 08 
23:44:18 UTC 2019)
   
   **Warnings:**
* **1 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-13479) Cassandra POJO Sink - Prepared Statement query does not have deterministic ordering of columns - causing prepared statement cache overflow

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13479:
---
Labels: pull-request-available  (was: )

> Cassandra POJO Sink - Prepared Statement query does not have deterministic 
> ordering of columns - causing prepared statement cache overflow
> --
>
> Key: FLINK-13479
> URL: https://issues.apache.org/jira/browse/FLINK-13479
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Cassandra
>Affects Versions: 1.7.2
>Reporter: Ronak Thakrar
>Assignee: Ronak Thakrar
>Priority: Major
>  Labels: pull-request-available
>
> While using the Cassandra POJO sink as part of Flink jobs, the 
> prepared-statement query string that is automatically generated while inserting 
> data (via the Mapper.saveQuery method) has no deterministic column ordering 
> enforced on the Cassandra entity, so every time a column's position changes, a 
> new prepared statement is generated and used. As a result, the 
> prepared-statement query cache overflows, because each generated insert 
> statement lists the columns in a random order. 
> Following is the detailed explanation for what happens inside the Datastax 
> java driver([https://datastax-oss.atlassian.net/browse/JAVA-1587]):
> The current Mapper uses random ordering of columns when it creates prepared 
> queries. This is fine when only 1 java client is accessing a cluster (and 
> assuming the application developer does the correct thing by re-using a 
> Mapper), since each Mapper will reuse its prepared statements. However, when you 
> have many java clients accessing a cluster, they will each create their own 
> permutations of column ordering, and can thrash the prepared statement cache 
> on the cluster.
> I propose that the Mapper uses a TreeMap instead of a HashMap when it builds 
> its set of AliasedMappedProperty - sorted by the column name 
> (col.mappedProperty.getMappedName()). This would create a deterministic 
> ordering of columns, and all java processes accessing the same cluster would 
> end up with the same prepared queries for the same entities.
> This issue is already fixed in the Datastax java driver update version(3.3.1) 
> which is not used by Flink Cassandra connector (using 3.0.0).
> I upgraded the driver version to 3.3.1 locally in Flink Cassandra connector 
> and tested, it stopped creating new prepared statements with different 
> ordering of column for the same entity. I have the fix for this issue and 
> would like to contribute the change and will raise the PR request for the 
> same. 
> Flink Cassandra Connector Version: flink-connector-cassandra_2.11
> Flink Version: 1.7.1
> I am creating a PR for this, which can be merged accordingly and re-released in 
> a new minor or patch release as required.





[GitHub] [flink] ronakthakrar opened a new pull request #9398: [FLINK-13479] [flink-cassandra-connector] Fix for Deterministic ordering for prepared statement

2019-08-08 Thread GitBox
ronakthakrar opened a new pull request #9398: [FLINK-13479] 
[flink-cassandra-connector] Fix for Deterministic ordering for prepared 
statement
URL: https://github.com/apache/flink/pull/9398
 
 
   ## What is the purpose of the change
   This pull request fixes Cassandra connector issue FLINK-13479, where the 
ordering of columns in prepared statements is not deterministic.
   
   ## Brief change log
   - Updated the cassandra driver version from 3.0.0 to 3.3.1 where the issue 
is fixed (https://datastax-oss.atlassian.net/browse/JAVA-1587)
   
   ## Verifying this change
   This is a trivial configuration change: it only updates the dependent Cassandra 
driver version.
   
   ## Does this pull request potentially affect one of the following parts:
 - Dependencies (does it add or upgrade a dependency): yes - cassandra 
driver upgraded from 3.0.0 to 3.3.1
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: No
 - The S3 file system connector: No
   
   ## Documentation
 - Does this pull request introduce a new feature? No
 - If yes, how is the feature documented? Not applicable




[jira] [Commented] (FLINK-13603) Flink Table ApI not working with nested Json schema starting From 1.6.x

2019-08-08 Thread Rong Rong (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903404#comment-16903404
 ] 

Rong Rong commented on FLINK-13603:
---

Hi [~jacky.du0...@gmail.com]. I wouldn't think they are necessarily the same. 
Does changing the {{hashCode()}} function resolve your issue? 

I just did a local change and ran the tests on {{flink-core}}, and it didn't 
affect any test committed with FLINK-9444, so I am assuming it is not necessary 
to change the hashCode function in that PR. This would be a much quicker fix (and 
I think easier to get into the 1.9 release).

> Flink Table ApI not working with nested Json schema starting From 1.6.x
> ---
>
> Key: FLINK-13603
> URL: https://issues.apache.org/jira/browse/FLINK-13603
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.6.4, 1.7.2, 1.8.1
>Reporter: Yu Du
>Priority: Major
>  Labels: bug
> Attachments: FlinkTableBugCode, jsonSchema.json, jsonSchema2.json, 
> schema_mapping_error_screenshot .png
>
>
> Starting from Flink 1.6.2, some schemas do not work when they have a nested object.
> The issue looks like: Caused by: 
> org.apache.calcite.sql.validate.SqlValidatorException: Column 
> 'data.interaction.action_type' not found in table 
> even though we can see that column in the Table Schema.
> The same schema and query work on 1.5.2, but not on 1.6.x, 1.7.x and 1.8.x.
>  
> I tried to dive into the bug, and found the root cause is that the Calcite 
> library doesn't map the column name to the correct Row type. 
> I checked that Flink 1.6 uses the same version of Calcite as Flink 1.5, so I 
> am not sure if Calcite is the root cause of this issue.
> Attached are the code sample and the two problematic JSON schemas; both 
> examples give a column-not-found exception.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9397: [FLINK-13658] [API / DataStream] Combine two triggers into one

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9397: [FLINK-13658] [API / DataStream] 
Combine two triggers into one
URL: https://github.com/apache/flink/pull/9397#issuecomment-519695998
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 8c5feb16a2c3814e378dcc67245c554dd0b67eaa (Thu Aug 08 
22:35:44 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13658).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] winitzki commented on issue #9397: [FLINK-13658] [API / DataStream] Combine two triggers into one

2019-08-08 Thread GitBox
winitzki commented on issue #9397: [FLINK-13658] [API / DataStream] Combine two 
triggers into one
URL: https://github.com/apache/flink/pull/9397#issuecomment-519711728
 
 
   The CI error is `21:38:13.853 [ERROR] Failed to execute goal 
org.apache.rat:apache-rat-plugin:0.12:check (default) on project flink-parent: 
Too many files with unapproved license: 1 See RAT report in: 
/home/travis/build/flink-ci/flink/target/rat.txt -> [Help 1]
   21:38:13.854 [ERROR] ` and I have no idea what that means.




[GitHub] [flink] flinkbot commented on issue #9397: [FLINK-13658] [API / DataStream] Combine two triggers into one

2019-08-08 Thread GitBox
flinkbot commented on issue #9397: [FLINK-13658] [API / DataStream] Combine two 
triggers into one
URL: https://github.com/apache/flink/pull/9397#issuecomment-519697931
 
 
   ## CI report:
   
   * 8c5feb16a2c3814e378dcc67245c554dd0b67eaa : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122512074)
   




[GitHub] [flink] flinkbot commented on issue #9397: [FLINK-13658] [API / DataStream] Combine two triggers into one

2019-08-08 Thread GitBox
flinkbot commented on issue #9397: [FLINK-13658] [API / DataStream] Combine two 
triggers into one
URL: https://github.com/apache/flink/pull/9397#issuecomment-519695998
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 8c5feb16a2c3814e378dcc67245c554dd0b67eaa (Thu Aug 08 
21:34:05 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13658).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-13658) Combine two triggers into one (for streaming windows)

2019-08-08 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13658:
---
Labels: pull-request-available  (was: )

> Combine two triggers into one (for streaming windows)
> -
>
> Key: FLINK-13658
> URL: https://issues.apache.org/jira/browse/FLINK-13658
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Reporter: Sergei Winitzki
>Priority: Major
>  Labels: pull-request-available
>
> Combine two `Trigger`s into one. This allows users to write in one line a 
> windowed stream whose windows are defined by max element count together with 
> a max time delay between windows.
>  
> Presently, Flink documentation and Stack Overflow discussions tell users to 
> implement such triggers manually as custom triggers. However, the 
> `TriggerResult` enumeration type can be defined as a monoid, and so two 
> results can be naturally combined into one. This allows users to combine two 
> or more triggers automatically.
>  
> This implementation is a Scala-only prototype. I am new to Flink and may not 
> be able to contribute a fully compliant PR.





[GitHub] [flink] winitzki opened a new pull request #9397: [FLINK-13658] [API / DataStream] Combine two triggers into one

2019-08-08 Thread GitBox
winitzki opened a new pull request #9397: [FLINK-13658] [API / DataStream] 
Combine two triggers into one
URL: https://github.com/apache/flink/pull/9397
 
 
   This is a new feature: combine two triggers into one. A new case class 
`TriggerOf[T, W]` provides this functionality.
   
   Sample (working) code:
   
   ```scala
   val combinedTrigger = TriggerOf(
     PurgingTrigger.of(CountTrigger.of[TimeWindow](windowSize)),
     PurgingTrigger.of(ProcessingTimeTrigger.create()))
   
   env.addSource(...)
     .keyBy(element => ...)
     .timeWindow(Time.milliseconds(windowTimeoutMs))
     .trigger(combinedTrigger)
     .process(...)
     .addSink(...)
   ```




[jira] [Created] (FLINK-13658) Combine two triggers into one (for streaming windows)

2019-08-08 Thread Sergei Winitzki (JIRA)
Sergei Winitzki created FLINK-13658:
---

 Summary: Combine two triggers into one (for streaming windows)
 Key: FLINK-13658
 URL: https://issues.apache.org/jira/browse/FLINK-13658
 Project: Flink
  Issue Type: New Feature
  Components: API / DataStream
Reporter: Sergei Winitzki


Combine two `Trigger`s into one. This allows users to write in one line a 
windowed stream whose windows are defined by max element count together with a 
max time delay between windows.

 

Presently, Flink documentation and Stack Overflow discussions tell users to 
implement such triggers manually as custom triggers. However, the 
`TriggerResult` enumeration type can be defined as a monoid, and so two results 
can be naturally combined into one. This allows users to combine two or more 
triggers automatically.

 

This implementation is a Scala-only prototype. I am new to Flink and may not be 
able to contribute a fully compliant PR.





[jira] [Commented] (FLINK-13658) Combine two triggers into one (for streaming windows)

2019-08-08 Thread Sergei Winitzki (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903350#comment-16903350
 ] 

Sergei Winitzki commented on FLINK-13658:
-

{code:java}
package org.apache.flink.streaming.api.windowing.triggers

import org.apache.flink.streaming.api.windowing.windows.Window

object TriggerOf {

  import org.apache.flink.streaming.api.windowing.triggers.TriggerResult._

  /** Combine two [[TriggerResult]] values. This is a monoidal operation.
    *
    * @param r1 The first [[TriggerResult]] value.
    * @param r2 The second [[TriggerResult]] value.
    * @return A new [[TriggerResult]] value that combines the two values.
    */
  def or(r1: TriggerResult, r2: TriggerResult): TriggerResult = (r1, r2) match {
    case (CONTINUE, x) ⇒ x
    case (x, CONTINUE) ⇒ x
    case (_, FIRE_AND_PURGE) | (FIRE_AND_PURGE, _) ⇒ FIRE_AND_PURGE
    // This could also be defined as `FIRE` or as `PURGE` without violating
    // the monoid associativity law.
    case (PURGE, FIRE) | (FIRE, PURGE) ⇒ FIRE_AND_PURGE
    case (PURGE, PURGE) ⇒ PURGE
    case (FIRE, FIRE) ⇒ FIRE
  }

  // Syntax extension for the combining operation on [[TriggerResult]] values.
  implicit class OrOps(val r1: TriggerResult) extends AnyVal {
    def \/(r2: TriggerResult): TriggerResult = or(r1, r2)
  }
}

/** Combine two triggers into one. The new trigger fires whenever one of the
  * two underlying triggers fires. Both triggers must have the same type of
  * window and the same type of stream elements.
  *
  * The firing results (`CONTINUE`, `FIRE`, `PURGE`, `FIRE_AND_PURGE`) are
  * combined according to the monoid operation `\/` defined above.
  *
  * @param t1 The first trigger.
  * @param t2 The second trigger.
  * @tparam T The type of stream elements.
  * @tparam W The type of window.
  */
final case class TriggerOf[T, W <: Window](t1: Trigger[T, W], t2: Trigger[T, W])
  extends Trigger[T, W] {

  import TriggerOf._

  override def onElement(element: T, timestamp: Long, window: W, ctx: Trigger.TriggerContext): TriggerResult =
    t1.onElement(element, timestamp, window, ctx) \/ t2.onElement(element, timestamp, window, ctx)

  override def onProcessingTime(time: Long, window: W, ctx: Trigger.TriggerContext): TriggerResult =
    t1.onProcessingTime(time, window, ctx) \/ t2.onProcessingTime(time, window, ctx)

  override def onEventTime(time: Long, window: W, ctx: Trigger.TriggerContext): TriggerResult =
    t1.onEventTime(time, window, ctx) \/ t2.onEventTime(time, window, ctx)

  override def canMerge: Boolean = t1.canMerge && t2.canMerge

  override def onMerge(window: W, ctx: Trigger.OnMergeContext): Unit = {
    t1.onMerge(window, ctx)
    t2.onMerge(window, ctx)
  }

  override def clear(window: W, ctx: Trigger.TriggerContext): Unit = {
    t1.clear(window, ctx)
    t2.clear(window, ctx)
  }
}
{code}

> Combine two triggers into one (for streaming windows)
> -
>
> Key: FLINK-13658
> URL: https://issues.apache.org/jira/browse/FLINK-13658
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Reporter: Sergei Winitzki
>Priority: Major
>
> Combine two `Trigger`s into one. This allows users to write in one line a 
> windowed stream whose windows are defined by max element count together with 
> a max time delay between windows.
>  
> Presently, Flink documentation and Stack Overflow discussions tell users to 
> implement such triggers manually as custom triggers. However, the 
> `TriggerResult` enumeration type can be defined as a monoid, and so two 
> results can be naturally combined into one. This allows users to combine two 
> or more triggers automatically.
>  
> This implementation is a Scala-only prototype. I am new to Flink and may not 
> be able to contribute a fully compliant PR.





[jira] [Commented] (FLINK-13588) StreamTask.handleAsyncException throws away the exception cause

2019-08-08 Thread John Lonergan (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903278#comment-16903278
 ] 

John Lonergan commented on FLINK-13588:
---

Don't agree.

If there is context then the code should not throw it away. Principle.

Without the exception message I cannot discover why the split failed.

For example, we had a failure because of a zero-byte avro file in HDFS. The
error message had the filename in it, but the code throws it away.

As a result we had to write and run a separate trivial job that brute-forced
reading all the files (100k) without Flink's help.

The change is justified. I don't think it's reasonable to throw this away.
It looks like the error handling/logging is a bit inconsistent for sure.

We are now running with a modified version of this class that wraps the
original exception into a runtime exception that includes the cause text.

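A minimal sketch of the workaround described above, with hypothetical names ({{AsyncExceptionWrapper}}, {{wrap}}; the real method is Flink's {{StreamTask.handleAsyncException}}): keep the caller-supplied message by wrapping before failing the task:

```java
public class AsyncExceptionWrapper {
    /** Wrap the async failure so the diagnostic message is not lost. */
    public static RuntimeException wrap(String message, Throwable exception) {
        // Preserve both the context message and the original stack trace.
        return new RuntimeException("Async exception: " + message, exception);
    }

    public static void main(String[] args) {
        RuntimeException e = wrap("split failed while reading input file",
                new IllegalStateException("empty file"));
        System.out.println(e.getMessage());            // Async exception: split failed while reading input file
        System.out.println(e.getCause().getMessage()); // empty file
    }
}
```

In the real class, the wrapped exception would then be passed to {{getEnvironment().failExternally(...)}} in place of the bare {{Throwable}}.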
> StreamTask.handleAsyncException throws away the exception cause
> ---
>
> Key: FLINK-13588
> URL: https://issues.apache.org/jira/browse/FLINK-13588
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.8.1
>Reporter: John Lonergan
>Priority: Major
>
> Code below throws the reason 'message' away making it hard to diagnose why a 
> split has failed for instance.
>  
> {code:java}
> https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java#L909
> @Override
>   public void handleAsyncException(String message, Throwable exception) {
>   if (isRunning) {
>   // only fail if the task is still running
>   getEnvironment().failExternally(exception);
>   }
> }{code}
>  
> Need to pass the message through so that we see it in logs please.
>  





[GitHub] [flink] flinkbot edited a comment on issue #9390: [FLINK-13534][hive] Unable to query Hive table with decimal column

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9390: [FLINK-13534][hive] Unable to query 
Hive table with decimal column
URL: https://github.com/apache/flink/pull/9390#issuecomment-519354933
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit da1bae2e4fb180b2b8785936a1b5b355939badce (Thu Aug 08 
19:28:31 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13534).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] xuefuz commented on a change in pull request #9390: [FLINK-13534][hive] Unable to query Hive table with decimal column

2019-08-08 Thread GitBox
xuefuz commented on a change in pull request #9390: [FLINK-13534][hive] Unable 
to query Hive table with decimal column
URL: https://github.com/apache/flink/pull/9390#discussion_r312205426
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/TableEnvHiveConnectorTest.java
 ##
 @@ -138,6 +138,34 @@ private void readWriteFormat(String format) throws 
Exception {
hiveShell.execute("drop database db1 cascade");
}
 
+   @Test
+   public void testDecimal() throws Exception {
+   hiveShell.execute("create database db1");
+   try {
+   // Hive's default decimal is decimal(10, 0)
+   hiveShell.execute("create table db1.src1 (x decimal)");
+   hiveShell.execute("create table db1.src2 (x decimal)");
+   hiveShell.execute("create table db1.dest (x decimal)");
+   // populate src1 from Hive
+   hiveShell.execute("insert into db1.src1 values 
(1),(2.0),(5.4),(5.5),(123456789123)");
+
+   TableEnvironment tableEnv = 
getTableEnvWithHiveCatalog();
+   // populate src2 with same data from Flink
+   tableEnv.sqlUpdate("insert into db1.src2 values (cast(1 
as decimal(10,0))), (cast(2.0 as decimal(10,0))), " +
 
 Review comment:
   could we have some non-zero scale decimal types?
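For background on that suggestion: a non-zero scale changes how inserted values are coerced. A standalone {{java.math.BigDecimal}} sketch (plain Java, no Hive; treating {{HALF_UP}} as the rounding mode is an assumption about the column semantics) of what a {{decimal(10,2)}} column would do to values like those in the test:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalScaleDemo {
    /** Coerce a literal to scale 2, as a decimal(10,2) column would on write. */
    public static BigDecimal toScale2(String value) {
        return new BigDecimal(value).setScale(2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        System.out.println(toScale2("5.4"));   // 5.40 (padded to scale 2)
        System.out.println(toScale2("5.555")); // 5.56 (rounded)
    }
}
```

With scale 0, by contrast, all of the fractional test values would be rounded to whole numbers, which is why a non-zero scale exercises a different code path.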




[GitHub] [flink] flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] Fix some operator names are not set in blink planner

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] 
Fix some operator names are not set in blink planner
URL: https://github.com/apache/flink/pull/9363#issuecomment-518265997
 
 
   ## CI report:
   
   * 1fe6c332279c34546ec3db24a574dfd53500d20b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121971735)
   * fa3e7406f9664a59efcb448748511b656474e74c : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122089384)
   * 28175449cb1d5eb8f318359090ea87e5b2af42d1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122102858)
   * 81593a4dcb3573843c1c02cba0cb17abe1693065 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122290876)
   * 829830f8df1eb814ac44716f230c6aedcfaa5128 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122369808)
   * 0dcba7f669a5702e5730aac0273eb4a58fce2be9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122460702)
   * a5052f45f08b0760cdd4a163a87c0be7d6383c42 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122479843)
   




[jira] [Commented] (FLINK-13603) Flink Table ApI not working with nested Json schema starting From 1.6.x

2019-08-08 Thread Yu Du (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903220#comment-16903220
 ] 

Yu Du commented on FLINK-13603:
---

This issue should be the same as 
https://issues.apache.org/jira/browse/FLINK-12848

It could be merged into that ticket or closed as a duplicate.

> Flink Table ApI not working with nested Json schema starting From 1.6.x
> ---
>
> Key: FLINK-13603
> URL: https://issues.apache.org/jira/browse/FLINK-13603
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.6.4, 1.7.2, 1.8.1
>Reporter: Yu Du
>Priority: Major
>  Labels: bug
> Attachments: FlinkTableBugCode, jsonSchema.json, jsonSchema2.json, 
> schema_mapping_error_screenshot .png
>
>
> Starting from Flink 1.6.2, some schemas do not work when they have a nested object.
> The issue looks like: Caused by: 
> org.apache.calcite.sql.validate.SqlValidatorException: Column 
> 'data.interaction.action_type' not found in table 
> even though we can see that column in the Table Schema.
> The same schema and query work on 1.5.2, but not on 1.6.x, 1.7.x and 1.8.x.
>  
> I tried to dive into the bug, and found the root cause is that the Calcite 
> library doesn't map the column name to the correct Row type. 
> I checked that Flink 1.6 uses the same version of Calcite as Flink 1.5, so I 
> am not sure if Calcite is the root cause of this issue.
> Attached are the code sample and the two problematic JSON schemas; both 
> examples give a column-not-found exception.
>  





[GitHub] [flink] flinkbot edited a comment on issue #9364: [FLINK-13593][checkpointing] Prevent failing the wrong execution attempt in CheckpointFailureManager

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9364: [FLINK-13593][checkpointing] Prevent 
failing the wrong execution attempt in CheckpointFailureManager
URL: https://github.com/apache/flink/pull/9364#issuecomment-518519629
 
 
   ## CI report:
   
   * db9e30b162a1f8f7958a3e1a5ff61214f205de1c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122057050)
   * df05d6d8946ec764655082180d893918bfc2a290 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122180572)
   * d35d5392868ee76b9869a2f62fb23ddc443498fa : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122472526)
   




[jira] [Commented] (FLINK-12847) Update Kinesis Connectors to latest Apache licensed libraries

2019-08-08 Thread Steven Nelson (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903172#comment-16903172
 ] 

Steven Nelson commented on FLINK-12847:
---

It would be nice to have the 2.0 version pulled in so we can get the benefits 
of using enhanced fan-out and HTTP/2, so I would vote to pull this in as soon 
as possible.

> Update Kinesis Connectors to latest Apache licensed libraries
> -
>
> Key: FLINK-12847
> URL: https://issues.apache.org/jira/browse/FLINK-12847
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Dyana Rose
>Assignee: Dyana Rose
>Priority: Major
>
> Currently the referenced Kinesis Client Library and Kinesis Producer Library 
> code in the flink-connector-kinesis is licensed under the Amazon Software 
> License which is not compatible with the Apache License. This then requires a 
> fair amount of work in the CI pipeline and for users who want to use the 
> flink-connector-kinesis.
> The Kinesis Client Library v2.x and the AWS Java SDK v2.x both are now on the 
> Apache 2.0 license.
> [https://github.com/awslabs/amazon-kinesis-client/blob/master/LICENSE.txt]
> [https://github.com/aws/aws-sdk-java-v2/blob/master/LICENSE.txt]
> There is a PR for the Kinesis Producer Library to update it to the Apache 2.0 
> license ([https://github.com/awslabs/amazon-kinesis-producer/pull/256])
> The task should include, but not limited to, upgrading KCL/KPL to new 
> versions of Apache 2.0 license, changing licenses and NOTICE files in 
> flink-connector-kinesis, and adding flink-connector-kinesis to build, CI and 
> artifact publishing pipeline, updating the build profiles, updating 
> documentation that references the license incompatibility
> The expected outcome of this issue is that the flink-connector-kinesis will 
> be included with the standard build artifacts and will no longer need to be 
> built separately by users.





[GitHub] [flink] flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] Fix some operator names are not set in blink planner

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] 
Fix some operator names are not set in blink planner
URL: https://github.com/apache/flink/pull/9363#issuecomment-518265997
 
 
   ## CI report:
   
   * 1fe6c332279c34546ec3db24a574dfd53500d20b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121971735)
   * fa3e7406f9664a59efcb448748511b656474e74c : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/122089384)
   * 28175449cb1d5eb8f318359090ea87e5b2af42d1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122102858)
   * 81593a4dcb3573843c1c02cba0cb17abe1693065 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122290876)
   * 829830f8df1eb814ac44716f230c6aedcfaa5128 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122369808)
   * 0dcba7f669a5702e5730aac0273eb4a58fce2be9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122460702)
   * a5052f45f08b0760cdd4a163a87c0be7d6383c42 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122479843)
   




[GitHub] [flink] flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] Fix some operator names are not set in blink planner

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9363: [FLINK-13587][table-planner-blink] 
Fix some operator names are not set in blink planner
URL: https://github.com/apache/flink/pull/9363#issuecomment-518263446
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit a5052f45f08b0760cdd4a163a87c0be7d6383c42 (Thu Aug 08 
17:16:16 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] flinkbot edited a comment on issue #9396: [FLINK-13473][table] Add stream Windowed FlatAggregate support for blink planner

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9396: [FLINK-13473][table] Add stream 
Windowed FlatAggregate support for blink planner
URL: https://github.com/apache/flink/pull/9396#issuecomment-519562054
 
 
   ## CI report:
   
   * 9a5651f1dee8a7d5da17bc4e54831af55d7002c4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122459634)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9395: [hotfix][streaming] Fix the wrong word used in the comments in class KeyedStream

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9395: [hotfix][streaming] Fix the wrong 
word used in the comments in class KeyedStream
URL: https://github.com/apache/flink/pull/9395#issuecomment-519551891
 
 
   ## CI report:
   
   * 6a5450aa95ebeae0a2c011d64f23ec60fd858485 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122455285)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations.

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #8631: [FLINK-12745][ml] add sparse and 
dense vector class, and dense matrix class with basic operations.
URL: https://github.com/apache/flink/pull/8631#issuecomment-499077475
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c23d8ac2646f0b1f153d0dfb2950c53830838696 (Thu Aug 08 
16:39:37 UTC 2019)
   
   **Warnings:**
* **1 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] walterddr commented on a change in pull request #8631: [FLINK-12745][ml] add sparse and dense vector class, and dense matrix class with basic operations.

2019-08-08 Thread GitBox
walterddr commented on a change in pull request #8631: [FLINK-12745][ml] add 
sparse and dense vector class, and dense matrix class with basic operations.
URL: https://github.com/apache/flink/pull/8631#discussion_r312132803
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/matrix/SparseVectorTest.java
 ##
 @@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.matrix;
+
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Test cases for SparseVector.
+ */
+public class SparseVectorTest {
+   private static final double TOL = 1.0e-6;
+   private SparseVector v1 = null;
+   private SparseVector v2 = null;
+
+   @Before
+   public void setUp() throws Exception {
 
 Review comment:
   If these vectors are not changed across tests, using `static final` might be 
a better idea. (Based on the tests, I am assuming they are not changed at all; 
please correct me if that assumption is wrong.)
   
   `@Before` and `@After` invoke extra testing-framework machinery, so the tests 
run slower; lowering the overall test cost would be a good idea for Flink as a 
whole.
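   The suggestion above can be sketched in plain Java (a minimal illustration; 
`SparseVec` is a hypothetical stand-in for the `SparseVector` under test, and 
the field names are made up): building the shared, immutable fixture once in a 
`static final` field means setup runs once per class load instead of before 
every test method.

   ```java
   import java.util.Arrays;

   // Hypothetical stand-in for the vector under test: sorted indices plus values.
   final class SparseVec {
       final int[] indices;
       final double[] values;

       SparseVec(int[] indices, double[] values) {
           this.indices = indices.clone();   // defensive copies keep the fixture immutable
           this.values = values.clone();
       }

       double get(int i) {
           int pos = Arrays.binarySearch(indices, i);
           return pos >= 0 ? values[pos] : 0.0;
       }
   }

   public class SparseVecFixture {
       // Built once when the class loads, instead of in an @Before method
       // that re-runs before every test case.
       static final SparseVec V1 = new SparseVec(new int[]{0, 3}, new double[]{1.0, 2.0});

       public static void main(String[] args) {
           // Safe to share across tests: nothing ever mutates the fixture.
           System.out.println(V1.get(3));  // 2.0
           System.out.println(V1.get(1));  // 0.0
       }
   }
   ```

   Because the fixture is never mutated, sharing one instance across all test 
methods is safe; if any test mutated it, per-test `@Before` setup would remain 
the correct choice.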


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9364: [FLINK-13593][checkpointing] Prevent failing the wrong execution attempt in CheckpointFailureManager

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9364: [FLINK-13593][checkpointing] Prevent 
failing the wrong execution attempt in CheckpointFailureManager
URL: https://github.com/apache/flink/pull/9364#issuecomment-518519629
 
 
   ## CI report:
   
   * db9e30b162a1f8f7958a3e1a5ff61214f205de1c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122057050)
   * df05d6d8946ec764655082180d893918bfc2a290 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122180572)
   * d35d5392868ee76b9869a2f62fb23ddc443498fa : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/122472526)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9364: [FLINK-13593][checkpointing] Prevent failing the wrong execution attempt in CheckpointFailureManager

2019-08-08 Thread GitBox
flinkbot edited a comment on issue #9364: [FLINK-13593][checkpointing] Prevent 
failing the wrong execution attempt in CheckpointFailureManager
URL: https://github.com/apache/flink/pull/9364#issuecomment-518518670
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit d35d5392868ee76b9869a2f62fb23ddc443498fa (Thu Aug 08 
16:22:18 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-12481) Make processing time timer trigger run via the mailbox

2019-08-08 Thread Alex (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903104#comment-16903104
 ] 

Alex commented on FLINK-12481:
--

Hi [~SleePy], I've temporarily closed the original PR that moves timers to the 
mailbox model.

There were some discussions and changes in the underlying mailbox/executor 
implementation (which this effort depends on). The work was also put aside for 
a while due to the shared effort on the 1.9 release.

I'll rebase and adapt my PR to the current {{master}} branch codebase, but the 
migration approach will probably be very similar to the one proposed in the 
first PR.

> Make processing time timer trigger run via the mailbox
> --
>
> Key: FLINK-12481
> URL: https://issues.apache.org/jira/browse/FLINK-12481
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Stefan Richter
>Assignee: Alex
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This sub-task integrates the mailbox with processing time timer triggering. 
> Those triggers should now be enqueued as mailbox events and picked up by the 
> stream task's main thread for processing.
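The enqueue-and-pick-up scheme described above can be sketched as follows (a 
minimal illustration only; `MailboxTimerSketch`, `registerProcessingTimeTimer`, 
and the other names are hypothetical, not Flink's actual API): the timer thread 
never runs user code itself, it only enqueues the callback into the task's 
mailbox, and the task's main thread drains the mailbox, so all processing stays 
single-threaded.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MailboxTimerSketch {
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService timerService =
            Executors.newSingleThreadScheduledExecutor();

    void registerProcessingTimeTimer(long delayMs, Runnable onTimer) {
        // The timer thread only hands the callback over; it never invokes it.
        timerService.schedule(() -> mailbox.add(onTimer), delayMs, TimeUnit.MILLISECONDS);
    }

    static String demo() {
        MailboxTimerSketch task = new MailboxTimerSketch();
        StringBuilder log = new StringBuilder();
        task.registerProcessingTimeTimer(10, () -> log.append("timer fired in mailbox loop"));
        try {
            // The task's main thread picks the timer up like any other event.
            task.mailbox.take().run();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        task.timerService.shutdown();
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The key property is that the `StringBuilder` is touched only by the thread 
running the mailbox loop, so no synchronization is needed around user state, 
which is the motivation for routing timer triggers through the mailbox.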



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

