[GitHub] [flink] flinkbot edited a comment on issue #11388: [FLINK-16441][SQL] SQL Environment Configuration section will also be set in flink conf

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11388: [FLINK-16441][SQL] SQL Environment 
Configuration section will also be set in flink conf
URL: https://github.com/apache/flink/pull/11388#issuecomment-598031774
 
 
   
   ## CI report:
   
   * 0b39287557615cc135e3d57384c6f40c3069d428 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/152935454) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6220)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11344: [FLINK-16250][python][ml] Add interfaces for PipelineStage and Pipeline

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11344: [FLINK-16250][python][ml] Add 
interfaces for PipelineStage and Pipeline
URL: https://github.com/apache/flink/pull/11344#issuecomment-596094520
 
 
   
   ## CI report:
   
   * 95473512ec290d571496fae32da8d33013c8f56e Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152931538) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6216)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11351: [FLINK-16404][runtime] Solve the potential deadlock problem when reducing exclusive buffers to zero

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11351: [FLINK-16404][runtime] Solve the 
potential deadlock problem when reducing exclusive buffers to zero
URL: https://github.com/apache/flink/pull/11351#issuecomment-596351676
 
 
   
   ## CI report:
   
   * 715889a35cfcc3aaf1b17f39dadaa86f755cc75d UNKNOWN
   * 548b22258f5e87fd53b45c7f4bb6de40bfd4e6d2 UNKNOWN
   * 9dba52d31972da8bc11d85737443253f764c6508 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/152931551) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6217)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] zhengcanbin commented on issue #11346: [FLINK-16493][k8s] Use enum type instead of string type for KubernetesConfigOptions.REST_SERVICE_EXPOSED_TYPE

2020-03-11 Thread GitBox
zhengcanbin commented on issue #11346: [FLINK-16493][k8s] Use enum type instead 
of string type for KubernetesConfigOptions.REST_SERVICE_EXPOSED_TYPE
URL: https://github.com/apache/flink/pull/11346#issuecomment-598033698
 
 
   Hi @knaufk, could you take a look? Looking forward to your reply.




[jira] [Created] (FLINK-16562) Handle JobManager termination future in place

2020-03-11 Thread Zili Chen (Jira)
Zili Chen created FLINK-16562:
-

 Summary: Handle JobManager termination future in place
 Key: FLINK-16562
 URL: https://issues.apache.org/jira/browse/FLINK-16562
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Coordination
Reporter: Zili Chen
Assignee: Zili Chen
 Fix For: 1.11.0


After FLINK-11843, {{Dispatcher}} becomes a {{PermanentlyFencedRpcEndpoint}} and 
is created as a new instance in each leader epoch. Thus, we no longer have 
{{jobManagerTerminationFutures}} that cross multiple leader epochs and need to 
be handled. Given that, we can remove the {{jobManagerTerminationFutures}} field 
from {{Dispatcher}} and handle those futures in place, which will simplify the 
code and help further refactoring.

CC [~trohrmann]

I will create a branch later this week.
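
A minimal sketch of the direction, with illustrative names only (this is not the 
actual {{Dispatcher}} code): chain the follow-up action directly on the 
termination future where it is produced, so no field has to track it across 
leader epochs.

{code:java}
import java.util.concurrent.CompletableFuture;

final class InPlaceTerminationSketch {

    // Hypothetical helper: the cleanup is attached right where the
    // termination future is obtained, instead of being parked in a
    // jobManagerTerminationFutures map for later completion.
    static CompletableFuture<Void> closeAndCleanup(
            CompletableFuture<Void> jobManagerTerminationFuture,
            Runnable cleanupAction) {
        return jobManagerTerminationFuture.thenRun(cleanupAction);
    }
}
{code}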





[GitHub] [flink] flinkbot commented on issue #11388: [FLINK-16441][SQL] SQL Environment Configuration section will also be set in flink conf

2020-03-11 Thread GitBox
flinkbot commented on issue #11388: [FLINK-16441][SQL] SQL Environment 
Configuration section will also be set in flink conf
URL: https://github.com/apache/flink/pull/11388#issuecomment-598031774
 
 
   
   ## CI report:
   
   * 0b39287557615cc135e3d57384c6f40c3069d428 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11387: [FLINK-16343][table-planner-blink] Improve exception message when reading an unbounded source in batch mode

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11387: [FLINK-16343][table-planner-blink] 
Improve exception message when reading an unbounded source in batch mode
URL: https://github.com/apache/flink/pull/11387#issuecomment-597995106
 
 
   
   ## CI report:
   
   * 4101781e56c504d53168042efa996215b7d9d7bb Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/152924418) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6214)
 
   * b92a95ede6d30f98203c696c58d17dca9069a940 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/152933744) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6218)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-16526) Fix exception when computed column expression references a keyword column name

2020-03-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16526:

Summary: Fix exception when computed column expression references a keyword 
column name  (was: Escape character doesn't work for computed column)

> Fix exception when computed column expression references a keyword column name
> --
>
> Key: FLINK-16526
> URL: https://issues.apache.org/jira/browse/FLINK-16526
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: YufeiLiu
>Assignee: Jark Wu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:sql}
> json_row  ROW<`timestamp` BIGINT>,
> `timestamp`   AS `json_row`.`timestamp`
> {code}
> It translates to "SELECT json_row.timestamp FROM __temp_table__"
> Throws exception "Encountered ". timestamp" at line 1, column 157. Was 
> expecting one of:..."





[jira] [Resolved] (FLINK-16526) Fix exception when computed column expression references a keyword column name

2020-03-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-16526.
-
Resolution: Fixed

Fixed in
 - master (1.11.0) : d723d0012c6b5e41e9d56784bc424e3942747225
 - 1.10.1: 884edd6dec549450ac44eb80d83de85eb50dc11b

> Fix exception when computed column expression references a keyword column name
> --
>
> Key: FLINK-16526
> URL: https://issues.apache.org/jira/browse/FLINK-16526
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: YufeiLiu
>Assignee: Jark Wu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:sql}
> json_row  ROW<`timestamp` BIGINT>,
> `timestamp`   AS `json_row`.`timestamp`
> {code}
> It translates to "SELECT json_row.timestamp FROM __temp_table__"
> Throws exception "Encountered ". timestamp" at line 1, column 157. Was 
> expecting one of:..."





[GitHub] [flink] wuchong commented on issue #11380: [FLINK-16526][table-planner-blink] Fix exception when computed column expression references a keyword column name

2020-03-11 Thread GitBox
wuchong commented on issue #11380: [FLINK-16526][table-planner-blink] Fix 
exception when computed column expression references a keyword column name
URL: https://github.com/apache/flink/pull/11380#issuecomment-598029515
 
 
   Thanks @danny0405, will merge this.




[GitHub] [flink] wuchong merged pull request #11380: [FLINK-16526][table-planner-blink] Fix exception when computed column expression references a keyword column name

2020-03-11 Thread GitBox
wuchong merged pull request #11380: [FLINK-16526][table-planner-blink] Fix 
exception when computed column expression references a keyword column name
URL: https://github.com/apache/flink/pull/11380
 
 
   




[jira] [Assigned] (FLINK-16554) Extract static classes from StreamTask

2020-03-11 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski reassigned FLINK-16554:
--

Assignee: Roman Khachatryan

> Extract static classes from StreamTask
> --
>
> Key: FLINK-16554
> URL: https://issues.apache.org/jira/browse/FLINK-16554
> Project: Flink
>  Issue Type: Task
>  Components: Runtime / Task
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> StreamTask is currently 1400+ LOC.
> We can cut it to 1100+ by simply extracting these static classes into 
> separate files (a sketch of the extraction follows this list):
>  * `CheckpointingOperation`
>  * `AsyncCheckpointRunnable`
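
A hedged sketch of the mechanical change (illustrative code, not the real 
classes): each static nested class moves out of {{StreamTask.java}} into its own 
top-level, package-private file, with no behavioral change.

{code:java}
// Before: nested inside StreamTask.java
//     protected static final class AsyncCheckpointRunnable implements Runnable { ... }

// After: AsyncCheckpointRunnable.java in the same package
final class AsyncCheckpointRunnable implements Runnable {

    // Hypothetical collaborator, standing in for the real dependencies
    // that would be passed through the constructor after the move.
    private final Runnable checkpointCompletion;

    AsyncCheckpointRunnable(Runnable checkpointCompletion) {
        this.checkpointCompletion = checkpointCompletion;
    }

    @Override
    public void run() {
        checkpointCompletion.run();
    }
}
{code}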





[GitHub] [flink] flinkbot edited a comment on issue #11387: [FLINK-16343][table-planner-blink] Improve exception message when reading an unbounded source in batch mode

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11387: [FLINK-16343][table-planner-blink] 
Improve exception message when reading an unbounded source in batch mode
URL: https://github.com/apache/flink/pull/11387#issuecomment-597995106
 
 
   
   ## CI report:
   
   * 4101781e56c504d53168042efa996215b7d9d7bb Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/152924418) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6214)
 
   * b92a95ede6d30f98203c696c58d17dca9069a940 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11334: [FLINK-16464][sql-client]result-mode 
tableau may shift when content contains Chinese String in SQL CLI
URL: https://github.com/apache/flink/pull/11334#issuecomment-595841256
 
 
   
   ## CI report:
   
   * 71e99474dc27f501c2c512e80761edd2057c00d0 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152927759) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6215)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #11388: [FLINK-16441][SQL] SQL Environment Configuration section will also be set in flink conf

2020-03-11 Thread GitBox
flinkbot commented on issue #11388: [FLINK-16441][SQL] SQL Environment 
Configuration section will also be set in flink conf
URL: https://github.com/apache/flink/pull/11388#issuecomment-598024633
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 0b39287557615cc135e3d57384c6f40c3069d428 (Thu Mar 12 
06:15:15 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] gyfora commented on issue #11388: [FLINK-16441][SQL] SQL Environment Configuration section will also be set in flink conf

2020-03-11 Thread GitBox
gyfora commented on issue #11388: [FLINK-16441][SQL] SQL Environment 
Configuration section will also be set in flink conf
URL: https://github.com/apache/flink/pull/11388#issuecomment-598024337
 
 
   cc @wuchong 
   
   I still need to add a test case + some docs, just opening the PR for comments




[GitHub] [flink] gyfora opened a new pull request #11388: [FLINK-16441][SQL] SQL Environment Configuration section will also be set in flink conf

2020-03-11 Thread GitBox
gyfora opened a new pull request #11388: [FLINK-16441][SQL] SQL Environment 
Configuration section will also be set in flink conf
URL: https://github.com/apache/flink/pull/11388
 
 
   
   
   ## What is the purpose of the change
   
   Allow users to override regular Flink configuration settings from the SQL 
Client env file.
   The `configuration` section of the env yaml will be set in the Flink 
Configuration used to create the command line, executor, etc., in addition to 
the existing behavior of setting it as table config.
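   
   A minimal sketch of the intended merge semantics, assuming the env file's 
`configuration` section has been parsed into a `Map<String, String>` (names 
below are illustrative, not the actual `ExecutionContext` code):
   
   ```java
   import org.apache.flink.configuration.Configuration;
   
   import java.util.Map;
   
   public final class EnvConfigurationMergeSketch {
   
       /**
        * Returns a copy of the flink-conf based Configuration with the
        * env-file entries applied on top, so env-file values win.
        */
       public static Configuration merge(
               Configuration flinkConfig, Map<String, String> envConfiguration) {
           Configuration merged = new Configuration(flinkConfig);
           envConfiguration.forEach(merged::setString);
           return merged;
       }
   }
   ```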
   
   ## Brief change log
   
   Move (and simplify) command-line-related behavior from LocalExecutor to the 
ExecutionContext class, where we have all the necessary environment-related 
settings to create the proper Configuration.
   
   ## Verifying this change
   
   **This change still needs to be verified with tests, as it has only been 
tested manually; do not merge yet.**
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): No
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: No
 - The serializers: No
 - The runtime per-record code paths (performance sensitive): No
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: Yes
 - The S3 file system connector: No
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? not documented yet
   




[jira] [Updated] (FLINK-16441) Allow users to override flink-conf parameters from SQL CLI environment

2020-03-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16441:
---
Labels: pull-request-available  (was: )

> Allow users to override flink-conf parameters from SQL CLI environment
> --
>
> Key: FLINK-16441
> URL: https://issues.apache.org/jira/browse/FLINK-16441
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>
> There is currently no way of overriding flink configuration parameters when 
> using the SQL CLI.
> The configuration section of the env yaml should provide a way of doing so as 
> this is a very important requirement for multi-user/multi-app flink client 
> envs.





[jira] [Updated] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-03-11 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

吴彦祖 updated FLINK-16070:

Attachment: image-2020-03-12-14-07-37-429.png

> Blink planner can not extract correct unique key for UpsertStreamTableSink 
> ---
>
> Key: FLINK-16070
> URL: https://issues.apache.org/jira/browse/FLINK-16070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Leonard Xu
>Assignee: godfrey he
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
> Attachments: image-2020-03-12-14-07-37-429.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I reproduced an Elasticsearch6UpsertTableSink issue which a user reported on 
> the mailing list [1]: the Blink planner cannot extract the correct unique key 
> for the following query, while the legacy planner works well.
> {code:sql}
> -- user code
> INSERT INTO ES6_ZHANGLE_OUTPUT
> SELECT aggId, pageId, ts_min as ts,
>     count(case when eventId = 'exposure' then 1 else null end) as expoCnt,
>     count(case when eventId = 'click' then 1 else null end) as clkCnt
> FROM (
>     SELECT
>         'ZL_001' as aggId,
>         pageId,
>         eventId,
>         recvTime,
>         ts2Date(recvTime) as ts_min
>     from kafka_zl_etrack_event_stream
>     where eventId in ('exposure', 'click')
> ) as t1
> group by aggId, pageId, ts_min
> {code}
> I found that the Blink planner cannot extract the correct unique key in 
> `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner 
> works well in 
> `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. 
> A simple ETL job to reproduce this issue can be found in [2].
>  
> [1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]
> [2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]
>  
>  
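
As a hedged illustration of the probing step, assuming a Calcite {{RelNode}} for 
the outer aggregate of the query above (this shows the generic Calcite metadata 
call, not the exact Flink planner call site):

{code:java}
import java.util.Set;

import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.metadata.RelMetadataQuery;
import org.apache.calcite.util.ImmutableBitSet;

final class UniqueKeyProbe {

    // For the query above, a correct result should contain the GROUP BY
    // columns of the outer aggregate: {aggId, pageId, ts_min}.
    static Set<ImmutableBitSet> uniqueKeysOf(RelNode relNode) {
        RelMetadataQuery mq = relNode.getCluster().getMetadataQuery();
        return mq.getUniqueKeys(relNode);
    }
}
{code}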





[GitHub] [flink] flinkbot edited a comment on issue #11317: [FLINK-16433][table-api] TableEnvironmentImpl doesn't clear…

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11317: [FLINK-16433][table-api] 
TableEnvironmentImpl doesn't clear…
URL: https://github.com/apache/flink/pull/11317#issuecomment-595145709
 
 
   
   ## CI report:
   
   * 6cb26ab23d18cd85dda0501861873fa9382332ff UNKNOWN
   * 3bfc593562df2a1b7026ea7474c558edd0472566 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152920061) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6213)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11344: [FLINK-16250][python][ml] Add interfaces for PipelineStage and Pipeline

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11344: [FLINK-16250][python][ml] Add 
interfaces for PipelineStage and Pipeline
URL: https://github.com/apache/flink/pull/11344#issuecomment-596094520
 
 
   
   ## CI report:
   
   * a976d122cca58ccf820360a2bf12a094871d0977 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152781552) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6162)
 
   * 95473512ec290d571496fae32da8d33013c8f56e Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/152931538) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6216)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11351: [FLINK-16404][runtime] Solve the potential deadlock problem when reducing exclusive buffers to zero

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11351: [FLINK-16404][runtime] Solve the 
potential deadlock problem when reducing exclusive buffers to zero
URL: https://github.com/apache/flink/pull/11351#issuecomment-596351676
 
 
   
   ## CI report:
   
   * 715889a35cfcc3aaf1b17f39dadaa86f755cc75d UNKNOWN
   * 548b22258f5e87fd53b45c7f4bb6de40bfd4e6d2 UNKNOWN
   * d282e3cb5692915fdd7a45ea5e193feba823c8c3 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/152916161) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6210)
 
   * 9dba52d31972da8bc11d85737443253f764c6508 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/152931551) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6217)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Assigned] (FLINK-16441) Allow users to override flink-conf parameters from SQL CLI environment

2020-03-11 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-16441:
--

Assignee: Gyula Fora

> Allow users to override flink-conf parameters from SQL CLI environment
> --
>
> Key: FLINK-16441
> URL: https://issues.apache.org/jira/browse/FLINK-16441
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Critical
> Fix For: 1.10.1, 1.11.0
>
>
> There is currently no way of overriding flink configuration parameters when 
> using the SQL CLI.
> The configuration section of the env yaml should provide a way of doing so as 
> this is a very important requirement for multi-user/multi-app flink client 
> envs.





[jira] [Commented] (FLINK-16441) Allow users to override flink-conf parameters from SQL CLI environment

2020-03-11 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057620#comment-17057620
 ] 

Gyula Fora commented on FLINK-16441:


I have a quick fix that sets the contents of the configuration section into the 
Flink Configuration; happy to work on this.
I need to add a test case for it, but otherwise it's a small change.

> Allow users to override flink-conf parameters from SQL CLI environment
> --
>
> Key: FLINK-16441
> URL: https://issues.apache.org/jira/browse/FLINK-16441
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Reporter: Gyula Fora
>Priority: Critical
> Fix For: 1.10.1, 1.11.0
>
>
> There is currently no way of overriding flink configuration parameters when 
> using the SQL CLI.
> The configuration section of the env yaml should provide a way of doing so as 
> this is a very important requirement for multi-user/multi-app flink client 
> envs.





[GitHub] [flink] flinkbot edited a comment on issue #11344: [FLINK-16250][python][ml] Add interfaces for PipelineStage and Pipeline

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11344: [FLINK-16250][python][ml] Add 
interfaces for PipelineStage and Pipeline
URL: https://github.com/apache/flink/pull/11344#issuecomment-596094520
 
 
   
   ## CI report:
   
   * a976d122cca58ccf820360a2bf12a094871d0977 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152781552) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6162)
 
   * 95473512ec290d571496fae32da8d33013c8f56e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11351: [FLINK-16404][runtime] Solve the potential deadlock problem when reducing exclusive buffers to zero

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11351: [FLINK-16404][runtime] Solve the 
potential deadlock problem when reducing exclusive buffers to zero
URL: https://github.com/apache/flink/pull/11351#issuecomment-596351676
 
 
   
   ## CI report:
   
   * 715889a35cfcc3aaf1b17f39dadaa86f755cc75d UNKNOWN
   * 548b22258f5e87fd53b45c7f4bb6de40bfd4e6d2 UNKNOWN
   * d282e3cb5692915fdd7a45ea5e193feba823c8c3 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/152916161) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6210)
 
   * 9dba52d31972da8bc11d85737443253f764c6508 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-16560) StreamExecutionEnvironment configuration is empty when building program via PackagedProgramUtils#createJobGraph

2020-03-11 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-16560:
--
Fix Version/s: 1.11.0
   1.10.1

> StreamExecutionEnvironment configuration is empty when building program via 
> PackagedProgramUtils#createJobGraph
> ---
>
> Key: FLINK-16560
> URL: https://issues.apache.org/jira/browse/FLINK-16560
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> PackagedProgramUtils#createJobGraph(...) is used to generate JobGraph in k8s 
> job mode.
> The problem is that the configuration field of StreamExecutionEnvironment is 
> a newly created, empty one when building the job program. This is because the 
> StreamPlanEnvironment constructor builds on the no-argument constructor of 
> StreamExecutionEnvironment.
> This may lead to an unexpected result when invoking 
> StreamExecutionEnvironment#configure(...) which relies on the configuration. 
> Many configurations in the flink conf file will not be respected, like 
> pipeline.time-characteristic, pipeline.operator-chaining, 
> execution.buffer-timeout, and state backend configs.
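
A minimal sketch of the reported symptom (an assumed flow for illustration; the 
real path goes through {{PackagedProgramUtils#createJobGraph}} and 
{{StreamPlanEnvironment}}):

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ConfigureSymptomSketch {

    public static void main(String[] args) {
        // When the user main runs inside createJobGraph(...), the
        // environment's internal Configuration starts out empty instead
        // of holding the flink-conf.yaml contents.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Configuration overrides = new Configuration();
        overrides.setString("pipeline.operator-chaining", "false");

        // Per the report, configure(...) relies on that internal
        // configuration field, so options expected from flink-conf.yaml
        // (e.g. execution.buffer-timeout) are not respected.
        env.configure(overrides, ConfigureSymptomSketch.class.getClassLoader());
    }
}
{code}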





[jira] [Commented] (FLINK-15290) Need a way to turn off vectorized orc reader for SQL CLI

2020-03-11 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057617#comment-17057617
 ] 

Jingsong Lee commented on FLINK-15290:
--

Hi [~TsReaper], it is done in FLINK-16179

> Need a way to turn off vectorized orc reader for SQL CLI
> 
>
> Key: FLINK-15290
> URL: https://issues.apache.org/jira/browse/FLINK-15290
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Commented] (FLINK-16227) Streaming bucketing end-to-end test / test_streaming_bucketing.sh unstable

2020-03-11 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057614#comment-17057614
 ] 

Zhijiang commented on FLINK-16227:
--

Another failed nightly cron run: 
[https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6205&view=logs&j=68a897ab-3047-5660-245a-cce8f83859f6&t=d47e27f5-9721-5d5f-1cf3-62adbf3d115d]

> Streaming bucketing end-to-end test / test_streaming_bucketing.sh unstable
> --
>
> Key: FLINK-16227
> URL: https://issues.apache.org/jira/browse/FLINK-16227
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> This nightly cron job has failed: 
> https://travis-ci.org/apache/flink/jobs/653454540
> {code}
> ==
> Running 'Streaming bucketing end-to-end test'
> ==
> TEST_DATA_DIR: 
> /home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-05739414867
> Flink dist directory: 
> /home/travis/build/apache/flink/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> Setting up SSL with: internal JDK dynamic
> Using SAN 
> dns:travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7,ip:10.20.0.145,ip:172.17.0.1
> Certificate was added to keystore
> Certificate was added to keystore
> Certificate reply was installed in keystore
> MAC verified OK
> Setting up SSL with: rest JDK dynamic
> Using SAN 
> dns:travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7,ip:10.20.0.145,ip:172.17.0.1
> Certificate was added to keystore
> Certificate was added to keystore
> Certificate reply was installed in keystore
> MAC verified OK
> Mutual ssl auth: false
> Starting cluster.
> Starting standalonesession daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Starting taskexecutor daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Waiting for Dispatcher REST endpoint to come up...
> Waiting for Dispatcher REST endpoint to come up...
> Waiting for Dispatcher REST endpoint to come up...
> Waiting for Dispatcher REST endpoint to come up...
> Waiting for Dispatcher REST endpoint to come up...
> Dispatcher REST endpoint is up.
> [INFO] 1 instance(s) of taskexecutor are already running on 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Starting taskexecutor daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> [INFO] 2 instance(s) of taskexecutor are already running on 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Starting taskexecutor daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> [INFO] 3 instance(s) of taskexecutor are already running on 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Starting taskexecutor daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Number of running task managers 1 is not yet 4.
> Number of running task managers 2 is not yet 4.
> Number of running task managers has reached 4.
> java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>   at java.lang.Class.getDeclaredMethod(Class.java:2128)
>   at 
> org.apache.flink.api.java.ClosureCleaner.usesCustomSerialization(ClosureCleaner.java:164)
>   at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:89)
>   at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:71)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1820)
>   at 
> org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:188)
>   at 
> org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1328)
>   at 
> org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:84)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>   at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>   at 
> org.apache.flink.client.cli.CliFrontend.execut

[jira] [Updated] (FLINK-16227) Streaming bucketing end-to-end test / test_streaming_bucketing.sh unstable

2020-03-11 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang updated FLINK-16227:
-
Priority: Critical  (was: Major)

> Streaming bucketing end-to-end test / test_streaming_bucketing.sh unstable
> --
>
> Key: FLINK-16227
> URL: https://issues.apache.org/jira/browse/FLINK-16227
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: test-stability
>
> This nightly cron job has failed: 
> https://travis-ci.org/apache/flink/jobs/653454540
> {code}
> ==
> Running 'Streaming bucketing end-to-end test'
> ==
> TEST_DATA_DIR: 
> /home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-05739414867
> Flink dist directory: 
> /home/travis/build/apache/flink/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> Setting up SSL with: internal JDK dynamic
> Using SAN 
> dns:travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7,ip:10.20.0.145,ip:172.17.0.1
> Certificate was added to keystore
> Certificate was added to keystore
> Certificate reply was installed in keystore
> MAC verified OK
> Setting up SSL with: rest JDK dynamic
> Using SAN 
> dns:travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7,ip:10.20.0.145,ip:172.17.0.1
> Certificate was added to keystore
> Certificate was added to keystore
> Certificate reply was installed in keystore
> MAC verified OK
> Mutual ssl auth: false
> Starting cluster.
> Starting standalonesession daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Starting taskexecutor daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Waiting for Dispatcher REST endpoint to come up...
> Waiting for Dispatcher REST endpoint to come up...
> Waiting for Dispatcher REST endpoint to come up...
> Waiting for Dispatcher REST endpoint to come up...
> Waiting for Dispatcher REST endpoint to come up...
> Dispatcher REST endpoint is up.
> [INFO] 1 instance(s) of taskexecutor are already running on 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Starting taskexecutor daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> [INFO] 2 instance(s) of taskexecutor are already running on 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Starting taskexecutor daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> [INFO] 3 instance(s) of taskexecutor are already running on 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Starting taskexecutor daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Number of running task managers 1 is not yet 4.
> Number of running task managers 2 is not yet 4.
> Number of running task managers has reached 4.
> java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>   at java.lang.Class.getDeclaredMethod(Class.java:2128)
>   at 
> org.apache.flink.api.java.ClosureCleaner.usesCustomSerialization(ClosureCleaner.java:164)
>   at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:89)
>   at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:71)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1820)
>   at 
> org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:188)
>   at 
> org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1328)
>   at 
> org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:84)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>   at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
>   

[GitHub] [flink] hequn8128 commented on a change in pull request #11344: [FLINK-16250][python][ml] Add interfaces for PipelineStage and Pipeline

2020-03-11 Thread GitBox
hequn8128 commented on a change in pull request #11344: 
[FLINK-16250][python][ml] Add interfaces for PipelineStage and Pipeline
URL: https://github.com/apache/flink/pull/11344#discussion_r391376778
 
 

 ##
 File path: flink-python/pyflink/ml/api/base.py
 ##
 @@ -0,0 +1,275 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import re
+
+from abc import ABCMeta, abstractmethod
+
+from pyflink.table.table_environment import TableEnvironment
+from pyflink.table.table import Table
+from pyflink.ml.api.param import WithParams, Params
+from py4j.java_gateway import get_field
+
+
+class PipelineStage(WithParams):
+"""
+Base class for a stage in a pipeline. The interface is only a concept, and 
does not have any
+actual functionality. Its subclasses must be either Estimator or 
Transformer. No other classes
+should inherit this interface directly.
+
+Each pipeline stage is with parameters, and requires a public empty 
constructor for
+restoration in Pipeline.
+"""
+
+def __init__(self, params=None):
+if params is None:
+self._params = Params()
+else:
+self._params = params
+
+def get_params(self) -> Params:
+return self._params
+
+def _convert_params_to_java(self, j_pipeline_stage):
+for param in self._params._param_map:
+java_param = self._make_java_param(j_pipeline_stage, param)
+java_value = self._make_java_value(self._params._param_map[param])
+j_pipeline_stage.set(java_param, java_value)
+
+@staticmethod
+def _make_java_param(j_pipeline_stage, param):
+# camel case to snake case
+name = re.sub(r'(?<!^)(?=[A-Z])', '_', param.name).lower()
+return get_field(j_pipeline_stage, name)
+
+def to_json(self) -> str:
+return self.get_params().to_json()
+
+def load_json(self, json: str) -> None:
+self.get_params().load_json(json)
+
+
+class Transformer(PipelineStage):
+"""
+A transformer is a PipelineStage that transforms an input Table to a 
result Table.
+"""
+
+__metaclass__ = ABCMeta
+
+@abstractmethod
+def transform(self, table_env: TableEnvironment, table: Table) -> Table:
+"""
+Applies the transformer on the input table, and returns the result 
table.
+
+:param table_env: the table environment to which the input table is 
bound.
+:param table: the table to be transformed
+:returns: the transformed table
+"""
+raise NotImplementedError()
+
+
+class JavaTransformer(Transformer):
+"""
+Base class for Transformer that wrap Java implementations. Subclasses 
should
+ensure they have the transformer Java object available as j_obj.
+"""
+
+def __init__(self, j_obj):
+super().__init__()
+self._j_obj = j_obj
+
+def transform(self, table_env: TableEnvironment, table: Table) -> Table:
+"""
+Applies the transformer on the input table, and returns the result 
table.
+
+:param table_env: the table environment to which the input table is 
bound.
+:param table: the table to be transformed
+:returns: the transformed table
+"""
+self._convert_params_to_java(self._j_obj)
+return Table(self._j_obj.transform(table_env._j_tenv, table._j_table))
+
+
+class Model(Transformer):
+"""
+Abstract class for models that are fitted by estimators.
+
+A model is an ordinary Transformer except how it is created. While 
ordinary transformers
+are defined by specifying the parameters directly, a model is usually 
generated by an Estimator
+when Estimator.fit(table_env, table) is invoked.
+"""
+
+__metaclass__ = ABCMeta
+
+
+class JavaModel(JavaTransformer, Model):
+"""
+Base class for JavaTransformer that wrap Java implementations.
+Subclasses should ensure they have the model Java object available as 
j_obj.
+"""
+
+
+class Estimator(PipelineStage):
+"""
+Estimators are PipelineStages responsible for training and generating 
machine learning models.
+
+The implementations are expected to take an input table as training
+samples and generate a Model which fits these samples.

[GitHub] [flink] flinkbot edited a comment on issue #11361: [FLINK-15727] Let BashJavaUtils return dynamic configs and JVM parame…

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11361: [FLINK-15727] Let BashJavaUtils 
return dynamic configs and JVM parame…
URL: https://github.com/apache/flink/pull/11361#issuecomment-596976998
 
 
   
   ## CI report:
   
   * 1ffc375e2b14f76c86e56d6d7aa888094a45d475 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152919016) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6212)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11386: [FLINK-16541][doc] Fix document of table.exec.shuffle-mode

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11386: [FLINK-16541][doc] Fix document of 
table.exec.shuffle-mode
URL: https://github.com/apache/flink/pull/11386#issuecomment-597972340
 
 
   
   ## CI report:
   
   * 7b06167c46752d062c2034dbbc00ac30d78aa542 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152917113) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6211)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11387: [FLINK-16343][table-planner-blink] Improve exception message when reading an unbounded source in batch mode

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11387: [FLINK-16343][table-planner-blink] 
Improve exception message when reading an unbounded source in batch mode
URL: https://github.com/apache/flink/pull/11387#issuecomment-597995106
 
 
   
   ## CI report:
   
   * 4101781e56c504d53168042efa996215b7d9d7bb Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/152924418) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6214)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11334: [FLINK-16464][sql-client]result-mode 
tableau may shift when content contains Chinese String in SQL CLI
URL: https://github.com/apache/flink/pull/11334#issuecomment-595841256
 
 
   
   ## CI report:
   
   * 8aa3b784db3bdd4b0c5bc455c96cfa59092cbe6f Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/152845532) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6191)
 
   * 71e99474dc27f501c2c512e80761edd2057c00d0 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/152927759) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6215)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11317: [FLINK-16433][table-api] TableEnvironmentImpl doesn't clear…

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11317: [FLINK-16433][table-api] 
TableEnvironmentImpl doesn't clear…
URL: https://github.com/apache/flink/pull/11317#issuecomment-595145709
 
 
   
   ## CI report:
   
   * 6cb26ab23d18cd85dda0501861873fa9382332ff UNKNOWN
   * 3bfc593562df2a1b7026ea7474c558edd0472566 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152920061) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6213)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11387: [FLINK-16343][table-planner-blink] Improve exception message when reading an unbounded source in batch mode

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11387: [FLINK-16343][table-planner-blink] 
Improve exception message when reading an unbounded source in batch mode
URL: https://github.com/apache/flink/pull/11387#issuecomment-597995106
 
 
   
   ## CI report:
   
   * 4101781e56c504d53168042efa996215b7d9d7bb Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/152924418) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6214)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-15290) Need a way to turn off vectorized orc reader for SQL CLI

2020-03-11 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057605#comment-17057605
 ] 

Caizhi Weng commented on FLINK-15290:
-

Hi [~lzljs3620320],

I also agree that hive source configs should go into the table config, as 
HiveTableSource is a table source and is created and optimized during 
optimization. How is the progress of moving these options into the table 
config? Is it planned to be done in 1.11?

> Need a way to turn off vectorized orc reader for SQL CLI
> 
>
> Key: FLINK-15290
> URL: https://issues.apache.org/jira/browse/FLINK-15290
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[GitHub] [flink] flinkbot edited a comment on issue #11361: [FLINK-15727] Let BashJavaUtils return dynamic configs and JVM parame…

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11361: [FLINK-15727] Let BashJavaUtils 
return dynamic configs and JVM parame…
URL: https://github.com/apache/flink/pull/11361#issuecomment-596976998
 
 
   
   ## CI report:
   
   * 1ffc375e2b14f76c86e56d6d7aa888094a45d475 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152919016) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6212)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11374: [FLINK-16524][python] Optimize the result of FlattenRowCoder and ArrowCoder to generator to eliminate unnecessary function calls

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11374: [FLINK-16524][python] Optimize the 
result of FlattenRowCoder and ArrowCoder to generator to eliminate unnecessary 
function calls
URL: https://github.com/apache/flink/pull/11374#issuecomment-597497985
 
 
   
   ## CI report:
   
   * ef45405e3baca296b9acdfb40308276535713fec Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152913580) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6207)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11386: [FLINK-16541][doc] Fix document of table.exec.shuffle-mode

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11386: [FLINK-16541][doc] Fix document of 
table.exec.shuffle-mode
URL: https://github.com/apache/flink/pull/11386#issuecomment-597972340
 
 
   
   ## CI report:
   
   * 7b06167c46752d062c2034dbbc00ac30d78aa542 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152917113) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6211)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11334: [FLINK-16464][sql-client]result-mode tableau may shift when content contains Chinese String in SQL CLI

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11334: [FLINK-16464][sql-client]result-mode 
tableau may shift when content contains Chinese String in SQL CLI
URL: https://github.com/apache/flink/pull/11334#issuecomment-595841256
 
 
   
   ## CI report:
   
   * 8aa3b784db3bdd4b0c5bc455c96cfa59092cbe6f Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/152845532) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6191)
 
   * 71e99474dc27f501c2c512e80761edd2057c00d0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r391399168
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,339 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.TestingPartitionRequestClient;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.buffer.NetworkBufferPool;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelBuilder;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import javax.annotation.Nullable;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyBufferResponseHeader;
+import static org.apache.flink.runtime.io.network.netty.NettyTestUtil.verifyErrorResponse;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createRemoteInputChannel;
+import static org.apache.flink.runtime.io.network.partition.InputChannelTestUtils.createSingleInputGate;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+
+   private static final int BUFFER_SIZE = 1024;
+
+   private static final int NUMBER_OF_BUFFER_RESPONSES = 5;
+
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   private EmbeddedChannel channel;
+
+   private NetworkBufferPool networkBufferPool;
+
+   private SingleInputGate inputGate;
+
+   private InputChannelID inputChannelId;
+
+   private InputChannelID releasedInputChannelId;
+
+   @Before
+   public void setUp() throws IOException, InterruptedException {
+   CreditBasedPartitionRequestClientHandler handler = new CreditBasedPartitionRequestClientHandler();
+   networkBufferPool = new NetworkBufferPool(
+   NUMBER_OF_BUFFER_RESPONSES,
+   BUFFER_SIZE,
+   NUMBER_OF_BUFFER_RESPONSES);
+   channel = new EmbeddedChannel(new NettyMessageClientDecoderDelegate(handler));
+
+   inputGate = createSingleInputGate(1);
+   RemoteInputChannel inputChannel = createRemoteInputChannel(
+   inputGate,
+   new TestingPartitionRequestClient(),
+   networkBufferPool);
+   inputGate.assignExclusiveSegments();
+   inputChannel.requestSubpartition(0);
+   handler.addInputChannel(inputChannel);
+   inputChannelId = inputChannel.getInputChannelId();
+
+   SingleInputGate releasedInputGate = createSingleInputGate(1);
+   RemoteInputChannel releasedInputChannel = new InputChannelBuilder()
+   .setMemorySegmentProvider(networkBufferPool)
+   .buildRemoteAndS

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r391397625
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 (quotes the same hunk of NettyMessageClientDecoderDelegateTest.java as the previous comment; the review comment body is truncated in the archive)

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r391397736
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 (quotes the same hunk of NettyMessageClientDecoderDelegateTest.java as the previous comments; the review comment body is truncated in the archive)

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r391397108
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,339 @@
+ … (same hunk of NettyMessageClientDecoderDelegateTest.java as quoted above, elided down to the lines this comment refers to) …
+
+   @Before
+   public void setUp() throws IOException, InterruptedException {
 
 Review comment:
   `setUp` -> `setup`?




[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-11 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r391396891
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 (quotes the same hunk of NettyMessageClientDecoderDelegateTest.java as the comments above; the review comment body is truncated in the archive)

[GitHub] [flink] flinkbot edited a comment on issue #11387: [FLINK-16343][table-planner-blink] Improve exception message when reading an unbounded source in batch mode

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11387: [FLINK-16343][table-planner-blink] 
Improve exception message when reading an unbounded source in batch mode
URL: https://github.com/apache/flink/pull/11387#issuecomment-597995106
 
 
   
   ## CI report:
   
   * 4101781e56c504d53168042efa996215b7d9d7bb Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/152924418) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6214)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-16160) Schema#proctime and Schema#rowtime don't work in TableEnvironment#connect code path

2020-03-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16160:

Fix Version/s: (was: 1.10.1)

> Schema#proctime and Schema#rowtime don't work in TableEnvironment#connect 
> code path
> ---
>
> Key: FLINK-16160
> URL: https://issues.apache.org/jira/browse/FLINK-16160
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Zhenghua Gao
>Priority: Critical
> Fix For: 1.11.0
>
>
> In ConnectTableDescriptor#createTemporaryTable, the proctime/rowtime 
> properties are ignored, so the generated catalog table is not correct. We 
> should fix this to let TableEnvironment#connect() support watermarks.
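
For context, a minimal sketch of the affected path, assuming the Flink 1.10 
descriptor API (the connector part is omitted and the field names are 
illustrative):

{code:java}
import org.apache.flink.table.descriptors.Rowtime;
import org.apache.flink.table.descriptors.Schema;

public class ConnectSchemaSketch {
    // Builds a schema whose rowtime property is currently dropped by
    // ConnectTableDescriptor#createTemporaryTable (the bug described above).
    static Schema rowtimeSchema() {
        return new Schema()
                .field("user_name", "STRING")
                .field("ts", "TIMESTAMP(3)")
                .rowtime(new Rowtime()
                        .timestampsFromField("ts")
                        .watermarksPeriodicAscending());
    }
}
{code}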





[jira] [Created] (FLINK-16561) Resuming Externalized Checkpoint (rocks, incremental, no parallelism change) end-to-end test fails on Azure

2020-03-11 Thread Biao Liu (Jira)
Biao Liu created FLINK-16561:


 Summary: Resuming Externalized Checkpoint (rocks, incremental, no 
parallelism change) end-to-end test fails on Azure
 Key: FLINK-16561
 URL: https://issues.apache.org/jira/browse/FLINK-16561
 Project: Flink
  Issue Type: Test
  Components: Tests
Affects Versions: 1.11.0
Reporter: Biao Liu


{quote}Caused by: java.io.IOException: Cannot access file system for 
checkpoint/savepoint path 'file://.'.
at 
org.apache.flink.runtime.state.filesystem.AbstractFsCheckpointStorage.resolveCheckpointPointer(AbstractFsCheckpointStorage.java:233)
at 
org.apache.flink.runtime.state.filesystem.AbstractFsCheckpointStorage.resolveCheckpoint(AbstractFsCheckpointStorage.java:110)
at 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1332)
at 
org.apache.flink.runtime.scheduler.SchedulerBase.tryRestoreExecutionGraphFromSavepoint(SchedulerBase.java:314)
at 
org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:247)
at 
org.apache.flink.runtime.scheduler.SchedulerBase.(SchedulerBase.java:223)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.(DefaultScheduler.java:118)
at 
org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:103)
at 
org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:281)
at 
org.apache.flink.runtime.jobmaster.JobMaster.(JobMaster.java:269)
at 
org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
at 
org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
at 
org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.(JobManagerRunnerImpl.java:146)
... 10 more
Caused by: java.io.IOException: Found local file path with authority '.' in 
path 'file://.'. Hint: Did you forget a slash? (correct path would be 
'file:///.')
at 
org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:441)
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
at 
org.apache.flink.runtime.state.filesystem.AbstractFsCheckpointStorage.resolveCheckpointPointer(AbstractFsCheckpointStorage.java:230)
... 22 more
{quote}

The original log is here, 
https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6073&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=2b7514ee-e706-5046-657b-3430666e7bd9

There are some similar tickets about this case, but the stack here looks 
different. 
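
For reference, a quick sketch of why 'file://.' fails while 'file:///.' would 
not; this is plain java.net.URI parsing, not taken from the test itself:

{code:java}
import java.net.URI;

public class FileUriSketch {
    public static void main(String[] args) {
        // 'file://.' puts '.' into the URI authority, which Flink rejects.
        URI bad = URI.create("file://.");
        System.out.println(bad.getAuthority() + " | " + bad.getPath()); // ". | "
        // 'file:///.' has an empty authority and '/.' as the path.
        URI good = URI.create("file:///.");
        System.out.println(good.getAuthority() + " | " + good.getPath()); // "null | /."
    }
}
{code}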





[jira] [Updated] (FLINK-16021) DescriptorProperties.putTableSchema does not include constraints

2020-03-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16021:

Priority: Critical  (was: Major)

> DescriptorProperties.putTableSchema does not include constraints
> 
>
> Key: FLINK-16021
> URL: https://issues.apache.org/jira/browse/FLINK-16021
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Ecosystem
>Affects Versions: 1.10.0
>Reporter: Timo Walther
>Priority: Critical
>
> FLINK-14978 added primary keys as the first constraints but forgot to add 
> them to the property map as well.
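
A minimal sketch of the round trip that loses the constraint, assuming the 
Flink 1.10 API in which TableSchema.Builder#primaryKey was introduced:

{code:java}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.descriptors.DescriptorProperties;

public class PutTableSchemaSketch {
    public static void main(String[] args) {
        TableSchema schema = TableSchema.builder()
                .field("id", DataTypes.BIGINT().notNull())
                .field("name", DataTypes.STRING())
                .primaryKey("id") // the constraint added by FLINK-14978
                .build();
        DescriptorProperties props = new DescriptorProperties();
        props.putTableSchema("schema", schema);
        // The serialized map contains the fields but no primary-key entries,
        // so the constraint is lost on the round trip (this issue).
        System.out.println(props.asMap());
    }
}
{code}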





[jira] [Updated] (FLINK-16021) DescriptorProperties.putTableSchema does not include constraints

2020-03-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16021:

Fix Version/s: 1.11.0
   1.10.1

> DescriptorProperties.putTableSchema does not include constraints
> 
>
> Key: FLINK-16021
> URL: https://issues.apache.org/jira/browse/FLINK-16021
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Ecosystem
>Affects Versions: 1.10.0
>Reporter: Timo Walther
>Priority: Critical
> Fix For: 1.10.1, 1.11.0
>
>
> FLINK-14978 added primary keys as the first constraints but forgot to add 
> them to the property map as well.





[jira] [Closed] (FLINK-16110) LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" and "TIMESTAMP(3) *PROCTIME*"

2020-03-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-16110.
---
Resolution: Invalid

Closing this issue according to the discussion. Please reopen it if you have 
other thoughts, [~godfreyhe]. 

> LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" and "TIMESTAMP(3) 
> *PROCTIME*"
> 
>
> Key: FLINK-16110
> URL: https://issues.apache.org/jira/browse/FLINK-16110
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: godfrey he
>Priority: Major
>
>  {{TIMESTAMP(3) *ROWTIME*}} is the string representation of 
> {{TimestampType(true, TimestampKind.ROWTIME, 3)}}; however, 
> {{LogicalTypeParser}} can't convert it back to {{TimestampType(true, 
> TimestampKind.ROWTIME, 3)}}. 
> TIMESTAMP(3) *PROCTIME* is the same case.
> the exception looks like:
> {code}
> org.apache.flink.table.api.ValidationException: Could not parse type at 
> position 12: Unexpected token: *ROWTIME*
>  Input type string: TIMESTAMP(3) *ROWTIME*
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:371)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:380)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parseTokens(LogicalTypeParser.java:357)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.access$000(LogicalTypeParser.java:333)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:106)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:116)
> {code}
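
A minimal reproduction sketch, assuming the Flink 1.10 classes named in the 
stack trace above:

{code:java}
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.TimestampKind;
import org.apache.flink.table.types.logical.TimestampType;
import org.apache.flink.table.types.logical.utils.LogicalTypeParser;

public class RowtimeParseSketch {
    public static void main(String[] args) {
        LogicalType rowtime = new TimestampType(true, TimestampKind.ROWTIME, 3);
        String summary = rowtime.asSummaryString(); // "TIMESTAMP(3) *ROWTIME*"
        // Fails with "Unexpected token: *ROWTIME*", as shown above.
        LogicalTypeParser.parse(summary);
    }
}
{code}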





[jira] [Updated] (FLINK-16282) Wrong exception using DESCRIBE SQL command

2020-03-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16282:

Fix Version/s: 1.11.0
   1.10.1

> Wrong exception using DESCRIBE SQL command
> --
>
> Key: FLINK-16282
> URL: https://issues.apache.org/jira/browse/FLINK-16282
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Nico Kruber
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> When trying to describe a table like this
> {code:java}
> Table facttable = tEnv.sqlQuery("DESCRIBE fact_table");
> {code}
> currently, you get a misleading exception which should rather be a "not 
> supported" exception:
> {code}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 1, column 10 to line 1, column 19: Column 
> 'fact_table' not found in any table
>   at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.validateInternal(FlinkPlannerImpl.scala:130)
>   at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:105)
>   at 
> org.apache.flink.table.sqlexec.SqlToOperationConverter.convert(SqlToOperationConverter.java:124)
>   at org.apache.flink.table.planner.ParserImpl.parse(ParserImpl.java:66)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:464)
>   at com.ververica.LateralTableJoin.main(LateralTableJoin.java:92)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 10 to line 1, column 19: Column 'fact_table' not found in any table
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:463)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:834)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:819)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:4841)
>   at 
> org.apache.calcite.sql.validate.DelegatingScope.fullyQualify(DelegatingScope.java:259)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateIdentifier(SqlValidatorImpl.java:2943)
>   at 
> org.apache.calcite.sql.SqlIdentifier.validateExpr(SqlIdentifier.java:297)
>   at org.apache.calcite.sql.SqlOperator.validateCall(SqlOperator.java:407)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateCall(SqlValidatorImpl.java:5304)
>   at org.apache.calcite.sql.SqlCall.validate(SqlCall.java:116)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:943)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:650)
>   at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.validateInternal(FlinkPlannerImpl.scala:126)
>   ... 5 more
> Caused by: org.apache.calcite.sql.validate.SqlValidatorException: Column 
> 'fact_table' not found in any table
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:463)
>   at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:572)
>   ... 17 more
> {code}
>  
>  
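
Until DESCRIBE is properly supported (or rejected with a clear message), a 
workaround sketch is to read the schema through the Table API instead; the 
names follow the Flink 1.10 API:

{code:java}
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.TableSchema;

public class DescribeWorkaroundSketch {
    // Prints the schema of fact_table without going through `DESCRIBE`,
    // which sqlQuery() does not support and mis-reports (this issue).
    static void printSchema(TableEnvironment tEnv) {
        TableSchema schema = tEnv.from("fact_table").getSchema();
        System.out.println(schema);
    }
}
{code}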





[jira] [Updated] (FLINK-16451) listagg with distinct for over window codegen error

2020-03-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16451:

Fix Version/s: 1.11.0
   1.10.1

> listagg with distinct for over window  codegen error
> 
>
> Key: FLINK-16451
> URL: https://issues.apache.org/jira/browse/FLINK-16451
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.2, 1.10.0
>Reporter: jinfeng
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> When I use listagg with distinct and an over window:
> {code:java}
> // code placeholder
> "select listagg(distinct product, '|') over(partition by user order by 
> proctime rows between 200 preceding and current row) as product, user from " 
> + testTable
> {code}
> I got the following exception:
> {code:java}
> // code placeholder
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 3, 
> Size: 3 at java.util.ArrayList.rangeCheck(ArrayList.java:657) at 
> java.util.ArrayList.get(ArrayList.java:433) at 
> java.util.Collections$UnmodifiableList.get(Collections.java:1311) at 
> org.apache.flink.table.types.logical.RowType.getTypeAt(RowType.java:174) at 
> org.apache.flink.table.planner.codegen.GenerateUtils$.generateFieldAccess(GenerateUtils.scala:635)
>  at 
> org.apache.flink.table.planner.codegen.GenerateUtils$.generateFieldAccess(GenerateUtils.scala:620)
>  at 
> org.apache.flink.table.planner.codegen.GenerateUtils$.generateInputAccess(GenerateUtils.scala:524)
>  at 
> org.apache.flink.table.planner.codegen.agg.DistinctAggCodeGen$$anonfun$10.apply(DistinctAggCodeGen.scala:374)
>  at 
> org.apache.flink.table.planner.codegen.agg.DistinctAggCodeGen$$anonfun$10.apply(DistinctAggCodeGen.scala:374)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>  at scala.collection.mutable.ArrayOps$ofInt.foreach(ArrayOps.scala:234) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.mutable.ArrayOps$ofInt.map(ArrayOps.scala:234) at 
> org.apache.flink.table.planner.codegen.agg.DistinctAggCodeGen.generateKeyExpression(DistinctAggCodeGen.scala:374)
>  at 
> org.apache.flink.table.planner.codegen.agg.DistinctAggCodeGen.accumulate(DistinctAggCodeGen.scala:192)
>  at 
> org.apache.flink.table.planner.codegen.agg.AggsHandlerCodeGenerator$$anonfun$12.apply(AggsHandlerCodeGenerator.scala:871)
>  at 
> org.apache.flink.table.planner.codegen.agg.AggsHandlerCodeGenerator$$anonfun$12.apply(AggsHandlerCodeGenerator.scala:871)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>  at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186) at 
> org.apache.flink.table.planner.codegen.agg.AggsHandlerCodeGenerator.genAccumulate(AggsHandlerCodeGenerator.scala:871)
>  at 
> org.apache.flink.table.planner.codegen.agg.AggsHandlerCodeGenerator.generateAggsHandler(AggsHandlerCodeGenerator.scala:329)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecOverAggregate.createBoundedOverProcessFunction(StreamExecOverAggregate.scala:425)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecOverAggregate.translateToPlanInternal(StreamExecOverAggregate.scala:255)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecOverAggregate.translateToPlanInternal(StreamExecOverAggregate.scala:56)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecOverAggregate.translateToPlan(StreamExecOverAggregate.scala:56)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:54)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:39)
> {code}
> But it worked with:
> {code:java}
> // code placeholder
> select listagg(distinct product) over(partition by user order by proctime 
> rows between 200 preceding and current row) as product, user from " + 
> testTable
> {code}
>  
> The exception will be throw  at the below code. 
> {code:java}
> //代码占位符
> private def generateKeyExpression(
> ctx: CodeGeneratorContext,
> generator: E

[jira] [Updated] (FLINK-15669) SQL client can't cancel flink job

2020-03-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15669:

Priority: Critical  (was: Major)

> SQL client can't cancel flink job
> -
>
> Key: FLINK-15669
> URL: https://issues.apache.org/jira/browse/FLINK-15669
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the SQL client, the CLI cancels a query through the {{void 
> cancelQuery(String sessionId, String resultId)}} method in {{Executor}}. 
> However, the {{resultId}} is a random UUID, not the job id, so the CLI 
> client can't cancel a running job.
> related code in {{LocalExecutor}}:
> {code:java}
> private  ResultDescriptor executeQueryInternal(String sessionId, 
> ExecutionContext context, String query) {
>..
>   // store the result with a unique id
>   final String resultId = UUID.randomUUID().toString();
>   resultStore.storeResult(resultId, result);
>   ..
>   // create execution
>   final ProgramDeployer deployer = new ProgramDeployer(
>   configuration, jobName, pipeline);
>   // start result retrieval
>   result.startRetrieval(deployer);
>   return new ResultDescriptor(
>   resultId,
>   removeTimeAttributes(table.getSchema()),
>   result.isMaterialized());
> }
> private  void cancelQueryInternal(ExecutionContext context, String 
> resultId) {
>   ..
>   // stop Flink job
>   try (final ClusterDescriptor clusterDescriptor = 
> context.createClusterDescriptor()) {
>   ClusterClient clusterClient = null;
>   try {
>   // retrieve existing cluster
>   clusterClient = 
> clusterDescriptor.retrieve(context.getClusterId()).getClusterClient();
>   try {
>   //  cancel job through resultId ===
>   clusterClient.cancel(new 
> JobID(StringUtils.hexStringToByte(resultId))).get();
>   } catch (Throwable t) {
>   // the job might have finished earlier
>   }
>   } catch (Exception e) {
>   throw new SqlExecutionException("Could not retrieve or 
> create a cluster.", e);
>   } finally {
>   try {
>   if (clusterClient != null) {
>   clusterClient.close();
>   }
>   } catch (Exception e) {
>   // ignore
>   }
>   }
>   } catch (SqlExecutionException e) {
>   throw e;
>   } catch (Exception e) {
>   throw new SqlExecutionException("Could not locate a cluster.", 
> e);
>   }
> }
> {code}
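
One possible direction, sketched under the assumption that the result store 
can keep the real JobID next to the result id; the class below is 
hypothetical, not the merged fix:

{code:java}
import org.apache.flink.api.common.JobID;

// Hypothetical pairing of the CLI-side result id with the cluster-side JobID,
// so that cancelQueryInternal can cancel the actual job instead of
// reconstructing a JobID from a random UUID.
public class ResultHandle {
    private final String resultId; // random UUID used by the result store
    private final JobID jobId;     // real job id obtained at submission time

    public ResultHandle(String resultId, JobID jobId) {
        this.resultId = resultId;
        this.jobId = jobId;
    }

    public String getResultId() {
        return resultId;
    }

    public JobID getJobId() {
        return jobId;
    }
}
{code}

With such a handle, cancelQueryInternal could call 
clusterClient.cancel(handle.getJobId()) directly.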





[jira] [Updated] (FLINK-16047) Blink planner produces wrong aggregate results with state clean up

2020-03-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16047:

Fix Version/s: 1.11.0

> Blink planner produces wrong aggregate results with state clean up
> --
>
> Key: FLINK-16047
> URL: https://issues.apache.org/jira/browse/FLINK-16047
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Timo Walther
>Priority: Critical
> Fix For: 1.9.3, 1.10.1, 1.11.0
>
>
> It seems that FLINK-10674 has not been ported to the Blink planner.
> Because state clean up happens in processing time, it might be the case that 
> retractions are arriving after the state has been cleaned up. Before these 
> changes, a new accumulator was created and invalid retraction messages were 
> emitted. This change drops retraction messages for which no accumulator 
> exists.
> These lines are missing in 
> {{org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction}}:
> {code}
> if (null == accumulators) {
>   // Don't create a new accumulator for a retraction message. This
>   // might happen if the retraction message is the first message for the
>   // key or after a state clean up.
>   if (!inputC.change) {
> return
>   }
>   // first accumulate message
>   firstRow = true
>   accumulators = function.createAccumulators()
> } else {
>   firstRow = false
> }
> {code}
> The bug has not been verified. I spotted it only by looking at the code.





[GitHub] [flink] flinkbot commented on issue #11387: [FLINK-16343][table-planner-blink] Improve exception message when reading an unbounded source in batch mode

2020-03-11 Thread GitBox
flinkbot commented on issue #11387: [FLINK-16343][table-planner-blink] Improve 
exception message when reading an unbounded source in batch mode
URL: https://github.com/apache/flink/pull/11387#issuecomment-597995106
 
 
   
   ## CI report:
   
   * 4101781e56c504d53168042efa996215b7d9d7bb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-16345) Computed column can not refer time attribute column

2020-03-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16345:

Priority: Critical  (was: Major)

> Computed column can not refer time attribute column 
> 
>
> Key: FLINK-16345
> URL: https://issues.apache.org/jira/browse/FLINK-16345
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Leonard Xu
>Priority: Critical
> Fix For: 1.10.1, 1.11.0
>
>
> If a computed column refers to a time attribute column, the computed column 
> will lose the time attribute and cause validation to fail.
> {code:java}
> CREATE TABLE orders (
>   order_id STRING,
>   order_time TIMESTAMP(3),
>   amount DOUBLE,
>   amount_kg as amount * 1000,
>   -- cannot select the computed column standard_ts, which is derived from 
>   -- column order_time, the column used for the WATERMARK
>   standard_ts as order_time + INTERVAL '8' HOUR,
>   WATERMARK FOR order_time AS order_time
> ) WITH (
>   'connector.type' = 'kafka',
>   'connector.version' = '0.10',
>   'connector.topic' = 'flink_orders',
>   'connector.properties.zookeeper.connect' = 'localhost:2181',
>   'connector.properties.bootstrap.servers' = 'localhost:9092',
>   'connector.properties.group.id' = 'testGroup',
>   'connector.startup-mode' = 'earliest-offset',
>   'format.type' = 'json',
>   'format.derive-schema' = 'true'
> );
> {code}
> The query `select amount_kg from orders` runs normally, while the query 
> `select standard_ts from orders` throws a validation exception with the 
> following message:
> {noformat}
> [ERROR] Could not execute SQL statement. Reason:
>  java.lang.AssertionError: Conversion to relational algebra failed to 
> preserve datatypes:
>  validated type:
>  RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" order_id, TIME 
> ATTRIBUTE(ROWTIME) order_time, DOUBLE amount, DOUBLE amount_kg, TIMESTAMP(3) 
> ts) NOT NULL
>  converted type:
>  RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" order_id, TIME 
> ATTRIBUTE(ROWTIME) order_time, DOUBLE amount, DOUBLE amount_kg, TIME 
> ATTRIBUTE(ROWTIME) ts) NOT NULL
>  rel:
>  LogicalProject(order_id=[$0], order_time=[$1], amount=[$2], amount_kg=[$3], 
> ts=[$4])
>  LogicalWatermarkAssigner(rowtime=[order_time], watermark=[$1])
>  LogicalProject(order_id=[$0], order_time=[$1], amount=[$2], amount_kg=[*($2, 
> 1000)], ts=[+($1, 2880:INTERVAL HOUR)])
>  LogicalTableScan(table=[[default_catalog, default_database, orders, source: 
> [Kafka010TableSource(order_id, order_time, amount)]]])
>  {noformat}
>  
>  





[jira] [Updated] (FLINK-16441) Allow users to override flink-conf parameters from SQL CLI environment

2020-03-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16441:

Priority: Critical  (was: Major)

> Allow users to override flink-conf parameters from SQL CLI environment
> --
>
> Key: FLINK-16441
> URL: https://issues.apache.org/jira/browse/FLINK-16441
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Reporter: Gyula Fora
>Priority: Critical
> Fix For: 1.10.1, 1.11.0
>
>
> There is currently no way of overriding flink configuration parameters when 
> using the SQL CLI.
> The configuration section of the env yaml should provide a way of doing so, 
> as this is a very important requirement for multi-user/multi-app flink 
> client environments.





[jira] [Updated] (FLINK-16560) StreamExecutionEnvironment configuration is empty when building program via PackagedProgramUtils#createJobGraph

2020-03-11 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-16560:

Description: 
PackagedProgramUtils#createJobGraph(...) is used to generate JobGraph in k8s 
job mode.
The problem is that the configuration field of StreamExecutionEnvironment is a 
newly created one when building the job program, because the 
StreamPlanEnvironment constructor builds on the no-argument constructor of 
StreamExecutionEnvironment.

This may lead to unexpected results when invoking 
StreamExecutionEnvironment#configure(...), which relies on that configuration. 
Many settings in the flink conf file will not be respected, such as 
pipeline.time-characteristic, pipeline.operator-chaining, 
execution.buffer-timeout, and state backend configs.

  was:
PackagedProgramUtils#createJobGraph(...) is used to generate JobGraph in k8s 
job mode.
The problem is that the configuration field of StreamExecutionEnvironment is a 
newly created one when building the job program, because the 
StreamPlanEnvironment constructor builds on the no-argument constructor of 
StreamExecutionEnvironment.

This may lead to unexpected results when invoking 
StreamExecutionEnvironment#configure(...), which relies on that configuration.


> StreamExecutionEnvironment configuration is empty when building program via 
> PackagedProgramUtils#createJobGraph
> ---
>
> Key: FLINK-16560
> URL: https://issues.apache.org/jira/browse/FLINK-16560
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Priority: Major
>
> PackagedProgramUtils#createJobGraph(...) is used to generate JobGraph in k8s 
> job mode.
> The problem is that the configuration field of StreamExecutionEnvironment is 
> a newly created one when building the job program, because the 
> StreamPlanEnvironment constructor builds on the no-argument constructor of 
> StreamExecutionEnvironment.
> This may lead to unexpected results when invoking 
> StreamExecutionEnvironment#configure(...), which relies on that 
> configuration. Many settings in the flink conf file will not be respected, 
> such as pipeline.time-characteristic, pipeline.operator-chaining, 
> execution.buffer-timeout, and state backend configs.
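
A rough illustration of the mismatch, assuming the Flink 1.10 API; the point 
is that configure() ends up working against an empty configuration rather 
than the loaded flink-conf values:

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.GlobalConfiguration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EmptyEnvConfigSketch {
    public static void main(String[] args) {
        // Values from the flink conf file that the job-mode path expects.
        Configuration flinkConf = GlobalConfiguration.loadConfiguration();

        // When the program is built through PackagedProgramUtils#createJobGraph,
        // this resolves to a StreamPlanEnvironment created with the no-argument
        // constructor, i.e. an environment holding an empty configuration.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Per this issue, configure() relies on the environment's internal
        // configuration field, which is empty here, so flink-conf settings
        // such as pipeline.operator-chaining may not be respected.
        env.configure(flinkConf, EmptyEnvConfigSketch.class.getClassLoader());
    }
}
{code}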





[jira] [Commented] (FLINK-16441) Allow users to override flink-conf parameters from SQL CLI environment

2020-03-11 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057586#comment-17057586
 ] 

Jark Wu commented on FLINK-16441:
-

Hi [~gyfora], do you want to work on this? I would like to have this 
improvement in the upcoming 1.10.1 release. 

> Allow users to override flink-conf parameters from SQL CLI environment
> --
>
> Key: FLINK-16441
> URL: https://issues.apache.org/jira/browse/FLINK-16441
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Reporter: Gyula Fora
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> There is currently no way of overriding Flink configuration parameters when 
> using the SQL CLI.
> The configuration section of the env yaml should provide a way of doing so, as 
> this is an important requirement for multi-user/multi-app Flink client 
> environments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16560) StreamExecutionEnvironment configuration is empty when building program via PackagedProgramUtils#createJobGraph

2020-03-11 Thread Zhu Zhu (Jira)
Zhu Zhu created FLINK-16560:
---

 Summary: StreamExecutionEnvironment configuration is empty when 
building program via PackagedProgramUtils#createJobGraph
 Key: FLINK-16560
 URL: https://issues.apache.org/jira/browse/FLINK-16560
 Project: Flink
  Issue Type: Bug
  Components: Client / Job Submission
Affects Versions: 1.10.0
Reporter: Zhu Zhu


PackagedProgramUtils#createJobGraph(...) is used to generate the JobGraph in 
k8s job mode.
The problem is that the configuration field of StreamExecutionEnvironment is a 
newly created one when building the job program, because the 
StreamPlanEnvironment constructor is based on the no-argument constructor of 
StreamExecutionEnvironment.

This may lead to unexpected results when invoking 
StreamExecutionEnvironment#configure(...), which relies on that configuration.
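
For illustration, a minimal sketch of the flow that hits this (assuming Flink 1.10 APIs; the paths, parallelism, and job id are placeholders):

{code:java}
import java.io.File;

import org.apache.flink.api.common.JobID;
import org.apache.flink.client.program.PackagedProgram;
import org.apache.flink.client.program.PackagedProgramUtils;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.GlobalConfiguration;
import org.apache.flink.runtime.jobgraph.JobGraph;

public class CreateJobGraphRepro {

    public static void main(String[] args) throws Exception {
        // flink-conf.yaml may set e.g. pipeline.operator-chaining: false
        Configuration flinkConf = GlobalConfiguration.loadConfiguration("/path/to/conf");

        PackagedProgram program = PackagedProgram.newBuilder()
                .setJarFile(new File("/path/to/user-job.jar"))
                .build();

        // The user main() runs in a StreamPlanEnvironment, which is created via
        // the no-argument StreamExecutionEnvironment constructor. Its internal
        // configuration is therefore a fresh, empty instance, so a call to
        // StreamExecutionEnvironment#configure(...) inside main() never sees
        // the settings from flinkConf.
        JobGraph jobGraph = PackagedProgramUtils.createJobGraph(
                program, flinkConf, 1, JobID.generate());
        System.out.println("Generated job graph for " + jobGraph.getJobID());
    }
}
{code}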



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16560) StreamExecutionEnvironment configuration is empty when building program via PackagedProgramUtils#createJobGraph

2020-03-11 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057585#comment-17057585
 ] 

Zhu Zhu commented on FLINK-16560:
-

cc [~aljoscha] [~klion26]

> StreamExecutionEnvironment configuration is empty when building program via 
> PackagedProgramUtils#createJobGraph
> ---
>
> Key: FLINK-16560
> URL: https://issues.apache.org/jira/browse/FLINK-16560
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Priority: Major
>
> PackagedProgramUtils#createJobGraph(...) is used to generate the JobGraph in 
> k8s job mode.
> The problem is that the configuration field of StreamExecutionEnvironment is 
> a newly created one when building the job program, because the 
> StreamPlanEnvironment constructor is based on the no-argument constructor of 
> StreamExecutionEnvironment.
> This may lead to unexpected results when invoking 
> StreamExecutionEnvironment#configure(...), which relies on that configuration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-16294) JDBC connector support create database table automatically

2020-03-11 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057583#comment-17057583
 ] 

Jark Wu edited comment on FLINK-16294 at 3/12/20, 3:53 AM:
---

What about using the title "Support to create non-existed table in database 
automatically when writing data to JDBC connector"?


was (Author: jark):
What about using the title "Support to create non-exist table in database 
automatically when writing data to JDBC connector"?

> JDBC connector support create database table automatically
> --
>
> Key: FLINK-16294
> URL: https://issues.apache.org/jira/browse/FLINK-16294
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Kafka and Elasticsearch connectors now support creating the topic/index 
> automatically when it does not exist in Kafka/Elasticsearch.
> This issue aims to let the JDBC connector create the database table 
> automatically as well, which will be friendlier to users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16294) JDBC connector support create database table automatically

2020-03-11 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057583#comment-17057583
 ] 

Jark Wu commented on FLINK-16294:
-

What about using the title "Support to create non-exist table in database 
automatically when writing data to JDBC connector"?

> JDBC connector support create database table automatically
> --
>
> Key: FLINK-16294
> URL: https://issues.apache.org/jira/browse/FLINK-16294
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Kafka and Elasticsearch connectors now support creating the topic/index 
> automatically when it does not exist in Kafka/Elasticsearch.
> This issue aims to let the JDBC connector create the database table 
> automatically as well, which will be friendlier to users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11387: [FLINK-16343][table-planner-blink] Improve exception message when reading an unbounded source in batch mode

2020-03-11 Thread GitBox
flinkbot commented on issue #11387: [FLINK-16343][table-planner-blink] Improve 
exception message when reading an unbounded source in batch mode
URL: https://github.com/apache/flink/pull/11387#issuecomment-597992527
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 4101781e56c504d53168042efa996215b7d9d7bb (Thu Mar 12 
03:49:58 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong opened a new pull request #11387: [FLINK-16343][table-planner-blink] Improve exception message when reading an unbounded source in batch mode

2020-03-11 Thread GitBox
wuchong opened a new pull request #11387: [FLINK-16343][table-planner-blink] 
Improve exception message when reading an unbounded source in batch mode
URL: https://github.com/apache/flink/pull/11387
 
 
   
   
   
   ## What is the purpose of the change
   
   This is an improvement of the exception message when querying an unbounded 
source in batch mode. 
   Before this commit, the exception was an unsupported-plan error that is 
hard to understand:
   
   ```
   org.apache.flink.table.api.TableException: Cannot generate a valid execution 
plan for the given query: 
   
   FlinkLogicalCalc(select=[ts, a, b], where=[>(a, 1)])
   +- FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, 
src, source: [CollectionTableSource(ts, a, b)]]], fields=[ts, a, b])
   ```
   
   
   ## Brief change log
   
   - validate the `TableSource` at an early stage, i.e. in 
`CatalogSourceTable`, where it is created.
   
   ## Verifying this change
   
   - Add a unit test for the expected exception message
   - Remove `testStreamSourceTableWithRowtime` and `testBatchTableWithRowtime` 
in `CatalogTableITCase`, which are already covered by `testReadWriteCsvUsingDDL`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16343) Improve exception message when reading an unbounded source in batch mode

2020-03-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16343:
---
Labels: pull-request-available  (was: )

> Improve exception message when reading an unbounded source in batch mode
> 
>
> Key: FLINK-16343
> URL: https://issues.apache.org/jira/browse/FLINK-16343
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Kurt Young
>Assignee: Jark Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> We can just ignore watermark in batch mode. 
> cc [~jark]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11351: [FLINK-16404][runtime] Solve the potential deadlock problem when reducing exclusive buffers to zero

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11351: [FLINK-16404][runtime] Solve the 
potential deadlock problem when reducing exclusive buffers to zero
URL: https://github.com/apache/flink/pull/11351#issuecomment-596351676
 
 
   
   ## CI report:
   
   * 715889a35cfcc3aaf1b17f39dadaa86f755cc75d UNKNOWN
   * 548b22258f5e87fd53b45c7f4bb6de40bfd4e6d2 UNKNOWN
   * d282e3cb5692915fdd7a45ea5e193feba823c8c3 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/152916161) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6210)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11238: [FLINK-16304][python] Remove python packages bundled in the flink-python jar.

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11238: [FLINK-16304][python] Remove python 
packages bundled in the flink-python jar.
URL: https://github.com/apache/flink/pull/11238#issuecomment-591923781
 
 
   
   ## CI report:
   
   * a715f80958d35e1507f075982e63e5844d94d893 UNKNOWN
   * 1e1f9de29552097bc7a7973c6532f678dfc1dabe UNKNOWN
   * 4d180da9a55cbbf11f1c1b6b6dc3a0cbe6a101f4 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152916120) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6209)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16343) Improve exception message when reading an unbounded source in batch mode

2020-03-11 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057572#comment-17057572
 ] 

Kurt Young commented on FLINK-16343:


Thanks [~jark], and sorry for the false alarm. I will change this from bug to 
improvement.

> Improve exception message when reading an unbounded source in batch mode
> 
>
> Key: FLINK-16343
> URL: https://issues.apache.org/jira/browse/FLINK-16343
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Kurt Young
>Assignee: Jark Wu
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> We can just ignore watermark in batch mode. 
> cc [~jark]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16343) Improve exception message when reading an unbounded source in batch mode

2020-03-11 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-16343:
---
Fix Version/s: (was: 1.10.1)

> Improve exception message when reading an unbounded source in batch mode
> 
>
> Key: FLINK-16343
> URL: https://issues.apache.org/jira/browse/FLINK-16343
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Kurt Young
>Assignee: Jark Wu
>Priority: Major
> Fix For: 1.11.0
>
>
> We can just ignore watermark in batch mode. 
> cc [~jark]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16343) Improve exception message when reading an unbounded source in batch mode

2020-03-11 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-16343:
---
Issue Type: Improvement  (was: Bug)

> Improve exception message when reading an unbounded source in batch mode
> 
>
> Key: FLINK-16343
> URL: https://issues.apache.org/jira/browse/FLINK-16343
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Kurt Young
>Assignee: Jark Wu
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> We can just ignore watermark in batch mode. 
> cc [~jark]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16343) Improve exception message when reading an unbounded source in batch mode

2020-03-11 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057571#comment-17057571
 ] 

Jark Wu commented on FLINK-16343:
-

The exception is hard to understand, so we should improve the exception 
message. I have updated the title. 
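
For reference, a minimal sketch of what such an early validation could look like in CatalogSourceTable (the class name, method name, and message wording below are assumptions, not the merged change):

{code:java}
import org.apache.flink.table.api.ValidationException;
import org.apache.flink.table.sources.StreamTableSource;
import org.apache.flink.table.sources.TableSource;

final class BoundednessCheck {

    // Fail fast when the source table is created, instead of surfacing a
    // "cannot generate a valid execution plan" error much later.
    static void validateBounded(TableSource<?> tableSource, boolean isBatchMode) {
        if (isBatchMode
                && tableSource instanceof StreamTableSource
                && !((StreamTableSource<?>) tableSource).isBounded()) {
            throw new ValidationException(
                    "Cannot query an unbounded table source in batch mode: "
                            + tableSource.explainSource());
        }
    }
}
{code}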

> Improve exception message when reading an unbounded source in batch mode
> 
>
> Key: FLINK-16343
> URL: https://issues.apache.org/jira/browse/FLINK-16343
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Kurt Young
>Assignee: Jark Wu
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> We can just ignore watermark in batch mode. 
> cc [~jark]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16343) Improve exception message when reading an unbounded source in batch mode

2020-03-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16343:

Summary: Improve exception message when reading an unbounded source in 
batch mode  (was: Failed to read a table with watermark in batch mode)

> Improve exception message when reading an unbounded source in batch mode
> 
>
> Key: FLINK-16343
> URL: https://issues.apache.org/jira/browse/FLINK-16343
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Kurt Young
>Assignee: Jark Wu
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> We can just ignore watermark in batch mode. 
> cc [~jark]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] ifndef-SleePy commented on issue #11347: [FLINK-14971][checkpointing] Make all the non-IO operations in CheckpointCoordinator single-threaded

2020-03-11 Thread GitBox
ifndef-SleePy commented on issue #11347: [FLINK-14971][checkpointing] Make all 
the non-IO operations in CheckpointCoordinator single-threaded
URL: https://github.com/apache/flink/pull/11347#issuecomment-597988749
 
 
   Hey @pnowojski , thanks for reviewing. The e2e test case passed this time, 
so I think it is an unstable test. I will file a ticket for it. Thanks for 
the reminder.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-16343) Failed to read a table with watermark in batch mode

2020-03-11 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057569#comment-17057569
 ] 

Jark Wu edited comment on FLINK-16343 at 3/12/20, 3:29 AM:
---

After some discussion with Kurt, this happens when querying on an unbounded 
source in batch mode. 

The reproduce example:

{code:scala}
  @Test
  def testScanOnUnboundedSource(): Unit = {
    val util = batchTestUtil()
    util.addTable(
      """
        |CREATE TABLE src (
        |  ts TIMESTAMP(3),
        |  a INT,
        |  b DOUBLE,
        |  WATERMARK FOR ts AS ts - INTERVAL '0.001' SECOND
        |) WITH (
        |  'connector' = 'COLLECTION',
        |  'is-bounded' = 'false'
        |)
      """.stripMargin)
    util.verifyPlan("SELECT * FROM src WHERE a > 1")
  }
{code}


The exception is as following:

{code:java}
org.apache.flink.table.api.TableException: Cannot generate a valid execution 
plan for the given query: 

FlinkLogicalCalc(select=[ts, a, b], where=[>(a, 1)])
+- FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, src, 
source: [CollectionTableSource(ts, a, b)]]], fields=[ts, a, b])
{code}



was (Author: jark):
After some discussion with Kurt, this happens when querying on an unbounded 
source in batch mode. The exception is as following:

{code:java}
org.apache.flink.table.api.TableException: Cannot generate a valid execution 
plan for the given query: 

FlinkLogicalCalc(select=[ts, a, b], where=[>(a, 1)])
+- FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, src, 
source: [CollectionTableSource(ts, a, b)]]], fields=[ts, a, b])
{code}


> Failed to read a table with watermark in batch mode
> ---
>
> Key: FLINK-16343
> URL: https://issues.apache.org/jira/browse/FLINK-16343
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Kurt Young
>Assignee: Jark Wu
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> We can just ignore watermark in batch mode. 
> cc [~jark]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16343) Failed to read a table with watermark in batch mode

2020-03-11 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057569#comment-17057569
 ] 

Jark Wu commented on FLINK-16343:
-

After some discussion with Kurt, this happens when querying on an unbounded 
source in batch mode. The exception is as following:

{code:java}
org.apache.flink.table.api.TableException: Cannot generate a valid execution 
plan for the given query: 

FlinkLogicalCalc(select=[ts, a, b], where=[>(a, 1)])
+- FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, src, 
source: [CollectionTableSource(ts, a, b)]]], fields=[ts, a, b])
{code}


> Failed to read a table with watermark in batch mode
> ---
>
> Key: FLINK-16343
> URL: https://issues.apache.org/jira/browse/FLINK-16343
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Kurt Young
>Assignee: Jark Wu
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> We can just ignore watermark in batch mode. 
> cc [~jark]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] ifndef-SleePy commented on a change in pull request #11347: [FLINK-14971][checkpointing] Make all the non-IO operations in CheckpointCoordinator single-threaded

2020-03-11 Thread GitBox
ifndef-SleePy commented on a change in pull request #11347: 
[FLINK-14971][checkpointing] Make all the non-IO operations in 
CheckpointCoordinator single-threaded
URL: https://github.com/apache/flink/pull/11347#discussion_r391383507
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
 ##
 @@ -1018,61 +1019,64 @@ else if (checkpoint != null) {
  * Important: This method should only be called in the checkpoint lock scope.
  *
  * @param pendingCheckpoint to complete
- * @throws CheckpointException if the completion failed
  */
-private void completePendingCheckpoint(PendingCheckpoint pendingCheckpoint) throws CheckpointException {
-    final long checkpointId = pendingCheckpoint.getCheckpointId();
-    final CompletedCheckpoint completedCheckpoint;
-
+private void completePendingCheckpoint(PendingCheckpoint pendingCheckpoint) {
     // As a first step to complete the checkpoint, we register its state with the registry
     Map<OperatorID, OperatorState> operatorStates = pendingCheckpoint.getOperatorStates();
     sharedStateRegistry.registerAll(operatorStates.values());
 
-    try {
-        try {
-            completedCheckpoint = pendingCheckpoint.finalizeCheckpoint();
-            failureManager.handleCheckpointSuccess(pendingCheckpoint.getCheckpointId());
-        }
-        catch (Exception e1) {
-            // abort the current pending checkpoint if we fails to finalize the pending checkpoint.
-            if (!pendingCheckpoint.isDiscarded()) {
-                abortPendingCheckpoint(
-                    pendingCheckpoint,
-                    new CheckpointException(
-                        CheckpointFailureReason.FINALIZE_CHECKPOINT_FAILURE, e1));
+    final CompletableFuture<CompletedCheckpoint> completedCheckpointFuture = pendingCheckpoint.finalizeCheckpoint();
+    completedCheckpointFuture.thenApplyAsync((completedCheckpoint) -> {
+        synchronized (lock) {
+            if (shutdown) {
+                return null;
+            }
+            // the pending checkpoint must be discarded after the finalization
+            Preconditions.checkState(pendingCheckpoint.isDiscarded() && completedCheckpoint != null);
+            try {
+                completedCheckpointStore.addCheckpoint(completedCheckpoint);
+                return completedCheckpoint;
+            } catch (Throwable t) {
+                try {
+                    completedCheckpoint.discardOnFailedStoring();
+                } catch (Exception e) {
+                    LOG.warn("Could not properly discard completed checkpoint {}.", completedCheckpoint.getCheckpointID(), e);
+                }
+                throw new CompletionException(t);
             }
-
-            throw new CheckpointException("Could not finalize the pending checkpoint " + checkpointId + '.',
-                CheckpointFailureReason.FINALIZE_CHECKPOINT_FAILURE, e1);
         }
-
-        // the pending checkpoint must be discarded after the finalization
-        Preconditions.checkState(pendingCheckpoint.isDiscarded() && completedCheckpoint != null);
-
-        try {
-            completedCheckpointStore.addCheckpoint(completedCheckpoint);
-        } catch (Exception exception) {
-            // we failed to store the completed checkpoint. Let's clean up
-            executor.execute(new Runnable() {
-                @Override
-                public void run() {
-                    try {
-                        completedCheckpoint.discardOnFailedStoring();
-                    } catch (Throwable t) {
-                        LOG.warn("Could not properly discard completed checkpoint {}.", completedCheckpoint.getCheckpointID(), t);
-                    }
+    }, executor)
+    .whenCompleteAsync((completedCheckpoint, throwable) -> {
+        synchronized (lock) {

[jira] [Updated] (FLINK-16547) Correct the order to write temporary files in YarnClusterDescriptor#startAppMaster

2020-03-11 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-16547:
-
Summary: Correct the order to write temporary files in 
YarnClusterDescriptor#startAppMaster  (was: Corrent the order to write 
temporary files in YarnClusterDescriptor#startAppMaster)

> Correct the order to write temporary files in 
> YarnClusterDescriptor#startAppMaster
> --
>
> Key: FLINK-16547
> URL: https://issues.apache.org/jira/browse/FLINK-16547
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Canbin Zheng
>Priority: Minor
> Fix For: 1.11.0
>
>
> Currently, in {{YarnClusterDescriptor#startAppMaster}}, we first write out 
> and upload the Flink Configuration file, and only then write out the JobGraph 
> file and set its name into the Flink Configuration object; this later setting 
> is not written into the Flink Configuration file, so it does not take effect 
> on the cluster side.
> Since the client side names the JobGraph file with the default value of the 
> FileJobGraphRetriever.JOB_GRAPH_FILE_PATH option, the cluster side still 
> succeeds in retrieving that file. 
> This ticket proposes to write out the JobGraph file before the Configuration 
> file, to ensure that the FileJobGraphRetriever.JOB_GRAPH_FILE_PATH setting 
> is delivered to the cluster side.
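
A sketch of the proposed ordering (the helper method names below are illustrative, not the actual YarnClusterDescriptor code):

{code:java}
// 1) Write out the JobGraph first and record its file name in the configuration.
File jobGraphFile = writeJobGraphToTmpFile(jobGraph);
configuration.setString(
        FileJobGraphRetriever.JOB_GRAPH_FILE_PATH, jobGraphFile.getName());

// 2) Only write out the configuration afterwards, so that the setting above
//    is part of the flink-conf.yaml that is shipped to the cluster.
File flinkConfFile = writeConfToTmpFile(configuration);

uploadToYarn(jobGraphFile);
uploadToYarn(flinkConfFile);
{code}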



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] KarmaGYZ commented on issue #11361: [FLINK-15727] Let BashJavaUtils return dynamic configs and JVM parame…

2020-03-11 Thread GitBox
KarmaGYZ commented on issue #11361: [FLINK-15727] Let BashJavaUtils return 
dynamic configs and JVM parame…
URL: https://github.com/apache/flink/pull/11361#issuecomment-597986864
 
 
   Thanks for the review @zentol . PR updated.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] ifndef-SleePy commented on a change in pull request #11347: [FLINK-14971][checkpointing] Make all the non-IO operations in CheckpointCoordinator single-threaded

2020-03-11 Thread GitBox
ifndef-SleePy commented on a change in pull request #11347: 
[FLINK-14971][checkpointing] Make all the non-IO operations in 
CheckpointCoordinator single-threaded
URL: https://github.com/apache/flink/pull/11347#discussion_r391381769
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
 ##
 @@ -588,8 +588,8 @@ private void startTriggeringCheckpoint(
     final CompletableFuture<Void> coordinatorCheckpointsComplete = pendingCheckpointCompletableFuture
         .thenComposeAsync((pendingCheckpoint) ->
             OperatorCoordinatorCheckpoints.triggerAndAcknowledgeAllCoordinatorCheckpointsWithCompletion(
-                coordinatorsToCheckpoint, pendingCheckpoint, timer),
-            timer);
+                coordinatorsToCheckpoint, pendingCheckpoint, mainThreadExecutor),
 
 Review comment:
   Oops, I just realized this commit includes some code that belongs to the 
preceding commit. It must have been caused by the conflict resolution. And yes, 
you are right: this commit can be squashed with the preceding one. I will do 
the squashing once all comments are addressed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11351: [FLINK-16404][runtime] Solve the potential deadlock problem when reducing exclusive buffers to zero

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11351: [FLINK-16404][runtime] Solve the 
potential deadlock problem when reducing exclusive buffers to zero
URL: https://github.com/apache/flink/pull/11351#issuecomment-596351676
 
 
   
   ## CI report:
   
   * 715889a35cfcc3aaf1b17f39dadaa86f755cc75d UNKNOWN
   * 548b22258f5e87fd53b45c7f4bb6de40bfd4e6d2 UNKNOWN
   * d282e3cb5692915fdd7a45ea5e193feba823c8c3 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/152916161) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6210)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11317: [FLINK-16433][table-api] TableEnvironmentImpl doesn't clear…

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11317: [FLINK-16433][table-api] 
TableEnvironmentImpl doesn't clear…
URL: https://github.com/apache/flink/pull/11317#issuecomment-595145709
 
 
   
   ## CI report:
   
   * 6cb26ab23d18cd85dda0501861873fa9382332ff UNKNOWN
   * 6b044fa7bc048d0f2381eb80f438bdf0db00bd56 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152811765) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6175)
 
   * 3bfc593562df2a1b7026ea7474c558edd0472566 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/152920061) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6213)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11326: [FLINK-11427][formats] Add protobuf parquet support for StreamingFileSink

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11326: [FLINK-11427][formats] Add protobuf 
parquet support for StreamingFileSink
URL: https://github.com/apache/flink/pull/11326#issuecomment-595547530
 
 
   
   ## CI report:
   
   * 6beb3402b4bd44d60b0b8c893edae0493a7b85ed Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152915173) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6208)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11238: [FLINK-16304][python] Remove python packages bundled in the flink-python jar.

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11238: [FLINK-16304][python] Remove python 
packages bundled in the flink-python jar.
URL: https://github.com/apache/flink/pull/11238#issuecomment-591923781
 
 
   
   ## CI report:
   
   * a715f80958d35e1507f075982e63e5844d94d893 UNKNOWN
   * 1e1f9de29552097bc7a7973c6532f678dfc1dabe UNKNOWN
   * 4d180da9a55cbbf11f1c1b6b6dc3a0cbe6a101f4 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/152916120) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6209)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16446) Add rate limiting feature for FlinkKafkaConsumer

2020-03-11 Thread Zou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057557#comment-17057557
 ] 

Zou commented on FLINK-16446:
-

I have implemented this in our internal version, and I'd like to contribute it 
to the community. 
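
For context, a sketch of the rate limiting API as it exists on the 0.10 consumer, which the universal FlinkKafkaConsumer could mirror (usage assumed from the existing 0.10/0.11 feature, not from a merged change):

{code:java}
import java.util.Properties;

import org.apache.flink.api.common.io.ratelimiting.FlinkConnectorRateLimiter;
import org.apache.flink.api.common.io.ratelimiting.GuavaFlinkConnectorRateLimiter;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;

public class RateLimitedKafkaSource {

    public static FlinkKafkaConsumer010<String> create() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "rate-limited-group");

        FlinkKafkaConsumer010<String> consumer =
                new FlinkKafkaConsumer010<>("topic", new SimpleStringSchema(), props);

        // Global consumption limit in bytes per second, divided among subtasks.
        FlinkConnectorRateLimiter rateLimiter = new GuavaFlinkConnectorRateLimiter();
        rateLimiter.setRate(1024 * 1024);
        consumer.setRateLimiter(rateLimiter);
        return consumer;
    }
}
{code}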

> Add rate limiting feature for FlinkKafkaConsumer
> 
>
> Key: FLINK-16446
> URL: https://issues.apache.org/jira/browse/FLINK-16446
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Zou
>Priority: Major
>
> There is a rate limiting feature in FlinkKafkaConsumer010 and 
> FlinkKafkaConsumer011, but not in the universal FlinkKafkaConsumer. We could 
> add this feature to FlinkKafkaConsumer as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16558) Reword Stateful Functions doc's tagline

2020-03-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16558:
---
Labels: pull-request-available  (was: )

> Reword Stateful Functions doc's tagline 
> 
>
> Key: FLINK-16558
> URL: https://issues.apache.org/jira/browse/FLINK-16558
> Project: Flink
>  Issue Type: Task
>  Components: Documentation, Stateful Functions
>Affects Versions: statefun-1.1
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Minor
>  Labels: pull-request-available
>
> The current tagline is "A framework for stateful distributed applications by 
> the original creators of Apache Flink®."
> The part about "by the original creators of Apache Flink" reads a bit 
> out-of-place now, since the project is now maintained by the Apache Flink 
> community.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-statefun] tzulitai opened a new pull request #56: [FLINK-16558] [doc] Reword project website tagline

2020-03-11 Thread GitBox
tzulitai opened a new pull request #56: [FLINK-16558] [doc] Reword project 
website tagline
URL: https://github.com/apache/flink-statefun/pull/56
 
 
   This PR changes the project tagline in the docs to:
   
   
![image](https://user-images.githubusercontent.com/5284370/76483039-ed52a880-6450-11ea-974f-eb8caaabf7df.png)
   
   It previously read: "A framework for stateful distributed applications by 
the original creators of Apache Flink®." which seems a bit outdated now, since 
the project is maintained by the community as a whole.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16559) Cannot create Hive avro table in test

2020-03-11 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057553#comment-17057553
 ] 

Rui Li commented on FLINK-16559:


This is due to an avro conflict between hive-exec and 
flink-shaded-hadoop-2-uber. In flink-shaded-hadoop-2-uber, 
{{org/codehaus/jackson/JsonNode}} is relocated, so if {{Schema$Field}} is 
loaded from flink-shaded-hadoop-2-uber, its constructor no longer matches the 
signature that hive-exec expects and we get this NoSuchMethodError.
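
One quick way to confirm which jar wins on the test classpath (a generic debugging snippet, not from the ticket):

{code:java}
// Print the location of the jar that actually provides avro's Schema$Field.
// If this prints the flink-shaded-hadoop-2-uber jar, the relocated jackson
// inside it is what breaks the constructor signature hive-exec expects.
Class<?> field = Class.forName("org.apache.avro.Schema$Field");
System.out.println(field.getProtectionDomain().getCodeSource().getLocation());
{code}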

> Cannot create Hive avro table in test
> -
>
> Key: FLINK-16559
> URL: https://issues.apache.org/jira/browse/FLINK-16559
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Tests
>Affects Versions: 1.10.0
>Reporter: Rui Li
>Priority: Major
>
> Trying to create a Hive avro table will hit the following exception:
> {noformat}
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.avro.Schema$Field.<init>(Ljava/lang/String;Lorg/apache/avro/Schema;Ljava/lang/String;Lorg/codehaus/jackson/JsonNode;)V
>   at 
> org.apache.hadoop.hive.serde2.avro.TypeInfoToSchema.createAvroField(TypeInfoToSchema.java:76)
>   at 
> org.apache.hadoop.hive.serde2.avro.TypeInfoToSchema.convert(TypeInfoToSchema.java:61)
>   at 
> org.apache.hadoop.hive.serde2.avro.AvroSerDe.getSchemaFromCols(AvroSerDe.java:170)
>   at 
> org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:114)
>   at 
> org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:83)
>   at 
> org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:533)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:449)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:436)
>   at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:281)
>   at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:263)
>   at 
> org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:641)
>   at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:624)
>   at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:831)
> ..
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hequn8128 commented on a change in pull request #11344: [FLINK-16250][python][ml] Add interfaces for PipelineStage and Pipeline

2020-03-11 Thread GitBox
hequn8128 commented on a change in pull request #11344: 
[FLINK-16250][python][ml] Add interfaces for PipelineStage and Pipeline
URL: https://github.com/apache/flink/pull/11344#discussion_r391376778
 
 

 ##
 File path: flink-python/pyflink/ml/api/base.py
 ##
 @@ -0,0 +1,275 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import re
+
+from abc import ABCMeta, abstractmethod
+
+from pyflink.table.table_environment import TableEnvironment
+from pyflink.table.table import Table
+from pyflink.ml.api.param import WithParams, Params
+from py4j.java_gateway import get_field
+
+
+class PipelineStage(WithParams):
+    """
+    Base class for a stage in a pipeline. The interface is only a concept, and
+    does not have any actual functionality. Its subclasses must be either
+    Estimator or Transformer. No other classes should inherit this interface
+    directly.
+
+    Each pipeline stage is with parameters, and requires a public empty
+    constructor for restoration in Pipeline.
+    """
+
+    def __init__(self, params=None):
+        if params is None:
+            self._params = Params()
+        else:
+            self._params = params
+
+    def get_params(self) -> Params:
+        return self._params
+
+    def _convert_params_to_java(self, j_pipeline_stage):
+        for param in self._params._param_map:
+            java_param = self._make_java_param(j_pipeline_stage, param)
+            java_value = self._make_java_value(self._params._param_map[param])
+            j_pipeline_stage.set(java_param, java_value)
+
+    @staticmethod
+    def _make_java_param(j_pipeline_stage, param):
+        # camel case to snake case
+        name = re.sub(r'(?<!^)(?=[A-Z])', '_', param.name).lower()
+        return get_field(j_pipeline_stage, name)
+
+    def to_json(self) -> str:
+        return self.get_params().to_json()
+
+    def load_json(self, json: str) -> None:
+        self.get_params().load_json(json)
+
+
+class Transformer(PipelineStage):
+    """
+    A transformer is a PipelineStage that transforms an input Table to a
+    result Table.
+    """
+
+    __metaclass__ = ABCMeta
+
+    @abstractmethod
+    def transform(self, table_env: TableEnvironment, table: Table) -> Table:
+        """
+        Applies the transformer on the input table, and returns the result table.
+
+        :param table_env: the table environment to which the input table is bound.
+        :param table: the table to be transformed
+        :returns: the transformed table
+        """
+        raise NotImplementedError()
+
+
+class JavaTransformer(Transformer):
+    """
+    Base class for Transformer that wrap Java implementations. Subclasses should
+    ensure they have the transformer Java object available as j_obj.
+    """
+
+    def __init__(self, j_obj):
+        super().__init__()
+        self._j_obj = j_obj
+
+    def transform(self, table_env: TableEnvironment, table: Table) -> Table:
+        """
+        Applies the transformer on the input table, and returns the result table.
+
+        :param table_env: the table environment to which the input table is bound.
+        :param table: the table to be transformed
+        :returns: the transformed table
+        """
+        self._convert_params_to_java(self._j_obj)
+        return Table(self._j_obj.transform(table_env._j_tenv, table._j_table))
+
+
+class Model(Transformer):
+    """
+    Abstract class for models that are fitted by estimators.
+
+    A model is an ordinary Transformer except how it is created. While ordinary
+    transformers are defined by specifying the parameters directly, a model is
+    usually generated by an Estimator when Estimator.fit(table_env, table) is
+    invoked.
+    """
+
+    __metaclass__ = ABCMeta
+
+
+class JavaModel(JavaTransformer, Model):
+    """
+    Base class for JavaTransformer that wrap Java implementations.
+    Subclasses should ensure they have the model Java object available as j_obj.
+    """
+
+
+class Estimator(PipelineStage):
+    """
+    Estimators are PipelineStages responsible for training and generating
+    machine learning models.
+
+    The implementations are expected to take an input table as traini


[jira] [Updated] (FLINK-16559) Cannot create Hive avro table in test

2020-03-11 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-16559:
---
Description: 
Trying to create a Hive avro table will hit the following exception:
{noformat}
Caused by: java.lang.NoSuchMethodError: 
org.apache.avro.Schema$Field.<init>(Ljava/lang/String;Lorg/apache/avro/Schema;Ljava/lang/String;Lorg/codehaus/jackson/JsonNode;)V
at 
org.apache.hadoop.hive.serde2.avro.TypeInfoToSchema.createAvroField(TypeInfoToSchema.java:76)
at 
org.apache.hadoop.hive.serde2.avro.TypeInfoToSchema.convert(TypeInfoToSchema.java:61)
at 
org.apache.hadoop.hive.serde2.avro.AvroSerDe.getSchemaFromCols(AvroSerDe.java:170)
at 
org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:114)
at 
org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:83)
at 
org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:533)
at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:449)
at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:436)
at 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:281)
at 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:263)
at 
org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:641)
at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:624)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:831)
..
{noformat}

> Cannot create Hive avro table in test
> -
>
> Key: FLINK-16559
> URL: https://issues.apache.org/jira/browse/FLINK-16559
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Tests
>Affects Versions: 1.10.0
>Reporter: Rui Li
>Priority: Major
>
> Trying to create a Hive avro table will hit the following exception:
> {noformat}
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.avro.Schema$Field.<init>(Ljava/lang/String;Lorg/apache/avro/Schema;Ljava/lang/String;Lorg/codehaus/jackson/JsonNode;)V
>   at 
> org.apache.hadoop.hive.serde2.avro.TypeInfoToSchema.createAvroField(TypeInfoToSchema.java:76)
>   at 
> org.apache.hadoop.hive.serde2.avro.TypeInfoToSchema.convert(TypeInfoToSchema.java:61)
>   at 
> org.apache.hadoop.hive.serde2.avro.AvroSerDe.getSchemaFromCols(AvroSerDe.java:170)
>   at 
> org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:114)
>   at 
> org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:83)
>   at 
> org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:533)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:449)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:436)
>   at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:281)
>   at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:263)
>   at 
> org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:641)
>   at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:624)
>   at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:831)
> ..
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11374: [FLINK-16524][python] Optimize the result of FlattenRowCoder and ArrowCoder to generator to eliminate unnecessary function calls

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11374: [FLINK-16524][python] Optimize the 
result of FlattenRowCoder and ArrowCoder to generator to eliminate unnecessary 
function calls
URL: https://github.com/apache/flink/pull/11374#issuecomment-597497985
 
 
   
   ## CI report:
   
   * ef45405e3baca296b9acdfb40308276535713fec Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152913580) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6207)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 commented on a change in pull request #11344: [FLINK-16250][python][ml] Add interfaces for PipelineStage and Pipeline

2020-03-11 Thread GitBox
hequn8128 commented on a change in pull request #11344: 
[FLINK-16250][python][ml] Add interfaces for PipelineStage and Pipeline
URL: https://github.com/apache/flink/pull/11344#discussion_r391376778
 
 

 ##
 File path: flink-python/pyflink/ml/api/base.py
 ##
 @@ -0,0 +1,275 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import re
+
+from abc import ABCMeta, abstractmethod
+
+from pyflink.table.table_environment import TableEnvironment
+from pyflink.table.table import Table
+from pyflink.ml.api.param import WithParams, Params
+from py4j.java_gateway import get_field
+
+
+class PipelineStage(WithParams):
+"""
+Base class for a stage in a pipeline. The interface is only a concept, and 
does not have any
+actual functionality. Its subclasses must be either Estimator or 
Transformer. No other classes
+should inherit this interface directly.
+
+Each pipeline stage is with parameters, and requires a public empty 
constructor for
+restoration in Pipeline.
+"""
+
+def __init__(self, params=None):
+if params is None:
+self._params = Params()
+else:
+self._params = params
+
+def get_params(self) -> Params:
+return self._params
+
+def _convert_params_to_java(self, j_pipeline_stage):
+for param in self._params._param_map:
+java_param = self._make_java_param(j_pipeline_stage, param)
+java_value = self._make_java_value(self._params._param_map[param])
+j_pipeline_stage.set(java_param, java_value)
+
+@staticmethod
+def _make_java_param(j_pipeline_stage, param):
+# camel case to snake case
+name = re.sub(r'(? str:
+return self.get_params().to_json()
+
+def load_json(self, json: str) -> None:
+self.get_params().load_json(json)
+
+
+class Transformer(PipelineStage):
+    """
+    A transformer is a PipelineStage that transforms an input Table to a result Table.
+    """
+
+    __metaclass__ = ABCMeta
+
+    @abstractmethod
+    def transform(self, table_env: TableEnvironment, table: Table) -> Table:
+        """
+        Applies the transformer on the input table, and returns the result table.
+
+        :param table_env: the table environment to which the input table is bound.
+        :param table: the table to be transformed
+        :returns: the transformed table
+        """
+        raise NotImplementedError()
+
+
+class JavaTransformer(Transformer):
+    """
+    Base class for Transformer that wrap Java implementations. Subclasses should
+    ensure they have the transformer Java object available as j_obj.
+    """
+
+    def __init__(self, j_obj):
+        super().__init__()
+        self._j_obj = j_obj
+
+    def transform(self, table_env: TableEnvironment, table: Table) -> Table:
+        """
+        Applies the transformer on the input table, and returns the result table.
+
+        :param table_env: the table environment to which the input table is bound.
+        :param table: the table to be transformed
+        :returns: the transformed table
+        """
+        self._convert_params_to_java(self._j_obj)
+        return Table(self._j_obj.transform(table_env._j_tenv, table._j_table))
+
+
+class Model(Transformer):
+    """
+    Abstract class for models that are fitted by estimators.
+
+    A model is an ordinary Transformer except how it is created. While ordinary transformers
+    are defined by specifying the parameters directly, a model is usually generated by an Estimator
+    when Estimator.fit(table_env, table) is invoked.
+    """
+
+    __metaclass__ = ABCMeta
+
+
+class JavaModel(JavaTransformer, Model):
+    """
+    Base class for JavaTransformer that wrap Java implementations.
+    Subclasses should ensure they have the model Java object available as j_obj.
+    """
+
+
+class Estimator(PipelineStage):
+    """
+    Estimators are PipelineStages responsible for training and generating machine learning models.
+
+    The implementations are expected to take an input table as traini
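For context, a minimal sketch of how these interfaces compose, assuming the module lands at the path shown in the diff above; the ThresholdFilter stage, the `amount` column, and the threshold are hypothetical illustrations, not part of the PR:

from pyflink.table import Table, TableEnvironment
from pyflink.ml.api.base import Transformer


class ThresholdFilter(Transformer):
    """Toy stage: keeps rows whose (assumed) `amount` column exceeds 10."""

    def transform(self, table_env: TableEnvironment, table: Table) -> Table:
        # filter() accepts a string predicate in the 1.10 Table API
        return table.filter("amount > 10")


def apply_stages(table_env: TableEnvironment, table: Table, stages) -> Table:
    # stages compose by threading each stage's output table into the next
    for stage in stages:
        table = stage.transform(table_env, table)
    return table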

[jira] [Commented] (FLINK-16294) JDBC connector support create database table automatically

2020-03-11 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057546#comment-17057546
 ] 

Leonard Xu commented on FLINK-16294:


Hi [~sjwiesman]

(1) This feature is different from the JDBC catalog; it is general for all catalogs.

(2) This improvement only applies when a user writes data out to a JDBC table and the database table does not exist.

The issue aims to improve the out-of-the-box experience of the JDBC connector.

> JDBC connector support create database table automatically
> --
>
> Key: FLINK-16294
> URL: https://issues.apache.org/jira/browse/FLINK-16294
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Kafka and Elasticsearch connectors already support creating the topic/index
> automatically when it does not exist in Kafka/Elasticsearch.
> This issue aims to let the JDBC connector create the database table
> automatically, which will be more user-friendly.
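
To make the gap concrete, a minimal sketch of today's workflow with the 1.10 descriptor-style JDBC sink; the connection URL, credentials, schema, and table names are illustrative, and a source table named `orders` is assumed to be registered already:

from pyflink.table import BatchTableEnvironment, EnvironmentSettings

# blink batch environment (1.10 API)
settings = EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
t_env = BatchTableEnvironment.create(environment_settings=settings)

# declare a JDBC sink over a MySQL table
t_env.sql_update("""
    CREATE TABLE orders_sink (
        id BIGINT,
        amount DOUBLE
    ) WITH (
        'connector.type' = 'jdbc',
        'connector.url' = 'jdbc:mysql://localhost:3306/shop',
        'connector.table' = 'orders_sink',
        'connector.username' = 'user',
        'connector.password' = 'secret'
    )
""")

# today this INSERT fails at runtime unless `orders_sink` was created in MySQL
# by hand beforehand; with this improvement the connector would issue the
# corresponding CREATE TABLE itself
t_env.sql_update("INSERT INTO orders_sink SELECT id, amount FROM orders")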



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11361: [FLINK-15727] Let BashJavaUtils return dynamic configs and JVM parame…

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11361: [FLINK-15727] Let BashJavaUtils 
return dynamic configs and JVM parame…
URL: https://github.com/apache/flink/pull/11361#issuecomment-596976998
 
 
   
   ## CI report:
   
   * cd09736f8968ac04c685e0bd502a18bd163388bf Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/152594634) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6116)
 
   * 1ffc375e2b14f76c86e56d6d7aa888094a45d475 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/152919016) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6212)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-16559) Cannot create Hive avro table in test

2020-03-11 Thread Rui Li (Jira)
Rui Li created FLINK-16559:
--

 Summary: Cannot create Hive avro table in test
 Key: FLINK-16559
 URL: https://issues.apache.org/jira/browse/FLINK-16559
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive, Tests
Affects Versions: 1.10.0
Reporter: Rui Li






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11317: [FLINK-16433][table-api] TableEnvironmentImpl doesn't clear…

2020-03-11 Thread GitBox
flinkbot edited a comment on issue #11317: [FLINK-16433][table-api] 
TableEnvironmentImpl doesn't clear…
URL: https://github.com/apache/flink/pull/11317#issuecomment-595145709
 
 
   
   ## CI report:
   
   * 6cb26ab23d18cd85dda0501861873fa9382332ff UNKNOWN
   * 6b044fa7bc048d0f2381eb80f438bdf0db00bd56 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/152811765) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6175)
 
   * 3bfc593562df2a1b7026ea7474c558edd0472566 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

