[jira] [Comment Edited] (FLINK-16380) AZP: Python test fails on jdk11 nightly test (misc profile)

2020-03-02 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049164#comment-17049164
 ] 

Robert Metzger edited comment on FLINK-16380 at 3/2/20 12:23 PM:
----------------------------------------

I suspect an issue with our Azure setup, as this error is not occurring on Travis 
(reference: https://travis-ci.org/apache/flink/jobs/656930995)


was (Author: rmetzger):
I suspect an issue with our Azure setup, as this error is not occurring on Travis

> AZP: Python test fails on jdk11 nightly test (misc profile)
> ---
>
> Key: FLINK-16380
> URL: https://issues.apache.org/jira/browse/FLINK-16380
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>
> Logs: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5779&view=logs&j=d5dbfc72-24cf-5a8f-e213-1ae80d4b2df8&t=cb83ed8c-7d59-59ba-b58d-25e43fbaa4b2
> {code}
> - Captured stderr call 
> -
> Error: A JNI error has occurred, please check your installation and try again
> Exception in thread "main" java.lang.UnsupportedClassVersionError: 
> org/apache/flink/client/python/PythonGatewayServer has been compiled by a 
> more recent version of the Java Runtime (class file version 55.0), this 
> version of the Java Runtime only recognizes class file versions up to 52.0
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:757)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
>   at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)
> ___ StreamTableWindowTests.test_over_window 
> 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16378) Disable JDK 11 Docker tests on AZP

2020-03-02 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-16378:


Assignee: Robert Metzger

> Disable JDK 11 Docker tests on AZP
> --
>
> Key: FLINK-16378
> URL: https://issues.apache.org/jira/browse/FLINK-16378
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>
> Build log: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5779&view=logs&j=eec879f1-c5a2-5810-2b49-ba5c6bfecb27&t=484f04d6-55db-5161-9f93-391b1677737d
> {code}Error: A JNI error has occurred, please check your installation and try 
> again
> Exception in thread "main" java.lang.UnsupportedClassVersionError: 
> org/apache/flink/client/cli/CliFrontend has been compiled by a more recent 
> version of the Java Runtime (class file version 55.0), this version of the 
> Java Runtime only recognizes class file versions up to 52.0
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)
> Running the job failed.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11281: [FLINK-16203][sql] Support JSON_OBJECT for blink planner

2020-03-02 Thread GitBox
flinkbot commented on issue #11281: [FLINK-16203][sql] Support JSON_OBJECT for 
blink planner
URL: https://github.com/apache/flink/pull/11281#issuecomment-593376597
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit ccac1a1c139b1bb38b7d8f13ca065a100f669bc9 (Mon Mar 02 
12:23:24 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16380) AZP: Python test fails on jdk11 nightly test (misc profile)

2020-03-02 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049164#comment-17049164
 ] 

Robert Metzger commented on FLINK-16380:


I suspect an issue with our Azure setup, as this error is not occurring on Travis

> AZP: Python test fails on jdk11 nightly test (misc profile)
> ---
>
> Key: FLINK-16380
> URL: https://issues.apache.org/jira/browse/FLINK-16380
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>
> Logs: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5779&view=logs&j=d5dbfc72-24cf-5a8f-e213-1ae80d4b2df8&t=cb83ed8c-7d59-59ba-b58d-25e43fbaa4b2
> {code}
> - Captured stderr call 
> -
> Error: A JNI error has occurred, please check your installation and try again
> Exception in thread "main" java.lang.UnsupportedClassVersionError: 
> org/apache/flink/client/python/PythonGatewayServer has been compiled by a 
> more recent version of the Java Runtime (class file version 55.0), this 
> version of the Java Runtime only recognizes class file versions up to 52.0
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:757)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
>   at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)
> ___ StreamTableWindowTests.test_over_window 
> 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16380) AZP: Python test fails on jdk11 nightly test (misc profile)

2020-03-02 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049163#comment-17049163
 ] 

Chesnay Schepler commented on FLINK-16380:
----------------------------------------

Looks like the environment isn't set up correctly, since it is trying to run 
things with Java 8.
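
For reference, class file version 52.0 is Java 8 and 55.0 is Java 11, so the 
class was compiled for JDK 11 but executed on a Java 8 runtime. A minimal 
sketch for checking what a given .class file was compiled for (the file path 
is illustrative):

{code}
// Bytes 0-3 of a class file are the 0xCAFEBABE magic, bytes 4-5 the minor
// version, and bytes 6-7 the major version (52 = Java 8, 55 = Java 11).
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassVersionCheck {
    public static void main(String[] args) throws IOException {
        // Illustrative path; point this at the class named in the error.
        try (DataInputStream in = new DataInputStream(
                new FileInputStream("PythonGatewayServer.class"))) {
            int magic = in.readInt();           // expect 0xCAFEBABE
            int minor = in.readUnsignedShort();
            int major = in.readUnsignedShort();
            System.out.printf("magic=0x%X, class file version %d.%d (Java %d)%n",
                    magic, major, minor, major - 44);
        }
    }
}
{code}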

> AZP: Python test fails on jdk11 nightly test (misc profile)
> ---
>
> Key: FLINK-16380
> URL: https://issues.apache.org/jira/browse/FLINK-16380
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>
> Logs: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5779&view=logs&j=d5dbfc72-24cf-5a8f-e213-1ae80d4b2df8&t=cb83ed8c-7d59-59ba-b58d-25e43fbaa4b2
> {code}
> - Captured stderr call 
> -
> Error: A JNI error has occurred, please check your installation and try again
> Exception in thread "main" java.lang.UnsupportedClassVersionError: 
> org/apache/flink/client/python/PythonGatewayServer has been compiled by a 
> more recent version of the Java Runtime (class file version 55.0), this 
> version of the Java Runtime only recognizes class file versions up to 52.0
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:757)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
>   at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)
> ___ StreamTableWindowTests.test_over_window 
> 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16378) Disable JDK 11 Docker tests on AZP

2020-03-02 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-16378:
----------------------------------------
Summary: Disable JDK 11 Docker tests on AZP  (was: Kerberized YARN on 
Docker test fails on jdk11 on AZP)

> Disable JDK 11 Docker tests on AZP
> --
>
> Key: FLINK-16378
> URL: https://issues.apache.org/jira/browse/FLINK-16378
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Priority: Major
>
> Build log: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5779&view=logs&j=eec879f1-c5a2-5810-2b49-ba5c6bfecb27&t=484f04d6-55db-5161-9f93-391b1677737d
> {code}Error: A JNI error has occurred, please check your installation and try 
> again
> Exception in thread "main" java.lang.UnsupportedClassVersionError: 
> org/apache/flink/client/cli/CliFrontend has been compiled by a more recent 
> version of the Java Runtime (class file version 55.0), this version of the 
> Java Runtime only recognizes class file versions up to 52.0
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)
> Running the job failed.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16203) Support JSON_OBJECT for blink planner

2020-03-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16203:
----------------------------------------
Labels: pull-request-available  (was: )

> Support JSON_OBJECT for blink planner
> -
>
> Key: FLINK-16203
> URL: https://issues.apache.org/jira/browse/FLINK-16203
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Zili Chen
>Assignee: Forward Xu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (FLINK-16378) Disable JDK 11 Docker tests on AZP

2020-03-02 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-16378:
----------------------------------------

> Disable JDK 11 Docker tests on AZP
> --
>
> Key: FLINK-16378
> URL: https://issues.apache.org/jira/browse/FLINK-16378
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Priority: Major
>
> Build log: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5779&view=logs&j=eec879f1-c5a2-5810-2b49-ba5c6bfecb27&t=484f04d6-55db-5161-9f93-391b1677737d
> {code}Error: A JNI error has occurred, please check your installation and try 
> again
> Exception in thread "main" java.lang.UnsupportedClassVersionError: 
> org/apache/flink/client/cli/CliFrontend has been compiled by a more recent 
> version of the Java Runtime (class file version 55.0), this version of the 
> Java Runtime only recognizes class file versions up to 52.0
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)
> Running the job failed.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] XuQianJin-Stars opened a new pull request #11281: [FLINK-16203][sql] Support JSON_OBJECT for blink planner

2020-03-02 Thread GitBox
XuQianJin-Stars opened a new pull request #11281: [FLINK-16203][sql] Support 
JSON_OBJECT for blink planner
URL: https://github.com/apache/flink/pull/11281
 
 
   ## What is the purpose of the change
   
   Support `JSON_OBJECT` function for blink planner
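
   As a rough sketch (not taken from this PR), assuming the SQL:2016
   `JSON_OBJECT(KEY ... VALUE ...)` syntax the function targets; the `users`
   table and its columns are illustrative:

{code}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class JsonObjectExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().build());
        // Assumes a table named `users` with columns user_name and user_age
        // has been registered beforehand; builds one JSON string per row.
        Table result = tEnv.sqlQuery(
                "SELECT JSON_OBJECT(KEY 'name' VALUE user_name, "
                        + "KEY 'age' VALUE user_age) FROM users");
    }
}
{code}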
   
   ## Brief change log
   
   *(for example:)*
   
   - Introduce `JSON_OBJECT` to `FlinkSqlOperatorTable`
   - Add corresponding test cases
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change added tests in `JsonFunctionsTest.scala` 
   
   *(example:)*
   
   - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
   - *Extended integration test for recovery after master (JobManager) failure*
   - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
   - *Manually verified the change by running a 4-node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
   - Dependencies (does it add or upgrade a dependency): (yes / no)
   - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
   - The serializers: (yes / no / don't know)
   - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
   - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
   - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
   - Does this pull request introduce a new feature? (yes / no)
   - If yes, how is the feature documented? (not applicable / docs / JavaDocs / 
not documented)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-16380) AZP: Python test fails on jdk11 nightly test (misc profile)

2020-03-02 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-16380:
----------------------------------------

 Summary: AZP: Python test fails on jdk11 nightly test (misc 
profile)
 Key: FLINK-16380
 URL: https://issues.apache.org/jira/browse/FLINK-16380
 Project: Flink
  Issue Type: Bug
  Components: Build System / Azure Pipelines
Reporter: Robert Metzger
Assignee: Robert Metzger


Logs: 
https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5779&view=logs&j=d5dbfc72-24cf-5a8f-e213-1ae80d4b2df8&t=cb83ed8c-7d59-59ba-b58d-25e43fbaa4b2

{code}
- Captured stderr call -
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.UnsupportedClassVersionError: 
org/apache/flink/client/python/PythonGatewayServer has been compiled by a more 
recent version of the Java Runtime (class file version 55.0), this version of 
the Java Runtime only recognizes class file versions up to 52.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:757)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)
___ StreamTableWindowTests.test_over_window 

{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11280: [FLINK-16377][table] Support inline user defined functions in expression dsl

2020-03-02 Thread GitBox
flinkbot commented on issue #11280: [FLINK-16377][table] Support inline user 
defined functions in expression dsl
URL: https://github.com/apache/flink/pull/11280#issuecomment-593373960
 
 
   
   ## CI report:
   
   * f5de98e8347bd4652b3ba5f91cd09281d621a72a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11275: [FLINK-16359][table-runtime] Introduce WritableVectors for abstract writing

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11275: [FLINK-16359][table-runtime] 
Introduce WritableVectors for abstract writing
URL: https://github.com/apache/flink/pull/11275#issuecomment-593230700
 
 
   
   ## CI report:
   
   * 2df6f64a7d1a62836521bd02ec0ce77fe2b14efb Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151300922) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5798)
 
   * bb523979ae8f59e3e5985d57cf1661bc8a42260c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11233: [FLINK-16194][k8s] Refactor the Kubernetes decorator design

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11233: [FLINK-16194][k8s] Refactor the 
Kubernetes decorator design
URL: https://github.com/apache/flink/pull/11233#issuecomment-591843120
 
 
   
   ## CI report:
   
   * 0017b8a8930024d0e9ce3355d050565a5007831d UNKNOWN
   * 978c6f00810a66a87325af6454e49e2f084e45be UNKNOWN
   * abfef134eb5940d7831dab4c794d654c2fa70263 UNKNOWN
   * d78c1b1b282a3d7fcfc797bf909dca2b5f2264e4 UNKNOWN
   * 78750a20196de5f01b01b59b1cd15c752bdee5c0 UNKNOWN
   * b088e0e0530fa3178a547b26b7e950165181caaf Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151326230) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5809)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11220: [FLINK-16249][python][ml] Add interfaces for Params, ParamInfo and WithParams

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11220: [FLINK-16249][python][ml] Add 
interfaces for Params, ParamInfo and WithParams
URL: https://github.com/apache/flink/pull/11220#issuecomment-591333787
 
 
   
   ## CI report:
   
   * bc6799fbd1e26ba48a902527d870b95d796aab10 UNKNOWN
   * f7f9e049d2d60ce0c2e6e05c2120b2fbbf980029 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150641649) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5620)
 
   * 2103b9a1eb59a96a7f64d58a6ecc12b97a5bb487 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151345341) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11119: [FLINK-15396][json] Support to ignore parse errors for JSON format

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11119: [FLINK-15396][json] Support to 
ignore parse errors for JSON format
URL: https://github.com/apache/flink/pull/11119#issuecomment-587365636
 
 
   
   ## CI report:
   
   * 68fbf9d61e9c89eb323d7eb5cc08d18f9f852065 UNKNOWN
   * 1ecd2ee1b734a3dc868d20c8c2532447071f0233 UNKNOWN
   * 23df46401002fc6a2fd1800d76c069296c7339a3 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151328780) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5811)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-16374) StreamingKafkaITCase: IOException: error=13, Permission denied

2020-03-02 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049087#comment-17049087
 ] 

Robert Metzger edited comment on FLINK-16374 at 3/2/20 12:02 PM:
----------------------------------------

The error does not occur all the time. This failure happened on the 
"AlibabaCI001-agent2" machine. I'm adding this in case it is a machine-specific 
issue. 
It also failed on "AlibabaCI001-agent1": 
https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5779&view=logs&j=b09cc737-3452-5710-2b65-ee4e507c2164&t=a305ae01-9f74-49a1-ae87-f48844c190ca

I'll keep investigating this ...
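
For context, error=13 is EACCES. Two common causes are a missing executable 
bit on the extracted script and a /tmp mounted with the noexec option. A 
minimal diagnostic sketch (not part of the test), using the path from the 
first failure below:

{code}
import java.io.File;

public class ExecProbe {
    public static void main(String[] args) {
        File script = new File(
                "/tmp/junit5236495846374568650/junit4714535957173883866"
                        + "/bin/start-cluster.sh");
        System.out.println("exists:     " + script.exists());
        System.out.println("executable: " + script.canExecute());
        // If canExecute() returns true but exec still fails with EACCES,
        // check whether the filesystem is mounted noexec on the agent.
    }
}
{code}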


was (Author: rmetzger):
The error does not occur all the time. This failure happened on the 
"AlibabaCI001-agent2" machine. I'm adding this in case it is a machine-specific 
issue.

> StreamingKafkaITCase: IOException: error=13, Permission denied
> --
>
> Key: FLINK-16374
> URL: https://issues.apache.org/jira/browse/FLINK-16374
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> Build: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5792&view=logs&j=25197a20-5964-5b06-5716-045f87dc0ea9&t=0c53f4dc-c81e-5ebb-13b2-08f1994a2d32
> {code}
> 2020-03-02T05:13:23.4758068Z [INFO] Running 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-03-02T05:13:55.8260013Z [ERROR] Tests run: 3, Failures: 0, Errors: 3, 
> Skipped: 0, Time elapsed: 32.346 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-03-02T05:13:55.8262664Z [ERROR] testKafka[0: 
> kafka-version:0.10.2.0](org.apache.flink.tests.util.kafka.StreamingKafkaITCase)
>   Time elapsed: 9.217 s  <<< ERROR!
> 2020-03-02T05:13:55.8264067Z java.io.IOException: Cannot run program 
> "/tmp/junit5236495846374568650/junit4714535957173883866/bin/start-cluster.sh":
>  error=13, Permission denied
> 2020-03-02T05:13:55.8264733Z  at 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
> 2020-03-02T05:13:55.8265242Z Caused by: java.io.IOException: error=13, 
> Permission denied
> 2020-03-02T05:13:55.8265717Z  at 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
> 2020-03-02T05:13:55.8266083Z 
> 2020-03-02T05:13:55.8271420Z [ERROR] testKafka[1: 
> kafka-version:0.11.0.2](org.apache.flink.tests.util.kafka.StreamingKafkaITCase)
>   Time elapsed: 11.228 s  <<< ERROR!
> 2020-03-02T05:13:55.8272670Z java.io.IOException: Cannot run program 
> "/tmp/junit8038960384540194088/junit1280636219654303027/bin/start-cluster.sh":
>  error=13, Permission denied
> 2020-03-02T05:13:55.8273343Z  at 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
> 2020-03-02T05:13:55.8273847Z Caused by: java.io.IOException: error=13, 
> Permission denied
> 2020-03-02T05:13:55.8274418Z  at 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
> 2020-03-02T05:13:55.8274768Z 
> 2020-03-02T05:13:55.8275429Z [ERROR] testKafka[2: 
> kafka-version:2.2.0](org.apache.flink.tests.util.kafka.StreamingKafkaITCase)  
> Time elapsed: 11.89 s  <<< ERROR!
> 2020-03-02T05:13:55.8276386Z java.io.IOException: Cannot run program 
> "/tmp/junit5500905670445852005/junit4695208010500962520/bin/start-cluster.sh":
>  error=13, Permission denied
> 2020-03-02T05:13:55.8277257Z  at 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
> 2020-03-02T05:13:55.8277760Z Caused by: java.io.IOException: error=13, 
> Permission denied
> 2020-03-02T05:13:55.8278228Z  at 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11280: [FLINK-16377][table] Support inline user defined functions in expression dsl

2020-03-02 Thread GitBox
flinkbot commented on issue #11280: [FLINK-16377][table] Support inline user 
defined functions in expression dsl
URL: https://github.com/apache/flink/pull/11280#issuecomment-593369399
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 10d251511e987c85c737c44afdf936006bdaa683 (Mon Mar 02 
12:02:40 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16377) Support inline user defined functions in expression dsl

2020-03-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16377:
----------------------------------------
Labels: pull-request-available  (was: )

> Support inline user defined functions in expression dsl
> ---
>
> Key: FLINK-16377
> URL: https://issues.apache.org/jira/browse/FLINK-16377
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dawidwys opened a new pull request #11280: [FLINK-16377][table] Support inline user defined functions in expression dsl

2020-03-02 Thread GitBox
dawidwys opened a new pull request #11280: [FLINK-16377][table] Support inline 
user defined functions in expression dsl
URL: https://github.com/apache/flink/pull/11280
 
 
   ## What is the purpose of the change
   
   It adds a method `call(UserDefinedFunction function, Object... params)` that 
let users use user defined functions without registering them in a catalog 
before.
   
   It also adds support for the new type inference stack when functions are 
used from the expression dsl.
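
   A hedged sketch of what this enables, assuming the Java expression DSL
   entry points in `org.apache.flink.table.api.Expressions`; the function and
   column names are illustrative:

{code}
import org.apache.flink.table.api.Table;
import org.apache.flink.table.functions.ScalarFunction;

import static org.apache.flink.table.api.Expressions.$;
import static org.apache.flink.table.api.Expressions.call;

public class InlineUdfExample {

    // A trivial scalar function; note it is never registered in a catalog.
    public static class PlusOne extends ScalarFunction {
        public long eval(long x) {
            return x + 1;
        }
    }

    public static Table addOne(Table input) {
        // call(...) takes the function instance directly, as described above.
        return input.select(call(new PlusOne(), $("amount")));
    }
}
{code}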
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   - Extended `ExpressionResolverTest`
   - Extended `FunctionITCase`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (**yes** / no)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / docs / 
**JavaDocs** / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11279: [FLINK-16362] [table] Remove deprecated method in StreamTableSink

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11279: [FLINK-16362] [table] Remove 
deprecated method in StreamTableSink
URL: https://github.com/apache/flink/pull/11279#issuecomment-593309320
 
 
   
   ## CI report:
   
   * 4b6685acc5ab65537e3bd0e9dbd53fef52126e85 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151323146) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5808)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11220: [FLINK-16249][python][ml] Add interfaces for Params, ParamInfo and WithParams

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11220: [FLINK-16249][python][ml] Add 
interfaces for Params, ParamInfo and WithParams
URL: https://github.com/apache/flink/pull/11220#issuecomment-591333787
 
 
   
   ## CI report:
   
   * bc6799fbd1e26ba48a902527d870b95d796aab10 UNKNOWN
   * f7f9e049d2d60ce0c2e6e05c2120b2fbbf980029 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150641649) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5620)
 
   * 2103b9a1eb59a96a7f64d58a6ecc12b97a5bb487 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on issue #11275: [FLINK-16359][table-runtime] Introduce WritableVectors for abstract writing

2020-03-02 Thread GitBox
JingsongLi commented on issue #11275: [FLINK-16359][table-runtime] Introduce 
WritableVectors for abstract writing
URL: https://github.com/apache/flink/pull/11275#issuecomment-593366194
 
 
   > Thanks @JingsongLi for the PR. I left some minor comments. Also noticed 
we're doing some mem copy in methods like 
`HeapDoubleVector::setDoublesFromBinary`, and wonder whether we should add 
some safety checks to make sure the source/dest have sufficient space, so as 
to avoid potential segmentation faults?
   
   Good suggestion! Updated.
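
   A hedged sketch of the kind of safety check discussed, guarding a bulk
   copy such as `setDoublesFromBinary` (the exact checks in the PR may
   differ):

{code}
// Unsafe.copyMemory skips the JVM's bounds checks, so both ranges must be
// validated up front to avoid corrupting memory or segfaulting.
public static void checkCopyBounds(byte[] src, int srcIndex,
        double[] vector, int rowId, int count) {
    if (srcIndex < 0 || count < 0
            || srcIndex + (long) count * 8 > src.length) {
        throw new IndexOutOfBoundsException(
                "source range exceeds src.length=" + src.length);
    }
    if (rowId < 0 || (long) rowId + count > vector.length) {
        throw new IndexOutOfBoundsException(
                "destination range exceeds vector.length=" + vector.length);
    }
}
{code}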


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16378) Kerberized YARN on Docker test fails on jdk11 on AZP

2020-03-02 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049151#comment-17049151
 ] 

Robert Metzger commented on FLINK-16378:


Thanks for pointing to the other ticket. I would still like to use this ticket 
to open a PR for AZP to disable test execution for all container tests on jdk11.

> Kerberized YARN on Docker test fails on jdk11 on AZP
> 
>
> Key: FLINK-16378
> URL: https://issues.apache.org/jira/browse/FLINK-16378
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Priority: Major
>
> Build log: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5779&view=logs&j=eec879f1-c5a2-5810-2b49-ba5c6bfecb27&t=484f04d6-55db-5161-9f93-391b1677737d
> {code}Error: A JNI error has occurred, please check your installation and try 
> again
> Exception in thread "main" java.lang.UnsupportedClassVersionError: 
> org/apache/flink/client/cli/CliFrontend has been compiled by a more recent 
> version of the Java Runtime (class file version 55.0), this version of the 
> Java Runtime only recognizes class file versions up to 52.0
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)
> Running the job failed.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16379) Introduce fromValues in TableEnvironment

2020-03-02 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-16379:


 Summary: Introduce fromValues in TableEnvironment
 Key: FLINK-16379
 URL: https://issues.apache.org/jira/browse/FLINK-16379
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.11.0


Introduce a {{fromValues}} method in {{TableEnvironment}}, similar to the 
{{VALUES}} clause in SQL.
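
A hedged sketch of the SQL analogue and the rough shape the new method might 
take (the {{fromValues}} call below is hypothetical until this ticket is 
implemented):

{code}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class FromValuesSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().build());
        // Today's SQL equivalent of the proposed API:
        Table fromSql = tEnv.sqlQuery(
                "SELECT * FROM (VALUES (1, 'Alice'), (2, 'Bob')) AS t(id, name)");
        // Hypothetical shape of the proposed method (signature not final):
        // Table fromApi = tEnv.fromValues(row(1, "Alice"), row(2, "Bob"));
    }
}
{code}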



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-16355) Inconsistent library versions notice.

2020-03-02 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-16355.

Resolution: Won't Fix

A broad-spectrum issue such as this doesn't provide much value.

Different modules are free to use different versions so long as they relocate 
them, and for others we cannot really fix it due to how the ecosystem works 
(multiple supported Kafka/Hadoop/Hive versions).

> Inconsistent library versions notice.
> -
>
> Key: FLINK-16355
> URL: https://issues.apache.org/jira/browse/FLINK-16355
> Project: Flink
>  Issue Type: Improvement
>Reporter: Kaifeng Huang
>Priority: Major
> Attachments: apache flink.pdf
>
>
> Hi. I have implemented a tool to detect library version inconsistencies. Your 
> project has 9 inconsistent libraries and 9 falsely consistent libraries.
> Take org.apache.hadoop:hadoop-common for example, this library is declared as 
> version 2.4.1 in flink-yarn-tests, 3.1.0 in 
> flink-filesystems/flink-s3-fs-base, 2.7.5 in flink-table/flink-sql-client, 
> etc. Such version inconsistencies may cause unnecessary maintenance effort 
> in the long run. For example, if two modules become inter-dependent, library 
> version conflict may happen. It has already become a common issue and hinders 
> development progress. Thus a version harmonization is necessary.
> Assuming we apply a version harmonization, I calculated the cost of 
> harmonizing to each later version, including the up-to-date one. The cost 
> refers to POM config changes and API invocation changes. Take 
> org.apache.hadoop:hadoop-common for example: if we harmonize all the library 
> versions to 3.1.3, the concern is how much the project code must adapt 
> to the newer library version. We list an effort table to quantify the 
> harmonization cost.
> The effort table is listed below. It shows the overall harmonization effort 
> by module. The columns represent the number of library APIs and API 
> calls (NA, NAC), deleted APIs and API calls (NDA, NDAC), as well as modified 
> APIs and API calls (NMA, NMAC). Modified APIs are those whose call graph 
> differs from the previous version. Take the first row for example, for an 
> upgrade to version 3.1.3: 103 APIs are used in module 
> flink-filesystems/flink-fs-hadoop-shaded, 0 of them are deleted in the 
> recommended version (a deleted API would throw a NoSuchMethodError unless 
> the project is re-compiled), and 55 of them are regarded as modified, which 
> could break the former API contract.
> ||Index||Module||NA(NAC)||NDA(NDAC)||NMA(NMAC)||
> |1|flink-filesystems/flink-fs-hadoop-shaded|103(223)|0(0)|55(115)|
> |2|flink-filesystems/flink-s3-fs-base|2(4)|0(0)|1(1)|
> |3|flink-yarn-tests|0(0)|0(0)|0(0)|
> |4|..|..|..|..|
> We also provide another table showing the potential files that may be 
> affected by library API changes, which could help to spot the concerned 
> API usages and rerun the test cases. The table is listed below.
> ||Module||File||Type||API||
> |flink-filesystems/flink-s3-fs-base|flink-filesystems/flink-s3-fs-base/src/main/java/org/apache/flink/fs/s3/common/writer/S3RecoverableMultipartUploadFactory.java|modify|org.apache.hadoop.fs.Path.isAbsolute()|
> |flink-filesystems/flink-fs-hadoop-shaded|flink-filesystems/flink-fs-hadoop-shaded/src/main/java/org/apache/hadoop/util/VersionInfo.java|modify|org.apache.hadoop.util.VersionInfo._getDate()|
> |flink-filesystems/flink-fs-hadoop-shaded|flink-filesystems/flink-fs-hadoop-shaded/src/main/java/org/apache/hadoop/util/VersionInfo.java|modify|org.apache.hadoop.util.VersionInfo._getBuildVersion()|
> |4|..|..|..|
>  
> As for false consistency, take the log4j:log4j jar for example. The library 
> is declared as version 1.2.17 in all modules, but the declarations are 
> written differently. As components are developed in parallel, updating a 
> single declaration can make the versions inconsistent again, causing the 
> above-mentioned issues.
> If you are interested, you can have a more complete and detailed report in 
> the attached PDF file.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-16015) Refine fallback filesystems to only handle specific filesystems

2020-03-02 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-16015.
----------------------------------------
Fix Version/s: 1.11.0
 Release Note: By default, if there is an official filesystem plugin for a 
given scheme, it will not be allowed to use fallback filesystem factories (like 
Hadoop libraries on the classpath) to load it. Added 
{{fs.allowed-fallback-filesystems}} configuration option to override this 
behaviour.
   Resolution: Fixed

Merged as 91399fe2cd to master branch.
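
A hedged sketch of how the new option could be used to re-allow fallback 
loading for selected schemes; the scheme list is illustrative, and in a 
deployment the option would normally be set in flink-conf.yaml:

{code}
import org.apache.flink.configuration.Configuration;

public class FallbackFsConfig {
    public static Configuration allowFallbacks() {
        Configuration conf = new Configuration();
        // Equivalent flink-conf.yaml entry (illustrative scheme list):
        //   fs.allowed-fallback-filesystems: s3;wasb
        conf.setString("fs.allowed-fallback-filesystems", "s3;wasb");
        return conf;
    }
}
{code}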

> Refine fallback filesystems to only handle specific filesystems
> ---
>
> Key: FLINK-16015
> URL: https://issues.apache.org/jira/browse/FLINK-16015
> Project: Flink
>  Issue Type: Improvement
>  Components: FileSystems
>Affects Versions: 1.10.1, 1.11.0
>Reporter: Arvid Heise
>Assignee: Arvid Heise
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, if no s3 plugin is included, hadoop is used as a fallback, which 
> introduces a wide variety of problems. We should probably only whitelist 
> specific protocols that work well (e.g. hdfs).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-16378) Kerberized YARN on Docker test fails on jdk11 on AZP

2020-03-02 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-16378.

Resolution: Duplicate

> Kerberized YARN on Docker test fails on jdk11 on AZP
> 
>
> Key: FLINK-16378
> URL: https://issues.apache.org/jira/browse/FLINK-16378
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Priority: Major
>
> Build log: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5779&view=logs&j=eec879f1-c5a2-5810-2b49-ba5c6bfecb27&t=484f04d6-55db-5161-9f93-391b1677737d
> {code}Error: A JNI error has occurred, please check your installation and try 
> again
> Exception in thread "main" java.lang.UnsupportedClassVersionError: 
> org/apache/flink/client/cli/CliFrontend has been compiled by a more recent 
> version of the Java Runtime (class file version 55.0), this version of the 
> Java Runtime only recognizes class file versions up to 52.0
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)
> Running the job failed.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zentol commented on a change in pull request #11123: [FLINK-16014][s3] Force usage of SAXParserFactory over XMLReaderFactory in aws-java-sdk-s3.

2020-03-02 Thread GitBox
zentol commented on a change in pull request #11123: [FLINK-16014][s3] Force 
usage of SAXParserFactory over XMLReaderFactory in aws-java-sdk-s3.
URL: https://github.com/apache/flink/pull/11123#discussion_r386345138
 
 

 ##
 File path: flink-filesystems/flink-s3-fs-base/pom.xml
 ##
 @@ -231,6 +231,30 @@ under the License.
true


+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-shade-plugin</artifactId>
+				<executions>
+					<execution>
+						<id>shade-flink</id>
+						<phase>package</phase>
+						<goals>
+							<goal>shade</goal>
+						</goals>
+						<configuration>
+							<filters>
+								<filter>
+									<artifact>com.amazonaws:aws-java-sdk-s3</artifact>
+									<excludes>
+										<exclude>com/amazonaws/services/s3/model/transform/XmlResponsesSaxParser**</exclude>
 Review comment:
   add a comment for why this is done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-16007) Add rules to push down the Java Calls contained in Python Correlate node

2020-03-02 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng closed FLINK-16007.
----------------------------------------
Resolution: Resolved

> Add rules to push down the Java Calls contained in Python Correlate node
> 
>
> Key: FLINK-16007
> URL: https://issues.apache.org/jira/browse/FLINK-16007
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Java Calls contained in Python Correlate node should be extracted to make 
> sure the TableFunction works well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] pnowojski merged pull request #11139: [FLINK-16015][filesystems]Throw an error when a plugin for a known scheme is missing.

2020-03-02 Thread GitBox
pnowojski merged pull request #11139:  [FLINK-16015][filesystems]Throw an error 
when a plugin for a known scheme is missing.
URL: https://github.com/apache/flink/pull/11139
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16007) Add rules to push down the Java Calls contained in Python Correlate node

2020-03-02 Thread Hequn Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049123#comment-17049123
 ] 

Hequn Cheng commented on FLINK-16007:
----------------------------------------

Resolved in 1.11.0 via  f0bdf3179c1deff41eaaccb81544f19150519c30

> Add rules to push down the Java Calls contained in Python Correlate node
> 
>
> Key: FLINK-16007
> URL: https://issues.apache.org/jira/browse/FLINK-16007
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Java Calls contained in Python Correlate node should be extracted to make 
> sure the TableFunction works well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on a change in pull request #11275: [FLINK-16359][table-runtime] Introduce WritableVectors for abstract writing

2020-03-02 Thread GitBox
JingsongLi commented on a change in pull request #11275: 
[FLINK-16359][table-runtime] Introduce WritableVectors for abstract writing
URL: https://github.com/apache/flink/pull/11275#discussion_r386344682
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/vector/heap/HeapIntVector.java
 ##
 @@ -47,4 +49,45 @@ public int getInt(int i) {
 			return dictionary.decodeToInt(dictionaryIds.vector[i]);
 		}
 	}
+
+	@Override
+	public void setInt(int i, int value) {
+		if (i >= vector.length) {
+			throw new RuntimeException();
+		}
+		vector[i] = value;
+	}
+
+	@Override
+	public void setIntsFromBinary(int rowId, int count, byte[] src, int srcIndex) {
+		if (LITTLE_ENDIAN) {
+			UNSAFE.copyMemory(src, BYTE_ARRAY_OFFSET + srcIndex, vector,
+				INT_ARRAY_OFFSET + rowId * 4L, count * 4L);
+		} else {
+			long srcOffset = srcIndex + BYTE_ARRAY_OFFSET;
+			for (int i = 0; i < count; ++i, srcOffset += 4) {
+				vector[i + rowId] = Integer.reverseBytes(UNSAFE.getInt(src, srcOffset));
+			}
+		}
+	}
+
+	@Override
+	public void setInts(int rowId, int count, int value) {
+		if (rowId + count - 1 >= vector.length) {
+			throw new RuntimeException();
 
 Review comment:
   I think we can remove this check and leave the exception to the Java array 
access.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 closed pull request #11242: [FLINK-16007][table-planner][table-planner-blink][python] Add PythonCorrelateSplitRule to push down the Java Calls contained in Python Correlate

2020-03-02 Thread GitBox
hequn8128 closed pull request #11242: 
[FLINK-16007][table-planner][table-planner-blink][python] Add 
PythonCorrelateSplitRule to push down the Java Calls contained in Python 
Correlate node
URL: https://github.com/apache/flink/pull/11242
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-16378) Kerberized YARN on Docker test fails on jdk11 on AZP

2020-03-02 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-16378:
----------------------------------------

 Summary: Kerberized YARN on Docker test fails on jdk11 on AZP
 Key: FLINK-16378
 URL: https://issues.apache.org/jira/browse/FLINK-16378
 Project: Flink
  Issue Type: Bug
  Components: Build System / Azure Pipelines
Reporter: Robert Metzger


Build log: 
https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5779&view=logs&j=eec879f1-c5a2-5810-2b49-ba5c6bfecb27&t=484f04d6-55db-5161-9f93-391b1677737d

{code}
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/flink/client/cli/CliFrontend has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)
Running the job failed.
{code}
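
For reference, class file version 55.0 corresponds to Java 11 and 52.0 to Java 
8, i.e. classes compiled for JDK 11 are being launched on a JDK 8 runtime. A 
small self-contained check of a compiled class's version stamp (an illustrative 
sketch, not part of the build):

{code:java}
import java.io.DataInputStream;
import java.io.FileInputStream;

/** Prints the class file version of the .class file given as argument. */
public class ClassFileVersion {
	public static void main(String[] args) throws Exception {
		try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
			in.readInt();                       // magic 0xCAFEBABE
			int minor = in.readUnsignedShort(); // minor_version
			int major = in.readUnsignedShort(); // major_version: 52 = Java 8, 55 = Java 11
			System.out.println("class file version " + major + "." + minor);
		}
	}
}
{code}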



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on a change in pull request #11275: [FLINK-16359][table-runtime] Introduce WritableVectors for abstract writing

2020-03-02 Thread GitBox
JingsongLi commented on a change in pull request #11275: 
[FLINK-16359][table-runtime] Introduce WritableVectors for abstract writing
URL: https://github.com/apache/flink/pull/11275#discussion_r386343885
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/vector/heap/HeapTimestampVector.java
 ##
 @@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.dataformat.vector.heap;
+
+import org.apache.flink.table.dataformat.SqlTimestamp;
+import org.apache.flink.table.dataformat.vector.writable.WritableTimestampVector;
+
+import java.util.Arrays;
+
+import static org.apache.flink.table.dataformat.SqlTimestamp.fromEpochMillis;
+
+/**
+ * This class represents a nullable timestamp column vector.
+ */
+public class HeapTimestampVector extends AbstractHeapVector implements WritableTimestampVector {
+
+   private static final long serialVersionUID = 1L;
+
+   private final long[] milliseconds;
+   private final int[] nanoOfMilliseconds;
+
+   public HeapTimestampVector(int len) {
+   super(len);
+   this.milliseconds = new long[len];
+   this.nanoOfMilliseconds = new int[len];
+   }
+
+   @Override
+   public SqlTimestamp getTimestamp(int i, int precision) {
 
 Review comment:
   We can use this as an opportunity to optimize the `nanoOfMilliseconds` access.
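   One possible reading of that suggestion, sketched under the assumption that 
`SqlTimestamp.isCompact(precision)` identifies precisions that carry no 
nanosecond part:
   
   ```java
   @Override
   public SqlTimestamp getTimestamp(int i, int precision) {
   	if (SqlTimestamp.isCompact(precision)) {
   		// compact precision: skip the nanoOfMilliseconds lookup entirely
   		return fromEpochMillis(milliseconds[i]);
   	}
   	return fromEpochMillis(milliseconds[i], nanoOfMilliseconds[i]);
   }
   ```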




[GitHub] [flink] JingsongLi commented on a change in pull request #11275: [FLINK-16359][table-runtime] Introduce WritableVectors for abstract writing

2020-03-02 Thread GitBox
JingsongLi commented on a change in pull request #11275: 
[FLINK-16359][table-runtime] Introduce WritableVectors for abstract writing
URL: https://github.com/apache/flink/pull/11275#discussion_r386343426
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/vector/writable/WritableBytesVector.java
 ##
 @@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.dataformat.vector.writable;
+
+import org.apache.flink.table.dataformat.vector.BytesColumnVector;
+
+/**
+ * Writable {@link BytesColumnVector}.
+ */
+public interface WritableBytesVector extends WritableColumnVector, BytesColumnVector {
+
+   /**
+* Set byte[] at rowId with the provided value.
+*/
+   void appendBytes(int rowId, byte[] value, int offset, int length);
 
 Review comment:
   I'll add a comment:
   `Note: values must be appended in ascending rowId order; random appends are 
not supported.`
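   Rendered as javadoc on the interface, the note might look like this (sketch):
   
   ```java
   /**
    * Append byte[] at rowId with the provided value.
    *
    * <p>Note: values must be appended in ascending rowId order;
    * random-access appends are not supported.
    */
   void appendBytes(int rowId, byte[] value, int offset, int length);
   ```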




[GitHub] [flink] hequn8128 commented on issue #11220: [FLINK-16249][python][ml] Add interfaces for Params, ParamInfo and WithParams

2020-03-02 Thread GitBox
hequn8128 commented on issue #11220: [FLINK-16249][python][ml] Add interfaces 
for Params, ParamInfo and WithParams
URL: https://github.com/apache/flink/pull/11220#issuecomment-593360206
 
 
   Hi @dianfu @walterddr @becketqin 
   
   Thanks a lot for your suggestions. I have updated the PR. The changes mainly 
include:
   - Refactor the module structure, i.e., add the pyflink/ml/api and 
pyflink/ml/lib modules and remove the pyflink/mllib module.
   - Add sphinx doc
   - Rename _paramMap to _param_map
   
   I have not made jsonpickle an optional dependency; we can discuss this 
further if you have concerns. Any comments are appreciated. 
   
   Best, Hequn




[GitHub] [flink] flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and read log by name for taskmanager

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and 
read log by name for taskmanager
URL: https://github.com/apache/flink/pull/11250#issuecomment-592336212
 
 
   
   ## CI report:
   
   * e474761e49927bb30bac6a297e992dc2ec98e01a UNKNOWN
   * 54ac8f26c67344a0ed000c882afd66e68c8f6bd0 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151328904) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5812)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11233: [FLINK-16194][k8s] Refactor the Kubernetes decorator design

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11233: [FLINK-16194][k8s] Refactor the 
Kubernetes decorator design
URL: https://github.com/apache/flink/pull/11233#issuecomment-591843120
 
 
   
   ## CI report:
   
   * 0017b8a8930024d0e9ce3355d050565a5007831d UNKNOWN
   * 978c6f00810a66a87325af6454e49e2f084e45be UNKNOWN
   * abfef134eb5940d7831dab4c794d654c2fa70263 UNKNOWN
   * d78c1b1b282a3d7fcfc797bf909dca2b5f2264e4 UNKNOWN
   * 78750a20196de5f01b01b59b1cd15c752bdee5c0 UNKNOWN
   * b088e0e0530fa3178a547b26b7e950165181caaf Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151326230) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5809)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] zhuzhurk commented on issue #10858: [FLINK-15582] Change LegacySchedulerBatchSchedulingTest to DefaultSchedulerBatchSchedulingTest

2020-03-02 Thread GitBox
zhuzhurk commented on issue #10858: [FLINK-15582] Change 
LegacySchedulerBatchSchedulingTest to DefaultSchedulerBatchSchedulingTest
URL: https://github.com/apache/flink/pull/10858#issuecomment-593359435
 
 
   @GJL I'd prefer to replace those legacy scheduling tests if they help for 
the new scheduler. 
   
   Most of the test cases, however, do not test the scheduler itself but just 
depend on a scheduler to test some other behaviors, e.g. 
ExecutionGraphPartitionReleaseTest and ExecutionGraphSuspendTest. For these 
cases I think we can switch to DefaultScheduler, since DefaultScheduler is the 
default scheduler now. 
   
   For the other tests which do test the scheduling, including this one: if you 
feel it's better to keep the legacy-scheduler tests until we decide to remove 
the legacy scheduler, we can postpone rewriting those tests to avoid 
unnecessary duplicate work. For this PR, since I already have the change, I can 
revert it to the previous version, namely having it test both the legacy 
scheduler and the new scheduler.
   
   WDYT?
   




[GitHub] [flink] JingsongLi commented on a change in pull request #11275: [FLINK-16359][table-runtime] Introduce WritableVectors for abstract writing

2020-03-02 Thread GitBox
JingsongLi commented on a change in pull request #11275: 
[FLINK-16359][table-runtime] Introduce WritableVectors for abstract writing
URL: https://github.com/apache/flink/pull/11275#discussion_r386341375
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/vector/heap/HeapIntVector.java
 ##
 @@ -47,4 +49,45 @@ public int getInt(int i) {
return dictionary.decodeToInt(dictionaryIds.vector[i]);
}
}
+
+	@Override
+	public void setInt(int i, int value) {
+		if (i >= vector.length) {
+			throw new RuntimeException();
+		}
+		vector[i] = value;
+	}
+
+	@Override
+	public void setIntsFromBinary(int rowId, int count, byte[] src, int srcIndex) {
+		if (LITTLE_ENDIAN) {
+			UNSAFE.copyMemory(src, BYTE_ARRAY_OFFSET + srcIndex, vector,
+				INT_ARRAY_OFFSET + rowId * 4L, count * 4L);
+		} else {
+			long srcOffset = srcIndex + BYTE_ARRAY_OFFSET;
+			for (int i = 0; i < count; ++i, srcOffset += 4) {
+				vector[i + rowId] = Integer.reverseBytes(UNSAFE.getInt(src, srcOffset));
+			}
+		}
+	}
+
+	@Override
+	public void setInts(int rowId, int count, int value) {
+		if (rowId + count - 1 >= vector.length) {
+			throw new RuntimeException();
 
 Review comment:
   Nice catch!




[GitHub] [flink] nielsbasjes commented on a change in pull request #11245: [FLINK-15794][Kubernetes] Generate the Kubernetes default image version

2020-03-02 Thread GitBox
nielsbasjes commented on a change in pull request #11245: 
[FLINK-15794][Kubernetes] Generate the Kubernetes default image version
URL: https://github.com/apache/flink/pull/11245#discussion_r386336444
 
 

 ##
 File path: docs/ops/deployment/kubernetes.md
 ##
 @@ -112,7 +112,7 @@ An early version of a [Flink Helm chart](https://github.com/docker-flink/example
 
 ### Session cluster resource definitions
 
-The Deployment definitions use the pre-built image `flink:latest` which can be found [on Docker Hub](https://hub.docker.com/r/_/flink/).
+The Deployment definitions use the pre-built image `flink:{{ site.version }}-scala_2.11` (or `flink:{{ site.version }}-scala_2.12`) which can be found [on Docker Hub](https://hub.docker.com/r/_/flink/).
 
 Review comment:
   - Strange. I see here that the site.version for 1.9.2 was actually set to 
1.9.2 : https://github.com/apache/flink/blob/release-1.9.2/docs/_config.yml#L30 
. I do agree that for example 
https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/connectors/pubsub.html
 still shows 1.9.0. To me this seems more like the site needs to be regenerated 
than this variable having the incorrect value.
   
   - Perhaps on releasing a 'SNAPSHOT' version of the software a 'SNAPSHOT' 
version docker image with the same software should be released too?
   
   On the other hand ... it is 'just documentation'.
   I do see that apparently the 2.12 is 'preferred' so I'm swapping them in the 
documentation.
   




[GitHub] [flink] flinkbot edited a comment on issue #11174: [FLINK-16199][sql] Support IS JSON predicate for blink planner

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11174: [FLINK-16199][sql] Support IS JSON 
predicate for blink planner
URL: https://github.com/apache/flink/pull/11174#issuecomment-589653271
 
 
   
   ## CI report:
   
   * c80a4b33ca4fede8ccfb000deb81005bb8d83710 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151333798) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5813)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] yanghua commented on issue #11113: [FLINK-15709] Update CEPMigrationTest to restore from 1.10 savepoint

2020-03-02 Thread GitBox
yanghua commented on issue #11113: [FLINK-15709] Update CEPMigrationTest to 
restore from 1.10 savepoint
URL: https://github.com/apache/flink/pull/11113#issuecomment-593348847
 
 
   @aljoscha and @tillrohrmann All the migration tests have been done. Can we 
merge these PRs?




[GitHub] [flink] nielsbasjes commented on a change in pull request #11245: [FLINK-15794][Kubernetes] Generate the Kubernetes default image version

2020-03-02 Thread GitBox
nielsbasjes commented on a change in pull request #11245: 
[FLINK-15794][Kubernetes] Generate the Kubernetes default image version
URL: https://github.com/apache/flink/pull/11245#discussion_r386328946
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/configuration/KubernetesConfigOptions.java
 ##
 @@ -121,8 +122,11 @@
	public static final ConfigOption<String> CONTAINER_IMAGE =
		key("kubernetes.container.image")
			.stringType()
-			.defaultValue("flink:latest")
-			.withDescription("Image to use for Flink containers.");
+			.defaultValue("flink:" + Version.PROJECT_VERSION + "-scala_" + Version.SCALA_VERSION)
 
 Review comment:
   I reported https://issues.apache.org/jira/browse/FLINK-16289 last week about 
getting java.io.InvalidClassException: 
org.apache.flink.table.codegen.GeneratedAggregationsFunction.
   Root cause after several days of digging:
   - I had scala 2.11 on my machine
   - Downloaded Flink 1.10.0 for scala 2.11 to start my session on Kubernetes.
   - Used Flink 1.10.0 for scala 2.11 for all dependencies in my project that I 
built and ran.
   - Then "Flink 1.10.0 for scala 2.11" on Kubernetes downloaded "flink:latest" 
image which is really "Flink 1.10.0 for scala 2.12" which broke it all.
   
   Also, if for some reason I choose not to upgrade and a new Flink release is 
created (e.g. 1.11), then my application may also break.
   
   So I think "Flink X for scala Y" must download the docker image for exactly 
THAT combination of versions.
   
   So I think having the scala version in there explicitly is a must have.
   
   or ... We ditch the scala 2.11 variant completely and have only 1 scala 
variant.
   
   I'll have a look at the EnvironmentInformation you mentioned, perhaps that 
is the better place to add the scala version.
   




[jira] [Created] (FLINK-16377) Support inline user defined functions in expression dsl

2020-03-02 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-16377:


 Summary: Support inline user defined functions in expression dsl
 Key: FLINK-16377
 URL: https://issues.apache.org/jira/browse/FLINK-16377
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API, Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16033) Introduce a Java Expression DSL

2020-03-02 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-16033:
-
Description: Introduce the basic expressions. The new Java expression DSL 
should be feature-equivalent to the string-based expression parser. It does not 
support calls with the new inference stack yet.

> Introduce a Java Expression DSL
> ---
>
> Key: FLINK-16033
> URL: https://issues.apache.org/jira/browse/FLINK-16033
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Introduce the basic expressions. The new Java expression DSL should be 
> feature-equivalent to the string-based expression parser. It does not support 
> calls with the new inference stack yet.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-16033) Introduce a Java Expression DSL

2020-03-02 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-16033.

Resolution: Fixed

Implemented in 029d578e098e2b02281d5c320da55c06e239b2b9

> Introduce a Java Expression DSL
> ---
>
> Key: FLINK-16033
> URL: https://issues.apache.org/jira/browse/FLINK-16033
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11279: [FLINK-16362] [table] Remove deprecated method in StreamTableSink

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11279: [FLINK-16362] [table] Remove 
deprecated method in StreamTableSink
URL: https://github.com/apache/flink/pull/11279#issuecomment-593309320
 
 
   
   ## CI report:
   
   * 4b6685acc5ab65537e3bd0e9dbd53fef52126e85 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151323146) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5808)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11174: [FLINK-16199][sql] Support IS JSON predicate for blink planner

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11174: [FLINK-16199][sql] Support IS JSON 
predicate for blink planner
URL: https://github.com/apache/flink/pull/11174#issuecomment-589653271
 
 
   
   ## CI report:
   
   * dace802f82ba2eb5f49a47032c4fa91eec9fd741 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150199116) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5486)
 
   * c80a4b33ca4fede8ccfb000deb81005bb8d83710 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151333798) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5813)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-16370) YARNHighAvailabilityITCase fails on JDK11 nightly test: NoSuchMethodError: org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.server.quorum.flexible.QuorumMaj

2020-03-02 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049102#comment-17049102
 ] 

Robert Metzger commented on FLINK-16370:


In the files uploaded to transfer.sh, in 
42719.25/yarn-tests/container_1583080501498_0001_01_01/jobmanager.log

> YARNHighAvailabilityITCase fails on JDK11 nightly test: NoSuchMethodError: 
> org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.server.quorum.flexible.QuorumMaj
> ---
>
> Key: FLINK-16370
> URL: https://issues.apache.org/jira/browse/FLINK-16370
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Chesnay Schepler
>Priority: Major
>
> This is the output from the CI system 
> (https://travis-ci.org/apache/flink/jobs/656931001)
> {code}
> 16:35:30.798 [ERROR] 
> testKillYarnSessionClusterEntrypoint(org.apache.flink.yarn.YARNHighAvailabilityITCase)
>   Time elapsed: 10.363 s  <<< ERROR!
> org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't 
> deploy Yarn session cluster
>   at 
> org.apache.flink.yarn.YARNHighAvailabilityITCase.deploySessionCluster(YARNHighAvailabilityITCase.java:296)
>   at 
> org.apache.flink.yarn.YARNHighAvailabilityITCase.lambda$testKillYarnSessionClusterEntrypoint$0(YARNHighAvailabilityITCase.java:165)
>   at 
> org.apache.flink.yarn.YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint(YARNHighAvailabilityITCase.java:157)
> Caused by: 
> org.apache.flink.yarn.YarnClusterDescriptor$YarnDeploymentException: 
> The YARN application unexpectedly switched to state FAILED during deployment. 
> Diagnostics from YARN: Application application_1583080501498_0002 failed 2 
> times in previous 1 milliseconds due to AM Container for 
> appattempt_1583080501498_0002_02 exited with  exitCode: 1
> Failing this attempt.Diagnostics: Exception from container-launch.
> Container id: container_1583080501498_0002_02_01
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1: 
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
>   at org.apache.hadoop.util.Shell.run(Shell.java:869)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:236)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:305)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:84)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> ... snip ...
> 16:44:14.840 [INFO] Results:
> 16:44:14.840 [INFO] 
> 16:44:14.840 [ERROR] Errors: 
> 16:44:14.840 [ERROR]   
> YARNHighAvailabilityITCase.testJobRecoversAfterKillingTaskManager:187->YarnTestBase.runTest:242->lambda$testJobRecoversAfterKillingTaskManager$1:191->deploySessionCluster:296
>  » ClusterDeployment
> 16:44:14.840 [ERROR]   
> YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint:157->YarnTestBase.runTest:242->lambda$testKillYarnSessionClusterEntrypoint$0:165->deploySessionCluster:296
>  » ClusterDeployment
> 16:44:14.840 [INFO] 
> 16:44:14.840 [ERROR] Tests run: 25, Failures: 0, Errors: 2, Skipped: 4
> {code}
> Digging deeper into the problem, this seems to be the root cause:
> {code}
> 2020-03-01 16:35:14,444 INFO  
> org.apache.flink.shaded.curator4.org.apache.curator.utils.Compatibility [] - 
> Using emulated InjectSessionExpiration
> 2020-03-01 16:35:14,466 WARN  
> org.apache.flink.shaded.curator4.org.apache.curator.CuratorZookeeperClient [] 
> - session timeout [1000] is less than connection timeout [15000]
> 2020-03-01 16:35:14,491 INFO  
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint[] - Shutting 
> YarnSessionClusterEntrypoint down with application status FAILED. Diagnostics 
> java.lang.NoSuchMethodError: 
> org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.server.quorum.flexible.QuorumMaj.<init>(Ljava/util/Map;)V
>   at 
> org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.EnsembleTracker.<init>(EnsembleTracker.java:57)
>   at 
> org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.CuratorFrameworkImpl.<init>(CuratorFramework

[jira] [Updated] (FLINK-16376) Using consistent method to get Yarn application directory

2020-03-02 Thread Victor Wong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victor Wong updated FLINK-16376:

Description: 
The Yarn application directory of Flink is "/user/{user.name}/.flink", but this 
logic is duplicated in several places.

1. org.apache.flink.yarn.YarnClusterDescriptor#getYarnFilesDir

{code:java}
	private Path getYarnFilesDir(final ApplicationId appId) throws IOException {
		final FileSystem fileSystem = FileSystem.get(yarnConfiguration);
		final Path homeDir = fileSystem.getHomeDirectory();
		return new Path(homeDir, ".flink/" + appId + '/');
	}
{code}

2. org.apache.flink.yarn.Utils#uploadLocalFileToRemote


{code:java}
	// copy resource to HDFS
	String suffix =
		".flink/"
			+ appId
			+ (relativeTargetPath.isEmpty() ? "" : "/" + relativeTargetPath)
			+ "/" + localSrcPath.getName();

	Path dst = new Path(homedir, suffix);
{code}

We can extract the `getYarnFilesDir` method into `org.apache.flink.yarn.Utils` 
and use it to resolve the Yarn application directory in all the other places.




  was:
The Yarn application directory of Flink is "/user/{user.name}/.flink", but this 
logic is separated in different places.

1. org.apache.flink.yarn.YarnClusterDescriptor#getYarnFilesDir

{code:java}
private Path getYarnFilesDir(final ApplicationId appId) throws 
IOException {
final FileSystem fileSystem = FileSystem.get(yarnConfiguration);
final Path homeDir = fileSystem.getHomeDirectory();
return *new Path(homeDir, ".flink/" + appId + '/');*
}
{code}

2. org.apache.flink.yarn.Utils#uploadLocalFileToRemote


{code:java}
// copy resource to HDFS
String suffix =
*".flink/"*
*+ appId*
+ (relativeTargetPath.isEmpty() ? "" : "/" + 
relativeTargetPath)
+ "/" + localSrcPath.getName();

Path dst = new Path(homedir, suffix);
{code}

We can extract `getYarnFilesDir` method to `org.apache.flink.yarn.Utils`, and 
use this method to get Yarn application directory in all the other places.





> Using consistent method to get Yarn application directory
> -
>
> Key: FLINK-16376
> URL: https://issues.apache.org/jira/browse/FLINK-16376
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.10.0
>Reporter: Victor Wong
>Priority: Major
>
> The Yarn application directory of Flink is "/user/{user.name}/.flink", but 
> this logic is duplicated in several places.
> 1. org.apache.flink.yarn.YarnClusterDescriptor#getYarnFilesDir
> {code:java}
> 	private Path getYarnFilesDir(final ApplicationId appId) throws IOException {
> 		final FileSystem fileSystem = FileSystem.get(yarnConfiguration);
> 		final Path homeDir = fileSystem.getHomeDirectory();
> 		return new Path(homeDir, ".flink/" + appId + '/');
> 	}
> {code}
> 2. org.apache.flink.yarn.Utils#uploadLocalFileToRemote
> {code:java}
> 	// copy resource to HDFS
> 	String suffix =
> 		".flink/"
> 			+ appId
> 			+ (relativeTargetPath.isEmpty() ? "" : "/" + relativeTargetPath)
> 			+ "/" + localSrcPath.getName();
> 	Path dst = new Path(homedir, suffix);
> {code}
> We can extract the `getYarnFilesDir` method into `org.apache.flink.yarn.Utils` 
> and use it to resolve the Yarn application directory in all the other places.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16376) Using consistent method to get Yarn application directory

2020-03-02 Thread Victor Wong (Jira)
Victor Wong created FLINK-16376:
---

 Summary: Using consistent method to get Yarn application directory
 Key: FLINK-16376
 URL: https://issues.apache.org/jira/browse/FLINK-16376
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / YARN
Affects Versions: 1.10.0
Reporter: Victor Wong


The Yarn application directory of Flink is "/user/{user.name}/.flink", but this 
logic is duplicated in several places.

1. org.apache.flink.yarn.YarnClusterDescriptor#getYarnFilesDir

{code:java}
	private Path getYarnFilesDir(final ApplicationId appId) throws IOException {
		final FileSystem fileSystem = FileSystem.get(yarnConfiguration);
		final Path homeDir = fileSystem.getHomeDirectory();
		return *new Path(homeDir, ".flink/" + appId + '/');*
	}
{code}

2. org.apache.flink.yarn.Utils#uploadLocalFileToRemote


{code:java}
	// copy resource to HDFS
	String suffix =
		*".flink/"*
			*+ appId*
			+ (relativeTargetPath.isEmpty() ? "" : "/" + relativeTargetPath)
			+ "/" + localSrcPath.getName();

Path dst = new Path(homedir, suffix);
{code}

We can extract the `getYarnFilesDir` method into `org.apache.flink.yarn.Utils` 
and use it to resolve the Yarn application directory in all the other places.
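
A sketch of the extracted helper (hypothetical placement and signature, reusing 
the code above):

{code:java}
	/** Resolves the Yarn application directory, i.e. /user/<user.name>/.flink/<appId>/. */
	public static Path getYarnFilesDir(final YarnConfiguration yarnConfiguration, final ApplicationId appId) throws IOException {
		final FileSystem fileSystem = FileSystem.get(yarnConfiguration);
		final Path homeDir = fileSystem.getHomeDirectory();
		return new Path(homeDir, ".flink/" + appId + '/');
	}
{code}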






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16370) YARNHighAvailabilityITCase fails on JDK11 nightly test: NoSuchMethodError: org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.server.quorum.flexible.QuorumMaj

2020-03-02 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049099#comment-17049099
 ] 

Chesnay Schepler commented on FLINK-16370:
--

Where did you find the NoSuchMethodError?

> YARNHighAvailabilityITCase fails on JDK11 nightly test: NoSuchMethodError: 
> org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.server.quorum.flexible.QuorumMaj
> ---
>
> Key: FLINK-16370
> URL: https://issues.apache.org/jira/browse/FLINK-16370
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Chesnay Schepler
>Priority: Major
>
> This is the output from the CI system 
> (https://travis-ci.org/apache/flink/jobs/656931001)
> {code}
> 16:35:30.798 [ERROR] 
> testKillYarnSessionClusterEntrypoint(org.apache.flink.yarn.YARNHighAvailabilityITCase)
>   Time elapsed: 10.363 s  <<< ERROR!
> org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't 
> deploy Yarn session cluster
>   at 
> org.apache.flink.yarn.YARNHighAvailabilityITCase.deploySessionCluster(YARNHighAvailabilityITCase.java:296)
>   at 
> org.apache.flink.yarn.YARNHighAvailabilityITCase.lambda$testKillYarnSessionClusterEntrypoint$0(YARNHighAvailabilityITCase.java:165)
>   at 
> org.apache.flink.yarn.YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint(YARNHighAvailabilityITCase.java:157)
> Caused by: 
> org.apache.flink.yarn.YarnClusterDescriptor$YarnDeploymentException: 
> The YARN application unexpectedly switched to state FAILED during deployment. 
> Diagnostics from YARN: Application application_1583080501498_0002 failed 2 
> times in previous 1 milliseconds due to AM Container for 
> appattempt_1583080501498_0002_02 exited with  exitCode: 1
> Failing this attempt.Diagnostics: Exception from container-launch.
> Container id: container_1583080501498_0002_02_01
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1: 
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
>   at org.apache.hadoop.util.Shell.run(Shell.java:869)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:236)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:305)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:84)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> ... snip ...
> 16:44:14.840 [INFO] Results:
> 16:44:14.840 [INFO] 
> 16:44:14.840 [ERROR] Errors: 
> 16:44:14.840 [ERROR]   
> YARNHighAvailabilityITCase.testJobRecoversAfterKillingTaskManager:187->YarnTestBase.runTest:242->lambda$testJobRecoversAfterKillingTaskManager$1:191->deploySessionCluster:296
>  » ClusterDeployment
> 16:44:14.840 [ERROR]   
> YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint:157->YarnTestBase.runTest:242->lambda$testKillYarnSessionClusterEntrypoint$0:165->deploySessionCluster:296
>  » ClusterDeployment
> 16:44:14.840 [INFO] 
> 16:44:14.840 [ERROR] Tests run: 25, Failures: 0, Errors: 2, Skipped: 4
> {code}
> Digging deeper into the problem, this seems to be the root cause:
> {code}
> 2020-03-01 16:35:14,444 INFO  
> org.apache.flink.shaded.curator4.org.apache.curator.utils.Compatibility [] - 
> Using emulated InjectSessionExpiration
> 2020-03-01 16:35:14,466 WARN  
> org.apache.flink.shaded.curator4.org.apache.curator.CuratorZookeeperClient [] 
> - session timeout [1000] is less than connection timeout [15000]
> 2020-03-01 16:35:14,491 INFO  
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint[] - Shutting 
> YarnSessionClusterEntrypoint down with application status FAILED. Diagnostics 
> java.lang.NoSuchMethodError: 
> org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.server.quorum.flexible.QuorumMaj.<init>(Ljava/util/Map;)V
>   at 
> org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.EnsembleTracker.<init>(EnsembleTracker.java:57)
>   at 
> org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.CuratorFrameworkImpl.<init>(CuratorFrameworkImpl.java:159)
>   at 
> org.apache.flink.shaded.curator4.org.apac

[jira] [Updated] (FLINK-16375) Remove references to registerTableSource/Sink methods from documentation

2020-03-02 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-16375:
-
Summary: Remove references to registerTableSource/Sink methods from 
documentation  (was: Remove references to those methods from documentation)

> Remove references to registerTableSource/Sink methods from documentation
> 
>
> Key: FLINK-16375
> URL: https://issues.apache.org/jira/browse/FLINK-16375
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / API
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.11.0
>
>
> We should remove mentions of 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16375) Remove references to registerTableSource/Sink methods from documentation

2020-03-02 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-16375:
-
Description: We should remove mentions of the registerTableSource/Sink 
methods from the documentation and replace them with the suggested approach.  
(was: We should remove mentions of )

> Remove references to registerTableSource/Sink methods from documentation
> 
>
> Key: FLINK-16375
> URL: https://issues.apache.org/jira/browse/FLINK-16375
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / API
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.11.0
>
>
> We should remove mentions of the registerTableSource/Sink methods from the 
> documentation and replace them with the suggested approach.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16375) Remove references to those methods from documentation

2020-03-02 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-16375:


 Summary: Remove references to those methods from documentation
 Key: FLINK-16375
 URL: https://issues.apache.org/jira/browse/FLINK-16375
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Table SQL / API
Reporter: Dawid Wysakowicz
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16375) Remove references to those methods from documentation

2020-03-02 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-16375:
-
Description: We should remove mentions of 

> Remove references to those methods from documentation
> -
>
> Key: FLINK-16375
> URL: https://issues.apache.org/jira/browse/FLINK-16375
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / API
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.11.0
>
>
> We should remove mentions of 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (FLINK-10885) Avro Confluent Schema Registry E2E test failed on Travis

2020-03-02 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reopened FLINK-10885:


I'm reopening this issue, as it is surfacing again: 
https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5779&view=logs&j=7f070ad9-d3e4-5b4a-824a-6e246348565d&t=7226e28c-e6d3-52ae-63fd-3e3d5725f2e2

> Avro Confluent Schema Registry E2E test failed on Travis
> 
>
> Key: FLINK-10885
> URL: https://issues.apache.org/jira/browse/FLINK-10885
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Test Infrastructure
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
>
> https://travis-ci.org/zentol/flink/jobs/454943551
> {code}
> Waiting for schema registry...
> [2018-11-14 12:20:59,394] ERROR Server died unexpectedly:  
> (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
> org.apache.kafka.common.config.ConfigException: No supported Kafka endpoints 
> are configured. Either kafkastore.bootstrap.servers must have at least one 
> endpoint matching kafkastore.security.protocol or broker endpoints loaded 
> from ZooKeeper must have at least one endpoint matching 
> kafkastore.security.protocol.
>   at 
> io.confluent.kafka.schemaregistry.storage.KafkaStore.endpointsToBootstrapServers(KafkaStore.java:313)
>   at 
> io.confluent.kafka.schemaregistry.storage.KafkaStore.<init>(KafkaStore.java:130)
>   at 
> io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:144)
>   at 
> io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:53)
>   at 
> io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
>   at io.confluent.rest.Application.createServer(Application.java:149)
>   at 
> io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16370) YARNHighAvailabilityITCase fails on JDK11 nightly test: NoSuchMethodError: org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.server.quorum.flexible.QuorumMaj

2020-03-02 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-16370:


Assignee: Chesnay Schepler

> YARNHighAvailabilityITCase fails on JDK11 nightly test: NoSuchMethodError: 
> org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.server.quorum.flexible.QuorumMaj
> ---
>
> Key: FLINK-16370
> URL: https://issues.apache.org/jira/browse/FLINK-16370
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Chesnay Schepler
>Priority: Major
>
> This is the output from the CI system 
> (https://travis-ci.org/apache/flink/jobs/656931001)
> {code}
> 16:35:30.798 [ERROR] 
> testKillYarnSessionClusterEntrypoint(org.apache.flink.yarn.YARNHighAvailabilityITCase)
>   Time elapsed: 10.363 s  <<< ERROR!
> org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't 
> deploy Yarn session cluster
>   at 
> org.apache.flink.yarn.YARNHighAvailabilityITCase.deploySessionCluster(YARNHighAvailabilityITCase.java:296)
>   at 
> org.apache.flink.yarn.YARNHighAvailabilityITCase.lambda$testKillYarnSessionClusterEntrypoint$0(YARNHighAvailabilityITCase.java:165)
>   at 
> org.apache.flink.yarn.YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint(YARNHighAvailabilityITCase.java:157)
> Caused by: 
> org.apache.flink.yarn.YarnClusterDescriptor$YarnDeploymentException: 
> The YARN application unexpectedly switched to state FAILED during deployment. 
> Diagnostics from YARN: Application application_1583080501498_0002 failed 2 
> times in previous 1 milliseconds due to AM Container for 
> appattempt_1583080501498_0002_02 exited with  exitCode: 1
> Failing this attempt.Diagnostics: Exception from container-launch.
> Container id: container_1583080501498_0002_02_01
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1: 
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
>   at org.apache.hadoop.util.Shell.run(Shell.java:869)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:236)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:305)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:84)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:834)
> ... snip ...
> 16:44:14.840 [INFO] Results:
> 16:44:14.840 [INFO] 
> 16:44:14.840 [ERROR] Errors: 
> 16:44:14.840 [ERROR]   
> YARNHighAvailabilityITCase.testJobRecoversAfterKillingTaskManager:187->YarnTestBase.runTest:242->lambda$testJobRecoversAfterKillingTaskManager$1:191->deploySessionCluster:296
>  » ClusterDeployment
> 16:44:14.840 [ERROR]   
> YARNHighAvailabilityITCase.testKillYarnSessionClusterEntrypoint:157->YarnTestBase.runTest:242->lambda$testKillYarnSessionClusterEntrypoint$0:165->deploySessionCluster:296
>  » ClusterDeployment
> 16:44:14.840 [INFO] 
> 16:44:14.840 [ERROR] Tests run: 25, Failures: 0, Errors: 2, Skipped: 4
> {code}
> Digging deeper into the problem, this seems to be the root cause:
> {code}
> 2020-03-01 16:35:14,444 INFO  
> org.apache.flink.shaded.curator4.org.apache.curator.utils.Compatibility [] - 
> Using emulated InjectSessionExpiration
> 2020-03-01 16:35:14,466 WARN  
> org.apache.flink.shaded.curator4.org.apache.curator.CuratorZookeeperClient [] 
> - session timeout [1000] is less than connection timeout [15000]
> 2020-03-01 16:35:14,491 INFO  
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint[] - Shutting 
> YarnSessionClusterEntrypoint down with application status FAILED. Diagnostics 
> java.lang.NoSuchMethodError: 
> org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.server.quorum.flexible.QuorumMaj.<init>(Ljava/util/Map;)V
>   at 
> org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.EnsembleTracker.<init>(EnsembleTracker.java:57)
>   at 
> org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.CuratorFrameworkImpl.<init>(CuratorFrameworkImpl.java:159)
>   at 
> org.apache.flink.shaded.curator4.org.apache.curator.framework.CuratorFrameworkFactory$Builder.build(Cur

[jira] [Commented] (FLINK-16279) Per job Yarn application leak in normal execution mode.

2020-03-02 Thread Kostas Kloudas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049092#comment-17049092
 ] 

Kostas Kloudas commented on FLINK-16279:


If I understand the use case correctly, we have a job submitted in yarn per-job 
mode, attached, using {{executeAsync()}}; the job failed, but the cluster was 
expecting a final request for the job result before shutting down, and this 
request never came.

[~tison] For session cluster the cluster lifecycle is independent from that of 
the job, so I guess that no action should be taken in this case.

For the per-job cluster, the {{shutdownOnExit}} could maybe work because as 
soon as the client disconnects, it will issue a (best-effort) shutdown cluster 
command. 

Another (maybe cleaner) solution could be that if the job reaches a terminal 
state which is NOT normal termination, then we always tear down the cluster. 
The benefit of this method is that it is the dispatcher that aligns the 
lifecycle of the job with that of the cluster, rather than the client, which is 
controlled by the user. This would require changes in 
{{jobReachedGloballyTerminalState()}} in the {{MiniDispatcher}}. It of course 
leaves open the scenario of the client failing/disconnecting, which would leave 
a "zombie" cluster; in that case we may still need the {{shutdownOnExit}}.
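
A hypothetical sketch of that idea (type and field names follow the 
{{MiniDispatcher}} code quoted below; this is not an actual patch):

{code:java}
	@Override
	protected void jobReachedGloballyTerminalState(ArchivedExecutionGraph archivedExecutionGraph) {
		super.jobReachedGloballyTerminalState(archivedExecutionGraph);
		if (archivedExecutionGraph.getState() != JobStatus.FINISHED) {
			// abnormal termination: tear the per-job cluster down even if
			// no client ever retrieves the job result
			shutDownFuture.complete(ApplicationStatus.FAILED);
		}
	}
{code}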

But these are just thoughts that I have not yet investigated 100%.

What are your thoughts on that? 

> Per job Yarn application leak in normal execution mode.
> ---
>
> Key: FLINK-16279
> URL: https://issues.apache.org/jira/browse/FLINK-16279
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Wenlong Lyu
>Priority: Major
>
> I ran a job in yarn per-job mode using {{env.executeAsync}}; the job failed, 
> but the yarn cluster was not destroyed.
> After some research on the code, I found that:
> when running in attached mode, MiniDispatcher will never complete the 
> {{shutDownFuture}} before receiving a request from the job client. 
> {code}
>   if (executionMode == ClusterEntrypoint.ExecutionMode.NORMAL) {
>   // terminate the MiniDispatcher once we served the 
> first JobResult successfully
>   jobResultFuture.thenAccept((JobResult result) -> {
>   ApplicationStatus status = 
> result.getSerializedThrowable().isPresent() ?
>   ApplicationStatus.FAILED : 
> ApplicationStatus.SUCCEEDED;
>   LOG.debug("Shutting down per-job cluster 
> because someone retrieved the job result.");
>   shutDownFuture.complete(status);
>   });
>   } 
> {code}
> However, when running in async mode (submitting the job via 
> env.executeAsync), there may be no request from the job client, because when a 
> user sees from the job client that the job has failed, they may never request 
> the result again.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] nielsbasjes commented on a change in pull request #11245: [FLINK-15794][Kubernetes] Generate the Kubernetes default image version

2020-03-02 Thread GitBox
nielsbasjes commented on a change in pull request #11245: 
[FLINK-15794][Kubernetes] Generate the Kubernetes default image version
URL: https://github.com/apache/flink/pull/11245#discussion_r386319077
 
 

 ##
 File path: docs/ops/deployment/kubernetes.md
 ##
 @@ -112,7 +112,7 @@ An early version of a [Flink Helm 
chart](https://github.com/docker-flink/example
 
 ### Session cluster resource definitions
 
-The Deployment definitions use the pre-built image `flink:latest` which can be 
found [on Docker Hub](https://hub.docker.com/r/_/flink/).
+The Deployment definitions use the pre-built image `flink:{{ site.version 
}}-scala_2.11` (or `flink:{{ site.version }}-scala_2.12`) which can be found 
[on Docker Hub](https://hub.docker.com/r/_/flink/).
 The image is built from this [Github 
repository](https://github.com/docker-flink/docker-flink).
 
 Review comment:
   Yes, I'll update that.




[GitHub] [flink] JingsongLi commented on a change in pull request #11223: [FLINK-16281][Table SQL / Ecosystem] parameter 'maxRetryTimes' can not work in JDBCUpsertTableSink.

2020-03-02 Thread GitBox
JingsongLi commented on a change in pull request #11223: [FLINK-16281][Table 
SQL / Ecosystem] parameter 'maxRetryTimes' can not work in JDBCUpsertTableSink.
URL: https://github.com/apache/flink/pull/11223#discussion_r386317273
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/test/java/org/apache/flink/api/java/io/jdbc/JDBCAppenOnlyWriterTest.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.io.jdbc;
+
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter;
+import org.apache.flink.api.java.tuple.Tuple2;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import java.sql.BatchUpdateException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.Statement;
+
+import static org.apache.flink.api.java.io.jdbc.JDBCOutputFormatTest.toRow;
+import static org.mockito.Mockito.doReturn;
+
+/**
+ * Test for the {@link AppendOnlyWriter}.
+ */
+public class JDBCAppenOnlyWriterTest extends JDBCTestBase {
+   private JDBCUpsertOutputFormat format;
 
 Review comment:
   Add an empty line above.




[GitHub] [flink] JingsongLi commented on a change in pull request #11223: [FLINK-16281][Table SQL / Ecosystem] parameter 'maxRetryTimes' can not work in JDBCUpsertTableSink.

2020-03-02 Thread GitBox
JingsongLi commented on a change in pull request #11223: [FLINK-16281][Table 
SQL / Ecosystem] parameter 'maxRetryTimes' can not work in JDBCUpsertTableSink.
URL: https://github.com/apache/flink/pull/11223#discussion_r386317145
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/test/java/org/apache/flink/api/java/io/jdbc/JDBCAppenOnlyWriterTest.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.io.jdbc;
+
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter;
+import org.apache.flink.api.java.tuple.Tuple2;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import java.sql.BatchUpdateException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.Statement;
+
+import static org.apache.flink.api.java.io.jdbc.JDBCOutputFormatTest.toRow;
+import static org.mockito.Mockito.doReturn;
+
+/**
+ * Test for the {@link AppendOnlyWriter}.
+ */
+public class JDBCAppenOnlyWriterTest extends JDBCTestBase {
+   private JDBCUpsertOutputFormat format;
 
 Review comment:
   close this format?




[GitHub] [flink] JingsongLi commented on a change in pull request #11223: [FLINK-16281][Table SQL / Ecosystem] parameter 'maxRetryTimes' can not work in JDBCUpsertTableSink.

2020-03-02 Thread GitBox
JingsongLi commented on a change in pull request #11223: [FLINK-16281][Table 
SQL / Ecosystem] parameter 'maxRetryTimes' can not work in JDBCUpsertTableSink.
URL: https://github.com/apache/flink/pull/11223#discussion_r386317919
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/test/java/org/apache/flink/api/java/io/jdbc/JDBCAppenOnlyWriterTest.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.io.jdbc;
+
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter;
+import org.apache.flink.api.java.tuple.Tuple2;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import java.sql.BatchUpdateException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.Statement;
+
+import static org.apache.flink.api.java.io.jdbc.JDBCOutputFormatTest.toRow;
+import static org.mockito.Mockito.doReturn;
+
+/**
+ * Test for the {@link AppendOnlyWriter}.
+ */
+public class JDBCAppenOnlyWriterTest extends JDBCTestBase {
+   private JDBCUpsertOutputFormat format;
+   private String[] fieldNames;
+   private String[] keyFields;
+
+   @Before
+   public void setup() {
+   fieldNames = new String[]{"id", "title", "author", "price", "qty"};
+   keyFields = null;
+   }
+
+   @Test(expected = BatchUpdateException.class)
+   public void testMaxRetry() throws Exception {
+   format = JDBCUpsertOutputFormat.builder()
+   .setOptions(JDBCOptions.builder()
+   .setDBUrl(DB_URL)
+   .setTableName(OUTPUT_TABLE)
+   .build())
+   .setFieldNames(fieldNames)
+   .setKeyFields(keyFields)
+   .build();
+   RuntimeContext context = Mockito.mock(RuntimeContext.class);
+   ExecutionConfig config = Mockito.mock(ExecutionConfig.class);
+   doReturn(config).when(context).getExecutionConfig();
+   doReturn(true).when(config).isObjectReuseEnabled();
+   format.setRuntimeContext(context);
+   format.open(0, 1);
+
+   // alter table schema to trigger retry logic after failure.
+   alterTable();
+   for (TestEntry entry : TEST_DATA) {
+   format.writeRecord(Tuple2.of(true, toRow(entry)));
+   }
+
+   // after retry default times, throws a BatchUpdateException.
+   format.flush();
+   }
+
+   private void alterTable() throws Exception {
+   Class.forName(DRIVER_CLASS);
+   try (Connection conn = DriverManager.getConnection(DB_URL);
+   Statement stat = conn.createStatement()) {
+   stat.execute("ALTER  TABLE " + OUTPUT_TABLE + " DROP 
COLUMN " + fieldNames[1]);
+   stat.close();
 
 Review comment:
  Remove the explicit close; the try-with-resources block already closes it.
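
  For reference, resources declared in a try-with-resources header are closed 
automatically, in reverse declaration order, when the block exits (normally or 
exceptionally), so the explicit close() calls are redundant. A self-contained 
sketch (hypothetical in-memory Derby URL; assumes the Derby driver is on the 
classpath):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TryWithResourcesDemo {
	public static void main(String[] args) throws Exception {
		// Statement and Connection are both closed automatically when the
		// try block exits; no explicit stat.close() or conn.close() needed.
		try (Connection conn = DriverManager.getConnection("jdbc:derby:memory:demo;create=true");
				Statement stat = conn.createStatement()) {
			stat.execute("CREATE TABLE demo (id INT)");
		}
	}
}
{code}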


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #11223: [FLINK-16281][Table SQL / Ecosystem] parameter 'maxRetryTimes' can not work in JDBCUpsertTableSink.

2020-03-02 Thread GitBox
JingsongLi commented on a change in pull request #11223: [FLINK-16281][Table 
SQL / Ecosystem] parameter 'maxRetryTimes' can not work in JDBCUpsertTableSink.
URL: https://github.com/apache/flink/pull/11223#discussion_r386317947
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/test/java/org/apache/flink/api/java/io/jdbc/JDBCAppenOnlyWriterTest.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.io.jdbc;
+
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter;
+import org.apache.flink.api.java.tuple.Tuple2;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import java.sql.BatchUpdateException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.Statement;
+
+import static org.apache.flink.api.java.io.jdbc.JDBCOutputFormatTest.toRow;
+import static org.mockito.Mockito.doReturn;
+
+/**
+ * Test for the {@link AppendOnlyWriter}.
+ */
+public class JDBCAppenOnlyWriterTest extends JDBCTestBase {
+   private JDBCUpsertOutputFormat format;
+   private String[] fieldNames;
+   private String[] keyFields;
+
+   @Before
+   public void setup() {
+   fieldNames = new String[]{"id", "title", "author", "price", "qty"};
+   keyFields = null;
+   }
+
+   @Test(expected = BatchUpdateException.class)
+   public void testMaxRetry() throws Exception {
+   format = JDBCUpsertOutputFormat.builder()
+   .setOptions(JDBCOptions.builder()
+   .setDBUrl(DB_URL)
+   .setTableName(OUTPUT_TABLE)
+   .build())
+   .setFieldNames(fieldNames)
+   .setKeyFields(keyFields)
+   .build();
+   RuntimeContext context = Mockito.mock(RuntimeContext.class);
+   ExecutionConfig config = Mockito.mock(ExecutionConfig.class);
+   doReturn(config).when(context).getExecutionConfig();
+   doReturn(true).when(config).isObjectReuseEnabled();
+   format.setRuntimeContext(context);
+   format.open(0, 1);
+
+   // alter table schema to trigger retry logic after failure.
+   alterTable();
+   for (TestEntry entry : TEST_DATA) {
+   format.writeRecord(Tuple2.of(true, toRow(entry)));
+   }
+
+   // after retry default times, throws a BatchUpdateException.
+   format.flush();
+   }
+
+   private void alterTable() throws Exception {
+   Class.forName(DRIVER_CLASS);
+   try (Connection conn = DriverManager.getConnection(DB_URL);
+   Statement stat = conn.createStatement()) {
+   stat.execute("ALTER  TABLE " + OUTPUT_TABLE + " DROP 
COLUMN " + fieldNames[1]);
+   stat.close();
+   conn.close();
+   }
+   }
+
+   @After
+   public void clear() throws Exception {
+   Class.forName(DRIVER_CLASS);
+   try (
+   Connection conn = DriverManager.getConnection(DB_URL);
+   Statement stat = conn.createStatement()) {
+   stat.execute("DELETE FROM " + OUTPUT_TABLE);
+
+   stat.close();
 
 Review comment:
  Remove the explicit close; the try-with-resources block already closes it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #11223: [FLINK-16281][Table SQL / Ecosystem] parameter 'maxRetryTimes' can not work in JDBCUpsertTableSink.

2020-03-02 Thread GitBox
JingsongLi commented on a change in pull request #11223: [FLINK-16281][Table 
SQL / Ecosystem] parameter 'maxRetryTimes' can not work in JDBCUpsertTableSink.
URL: https://github.com/apache/flink/pull/11223#discussion_r386317045
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/test/java/org/apache/flink/api/java/io/jdbc/JDBCAppenOnlyWriterTest.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.io.jdbc;
+
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter;
+import org.apache.flink.api.java.tuple.Tuple2;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import java.sql.BatchUpdateException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.Statement;
+
+import static org.apache.flink.api.java.io.jdbc.JDBCOutputFormatTest.toRow;
+import static org.mockito.Mockito.doReturn;
+
+/**
+ * Test for the {@link AppendOnlyWriter}.
+ */
+public class JDBCAppenOnlyWriterTest extends JDBCTestBase {
+   private JDBCUpsertOutputFormat format;
+   private String[] fieldNames;
+   private String[] keyFields;
 
 Review comment:
  No need for these member fields; just pass the values inline (null for the key fields).
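
  Concretely, the builder call in the test could inline the values; a hedged 
sketch of the suggested shape (same APIs as in the diff above; null key fields 
select the append-only code path, which is what this test exercises):

{code:java}
format = JDBCUpsertOutputFormat.builder()
	.setOptions(JDBCOptions.builder()
		.setDBUrl(DB_URL)
		.setTableName(OUTPUT_TABLE)
		.build())
	.setFieldNames(new String[]{"id", "title", "author", "price", "qty"})
	.setKeyFields(null) // no key fields: append-only writer
	.build();
{code}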


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-13987) add log list and read log by name

2020-03-02 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao reassigned FLINK-13987:


Assignee: lining

> add log list and read log by name
> -
>
> Key: FLINK-13987
> URL: https://issues.apache.org/jira/browse/FLINK-13987
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST
>Reporter: lining
>Assignee: lining
>Priority: Major
>
> As the job runs, the log files become large.
> Since the application runs on the JVM, the user sometimes needs to see the
> GC log, but this content is not exposed today.
> To address this, we need new APIs (a client sketch follows the list):
>  *  list all log files of a taskmanager
>  ** /taskmanagers/taskmanagerid/logs
>  ** 
> {code:java}
> {
>   "logs": [
> {
>   "name": "taskmanager.log",
>   "size": 12529
> }
>   ]
> } {code}
>  * read taskmanager log file
>  **  /taskmanagers/log/[filename]
>  ** response: same as taskmanager’s log
>  * list all log files of the jobmanager
>  ** /jobmanager/logs
>  ** 
> {code:java}
> {
>   "logs": [
> {
>   "name": "jobmanager.log",
>   "size": 12529
> }
>   ]
> }{code}
>  * read jobmanager log file
>  **  /jobmanager/log/[filename]
>  ** response: same as jobmanager's log
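
A client of the endpoints proposed above could look like this minimal sketch 
(editor's illustration; host/port assumed, endpoints not yet released):

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class LogListClient {
	public static void main(String[] args) throws Exception {
		// Proposed endpoint listing jobmanager log files; host/port assumed.
		URL url = new URL("http://localhost:8081/jobmanager/logs");
		HttpURLConnection conn = (HttpURLConnection) url.openConnection();
		try (BufferedReader reader = new BufferedReader(
				new InputStreamReader(conn.getInputStream()))) {
			String line;
			while ((line = reader.readLine()) != null) {
				// Expected shape: {"logs":[{"name":"jobmanager.log","size":12529}]}
				System.out.println(line);
			}
		} finally {
			conn.disconnect();
		}
	}
}
{code}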



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on a change in pull request #11223: [FLINK-16281][Table SQL / Ecosystem] parameter 'maxRetryTimes' can not work in JDBCUpsertTableSink.

2020-03-02 Thread GitBox
JingsongLi commented on a change in pull request #11223: [FLINK-16281][Table 
SQL / Ecosystem] parameter 'maxRetryTimes' can not work in JDBCUpsertTableSink.
URL: https://github.com/apache/flink/pull/11223#discussion_r386318083
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/test/java/org/apache/flink/api/java/io/jdbc/JDBCUpsertTableSinkITCase.java
 ##
 @@ -52,6 +52,7 @@
public static final String DB_URL = "jdbc:derby:memory:upsert";
public static final String OUTPUT_TABLE1 = "upsertSink";
public static final String OUTPUT_TABLE2 = "appendSink";
+   public static final String NOT_EXISTS_TABLE = "notExistedTable";
 
 Review comment:
   Please remove this


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and read log by name for taskmanager

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and 
read log by name for taskmanager
URL: https://github.com/apache/flink/pull/11250#issuecomment-592336212
 
 
   
   ## CI report:
   
   * e474761e49927bb30bac6a297e992dc2ec98e01a UNKNOWN
   * 70e8ca9774fc4247657f5d6aecc43459229ba9bb Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151307447) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5801)
 
   * 54ac8f26c67344a0ed000c882afd66e68c8f6bd0 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151328904) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5812)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11278: [hotfix] [docs] fix the mismatch between java and scala examples

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11278: [hotfix] [docs] fix the mismatch 
between java and scala examples
URL: https://github.com/apache/flink/pull/11278#issuecomment-593259962
 
 
   
   ## CI report:
   
   * d51ad7d8b1d185562d1e431e2db01621fc9dfa50 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151308926) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5803)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11174: [FLINK-16199][sql] Support IS JSON predicate for blink planner

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11174: [FLINK-16199][sql] Support IS JSON 
predicate for blink planner
URL: https://github.com/apache/flink/pull/11174#issuecomment-589653271
 
 
   
   ## CI report:
   
   * dace802f82ba2eb5f49a47032c4fa91eec9fd741 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150199116) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5486)
 
   * c80a4b33ca4fede8ccfb000deb81005bb8d83710 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11276: [FLINK-16029][table-planner-blink] Remove register source and sink in test cases of blink planner

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11276: [FLINK-16029][table-planner-blink] 
Remove register source and sink in test cases of blink planner
URL: https://github.com/apache/flink/pull/11276#issuecomment-593234780
 
 
   
   ## CI report:
   
   * 71de84623b4521fecf2026ddb9a343ed196b3a4f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151318423) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5806)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11119: [FLINK-15396][json] Support to ignore parse errors for JSON format

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11119: [FLINK-15396][json] Support to 
ignore parse errors for JSON format
URL: https://github.com/apache/flink/pull/11119#issuecomment-587365636
 
 
   
   ## CI report:
   
   * 68fbf9d61e9c89eb323d7eb5cc08d18f9f852065 UNKNOWN
   * baf7d0242e8aacb3cd46154c864201dbb7315c86 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/151318299) 
   * 1ecd2ee1b734a3dc868d20c8c2532447071f0233 UNKNOWN
   * 23df46401002fc6a2fd1800d76c069296c7339a3 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151328780) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16374) StreamingKafkaITCase: IOException: error=13, Permission denied

2020-03-02 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049087#comment-17049087
 ] 

Robert Metzger commented on FLINK-16374:


The error does not occur all the time. This failure happened on the 
"AlibabaCI001-agent2" machine. I'm noting this in case it is a machine-specific 
issue.

> StreamingKafkaITCase: IOException: error=13, Permission denied
> --
>
> Key: FLINK-16374
> URL: https://issues.apache.org/jira/browse/FLINK-16374
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> Build: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5792&view=logs&j=25197a20-5964-5b06-5716-045f87dc0ea9&t=0c53f4dc-c81e-5ebb-13b2-08f1994a2d32
> {code}
> 2020-03-02T05:13:23.4758068Z [INFO] Running 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-03-02T05:13:55.8260013Z [ERROR] Tests run: 3, Failures: 0, Errors: 3, 
> Skipped: 0, Time elapsed: 32.346 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-03-02T05:13:55.8262664Z [ERROR] testKafka[0: 
> kafka-version:0.10.2.0](org.apache.flink.tests.util.kafka.StreamingKafkaITCase)
>   Time elapsed: 9.217 s  <<< ERROR!
> 2020-03-02T05:13:55.8264067Z java.io.IOException: Cannot run program 
> "/tmp/junit5236495846374568650/junit4714535957173883866/bin/start-cluster.sh":
>  error=13, Permission denied
> 2020-03-02T05:13:55.8264733Z  at 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
> 2020-03-02T05:13:55.8265242Z Caused by: java.io.IOException: error=13, 
> Permission denied
> 2020-03-02T05:13:55.8265717Z  at 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
> 2020-03-02T05:13:55.8266083Z 
> 2020-03-02T05:13:55.8271420Z [ERROR] testKafka[1: 
> kafka-version:0.11.0.2](org.apache.flink.tests.util.kafka.StreamingKafkaITCase)
>   Time elapsed: 11.228 s  <<< ERROR!
> 2020-03-02T05:13:55.8272670Z java.io.IOException: Cannot run program 
> "/tmp/junit8038960384540194088/junit1280636219654303027/bin/start-cluster.sh":
>  error=13, Permission denied
> 2020-03-02T05:13:55.8273343Z  at 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
> 2020-03-02T05:13:55.8273847Z Caused by: java.io.IOException: error=13, 
> Permission denied
> 2020-03-02T05:13:55.8274418Z  at 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
> 2020-03-02T05:13:55.8274768Z 
> 2020-03-02T05:13:55.8275429Z [ERROR] testKafka[2: 
> kafka-version:2.2.0](org.apache.flink.tests.util.kafka.StreamingKafkaITCase)  
> Time elapsed: 11.89 s  <<< ERROR!
> 2020-03-02T05:13:55.8276386Z java.io.IOException: Cannot run program 
> "/tmp/junit5500905670445852005/junit4695208010500962520/bin/start-cluster.sh":
>  error=13, Permission denied
> 2020-03-02T05:13:55.8277257Z  at 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
> 2020-03-02T05:13:55.8277760Z Caused by: java.io.IOException: error=13, 
> Permission denied
> 2020-03-02T05:13:55.8278228Z  at 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16374) StreamingKafkaITCase: IOException: error=13, Permission denied

2020-03-02 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-16374:
--

 Summary: StreamingKafkaITCase: IOException: error=13, Permission 
denied
 Key: FLINK-16374
 URL: https://issues.apache.org/jira/browse/FLINK-16374
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Tests
Reporter: Robert Metzger


Build: 
https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5792&view=logs&j=25197a20-5964-5b06-5716-045f87dc0ea9&t=0c53f4dc-c81e-5ebb-13b2-08f1994a2d32

{code}
2020-03-02T05:13:23.4758068Z [INFO] Running 
org.apache.flink.tests.util.kafka.StreamingKafkaITCase
2020-03-02T05:13:55.8260013Z [ERROR] Tests run: 3, Failures: 0, Errors: 3, 
Skipped: 0, Time elapsed: 32.346 s <<< FAILURE! - in 
org.apache.flink.tests.util.kafka.StreamingKafkaITCase
2020-03-02T05:13:55.8262664Z [ERROR] testKafka[0: 
kafka-version:0.10.2.0](org.apache.flink.tests.util.kafka.StreamingKafkaITCase) 
 Time elapsed: 9.217 s  <<< ERROR!
2020-03-02T05:13:55.8264067Z java.io.IOException: Cannot run program 
"/tmp/junit5236495846374568650/junit4714535957173883866/bin/start-cluster.sh": 
error=13, Permission denied
2020-03-02T05:13:55.8264733Z  at 
org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
2020-03-02T05:13:55.8265242Z Caused by: java.io.IOException: error=13, 
Permission denied
2020-03-02T05:13:55.8265717Z  at 
org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
2020-03-02T05:13:55.8266083Z 
2020-03-02T05:13:55.8271420Z [ERROR] testKafka[1: 
kafka-version:0.11.0.2](org.apache.flink.tests.util.kafka.StreamingKafkaITCase) 
 Time elapsed: 11.228 s  <<< ERROR!
2020-03-02T05:13:55.8272670Z java.io.IOException: Cannot run program 
"/tmp/junit8038960384540194088/junit1280636219654303027/bin/start-cluster.sh": 
error=13, Permission denied
2020-03-02T05:13:55.8273343Z  at 
org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
2020-03-02T05:13:55.8273847Z Caused by: java.io.IOException: error=13, 
Permission denied
2020-03-02T05:13:55.8274418Z  at 
org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
2020-03-02T05:13:55.8274768Z 
2020-03-02T05:13:55.8275429Z [ERROR] testKafka[2: 
kafka-version:2.2.0](org.apache.flink.tests.util.kafka.StreamingKafkaITCase)  
Time elapsed: 11.89 s  <<< ERROR!
2020-03-02T05:13:55.8276386Z java.io.IOException: Cannot run program 
"/tmp/junit5500905670445852005/junit4695208010500962520/bin/start-cluster.sh": 
error=13, Permission denied
2020-03-02T05:13:55.8277257Z  at 
org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
2020-03-02T05:13:55.8277760Z Caused by: java.io.IOException: error=13, 
Permission denied
2020-03-02T05:13:55.8278228Z  at 
org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:85)
{code}
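
Error 13 is EACCES: the extracted start-cluster.sh lost its executable bit on 
that machine. A harness-side guard could restore it before launching the 
script; a hedged sketch (hypothetical path and helper, not the actual Flink 
test utility):

{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.util.EnumSet;

public class MakeExecutable {
	public static void main(String[] args) throws Exception {
		// Restore read/write/execute for the owner on the launcher script.
		Path script = Paths.get("/tmp/flink-dist/bin/start-cluster.sh"); // hypothetical path
		Files.setPosixFilePermissions(script, EnumSet.of(
				PosixFilePermission.OWNER_READ,
				PosixFilePermission.OWNER_WRITE,
				PosixFilePermission.OWNER_EXECUTE));
	}
}
{code}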



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] GJL commented on issue #10858: [FLINK-15582] Change LegacySchedulerBatchSchedulingTest to DefaultSchedulerBatchSchedulingTest

2020-03-02 Thread GitBox
GJL commented on issue #10858: [FLINK-15582] Change 
LegacySchedulerBatchSchedulingTest to DefaultSchedulerBatchSchedulingTest
URL: https://github.com/apache/flink/pull/10858#issuecomment-593336711
 
 
   What will be our strategy when it comes to rewriting legacy tests? Do we 
already want to replace them at this point in time? Alternatively, we could 
duplicate all tests in case we are not able to remove the legacy scheduler in 
1.11.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-16373) EmbeddedLeaderService: IllegalStateException: The RPC connection is already closed

2020-03-02 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-16373:
--

 Summary: EmbeddedLeaderService: IllegalStateException: The RPC 
connection is already closed
 Key: FLINK-16373
 URL: https://issues.apache.org/jira/browse/FLINK-16373
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.11.0
Reporter: Robert Metzger


In our CI system, I see a lot of these error messages:
{code}
09:52:41,108 [flink-akka.actor.default-dispatcher-2] INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor- Allocated slot 
for fd36e49b9105838fc7fb8fff442ade28.
09:52:41,108 [mini-cluster-io-thread-14] WARN  
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService  
- Error notifying leader listener about new leader
java.lang.IllegalStateException: The RPC connection is already closed
at 
org.apache.flink.util.Preconditions.checkState(Preconditions.java:195)
at 
org.apache.flink.runtime.registration.RegisteredRpcConnection.start(RegisteredRpcConnection.java:90)
at 
org.apache.flink.runtime.taskexecutor.JobLeaderService$JobManagerLeaderListener.notifyLeaderAddress(JobLeaderService.java:334)
at 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService$NotifyOfLeaderCall.run(EmbeddedLeaderService.java:515)
at 
java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
09:52:41,109 [flink-akka.actor.default-dispatcher-4] INFO  
org.apache.flink.runtime.taskexecutor.JobLeaderService- Resolved 
JobManager address, beginning registration
{code}

Example cases:
- https://transfer.sh/BAzbF/20200302.17.tar.gz
- https://transfer.sh/8344E/20200302.19.tar.gz




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386310570
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/consumer/BufferProviderInputChannelBuilder.java
 ##
 @@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.partition.consumer;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.LocalConnectionManager;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.partition.InputChannelTestUtils;
+import org.apache.flink.runtime.io.network.partition.ResultPartitionID;
+
+import javax.annotation.Nullable;
+
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Builder for a special {@link RemoteInputChannel} that corresponds to buffer
+ * requests, allowing users to set the InputChannelID and the release state.
+ */
+public class BufferProviderInputChannelBuilder {
+   private SingleInputGate inputGate = new 
SingleInputGateBuilder().build();
+   private InputChannelID id = new InputChannelID();
+   private int maxNumberOfBuffers = Integer.MAX_VALUE;
+   private int bufferSize = 32 * 1024;
+   private boolean isReleased = false;
+
+   public BufferProviderInputChannelBuilder setInputGate(SingleInputGate 
inputGate) {
+   this.inputGate = inputGate;
+   return this;
+   }
+
+   public BufferProviderInputChannelBuilder setId(InputChannelID id) {
+   this.id = id;
+   return this;
+   }
+
+   public BufferProviderInputChannelBuilder setMaxNumberOfBuffers(int 
maxNumberOfBuffers) {
+   this.maxNumberOfBuffers = maxNumberOfBuffers;
+   return this;
+   }
+
+   public BufferProviderInputChannelBuilder setBufferSize(int bufferSize) {
 
 Review comment:
  We can remove it if it is not used at the moment.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16262) Class loader problem with FlinkKafkaProducer.Semantic.EXACTLY_ONCE and usrlib directory

2020-03-02 Thread Gary Yao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049007#comment-17049007
 ] 

Gary Yao commented on FLINK-16262:
--

[~maguowei] It sounds reasonable to make this behavior configurable for the 
docker images. That should be a new ticket however. IIRC [~azagrebin] is 
currently consolidating our Dockerfiles.


> Class loader problem with FlinkKafkaProducer.Semantic.EXACTLY_ONCE and usrlib 
> directory
> ---
>
> Key: FLINK-16262
> URL: https://issues.apache.org/jira/browse/FLINK-16262
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
> Environment: openjdk:11-jre with a slightly modified Flink 1.10.0 
> build (nothing changed regarding Kafka and/or class loading).
>Reporter: Jürgen Kreileder
>Assignee: Guowei Ma
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We're using Docker images modeled after 
> [https://github.com/apache/flink/blob/master/flink-container/docker/Dockerfile]
>  (using Java 11)
> When I try to switch a Kafka producer from AT_LEAST_ONCE to EXACTLY_ONCE, the 
> taskmanager startup fails with:
> {code:java}
> 2020-02-24 18:25:16.389 INFO  o.a.f.r.t.Task                           Create 
> Case Fixer -> Sink: Findings local-krei04-kba-digitalweb-uc1 (1/1) 
> (72f7764c6f6c614e5355562ed3d27209) switched from RUNNING to FAILED.
> org.apache.kafka.common.config.ConfigException: Invalid value 
> org.apache.kafka.common.serialization.ByteArraySerializer for configuration 
> key.serializer: Class 
> org.apache.kafka.common.serialization.ByteArraySerializer could not be found.
>  at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:718)
>  at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:471)
>  at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:464)
>  at 
> org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:62)
>  at 
> org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:75)
>  at 
> org.apache.kafka.clients.producer.ProducerConfig.(ProducerConfig.java:396)
>  at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:326)
>  at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:298)
>  at 
> org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer.(FlinkKafkaInternalProducer.java:76)
>  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.lambda$abortTransactions$2(FlinkKafkaProducer.java:1107)
>  at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(Unknown 
> Source)
>  at java.base/java.util.HashMap$KeySpliterator.forEachRemaining(Unknown 
> Source)
>  at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
>  at java.base/java.util.stream.ForEachOps$ForEachTask.compute(Unknown Source)
>  at java.base/java.util.concurrent.CountedCompleter.exec(Unknown Source)
>  at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source)
>  at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown 
> Source)
>  at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source)
>  at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source)
>  at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown 
> Source){code}
> This looks like a class loading issue: If I copy our JAR to FLINK_LIB_DIR 
> instead of FLINK_USR_LIB_DIR, everything works fine.
> (AT_LEAST_ONCE producers work fine with the JAR in FLINK_USR_LIB_DIR)
>  
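
The stack trace shows the producer being created on a ForkJoinPool common-pool 
thread (via a parallel stream), whose context classloader does not see the jar 
in usrlib; Kafka resolves serializer classes by name through that loader, hence 
the ConfigException. A hedged sketch of the usual remedy (editor's illustration, 
not the actual Flink fix):

{code:java}
public final class ContextClassLoaderGuard {

	// Run 'action' with the given (user-code) classloader installed as the
	// thread context classloader, so class-by-name lookups can succeed.
	public static void runWithClassLoader(ClassLoader userCodeClassLoader, Runnable action) {
		Thread thread = Thread.currentThread();
		ClassLoader original = thread.getContextClassLoader();
		thread.setContextClassLoader(userCodeClassLoader);
		try {
			action.run(); // e.g. create the KafkaProducer inside this action
		} finally {
			thread.setContextClassLoader(original);
		}
	}
}
{code}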



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16371) HadoopCompressionBulkWriter fails with 'java.io.NotSerializableException'

2020-03-02 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-16371:


Assignee: Sivaprasanna Sethuraman

> HadoopCompressionBulkWriter fails with 'java.io.NotSerializableException'
> -
>
> Key: FLINK-16371
> URL: https://issues.apache.org/jira/browse/FLINK-16371
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Sivaprasanna Sethuraman
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>
> When using CompressWriterFactory with Hadoop compression codec, the execution 
> fails with java.io.NotSerializableException. 
> I guess this is probably due to the instance creation for Hadoop's 
> CompressionCodec being done here at 
> [CompressWriterFactory.java#L59|https://github.com/apache/flink/blob/master/flink-formats/flink-compress/src/main/java/org/apache/flink/formats/compress/CompressWriterFactory.java#L59]
>  and thus it has to be sent over the wire causing the exception to be thrown.
> So I did a quick test on my end by changing the way the CompressionCodec is 
> initialised and ran it on a Hadoop cluster, and it has been working just 
> fine. Will raise a PR in a day or so.
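
The usual remedy for this class of failure is to keep only serializable state 
(such as the codec class name) in the factory and build the Hadoop codec lazily 
on the task side; a hedged, simplified sketch of the idea (editor's 
illustration, not the actual patch):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.util.ReflectionUtils;

import java.io.Serializable;

public class LazyCodecHolder implements Serializable {

	private final String codecClassName;      // serializable: ships over the wire
	private transient CompressionCodec codec; // rebuilt after deserialization

	public LazyCodecHolder(String codecClassName) {
		this.codecClassName = codecClassName;
	}

	public CompressionCodec getCodec() throws ClassNotFoundException {
		if (codec == null) {
			codec = (CompressionCodec) ReflectionUtils.newInstance(
					Class.forName(codecClassName), new Configuration());
		}
		return codec;
	}
}
{code}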



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386310024
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,324 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import 
org.apache.flink.runtime.io.network.partition.consumer.BufferProviderInputChannelBuilder;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   private static final InputChannelID NORMAL_CHANNEL_ID = new 
InputChannelID();
+
+   private static final InputChannelID RELEASED_CHANNEL_ID = new 
InputChannelID();
+
+   private static final InputChannelID REMOVED_CHANNEL_ID = new 
InputChannelID();
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   // 3 buffers required.
+   testRepartitionMessagesAndDecode(3, false, false, false);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. 
Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws 
Exception {
+   // 4 buffers required.
+   testRepartitionMessagesAndDecode(4, true, false, false);
+   }
+
+   /**
+* Verifies that NettyMessageDecoder works well with buffers sent to released and removed input channels.
+* For such channels, no Buffer is available to receive the data buffer in the message, and the data buffer part should be discarded before reading the next message.
+*/
+   @Test
+   public void 
testDownstreamMessageDecodeWithReleasedAndRemovedInputChannel() throws 
Exception {
+   // 3 buffers required.
+   testRepartitionMessagesAndDecode(3, false, true, true);
+   }
+
+   
//--
+
+   private void testRepartitionMessagesAndDecode(
+   int numberOfBuffersInNormalChannel,
+   boolean hasEmptyBuffer,
+   boolean hasBufferForReleasedChannel,
+   boolean hasBufferForRemovedChannel) throws Exception {
+
+   EmbeddedChannel channel = 
createPartitionRequestClientHandler(numberOfBuffersInNormalChannel);
+
+   try {
+   List messages = 
createMessageList(hasEmptyBuffer, hasBufferForReleasedChannel, 
hasBufferForRemovedChannel);
+   repartitionMessagesAndVerifyDecoding(channel, messages);
+   } finally {
+   

[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386309746
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,324 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import 
org.apache.flink.runtime.io.network.partition.consumer.BufferProviderInputChannelBuilder;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   private static final InputChannelID NORMAL_CHANNEL_ID = new 
InputChannelID();
+
+   private static final InputChannelID RELEASED_CHANNEL_ID = new 
InputChannelID();
+
+   private static final InputChannelID REMOVED_CHANNEL_ID = new 
InputChannelID();
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   // 3 buffers required.
+   testRepartitionMessagesAndDecode(3, false, false, false);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. 
Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws 
Exception {
+   // 4 buffers required.
+   testRepartitionMessagesAndDecode(4, true, false, false);
+   }
+
+   /**
+* Verifies that NettyMessageDecoder works well with buffers sent to released and removed input channels.
+* For such channels, no Buffer is available to receive the data buffer in the message, and the data buffer part should be discarded before reading the next message.
+*/
+   @Test
+   public void 
testDownstreamMessageDecodeWithReleasedAndRemovedInputChannel() throws 
Exception {
+   // 3 buffers required.
+   testRepartitionMessagesAndDecode(3, false, true, true);
+   }
+
+   
//--
+
+   private void testRepartitionMessagesAndDecode(
+   int numberOfBuffersInNormalChannel,
+   boolean hasEmptyBuffer,
+   boolean hasBufferForReleasedChannel,
+   boolean hasBufferForRemovedChannel) throws Exception {
+
+   EmbeddedChannel channel = 
createPartitionRequestClientHandler(numberOfBuffersInNormalChannel);
+
+   try {
+   List messages = 
createMessageList(hasEmptyBuffer, hasBufferForReleasedChannel, 
hasBufferForRemovedChannel);
+   repartitionMessagesAndVerifyDecoding(channel, messages);
+   } finally {
+   

[jira] [Comment Edited] (FLINK-16371) HadoopCompressionBulkWriter fails with 'java.io.NotSerializableException'

2020-03-02 Thread Sivaprasanna Sethuraman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049004#comment-17049004
 ] 

Sivaprasanna Sethuraman edited comment on FLINK-16371 at 3/2/20 10:22 AM:
--

Can someone please grant me access to Flink's Jira? I want to assign this 
ticket to myself.


was (Author: zenfenan):
Can someone please grant me access to this Jira? I want to assign it to myself.

> HadoopCompressionBulkWriter fails with 'java.io.NotSerializableException'
> -
>
> Key: FLINK-16371
> URL: https://issues.apache.org/jira/browse/FLINK-16371
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Sivaprasanna Sethuraman
>Priority: Major
>
> When using CompressWriterFactory with Hadoop compression codec, the execution 
> fails with java.io.NotSerializableException. 
> I guess this is probably due to the instance creation for Hadoop's 
> CompressionCodec being done here at 
> [CompressWriterFactory.java#L59|https://github.com/apache/flink/blob/master/flink-formats/flink-compress/src/main/java/org/apache/flink/formats/compress/CompressWriterFactory.java#L59]
>  and thus it has to be sent over the wire causing the exception to be thrown.
> So I did a quick test on my end by changing the way the CompressionCodec is 
> created and ran it on a Hadoop cluster, and it has been working just fine. 
> Will raise a PR in a day or so.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16371) HadoopCompressionBulkWriter fails with 'java.io.NotSerializableException'

2020-03-02 Thread Sivaprasanna Sethuraman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaprasanna Sethuraman updated FLINK-16371:

Description: 
When using CompressWriterFactory with Hadoop compression codec, the execution 
fails with java.io.NotSerializableException. 

I guess this is probably due to the instance creation for Hadoop's 
CompressionCodec being done here at 
[CompressWriterFactory.java#L59|https://github.com/apache/flink/blob/master/flink-formats/flink-compress/src/main/java/org/apache/flink/formats/compress/CompressWriterFactory.java#L59]
 and thus it has to be sent over the wire causing the exception to be thrown.

So I did a quick test on my end by changing the way the CompressionCodec is 
initialised and ran it on a Hadoop cluster, and it has been working just fine. 
Will raise a PR in a day or so.

  was:
When using CompressWriterFactory with Hadoop compression codec, the execution 
fails with java.io.NotSerializableException. 

I guess this is probably to do with the the instance creation for Hadoop's 
CompressionCodec being done here at 
[CompressWriterFactory.java#L59|https://github.com/apache/flink/blob/master/flink-formats/flink-compress/src/main/java/org/apache/flink/formats/compress/CompressWriterFactory.java#L59]
 and thus it has to be sent over the wire causing the exception to be thrown.

So I did a quick test on my end by changing the way the CompressionCodec is 
created and ran it on a Hadoop cluster, and it has been working just fine. Will 
raise a PR in a day or so.


> HadoopCompressionBulkWriter fails with 'java.io.NotSerializableException'
> -
>
> Key: FLINK-16371
> URL: https://issues.apache.org/jira/browse/FLINK-16371
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Sivaprasanna Sethuraman
>Priority: Major
>
> When using CompressWriterFactory with Hadoop compression codec, the execution 
> fails with java.io.NotSerializableException. 
> I guess this is probably due to the instance creation for Hadoop's 
> CompressionCodec being done here at 
> [CompressWriterFactory.java#L59|https://github.com/apache/flink/blob/master/flink-formats/flink-compress/src/main/java/org/apache/flink/formats/compress/CompressWriterFactory.java#L59]
>  and thus it has to be sent over the wire causing the exception to be thrown.
> So I did a quick test on my end by changing the way the CompressionCodec is 
> initialised and ran it on a Hadoop cluster, and it has been working just 
> fine. Will raise a PR in a day or so.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16371) HadoopCompressionBulkWriter fails with 'java.io.NotSerializableException'

2020-03-02 Thread Sivaprasanna Sethuraman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049004#comment-17049004
 ] 

Sivaprasanna Sethuraman commented on FLINK-16371:
-

Can someone please grant me access to this Jira? I want to assign it to myself.

> HadoopCompressionBulkWriter fails with 'java.io.NotSerializableException'
> -
>
> Key: FLINK-16371
> URL: https://issues.apache.org/jira/browse/FLINK-16371
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Sivaprasanna Sethuraman
>Priority: Major
>
> When using CompressWriterFactory with Hadoop compression codec, the execution 
> fails with java.io.NotSerializableException. 
> I guess this is probably due to the instance creation for Hadoop's 
> CompressionCodec being done here at 
> [CompressWriterFactory.java#L59|https://github.com/apache/flink/blob/master/flink-formats/flink-compress/src/main/java/org/apache/flink/formats/compress/CompressWriterFactory.java#L59]
>  and thus it has to be sent over the wire causing the exception to be thrown.
> So I did a quick test on my end by changing the way the CompressionCodec is 
> created and ran it on a Hadoop cluster, and it has been working just fine. 
> Will raise a PR in a day or so.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386306388
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,324 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import 
org.apache.flink.runtime.io.network.partition.consumer.BufferProviderInputChannelBuilder;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   private static final InputChannelID NORMAL_CHANNEL_ID = new 
InputChannelID();
+
+   private static final InputChannelID RELEASED_CHANNEL_ID = new 
InputChannelID();
+
+   private static final InputChannelID REMOVED_CHANNEL_ID = new 
InputChannelID();
+
+   /**
+* Verifies that the client side decoder works well for unreleased 
input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   // 3 buffers required.
+   testRepartitionMessagesAndDecode(3, false, false, false);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. 
Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws 
Exception {
+   // 4 buffers required.
+   testRepartitionMessagesAndDecode(4, true, false, false);
+   }
+
+   /**
+* Verifies that NettyMessageDecoder works well with buffers sent to released and removed input channels.
+* For such channels, no Buffer is available to receive the data buffer in the message, and the data buffer part should be discarded before reading the next message.
+*/
+   @Test
+   public void 
testDownstreamMessageDecodeWithReleasedAndRemovedInputChannel() throws 
Exception {
+   // 3 buffers required.
+   testRepartitionMessagesAndDecode(3, false, true, true);
+   }
+
+   //------------------------------------------------------------------------
+
+   private void testRepartitionMessagesAndDecode(
+   int numberOfBuffersInNormalChannel,
+   boolean hasEmptyBuffer,
+   boolean hasBufferForReleasedChannel,
+   boolean hasBufferForRemovedChannel) throws Exception {
+
+   EmbeddedChannel channel = createPartitionRequestClientHandler(numberOfBuffersInNormalChannel);
+
+   try {
+   List messages = createMessageList(hasEmptyBuffer, hasBufferForReleasedChannel, hasBufferForRemovedChannel);
+   repartitionMessagesAndVerifyDecoding(channel, messages);
+   } finally {
+   

[jira] [Updated] (FLINK-16372) Generate unique operator name

2020-03-02 Thread Zou (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zou updated FLINK-16372:

Description: 
For now, operators of the same type share the same default name. For example, 
all map functions will use "Map" as their name by default. Users often do not 
set names explicitly, and in that case they may be confused by the metric 
names in the Flink UI.

So shall we generate a unique name for each operator by default, e.g. by 
adding an index suffix?

  was:
For now,  same type of operators has the same default names. For example, all 
map function will use "Map" as it's name by default. And users may not set 
names often, in that case, users may be confused by the metrics name in Flink 
UI.

So shall we generate unique name for each operator by default, like add a index 
suffix.


> Generate unique operator name
> -
>
> Key: FLINK-16372
> URL: https://issues.apache.org/jira/browse/FLINK-16372
> Project: Flink
>  Issue Type: Improvement
>Reporter: Zou
>Priority: Major
>
> For now, operators of the same type share the same default name. For example, 
> all map functions will use "Map" as their name by default. Users often do not 
> set names explicitly, and in that case they may be confused by the metric 
> names in the Flink UI.
> So shall we generate a unique name for each operator by default, e.g. by 
> adding an index suffix?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16372) Generate unique operator name

2020-03-02 Thread Zou (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zou updated FLINK-16372:

Description: 
For now, operators of the same type share the same default name. For example, 
all map functions will use "Map" as their name by default. Users often do not 
set names explicitly, and in that case they may be confused by the metric 
names in the Flink UI.

So shall we generate a unique name for each operator by default, e.g. by 
adding an index suffix?

  was:
For now,  same types of operators have the same default names. For example, all 
map function will use "Map" as it's name by default. And users may not set 
names often, in that case, users may be confused by the metrics name in Flink 
UI.

So shall we generate unique name for each operator by default, like add a index 
suffix.


> Generate unique operator name
> -
>
> Key: FLINK-16372
> URL: https://issues.apache.org/jira/browse/FLINK-16372
> Project: Flink
>  Issue Type: Improvement
>Reporter: Zou
>Priority: Major
>
> For now, operators of the same type share the same default name. For example, 
> all map functions will use "Map" as their name by default. Users often do not 
> set names explicitly, and in that case they may be confused by the metric 
> names in the Flink UI.
> So shall we generate a unique name for each operator by default, e.g. by 
> adding an index suffix?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386305106
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,324 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.partition.consumer.BufferProviderInputChannelBuilder;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   private static final InputChannelID NORMAL_CHANNEL_ID = new InputChannelID();
+
+   private static final InputChannelID RELEASED_CHANNEL_ID = new InputChannelID();
+
+   private static final InputChannelID REMOVED_CHANNEL_ID = new InputChannelID();
+
+   /**
+* Verifies that the client side decoder works well for unreleased input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   // 3 buffers required.
 
 Review comment:
   If we want to explain the meaning of `3`, maybe we can define a local var 
for it.
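   
   For example, a minimal sketch of that suggestion (`numberOfRequiredBuffers` is just an illustrative name):
   
   ```java
   // Hypothetical: name the magic number instead of commenting it.
   @Test
   public void testDownstreamMessageDecode() throws Exception {
       final int numberOfRequiredBuffers = 3;
       testRepartitionMessagesAndDecode(numberOfRequiredBuffers, false, false, false);
   }
   ```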


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-16372) Generate unique operator name

2020-03-02 Thread Zou (Jira)
Zou created FLINK-16372:
---

 Summary: Generate unique operator name
 Key: FLINK-16372
 URL: https://issues.apache.org/jira/browse/FLINK-16372
 Project: Flink
  Issue Type: Improvement
Reporter: Zou


For now, operators of the same type share the same default name. For example, 
all map functions will use "Map" as their name by default. Users often do not 
set names explicitly, and in that case they may be confused by the metric 
names in the Flink UI.

So shall we generate a unique name for each operator by default, e.g. by 
adding an index suffix?
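
For illustration, a rough sketch of the suffixing idea (the class and method 
names here are hypothetical, not an actual Flink API):

{code}
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch: make default operator names unique via an index suffix. */
public class UniqueNameGenerator {

    private final Map<String, Integer> counters = new HashMap<>();

    /** E.g. "Map" -> "Map-0", then "Map-1", and so on. */
    public String uniqueName(String defaultName) {
        int index = counters.merge(defaultName, 1, Integer::sum) - 1;
        return defaultName + "-" + index;
    }
}
{code}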



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16371) HadoopCompressionBulkWriter fails with 'java.io.NotSerializableException'

2020-03-02 Thread Sivaprasanna Sethuraman (Jira)
Sivaprasanna Sethuraman created FLINK-16371:
---

 Summary: HadoopCompressionBulkWriter fails with 
'java.io.NotSerializableException'
 Key: FLINK-16371
 URL: https://issues.apache.org/jira/browse/FLINK-16371
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.10.0
Reporter: Sivaprasanna Sethuraman


When using CompressWriterFactory with a Hadoop compression codec, the execution 
fails with java.io.NotSerializableException.

I guess this probably has to do with the instance creation for Hadoop's 
CompressionCodec being done at 
[CompressWriterFactory.java#L59|https://github.com/apache/flink/blob/master/flink-formats/flink-compress/src/main/java/org/apache/flink/formats/compress/CompressWriterFactory.java#L59],
 which means the codec instance has to be sent over the wire, causing the 
exception to be thrown.

So I did a quick test on my end by changing the way the CompressionCodec is 
created, ran it on a Hadoop cluster, and it has been working just fine. I will 
raise a PR in a day or so.
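
For illustration, a rough sketch of the idea (the class and field names are 
made up for this example; the codec is resolved via Hadoop's 
CompressionCodecFactory):

{code}
import java.io.Serializable;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

/**
 * Hypothetical sketch: keep only the serializable codec name and resolve the
 * non-serializable CompressionCodec lazily, after deserialization on the
 * task manager.
 */
public class LazyCodecHolder implements Serializable {

    private final String codecName;           // serializable, shipped to tasks
    private transient CompressionCodec codec; // re-created where it is used

    public LazyCodecHolder(String codecName) {
        this.codecName = codecName;
    }

    public CompressionCodec getCodec() {
        if (codec == null) {
            codec = new CompressionCodecFactory(new Configuration())
                .getCodecByName(codecName);
        }
        return codec;
    }
}
{code}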



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and read log by name for taskmanager

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11250: [FLINK-16302][rest]add log list and 
read log by name for taskmanager
URL: https://github.com/apache/flink/pull/11250#issuecomment-592336212
 
 
   
   ## CI report:
   
   * e474761e49927bb30bac6a297e992dc2ec98e01a UNKNOWN
   * 70e8ca9774fc4247657f5d6aecc43459229ba9bb Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151307447) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5801)
 
   * 54ac8f26c67344a0ed000c882afd66e68c8f6bd0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-03-02 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r386302580
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyMessageClientDecoderDelegateTest.java
 ##
 @@ -0,0 +1,324 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.partition.consumer.BufferProviderInputChannelBuilder;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.embedded.EmbeddedChannel;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.BufferResponse;
+import static org.apache.flink.runtime.io.network.netty.NettyMessage.ErrorResponse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests the client side message decoder.
+ */
+public class NettyMessageClientDecoderDelegateTest {
+   private static final NettyBufferPool ALLOCATOR = new NettyBufferPool(1);
+
+   private static final InputChannelID NORMAL_CHANNEL_ID = new InputChannelID();
+
+   private static final InputChannelID RELEASED_CHANNEL_ID = new InputChannelID();
+
+   private static final InputChannelID REMOVED_CHANNEL_ID = new InputChannelID();
+
+   /**
+* Verifies that the client side decoder works well for unreleased input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecode() throws Exception {
+   // 3 buffers required.
+   testRepartitionMessagesAndDecode(3, false, false, false);
+   }
+
+   /**
+* Verifies that the client side decoder works well for empty buffers. Empty buffers should not
+* consume data buffers of the input channels.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithEmptyBuffers() throws Exception {
+   // 4 buffers required.
+   testRepartitionMessagesAndDecode(4, true, false, false);
+   }
+
+   /**
+* Verifies that NettyMessageDecoder works well with buffers sent to released and removed input channels.
+* For such channels, no Buffer is available to receive the data buffer in the message, and the data buffer
+* part should be discarded before reading the next message.
+*/
+   @Test
+   public void testDownstreamMessageDecodeWithReleasedAndRemovedInputChannel() throws Exception {
+   // 3 buffers required.
+   testRepartitionMessagesAndDecode(3, false, true, true);
 
 Review comment:
   Can we verify only one case per test? I mean separating the release and 
empty buffer cases.
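   
   For instance, a possible split of this combined released/removed case (test 
names are hypothetical; assuming `testRepartitionMessagesAndDecode` keeps its 
current signature):
   
   ```java
   // Hypothetical split: one focused test per scenario.
   @Test
   public void testDownstreamMessageDecodeWithReleasedInputChannel() throws Exception {
       testRepartitionMessagesAndDecode(3, false, true, false);
   }
   
   @Test
   public void testDownstreamMessageDecodeWithRemovedInputChannel() throws Exception {
       testRepartitionMessagesAndDecode(3, false, false, true);
   }
   ```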


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11119: [FLINK-15396][json] Support to ignore parse errors for JSON format

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11119: [FLINK-15396][json] Support to 
ignore parse errors for JSON format
URL: https://github.com/apache/flink/pull/11119#issuecomment-587365636
 
 
   
   ## CI report:
   
   * 68fbf9d61e9c89eb323d7eb5cc08d18f9f852065 UNKNOWN
   * baf7d0242e8aacb3cd46154c864201dbb7315c86 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/151318299) 
   * 1ecd2ee1b734a3dc868d20c8c2532447071f0233 UNKNOWN
   * 23df46401002fc6a2fd1800d76c069296c7339a3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11248: [FLINK-16299] Release containers recovered from previous attempt in w…

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11248: [FLINK-16299] Release containers 
recovered from previous attempt in w…
URL: https://github.com/apache/flink/pull/11248#issuecomment-592302068
 
 
   
   ## CI report:
   
   * b61c045eddf32b77b81238ed06cbd961351f2e3b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/151300896) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5797)
 
   * 72f4432f92749f7b8825a426a8ec8e1379aa76a2 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151326263) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5810)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11233: [FLINK-16194][k8s] Refactor the Kubernetes decorator design

2020-03-02 Thread GitBox
flinkbot edited a comment on issue #11233: [FLINK-16194][k8s] Refactor the 
Kubernetes decorator design
URL: https://github.com/apache/flink/pull/11233#issuecomment-591843120
 
 
   
   ## CI report:
   
   * 0017b8a8930024d0e9ce3355d050565a5007831d UNKNOWN
   * 978c6f00810a66a87325af6454e49e2f084e45be UNKNOWN
   * abfef134eb5940d7831dab4c794d654c2fa70263 UNKNOWN
   * d78c1b1b282a3d7fcfc797bf909dca2b5f2264e4 UNKNOWN
   * 78750a20196de5f01b01b59b1cd15c752bdee5c0 UNKNOWN
   * 08f07deb2243554d24e2e4171e6cb23a5f934cc8 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/151292878) Azure: 
[CANCELED](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5791)
 
   * b088e0e0530fa3178a547b26b7e950165181caaf Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/151326230) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5809)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16313) flink-state-processor-api: surefire execution unstable on Azure

2020-03-02 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17048993#comment-17048993
 ] 

Robert Metzger commented on FLINK-16313:


Another case, on master: 
https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5807&view=logs&s=0491af50-44b2-5583-ba82-45044b1ef751&j=41cba0bb-1271-5adb-01cc-4768f26a8311
 with transfer.sh logs: https://transfer.sh/8344E/20200302.19.tar.gz

> flink-state-processor-api: surefire execution unstable on Azure
> ---
>
> Key: FLINK-16313
> URL: https://issues.apache.org/jira/browse/FLINK-16313
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor, Tests
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> Log file: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5686&view=logs&j=41cba0bb-1271-5adb-01cc-4768f26a8311&t=44574c85-1cd0-5978-cccf-f0cf7e87a36a
> {code}
> 2020-02-27T12:36:35.2860111Z [INFO] flink-table-planner 
>  SUCCESS [01:47 min]
> 2020-02-27T12:36:35.2860966Z [INFO] flink-cep-scala 
>  SUCCESS [  5.041 s]
> 2020-02-27T12:36:35.2861740Z [INFO] flink-sql-client 
> ... SUCCESS [03:00 min]
> 2020-02-27T12:36:35.2862503Z [INFO] flink-state-processor-api 
> .. FAILURE [ 15.394 s]
> 2020-02-27T12:36:35.2863237Z [INFO] 
> 
> 2020-02-27T12:36:35.2863587Z [INFO] BUILD FAILURE
> 2020-02-27T12:36:35.2864071Z [INFO] 
> 
> 2020-02-27T12:36:35.2864428Z [INFO] Total time: 05:38 min
> 2020-02-27T12:36:35.2866349Z [INFO] Finished at: 2020-02-27T12:36:35+00:00
> 2020-02-27T12:36:35.9345815Z [INFO] Final Memory: 147M/2914M
> 2020-02-27T12:36:35.9347238Z [INFO] 
> 
> 2020-02-27T12:36:35.9355362Z [WARNING] The requested profile 
> "skip-webui-build" could not be activated because it does not exist.
> 2020-02-27T12:36:35.9367919Z [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test 
> (integration-tests) on project flink-state-processor-api_2.11: There are test 
> failures.
> 2020-02-27T12:36:35.9368804Z [ERROR] 
> 2020-02-27T12:36:35.9369489Z [ERROR] Please refer to 
> /__w/2/s/flink-libraries/flink-state-processing-api/target/surefire-reports 
> for the individual test results.
> 2020-02-27T12:36:35.9370249Z [ERROR] Please refer to dump files (if any 
> exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> 2020-02-27T12:36:35.9370713Z [ERROR] ExecutionException Error occurred in 
> starting fork, check output in log
> 2020-02-27T12:36:35.9371279Z [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException Error occurred in starting fork, check output in log
> 2020-02-27T12:36:35.9372275Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
> 2020-02-27T12:36:35.9372917Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:457)
> 2020-02-27T12:36:35.9373498Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:298)
> 2020-02-27T12:36:35.9374064Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
> 2020-02-27T12:36:35.9374636Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> 2020-02-27T12:36:35.9375344Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> 2020-02-27T12:36:35.9376194Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
> 2020-02-27T12:36:35.9376791Z [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> 2020-02-27T12:36:35.9377375Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> 2020-02-27T12:36:35.9377898Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> 2020-02-27T12:36:35.9378435Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> 2020-02-27T12:36:35.9379063Z [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
> 2020-02-27T12:36:35.9379709Z [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.ja

[GitHub] [flink] TisonKun commented on a change in pull request #11174: [FLINK-16199][sql] Support IS JSON predicate for blink planner

2020-03-02 Thread GitBox
TisonKun commented on a change in pull request #11174: [FLINK-16199][sql] 
Support IS JSON predicate for blink planner
URL: https://github.com/apache/flink/pull/11174#discussion_r386301412
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/FunctionGenerator.scala
 ##
 @@ -753,6 +753,54 @@ object FunctionGenerator {
 Seq(FLOAT, INTEGER),
 BuiltInMethods.TRUNCATE_FLOAT)
 
+  addSqlFunctionMethod(
+IS_JSON_VALUE,
+Seq(CHAR),
 
 Review comment:
   Unfortunately, after I changed `CHAR` to `VARCHAR`, test cases such as `'[]'` 
failed with a mismatch. Thus I manually added both `CHAR` *and* `VARCHAR` in 
`FunctionGenerator`. I agree that we can do follow-ups to ease the overhead of 
the combinations.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on issue #11174: [FLINK-16199][sql] Support IS JSON predicate for blink planner

2020-03-02 Thread GitBox
TisonKun commented on issue #11174: [FLINK-16199][sql] Support IS JSON 
predicate for blink planner
URL: https://github.com/apache/flink/pull/11174#issuecomment-593325287
 
 
   @wuchong thanks for your review! I've addressed the comments. Please take 
another look :-)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

