[jira] [Comment Edited] (FLINK-16198) FileUtilsTest fails on Mac OS

2020-05-02 Thread Zhe Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17098164#comment-17098164
 ] 

Zhe Yu edited comment on FLINK-16198 at 5/3/20, 1:54 AM:
-

h3. Failed test case 1: FileUtilsTest.testCompressionOnRelativePath

*Root cause*: the test fails to create a directory using a relative path when 
preparing files for testing.

*Solution*: Since the goal of this test case is to verify the correctness of 
FileUtils.compressDirectory and FileUtils.expandDirectory, it is sufficient to 
refactor verifyDirectoryCompression so that it prepares the test data using an 
absolute path while still exercising the FileUtils methods through a relative 
path.
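As a sketch of the proposed refactor (plain java.nio code, not Flink's actual test), the fixture can be created through an absolute path while the method under test still receives a relative one; the class and directory names here are hypothetical:

```java
import java.nio.file.*;

// Hypothetical sketch of the proposed refactor (not Flink's actual test code):
// prepare the fixture via an absolute path, but hand the method under test a
// relative path, so relative-path handling is still exercised.
public class RelativePathFixture {
    public static void main(String[] args) throws Exception {
        Path cwd = Paths.get("").toAbsolutePath();

        // Creating the directory with an absolute path avoids the failure seen
        // when creating it via "../.."-style relative segments.
        Path absolute = Files.createDirectories(cwd.resolve("compressDir/rootDir"));

        // Only the call under test uses the relative view of the same location.
        Path relative = cwd.relativize(absolute);

        // ... the compress/expand call under test would receive `relative` here ...
        System.out.println(relative.isAbsolute()); // false

        // Clean up the fixture.
        Files.deleteIfExists(absolute);
        Files.deleteIfExists(absolute.getParent());
    }
}
```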
h3. Failed test case 2: FileUtilsTest.testDeleteDirectoryConcurrently

*Root cause:* FileUtils.deleteDirectory is not thread safe. The test does not 
always fail; in my local environment (macOS), it failed 7 or 8 times out of 10. 
The test case creates a 3-level file hierarchy like the one below:

- a
  - b
    - e
  - c
    - d

The test case then starts 3 threads, all of which iterate from the root 
directory a, so a race condition can occur: there is a chance that 2 threads 
try to delete /a/b/e/ at the same time, and when this happens the test will 
fail. Maybe I'm missing something given [this 
comment|https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/util/FileUtils.java#L248].

*Solution:* making 
[deleteFileOrDirectoryInternal|https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/util/FileUtils.java#L317]
 synchronized will protect the test from failing, and 
[guardIfWindows|https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/util/FileUtils.java#L389]
 can then be removed. However, it also means concurrent deletion is disabled 
when 2 threads try to delete from the same root.
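The effect of the proposed synchronization can be illustrated with a standalone sketch (plain java.nio, not Flink's actual FileUtils): serializing the recursive delete behind a shared lock means a thread that arrives second sees the tree as already gone instead of racing on /a/b/e. The class name and lock layout below are assumptions for illustration only:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.Comparator;
import java.util.stream.Stream;

// Hypothetical sketch (not Flink's actual FileUtils): a shared lock serializes
// the recursive delete, so two threads can no longer both list a directory and
// then both try to delete the same child.
public class SafeDelete {
    private static final Object DELETE_LOCK = new Object();

    public static void deleteDirectory(Path dir) throws IOException {
        synchronized (DELETE_LOCK) {
            if (!Files.exists(dir)) {
                return; // another thread already removed the whole tree
            }
            try (Stream<Path> walk = Files.walk(dir)) {
                // reverse order deletes children before their parents
                for (Path p : walk.sorted(Comparator.reverseOrder()).toArray(Path[]::new)) {
                    Files.deleteIfExists(p);
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Recreate the test's hierarchy: a/{b/e, c/d}.
        Path root = Files.createTempDirectory("a");
        Files.createDirectories(root.resolve("b/e"));
        Files.createDirectories(root.resolve("c/d"));

        Runnable task = () -> {
            try {
                deleteDirectory(root);
            } catch (IOException e) {
                // In the unsynchronized version this is where the race surfaces.
                throw new RuntimeException(e);
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task), t3 = new Thread(task);
        t1.start(); t2.start(); t3.start();
        t1.join(); t2.join(); t3.join();

        System.out.println(Files.exists(root) ? "LEFTOVER" : "DELETED");
    }
}
```

The trade-off noted above is visible in the sketch: the lock makes the three threads run strictly one after another, so concurrent deletion under the same root is effectively disabled.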

 

If the above proposal sounds reasonable to you, feel free to assign this issue 
to me. I'm also more than glad to discuss both of them. 


was (Author: fishzhe):
h3. Failed test case 1: FileUtilsTest.testCompressionOnRelativePath

*Root cause*: the test fails to create a directory using a relative path when 
preparing files for testing.

*Solution*: Since the goal of this test case is to verify the correctness of 
FileUtils.compressDirectory and FileUtils.expandDirectory, it is sufficient to 
refactor verifyDirectoryCompression so that it prepares the test data using an 
absolute path while still exercising the FileUtils methods through a relative 
path.
h3. Failed test case 2: FileUtilsTest.testDeleteDirectoryConcurrently

*Root cause:* FileUtils.deleteDirectory is not thread safe. The test does not 
always fail; in my local environment (macOS), it failed 7 or 8 times out of 10. 
The test case creates a 3-level file hierarchy like the one below:

- a
  - b
    - e
  - c
    - d

The test case then starts 3 threads, all of which iterate from the root 
directory a, so a race condition can occur: there is a chance that 2 threads 
try to delete /a/b/e/ at the same time, and when this happens the test will 
fail. Maybe I'm missing something given [this 
comment|https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/util/FileUtils.java#L248].

*Solution:* making 
[deleteFileOrDirectoryInternal|https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/util/FileUtils.java#L317]
 synchronized will protect the test from failing. However, it also means 
concurrent deletion is disabled when 2 threads try to delete from the same 
root.

If the above proposal sounds reasonable to you, feel free to assign this issue 
to me. I'm also more than glad to discuss both of them. 

> FileUtilsTest fails on Mac OS
> -
>
> Key: FLINK-16198
> URL: https://issues.apache.org/jira/browse/FLINK-16198
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Tests
>Affects Versions: 1.11.0
>Reporter: Andrey Zagrebin
>Priority: Blocker
>  Labels: starter
> Fix For: 1.11.0
>
>
> The following tests fail if run on Mac OS (IDE/maven).
>  
> FileUtilsTest.testCompressionOnRelativePath: 
> {code:java}
> java.nio.file.NoSuchFileException: 
> ../../../../../var/folders/67/v4yp_42d21j6_n8k1h556h0cgn/T/junit6496651678375117676/compressDir/rootDirjava.nio.file.NoSuchFileException:
>  
> ../../../../../var/folders/67/v4yp_42d21j6_n8k1h556h0cgn/T/junit6496651678375117676/compressDir/rootDir
>  at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at 
> sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
>  at java.nio.file.Files.createDirectory(Files.java:674) at 
> org.apache.flink.util.FileUtilsTest.verifyDirectoryCompression(FileUtilsTest.java:440)
>  at 
> org.apache.flink.util.FileUtilsTest.testCompressionOnRelativePath(FileUtilsTest.java:261)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> 


[jira] [Closed] (FLINK-17057) Add OpenSSL micro-benchmarks

2020-05-02 Thread Nico Kruber (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber closed FLINK-17057.
---
Fix Version/s: 1.11.0
   Resolution: Fixed

Fixed via b2e2699

> Add OpenSSL micro-benchmarks
> 
>
> Key: FLINK-17057
> URL: https://issues.apache.org/jira/browse/FLINK-17057
> Project: Flink
>  Issue Type: New Feature
>  Components: Benchmarks
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
> Fix For: 1.11.0
>
>
> Our JMH micro-benchmarks currently only run with Java's SSL implementation 
> but it would also be nice to have them evaluated with OpenSSL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17057) Add OpenSSL micro-benchmarks

2020-05-02 Thread Nico Kruber (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber updated FLINK-17057:

Affects Version/s: (was: 1.11.0)

> Add OpenSSL micro-benchmarks
> 
>
> Key: FLINK-17057
> URL: https://issues.apache.org/jira/browse/FLINK-17057
> Project: Flink
>  Issue Type: New Feature
>  Components: Benchmarks
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>
> Our JMH micro-benchmarks currently only run with Java's SSL implementation 
> but it would also be nice to have them evaluated with OpenSSL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #11963: [FLINK-16408] Bind user code class loader to lifetime of Job on TaskExecutor

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11963:
URL: https://github.com/apache/flink/pull/11963#issuecomment-621969588


   
   ## CI report:
   
   * e07ec1bd9c216cdac0ac4faa79bc5509ab4129b6 UNKNOWN
   * 3c3f0a78079da08bfae6b2704e619a39a4f8e47f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=537)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882


   
   ## CI report:
   
   * 066795205734add3b142a92c687c98b25253985e UNKNOWN
   * 18025611a435e08ed6ac626804f3b7529f7344c7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=538)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #11554: [FLINK-15101][connector/common] Add SourceCoordinator implementation.

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11554:
URL: https://github.com/apache/flink/pull/11554#issuecomment-605459909


   
   ## CI report:
   
   * 14e9fe3bfdeeae480047848801243f9fbed03cb4 UNKNOWN
   * a11dbc2d6b25ff16ef3cff4ecc751538d6867e68 UNKNOWN
   * 360f3e0be00e518c2f268f26aab06d6f27764acf Travis: 
[CANCELED](https://travis-ci.com/github/flink-ci/flink/builds/163393154) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=539)
 
   * 5b241a0386f6ae029759e076afef6b155b10b328 UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882


   
   ## CI report:
   
   * 066795205734add3b142a92c687c98b25253985e UNKNOWN
   * a03c41aa7757aa950ba2121887a5a9227b6b438d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=535)
 
   * 18025611a435e08ed6ac626804f3b7529f7344c7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=538)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #11554: [FLINK-15101][connector/common] Add SourceCoordinator implementation.

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11554:
URL: https://github.com/apache/flink/pull/11554#issuecomment-605459909


   
   ## CI report:
   
   * 14e9fe3bfdeeae480047848801243f9fbed03cb4 UNKNOWN
   * 6a0a147d96dc517574efb6a97786b8b7e8e9c10c Travis: 
[CANCELED](https://travis-ci.com/github/flink-ci/flink/builds/160993263) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7752)
 
   * a11dbc2d6b25ff16ef3cff4ecc751538d6867e68 UNKNOWN
   * 360f3e0be00e518c2f268f26aab06d6f27764acf Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/163393154) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=539)
 
   * 5b241a0386f6ae029759e076afef6b155b10b328 UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #11554: [FLINK-15101][connector/common] Add SourceCoordinator implementation.

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11554:
URL: https://github.com/apache/flink/pull/11554#issuecomment-605459909


   
   ## CI report:
   
   * 14e9fe3bfdeeae480047848801243f9fbed03cb4 UNKNOWN
   * 6a0a147d96dc517574efb6a97786b8b7e8e9c10c Travis: 
[CANCELED](https://travis-ci.com/github/flink-ci/flink/builds/160993263) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7752)
 
   * a11dbc2d6b25ff16ef3cff4ecc751538d6867e68 UNKNOWN
   * 360f3e0be00e518c2f268f26aab06d6f27764acf UNKNOWN
   
   




[GitHub] [flink] becketqin commented on pull request #11554: [FLINK-15101][connector/common] Add SourceCoordinator implementation.

2020-05-02 Thread GitBox


becketqin commented on pull request #11554:
URL: https://github.com/apache/flink/pull/11554#issuecomment-622981830


   Thanks for the review @StephanEwen. I just updated the patch. 
   
   Re: (1) Yes, keeping the code style consistent makes sense. I did not find 
`DataOutputStream`, though. Do you mean `DataOutputViewStreamWrapper`?
   
   Re: (2) Sounds good to me.







[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882


   
   ## CI report:
   
   * 066795205734add3b142a92c687c98b25253985e UNKNOWN
   * 563a1b12312b1b2de2b8426272e4975f046156d8 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163355418) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=533)
 
   * a03c41aa7757aa950ba2121887a5a9227b6b438d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=535)
 
   * 18025611a435e08ed6ac626804f3b7529f7344c7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=538)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #11963: [FLINK-16408] Bind user code class loader to lifetime of Job on TaskExecutor

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11963:
URL: https://github.com/apache/flink/pull/11963#issuecomment-621969588


   
   ## CI report:
   
   * e07ec1bd9c216cdac0ac4faa79bc5509ab4129b6 UNKNOWN
   * 994fb5984ec6312126d76b75206d0023e8ae4212 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=528)
 
   * 3c3f0a78079da08bfae6b2704e619a39a4f8e47f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=537)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882


   
   ## CI report:
   
   * 066795205734add3b142a92c687c98b25253985e UNKNOWN
   * 563a1b12312b1b2de2b8426272e4975f046156d8 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163355418) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=533)
 
   * a03c41aa7757aa950ba2121887a5a9227b6b438d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=535)
 
   * 18025611a435e08ed6ac626804f3b7529f7344c7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * 23818c4b30eb9714154080253a090fc63d983925 UNKNOWN
   * 9b6278f15a4fc65d78671a145305df2c7310cc6a UNKNOWN
   * 4f7399680818f5d29e917e17720e00900822a43d UNKNOWN
   * 25bf7d665d0b232621648b7d517376a60aab4311 UNKNOWN
   * b5e4853968886a623cb083fe71933fa2ce7f8932 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=536)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #11959: [FLINK-16997][table-common] Add new factory interfaces and discovery utilities

2020-05-02 Thread GitBox


wuchong commented on a change in pull request #11959:
URL: https://github.com/apache/flink/pull/11959#discussion_r418975276



##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java
##
@@ -0,0 +1,570 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.factories;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.DelegatingConfiguration;
+import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.connector.format.ScanFormat;
+import org.apache.flink.table.connector.format.SinkFormat;
+import org.apache.flink.table.connector.sink.DynamicTableSink;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.utils.EncodingUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.ServiceConfigurationError;
+import java.util.ServiceLoader;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/**
+ * Utility for working with {@link Factory}s.
+ */
+@Internal
+public final class FactoryUtil {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(FactoryUtil.class);
+
+   public static final ConfigOption<Integer> PROPERTY_VERSION = 
ConfigOptions.key("property-version")
+   .intType()
+   .defaultValue(1)
+   .withDescription(
+   "Version of the overall property design. This option is 
meant for future backwards compatibility.");
+
+   public static final ConfigOption<String> CONNECTOR = 
ConfigOptions.key("connector")
+   .stringType()
+   .noDefaultValue()
+   .withDescription(
+   "Uniquely identifies the connector of a dynamic table 
that is used for accessing data in " +
+   "an external system. Its value is used during table 
source and table sink discovery.");
+
+   public static final String FORMAT_PREFIX = "format.";
+
+   public static final String KEY_FORMAT_PREFIX = "key.format.";
+
+   public static final String VALUE_FORMAT_PREFIX = "value.format.";
+
+   /**
+* Creates a {@link DynamicTableSource} from a {@link CatalogTable}.
+*
+* It considers {@link Catalog#getFactory()} if provided.
+*/
+   public static DynamicTableSource createTableSource(
+   @Nullable Catalog catalog,

Review comment:
   IIUC, this method is called in `CatalogSourceTable` to create table 
source from the catalog table. However, in `CatalogSourceTable` or even the 
preceding `CatalogSchemaTable`, we can't get an instance of `Catalog`, because 
we don't want to reference the whole `Catalog` in tables. So this method may 
not be handy to use. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





[GitHub] [flink] flinkbot edited a comment on pull request #11963: [FLINK-16408] Bind user code class loader to lifetime of Job on TaskExecutor

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11963:
URL: https://github.com/apache/flink/pull/11963#issuecomment-621969588


   
   ## CI report:
   
   * e07ec1bd9c216cdac0ac4faa79bc5509ab4129b6 UNKNOWN
   * 994fb5984ec6312126d76b75206d0023e8ae4212 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=528)
 
   * 3c3f0a78079da08bfae6b2704e619a39a4f8e47f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * 23818c4b30eb9714154080253a090fc63d983925 UNKNOWN
   * 9b6278f15a4fc65d78671a145305df2c7310cc6a UNKNOWN
   * 4f7399680818f5d29e917e17720e00900822a43d UNKNOWN
   * 44c8e741ef3e7c17492736d441369a56646b6713 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=498)
 
   * 25bf7d665d0b232621648b7d517376a60aab4311 UNKNOWN
   * b5e4853968886a623cb083fe71933fa2ce7f8932 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=536)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11687: [FLINK-16536][network][checkpointing] Implement InputChannel state recovery for unaligned checkpoint

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11687:
URL: https://github.com/apache/flink/pull/11687#issuecomment-611445542


   
   ## CI report:
   
   * 5bad018f00f85a7359345187a12d7938aa510d25 UNKNOWN
   * d51ce7f47381d99b843278cd701dcff223761a0b UNKNOWN
   * cba096ae3d8eba4a0d39c64659f76ad10a62be27 UNKNOWN
   * 23818c4b30eb9714154080253a090fc63d983925 UNKNOWN
   * 9b6278f15a4fc65d78671a145305df2c7310cc6a UNKNOWN
   * 4f7399680818f5d29e917e17720e00900822a43d UNKNOWN
   * 44c8e741ef3e7c17492736d441369a56646b6713 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=498)
 
   * 25bf7d665d0b232621648b7d517376a60aab4311 UNKNOWN
   * b5e4853968886a623cb083fe71933fa2ce7f8932 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-14356) Introduce "single-field" format to (de)serialize message to a single field

2020-05-02 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17097991#comment-17097991
 ] 

Jark Wu commented on FLINK-14356:
-

FYI: KSQL also has a similar feature: 
https://github.com/confluentinc/ksql/blob/master/design-proposals/klip-3-serialization-of-single-fields.md

> Introduce "single-field" format to (de)serialize message to a single field
> --
>
> Key: FLINK-14356
> URL: https://issues.apache.org/jira/browse/FLINK-14356
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / API
>Reporter: jinfeng
>Assignee: jinfeng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> I want to use flink sql to write kafka messages directly to hdfs. The 
> serialization and deserialization of messages are not involved in the middle. 
>  The bytes of the message directly convert the first field of Row.  However, 
> the current RowSerializationSchema does not support the conversion of bytes 
> to VARBINARY. Can we add some special RowSerializationSchema and 
> RowDerializationSchema ? 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882


   
   ## CI report:
   
   * 066795205734add3b142a92c687c98b25253985e UNKNOWN
   * 563a1b12312b1b2de2b8426272e4975f046156d8 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163355418) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=533)
 
   * a03c41aa7757aa950ba2121887a5a9227b6b438d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=535)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882


   
   ## CI report:
   
   * 066795205734add3b142a92c687c98b25253985e UNKNOWN
   * 563a1b12312b1b2de2b8426272e4975f046156d8 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163355418) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=533)
 
   * a03c41aa7757aa950ba2121887a5a9227b6b438d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882


   
   ## CI report:
   
   * 066795205734add3b142a92c687c98b25253985e UNKNOWN
   * 563a1b12312b1b2de2b8426272e4975f046156d8 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163355418) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=533)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys commented on a change in pull request #11959: [FLINK-16997][table-common] Add new factory interfaces and discovery utilities

2020-05-02 Thread GitBox


dawidwys commented on a change in pull request #11959:
URL: https://github.com/apache/flink/pull/11959#discussion_r418938345



##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/Catalog.java
##
@@ -48,6 +50,21 @@
 @PublicEvolving
 public interface Catalog {
 
+   /**
+* Returns a factory for creating instances from catalog objects.
+*
+* This method enables bypassing the discovery process. Implementers 
can directly pass internal
+* catalog-specific objects to their own factory. For example, a custom 
{@link CatalogTable} can
+* be processed by a custom {@link DynamicTableFactory}.
+*
+* Because all factories are interfaces, the returned {@link 
Factory} instance can implement multiple
+* supported extension points. An {@code instanceof} check is performed 
by the caller that checks
+* whether a required factory is implemented; otherwise the discovery 
process is used.
+*/
+   default Optional<Factory> getFactory() {

Review comment:
   Shall we deprecate the other `getTableFactory`? If not, can we describe 
the relationship between the two methods? How do they differ?
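The javadoc above describes the intended caller-side contract: an `instanceof` check on the catalog's own factory, falling back to discovery otherwise. A minimal self-contained sketch of that pattern — all types here are simplified stand-ins for the real Flink interfaces, and `resolve`/`discover` are illustrative names:

```java
import java.util.Optional;

public class FactoryLookupSketch {

    // Simplified stand-ins for the interfaces discussed above.
    interface Factory {}
    interface DynamicTableSourceFactory extends Factory {}

    interface Catalog {
        // Mirrors the proposed default method: empty means "use discovery".
        default Optional<Factory> getFactory() {
            return Optional.empty();
        }
    }

    // Caller-side pattern: prefer the catalog's own factory if it implements
    // the required extension point, otherwise fall back to discovery.
    static DynamicTableSourceFactory resolve(Catalog catalog) {
        return catalog.getFactory()
                .filter(f -> f instanceof DynamicTableSourceFactory)
                .map(f -> (DynamicTableSourceFactory) f)
                .orElseGet(FactoryLookupSketch::discover);
    }

    static DynamicTableSourceFactory discover() {
        // Placeholder for ServiceLoader-based discovery.
        return new DynamicTableSourceFactory() {};
    }

    public static void main(String[] args) {
        Catalog plain = new Catalog() {}; // no own factory -> discovery
        final DynamicTableSourceFactory own = new DynamicTableSourceFactory() {};
        Catalog custom = new Catalog() {
            @Override
            public Optional<Factory> getFactory() {
                return Optional.of((Factory) own);
            }
        };
        System.out.println(resolve(plain) != null);  // prints true
        System.out.println(resolve(custom) == own);  // prints true
    }
}
```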

##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/format/SinkFormat.java
##
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.connector.format;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.connector.sink.DynamicTableSink;
+import org.apache.flink.table.connector.source.ScanTableSource;
+import org.apache.flink.table.types.DataType;
+
+/**
+ * A {@link Format} for a {@link DynamicTableSink}.
+ *
+ * @param <I> runtime interface needed by the table sink
+ */
+@Internal
+public interface SinkFormat<I> extends Format {

Review comment:
   Shouldn't the format factories be also `PublicEvolving`?

##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/CatalogBaseTable.java
##
@@ -28,13 +29,24 @@
  * key-value pairs defining the properties of the table.
  */
 public interface CatalogBaseTable {
+
/**
-* Get the properties of the table.
-*
-* @return property map of the table/view
+* @deprecated Use {@link #getOptions()}.
 */
+   @Deprecated
Map<String, String> getProperties();
 
+   /**
+* Returns a map of string-based options.
+*
+* In case of {@link CatalogTable}, these options may determine the 
kind of connector and its

Review comment:
   nit: may or will?

##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/format/Format.java
##
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.connector.format;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.table.connector.ChangelogMode;
+import org.apache.flink.table.connector.sink.DynamicTableSink;
+import org.apache.flink.table.connector.source.ScanTableSource;
+import org.apache.flink.table.factories.DynamicTableFactory;
+
+/**
+ * Base interface for connector formats.
+ *
+ * Depending on the kind of external system, a connector might support 
different encodings for
+ * reading and 

[jira] [Closed] (FLINK-16103) Translate "Configuration" page of "Table API & SQL" into Chinese

2020-05-02 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-16103.
---
Fix Version/s: 1.11.0
   Resolution: Fixed

Resolved in master (1.11.0): 18af2a1c157ef0e45dd8a49ff54928496c296d32

> Translate "Configuration" page of "Table API & SQL" into Chinese
> 
>
> Key: FLINK-16103
> URL: https://issues.apache.org/jira/browse/FLINK-16103
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Delin Zhao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/config.html
> The markdown file is located in {{flink/docs/dev/table/config.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #11977: Merge pull request #1 from apache/master

2020-05-02 Thread GitBox


flinkbot commented on pull request #11977:
URL: https://github.com/apache/flink/pull/11977#issuecomment-622942062


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 08be7a69b1e7c935701b6a6f2d5454b43115d17f (Sat May 02 
11:54:54 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] kukuzidian commented on pull request #11977: Merge pull request #1 from apache/master

2020-05-02 Thread GitBox


kukuzidian commented on pull request #11977:
URL: https://github.com/apache/flink/pull/11977#issuecomment-622941773


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] kukuzidian opened a new pull request #11977: Merge pull request #1 from apache/master

2020-05-02 Thread GitBox


kukuzidian opened a new pull request #11977:
URL: https://github.com/apache/flink/pull/11977


   Update master
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] edu05 commented on pull request #11952: [FLINK-16638][runtime][checkpointing] Flink checkStateMappingCompleteness doesn't include UserDefinedOperatorIDs

2020-05-02 Thread GitBox


edu05 commented on pull request #11952:
URL: https://github.com/apache/flink/pull/11952#issuecomment-622941641


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882


   
   ## CI report:
   
   * 066795205734add3b142a92c687c98b25253985e UNKNOWN
   * 563a1b12312b1b2de2b8426272e4975f046156d8 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163355418) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=533)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882


   
   ## CI report:
   
   * 0d84af72bc6f7159452da67f34d8825a0d040d02 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/163340294) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=531)
 
   * 066795205734add3b142a92c687c98b25253985e UNKNOWN
   * 563a1b12312b1b2de2b8426272e4975f046156d8 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/163355418) Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=533)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882


   
   ## CI report:
   
   * 0d84af72bc6f7159452da67f34d8825a0d040d02 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/163340294) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=531)
 
   * 066795205734add3b142a92c687c98b25253985e UNKNOWN
   * 563a1b12312b1b2de2b8426272e4975f046156d8 UNKNOWN
   
   







[GitHub] [flink] dawidwys commented on a change in pull request #11951: [FLINK-17420][table sql / api]Cannot alias Tuple and Row fields when converting DataStream to Table

2020-05-02 Thread GitBox


dawidwys commented on a change in pull request #11951:
URL: https://github.com/apache/flink/pull/11951#discussion_r418937589



##
File path: 
flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/typeutils/FieldInfoUtilsTest.java
##
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.typeutils;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.types.DataType;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.Arrays;
+
+import static org.apache.flink.table.api.Expressions.$;
+
+/**
+ * Test suite for {@link FieldInfoUtils}.
+ */
+public class FieldInfoUtilsTest {
+
+   private static final RowTypeInfo typeInfo = new RowTypeInfo(
+   new TypeInformation[]{Types.INT, Types.LONG, Types.STRING},
+   new String[]{"f0", "f1", "f2"});
+
+   @Test
+   public void testByPositionMode() {
+   FieldInfoUtils.TypeInfoSchema schema = FieldInfoUtils.getFieldsInfo(
+   typeInfo,
+   new Expression[]{$("aa"), $("bb"), $("cc")});
+
+   Assert.assertEquals("[aa, bb, cc]", Arrays.asList(schema.getFieldNames()).toString());
+   Assert.assertArrayEquals(new DataType[]{DataTypes.INT(), DataTypes.BIGINT(), DataTypes.STRING()}, schema.getFieldTypes());
+   }
+
+   @Test
+   public void testByNameModeReorder() {
+   FieldInfoUtils.TypeInfoSchema schema = FieldInfoUtils.getFieldsInfo(
+   typeInfo,
+   new Expression[]{$("f2"), $("f1"), $("f0")});
+
+   Assert.assertEquals("[f2, f1, f0]", Arrays.asList(schema.getFieldNames()).toString());
+   Assert.assertArrayEquals(new DataType[]{DataTypes.STRING(), DataTypes.BIGINT(), DataTypes.INT()}, schema.getFieldTypes());
+   }
+
+   @Test
+   public void testByNameModeReorderAndRename() {

Review comment:
   
   
   Can you also add a test for extracting fields in byPosition mode when there 
is a rowtime/proctime attribute?
   
   select($("a"), $("b"), $("timestamp").rowtime().as("rowtime")) -> this 
should still use the by-position mode. I know it might be a bit confusing, but 
it has been the behavior since very early on.
   

##
File path: 
flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/typeutils/FieldInfoUtilsTest.java
##
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.typeutils;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.types.DataType;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.Arrays;
+
+import static org.apache.flink.table.api.Expressions.$;
+
+/**
+ * Test suite for {@link FieldInfoUtils}.
+ */
+public class FieldInfoUtilsTest {

Review comment:
   Could you add tests to check if it works for `PojoTypeInfo` as well?
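
   The by-position vs. by-name distinction these tests (and the review 
suggestion above) revolve around can be sketched without any Flink dependency. 
Everything below is illustrative only: `FieldMappingSketch` and `mapFields` are 
made-up names, and the real logic lives in Flink's `FieldInfoUtils`, which also 
handles rowtime/proctime attributes and `PojoTypeInfo`:

```java
import java.util.Arrays;
import java.util.List;

// Dependency-free sketch of the two field-mapping modes exercised by the
// tests above. Names and logic are illustrative only; this is NOT Flink's
// actual FieldInfoUtils implementation.
public class FieldMappingSketch {

    static List<String> mapFields(List<String> original, List<String> expressions) {
        boolean anyExisting = expressions.stream().anyMatch(original::contains);
        if (anyExisting) {
            // By-name mode: every expression must reference an existing field;
            // fields are reordered to match the given order.
            for (String name : expressions) {
                if (!original.contains(name)) {
                    throw new IllegalArgumentException("Unknown field: " + name);
                }
            }
            return expressions;
        }
        // By-position mode: no expression references an existing field, so
        // fields keep their positions and are simply renamed.
        if (expressions.size() != original.size()) {
            throw new IllegalArgumentException("By-position mapping needs one name per field");
        }
        return expressions;
    }

    public static void main(String[] args) {
        List<String> fields = Arrays.asList("f0", "f1", "f2");
        // Fresh names -> rename in place (by position)
        System.out.println(mapFields(fields, Arrays.asList("aa", "bb", "cc")));
        // Existing names -> reorder (by name)
        System.out.println(mapFields(fields, Arrays.asList("f2", "f1", "f0")));
    }
}
```

   Under this model, `$("aa"), $("bb"), $("cc")` renames in place while 
`$("f2"), $("f1"), $("f0")` reorders, matching the two tests quoted above.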


[GitHub] [flink] flinkbot edited a comment on pull request #11976: FIX-17376:Deprecated methods and associated code removed

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11976:
URL: https://github.com/apache/flink/pull/11976#issuecomment-622850792


   
   ## CI report:
   
   * 132df2f6a6d1974603021c30e8800df2c3070df0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=532)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #11976: FIX-17376:Deprecated methods and associated code removed

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11976:
URL: https://github.com/apache/flink/pull/11976#issuecomment-622850792


   
   ## CI report:
   
   * 132df2f6a6d1974603021c30e8800df2c3070df0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=532)
 
   
   







[GitHub] [flink] flinkbot commented on pull request #11976: FIX-17376:Deprecated methods and associated code removed

2020-05-02 Thread GitBox


flinkbot commented on pull request #11976:
URL: https://github.com/apache/flink/pull/11976#issuecomment-622850792


   
   ## CI report:
   
   * 132df2f6a6d1974603021c30e8800df2c3070df0 UNKNOWN
   
   







[jira] [Comment Edited] (FLINK-17376) Remove deprecated state access methods

2020-05-02 Thread Manish Ghildiyal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17097860#comment-17097860
 ] 

Manish Ghildiyal edited comment on FLINK-17376 at 5/2/20, 7:49 AM:
---

PR made:

[https://github.com/apache/flink/pull/11976]


was (Author: manish.c.ghildi...@gmail.com):
PR made [here|[https://github.com/apache/flink/pull/11976]]

> Remove deprecated state access methods
> --
>
> Key: FLINK-17376
> URL: https://issues.apache.org/jira/browse/FLINK-17376
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Stephan Ewen
>Assignee: Manish Ghildiyal
>Priority: Critical
>  Labels: starter
> Fix For: 1.11.0
>
>
> Some methods for state access in the DataStream API are deprecated for three 
> years (yes, indeed!) and are still around and confusing users. We should 
> finally remove them.
> The methods are
>   - RuntimeContext.getFoldingState(...)
>   - OperatorStateStore.getSerializableListState(...)
>   - OperatorStateStore.getOperatorState(...)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
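
As background on the first removal: `getFoldingState(...)` could be dropped 
because any fold (a seed plus a step function) can be expressed in the 
`AggregateFunction` shape that backs the replacement `AggregatingState`. A 
minimal, dependency-free sketch follows; the `AggregateFunction` interface 
below is a simplified stand-in, not Flink's real API:

```java
import java.util.List;

public class FoldAsAggregateSketch {
    // Simplified stand-in for Flink's AggregateFunction (illustrative only).
    interface AggregateFunction<IN, ACC, OUT> {
        ACC createAccumulator();
        ACC add(IN value, ACC accumulator);
        OUT getResult(ACC accumulator);
    }

    // A fold with seed 0 and step (acc, v) -> acc + v, expressed in the
    // aggregate shape that replaced folding state.
    static AggregateFunction<Integer, Integer, Integer> sumFold() {
        return new AggregateFunction<Integer, Integer, Integer>() {
            public Integer createAccumulator() { return 0; }               // the fold's seed
            public Integer add(Integer v, Integer acc) { return acc + v; } // the fold's step
            public Integer getResult(Integer acc) { return acc; }          // identity finisher
        };
    }

    static int aggregate(List<Integer> values, AggregateFunction<Integer, Integer, Integer> agg) {
        Integer acc = agg.createAccumulator();
        for (Integer v : values) {
            acc = agg.add(v, acc);
        }
        return agg.getResult(acc);
    }

    public static void main(String[] args) {
        System.out.println(aggregate(List.of(1, 2, 3, 4), sumFold())); // prints 10
    }
}
```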


[jira] [Comment Edited] (FLINK-17376) Remove deprecated state access methods

2020-05-02 Thread Manish Ghildiyal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17097860#comment-17097860
 ] 

Manish Ghildiyal edited comment on FLINK-17376 at 5/2/20, 7:48 AM:
---

PR made [here|[https://github.com/apache/flink/pull/11976]]


was (Author: manish.c.ghildi...@gmail.com):
PR made [here|[https://github.com/apache/flink/pull/11976]]






[jira] [Comment Edited] (FLINK-17376) Remove deprecated state access methods

2020-05-02 Thread Manish Ghildiyal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17097860#comment-17097860
 ] 

Manish Ghildiyal edited comment on FLINK-17376 at 5/2/20, 7:47 AM:
---

PR made [here|https://github.com/apache/flink/pull/11976]]


was (Author: manish.c.ghildi...@gmail.com):
PR made [[here|https://github.com/apache/flink/pull/11976]|http://example.com]






[jira] [Comment Edited] (FLINK-17376) Remove deprecated state access methods

2020-05-02 Thread Manish Ghildiyal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17097860#comment-17097860
 ] 

Manish Ghildiyal edited comment on FLINK-17376 at 5/2/20, 7:47 AM:
---

PR made [here|[https://github.com/apache/flink/pull/11976]]


was (Author: manish.c.ghildi...@gmail.com):
PR made [here|https://github.com/apache/flink/pull/11976]]






[jira] [Commented] (FLINK-17376) Remove deprecated state access methods

2020-05-02 Thread Manish Ghildiyal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17097860#comment-17097860
 ] 

Manish Ghildiyal commented on FLINK-17376:
--

PR made [here|[https://github.com/apache/flink/pull/11976]]






[jira] [Comment Edited] (FLINK-17376) Remove deprecated state access methods

2020-05-02 Thread Manish Ghildiyal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17097860#comment-17097860
 ] 

Manish Ghildiyal edited comment on FLINK-17376 at 5/2/20, 7:46 AM:
---

PR made [[here|https://github.com/apache/flink/pull/11976]|http://example.com]


was (Author: manish.c.ghildi...@gmail.com):
PR made [here|[https://github.com/apache/flink/pull/11976]]






[GitHub] [flink] flinkbot commented on pull request #11976: FIX-17376:Deprecated methods and associated code removed

2020-05-02 Thread GitBox


flinkbot commented on pull request #11976:
URL: https://github.com/apache/flink/pull/11976#issuecomment-622813184


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 132df2f6a6d1974603021c30e8800df2c3070df0 (Sat May 02 
07:45:35 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] mghildiy opened a new pull request #11976: FIX-17376:Deprecated methods and associated code removed

2020-05-02 Thread GitBox


mghildiy opened a new pull request #11976:
URL: https://github.com/apache/flink/pull/11976


   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[GitHub] [flink] flinkbot edited a comment on pull request #11975: [FLINK-17496][kinesis] Upgrade amazon-kinesis-producer to 0.14.0

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11975:
URL: https://github.com/apache/flink/pull/11975#issuecomment-622662116


   
   ## CI report:
   
   * 89d8665ddbcdd11cff8652f39fea4d0601f809ac Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=530)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882


   
   ## CI report:
   
   * 0d84af72bc6f7159452da67f34d8825a0d040d02 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/163340294) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=531)
 
   * 066795205734add3b142a92c687c98b25253985e UNKNOWN
   
   







[jira] [Commented] (FLINK-17028) Introduce a new HBase connector with new property keys

2020-05-02 Thread molsion mo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17097847#comment-17097847
 ] 

molsion mo commented on FLINK-17028:


Thanks. That is the way I think about it. FLINK-16987 needs to be completed 
before this task can continue.

  

> Introduce a new HBase connector with new property keys
> --
>
> Key: FLINK-17028
> URL: https://issues.apache.org/jira/browse/FLINK-17028
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Reporter: Jark Wu
>Priority: Major
>
> This new HBase connector should use the new interfaces proposed by FLIP-95, 
> e.g. DynamicTableSource, DynamicTableSink, and Factory.
> The new proposed keys :
> ||Old key||New key||Note||
> |connector.type|connector| |
> |connector.version|N/A|merged into 'connector' key|
> |connector.table-name|table-name| |
> |connector.zookeeper.quorum|zookeeper.quorum| |
> |connector.zookeeper.znode.parent|zookeeper.znode-parent| |
> |connector.write.buffer-flush.max-size|sink.buffer-flush.max-size| |
> |connector.write.buffer-flush.max-rows|sink.buffer-flush.max-rows| |
> |connector.write.buffer-flush.interval|sink.buffer-flush.interval| |
>  
>  





[GitHub] [flink] curcur commented on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-02 Thread GitBox


curcur commented on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-622771835


   @AHeise and @pnowojski , I have updated the PR with respect to the API 
change suggested above. The API change is listed as the fourth commit 
"[FLINK-15670][connector] Kafka Shuffle API Part". The updated PR also includes 
amendments suggested by Arvid in previous reviews. I left a reply for comments 
not yet resolved.
   
   Now, most of the code change is wrapped in the package 
"org.apache.flink.streaming.connectors.kafka.shuffle" to avoid effects on other 
parts.
   
   1. I think the only remaining sticking point is "TwoPhaseCommitFunction". 
If I do not provide a watermark entry in that abstract function, I will end up 
exposing "currentTransactionHolder" as well as "TransactionHolder#handle". Your 
call.
   
   2. I divided "persistentKeyBy" into two functions: "writeKeyBy" and 
"readKeyBy", so people can call "readKeyBy" directly to reuse the written 
data. You can take a look at the API to see whether the changes make sense.
   
   Thanks in advance!!
   
   
   







[GitHub] [flink] wuchong commented on a change in pull request #11959: [FLINK-16997][table-common] Add new factory interfaces and discovery utilities

2020-05-02 Thread GitBox


wuchong commented on a change in pull request #11959:
URL: https://github.com/apache/flink/pull/11959#discussion_r418923036



##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/format/Format.java
##
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.connector.format;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.table.connector.ChangelogMode;
+import org.apache.flink.table.connector.sink.DynamicTableSink;
+import org.apache.flink.table.connector.source.ScanTableSource;
+import org.apache.flink.table.factories.DynamicTableFactory;
+
+/**
+ * Base interface for connector formats.
+ *
+ * <p>Depending on the kind of external system, a connector might support 
+ * different encodings for reading and writing rows. This interface is an 
+ * intermediate representation before constructing the actual runtime 
+ * implementation.
+ *
+ * <p>Formats can be distinguished along two dimensions:
+ * <ul>
+ *     <li>Context in which the format is applied (e.g. {@link 
+ *     ScanTableSource} or {@link DynamicTableSink}).
+ *     <li>Runtime implementation interface that is required (e.g. {@link 
+ *     DeserializationSchema} or some bulk interface).
+ * </ul>
+ *
+ * <p>A {@link DynamicTableFactory} can search for a format that is 
+ * accepted by the connector.
+ *
+ * @see ScanFormat
+ * @see SinkFormat
+ *
+ * @param <I> underlying runtime interface
+ */
+@Internal
+public interface Format<I> {
+
+   /**
+* Returns the set of changes that a connector (and transitively the 
planner) can expect during
+* runtime.
+*/
+   ChangelogMode createChangelogMode();

Review comment:
   `getChangelogMode()` ?









[GitHub] [flink] wuchong commented on a change in pull request #11959: [FLINK-16997][table-common] Add new factory interfaces and discovery utilities

2020-05-02 Thread GitBox


wuchong commented on a change in pull request #11959:
URL: https://github.com/apache/flink/pull/11959#discussion_r418922875



##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java
##
@@ -0,0 +1,570 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.factories;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.DelegatingConfiguration;
+import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.catalog.Catalog;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.connector.format.ScanFormat;
+import org.apache.flink.table.connector.format.SinkFormat;
+import org.apache.flink.table.connector.sink.DynamicTableSink;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.utils.EncodingUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.ServiceConfigurationError;
+import java.util.ServiceLoader;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+/**
+ * Utility for working with {@link Factory}s.
+ */
+@Internal

Review comment:
   Shall we mark `FactoryUtil` as `PublicEvolving`? This class appears 
in many Javadocs of public interfaces.
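A minimal, self-contained sketch of what the suggested change amounts to. The `StabilityDemo` wrapper and the local `@PublicEvolving` annotation below are hypothetical stand-ins for Flink's real markers in `org.apache.flink.annotation` (which the snippet does not depend on), used only so the example compiles and runs on its own:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class StabilityDemo {
    // Hypothetical local copy of Flink's @PublicEvolving stability marker,
    // retained at runtime so its presence can be checked via reflection.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface PublicEvolving {}

    // The review suggests annotating the utility class with @PublicEvolving
    // (instead of @Internal) because it is referenced from public Javadocs.
    @PublicEvolving
    static final class FactoryUtil {}

    public static void main(String[] args) {
        // Verify the annotation is visible on the class at runtime.
        System.out.println(
            FactoryUtil.class.isAnnotationPresent(PublicEvolving.class)); // prints "true"
    }
}
```

The practical difference is a compatibility promise: `@Internal` classes may change or disappear between any two releases, while `@PublicEvolving` signals an API users may rely on, with changes allowed only across minor releases.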





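The PR title mentions "discovery utilities", and the quoted imports include `java.util.ServiceLoader`, which is the standard JVM mechanism for locating factory implementations on the classpath. A runnable sketch of the core lookup logic such a utility typically performs — filter loaded factories by identifier, then fail on zero or multiple matches. The `Factory` interface and `discoverFactory` method here are simplified stand-ins, not Flink's actual API (the real code would obtain `loaded` from `ServiceLoader.load(Factory.class)` rather than a plain list):

```java
import java.util.List;
import java.util.stream.Collectors;

public class DiscoveryDemo {
    // Hypothetical minimal factory contract; in Flink the real interface lives
    // in org.apache.flink.table.factories and is registered via
    // META-INF/services provider-configuration files.
    interface Factory {
        String factoryIdentifier();
    }

    // Pick exactly one factory matching the identifier, mirroring the usual
    // ServiceLoader-based discovery pattern: error on none, error on ambiguity.
    static Factory discoverFactory(List<Factory> loaded, String identifier) {
        List<Factory> matches = loaded.stream()
            .filter(f -> f.factoryIdentifier().equals(identifier))
            .collect(Collectors.toList());
        if (matches.isEmpty()) {
            throw new IllegalStateException("No factory found for identifier: " + identifier);
        }
        if (matches.size() > 1) {
            throw new IllegalStateException("Ambiguous factory identifier: " + identifier);
        }
        return matches.get(0);
    }

    public static void main(String[] args) {
        List<Factory> loaded = List.of(() -> "kafka", () -> "filesystem");
        System.out.println(discoverFactory(loaded, "kafka").factoryIdentifier()); // prints "kafka"
    }
}
```

Failing loudly on both the zero-match and the many-match case is what makes misconfigured classpaths surface as a clear validation error instead of silently picking an arbitrary implementation.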




[GitHub] [flink] flinkbot edited a comment on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-02 Thread GitBox


flinkbot edited a comment on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-613252882


   
   ## CI report:
   
   * a9c54057daa5bb907302534b04be5f4742d1b586 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/162928464) Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=473)
 
   * 0d84af72bc6f7159452da67f34d8825a0d040d02 UNKNOWN
   
   
   Bot commands: the @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


