[GitHub] [flink] Myasuka commented on a diff in pull request #20860: [FLINK-29347] [runtime] Fix ByteStreamStateHandle#read return -1 when read count is 0

2022-09-22 Thread GitBox


Myasuka commented on code in PR #20860:
URL: https://github.com/apache/flink/pull/20860#discussion_r978298212


##
flink-runtime/src/test/java/org/apache/flink/runtime/state/memory/ByteStreamStateHandleTest.java:
##
@@ -172,4 +172,15 @@ public void testBulkReadINdexOutOfBounds() throws IOException {
             // expected
         }
     }
+
+    @Test
+    public void testStreamWithEmptyByteArray() throws IOException {
+        final byte[] data = new byte[0];
+        final ByteStreamStateHandle handle = new ByteStreamStateHandle("name", data);
+
+        FSDataInputStream in = handle.openInputStream();

Review Comment:
   This input stream should be closed properly.



##
flink-runtime/src/test/java/org/apache/flink/runtime/state/memory/ByteStreamStateHandleTest.java:
##
@@ -172,4 +172,15 @@ public void testBulkReadINdexOutOfBounds() throws IOException {
             // expected
         }
     }
+
+    @Test
+    public void testStreamWithEmptyByteArray() throws IOException {
+        final byte[] data = new byte[0];
+        final ByteStreamStateHandle handle = new ByteStreamStateHandle("name", data);
+
+        FSDataInputStream in = handle.openInputStream();
+        in.seek(0);

Review Comment:
   Why do we need to seek to the first position?



##
flink-runtime/src/test/java/org/apache/flink/runtime/state/memory/ByteStreamStateHandleTest.java:
##
@@ -172,4 +172,15 @@ public void testBulkReadINdexOutOfBounds() throws IOException {
             // expected
         }
     }
+
+    @Test
+    public void testStreamWithEmptyByteArray() throws IOException {
+        final byte[] data = new byte[0];
+        final ByteStreamStateHandle handle = new ByteStreamStateHandle("name", data);
+
+        FSDataInputStream in = handle.openInputStream();
+        in.seek(0);
+        byte[] dataGot = new byte[1];
+        assertEquals(0, in.read(dataGot, 0, 0));

Review Comment:
   Do we need to test `assertEquals(0, in.read());`?



##
flink-runtime/src/main/java/org/apache/flink/runtime/state/memory/ByteStreamStateHandle.java:
##
@@ -148,7 +148,7 @@ public int read(byte[] b, int off, int len) throws IOException {
             index += bytesToCopy;
             return bytesToCopy;
         } else {
-            return -1;
+            return len == 0 ? 0 : -1;

Review Comment:
   Do we need to change the return value of `#read()`?
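The fix aligns the stream with the general `java.io.InputStream#read(byte[], int, int)` contract, under which a zero-length read returns 0 and only a real end-of-stream returns -1. A minimal self-contained sketch of the patched logic (a stand-in class, not the actual Flink stream):

```java
// Stand-in for ByteStreamStateHandle's inner input stream; mirrors the
// patched read logic but is not the actual Flink class.
public class Main {
    private final byte[] data;
    private int index;

    public Main(byte[] data) {
        this.data = data;
    }

    public int read(byte[] b, int off, int len) {
        if (index < data.length) {
            // copy as many bytes as are available, up to len
            final int bytesToCopy = Math.min(len, data.length - index);
            System.arraycopy(data, index, b, off, bytesToCopy);
            index += bytesToCopy;
            return bytesToCopy;
        } else {
            // the fix under review: a zero-length read is not end-of-stream,
            // so it returns 0 instead of -1
            return len == 0 ? 0 : -1;
        }
    }

    public static void main(String[] args) {
        final Main in = new Main(new byte[0]);
        final byte[] buf = new byte[1];
        System.out.println(in.read(buf, 0, 0)); // 0: zero-length read
        System.out.println(in.read(buf, 0, 1)); // -1: genuine end of stream
    }
}
```

With a non-empty backing array the method still returns the number of bytes copied, so only the end-of-data branch changes behavior.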



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] syhily commented on pull request #20886: [FLINK-26203][Connectors / Pulsar][docs] Introduce Flink Pulsar SQL Connector

2022-09-22 Thread GitBox


syhily commented on PR #20886:
URL: https://github.com/apache/flink/pull/20886#issuecomment-1255857702

   Thanks for your contribution. I'll review this ASAP.





[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #377: [FLINK-28979] Add owner reference to flink deployment object

2022-09-22 Thread GitBox


gyfora commented on code in PR #377:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/377#discussion_r978309410


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/AbstractFlinkResourceReconciler.java:
##
@@ -426,4 +431,17 @@ protected boolean flinkVersionChanged(SPEC oldSpec, SPEC newSpec) {
         }
         return false;
     }
+
+    private void setOwnerReference(CR owner, Configuration deployConfig) {
+        final Map<String, String> ownerReference =
+                Map.of(
+                        "apiVersion", owner.getApiVersion(),
+                        "kind", owner.getKind(),
+                        "name", owner.getMetadata().getName(),
+                        "uid", owner.getMetadata().getUid(),
+                        "blockOwnerDeletion", "false",
+                        "controller", "false");
+        deployConfig.set(
+                KubernetesConfigOptions.JOB_MANAGER_OWNER_REFERENCE, List.of(ownerReference));
+    }

Review Comment:
   Or the `getDeployConfig` method of `FlinkConfigManager`; it might be easier to add it there :)






[GitHub] [flink] flinkbot commented on pull request #20890: [FLINK-29093][table] Fix InternalCompilerException in LookupJoinITCas…

2022-09-22 Thread GitBox


flinkbot commented on PR #20890:
URL: https://github.com/apache/flink/pull/20890#issuecomment-1255849462

   
   ## CI report:
   
   * 7e9d69912530982333b4833d83c06294d32aad80 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #377: [FLINK-28979] Add owner reference to flink deployment object

2022-09-22 Thread GitBox


gyfora commented on code in PR #377:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/377#discussion_r978307344


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/AbstractFlinkResourceReconciler.java:
##
@@ -426,4 +431,17 @@ protected boolean flinkVersionChanged(SPEC oldSpec, SPEC newSpec) {
         }
         return false;
     }
+
+    private void setOwnerReference(CR owner, Configuration deployConfig) {
+        final Map<String, String> ownerReference =
+                Map.of(
+                        "apiVersion", owner.getApiVersion(),
+                        "kind", owner.getKind(),
+                        "name", owner.getMetadata().getName(),
+                        "uid", owner.getMetadata().getUid(),
+                        "blockOwnerDeletion", "false",
+                        "controller", "false");
+        deployConfig.set(
+                KubernetesConfigOptions.JOB_MANAGER_OWNER_REFERENCE, List.of(ownerReference));
+    }

Review Comment:
   Feels like the `FlinkConfigBuilder` might be a better place for this; then we can be sure that it's universally set in a single place.






[GitHub] [flink] XComp commented on pull request #20870: [FLINK-29375][rpc] Move getSelfGateway() into RpcService

2022-09-22 Thread GitBox


XComp commented on PR #20870:
URL: https://github.com/apache/flink/pull/20870#issuecomment-1255847688

   > I'm not too sold on that particular diagram and tried to create a revision, but dear god these online UML renderers are horrible.
   
   :-D Yeah, the class diagram in general might not be the best solution. I use IntelliJ's PlantUML plugin to edit diagram code.





[jira] [Closed] (FLINK-29383) Add additionalPrinterColumns definition (PrinterColumn annotation) for some status fields

2022-09-22 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-29383.
--
Fix Version/s: kubernetes-operator-1.2.0
   Resolution: Fixed

merged to main

> Add additionalPrinterColumns definition (PrinterColumn annotation) for some 
> status fields
> -
>
> Key: FLINK-29383
> URL: https://issues.apache.org/jira/browse/FLINK-29383
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Xin Hao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.2.0
>
>
> We should add additionalPrinterColumns definitions in the CRD so that we can 
> use
> {code:java}
> k get flinksessionjob -o wide
> {code}
> to see the session jobs statuses.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-29383) Add additionalPrinterColumns definition (PrinterColumn annotation) for some status fields

2022-09-22 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608569#comment-17608569
 ] 

Gyula Fora edited comment on FLINK-29383 at 9/23/22 6:36 AM:
-

merged to main fe1356edc29318dbb6c96309a775749ac5a64b09


was (Author: gyfora):
merged to main

> Add additionalPrinterColumns definition (PrinterColumn annotation) for some 
> status fields
> -
>
> Key: FLINK-29383
> URL: https://issues.apache.org/jira/browse/FLINK-29383
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Xin Hao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.2.0
>
>
> We should add additionalPrinterColumns definitions in the CRD so that we can 
> use
> {code:java}
> k get flinksessionjob -o wide
> {code}
> to see the session jobs statuses.





[GitHub] [flink-kubernetes-operator] gyfora merged pull request #378: [FLINK-29383] Add PrinterColumn annotation for status fields

2022-09-22 Thread GitBox


gyfora merged PR #378:
URL: https://github.com/apache/flink-kubernetes-operator/pull/378





[GitHub] [flink] SmirAlex opened a new pull request, #20890: [FLINK-29093][table] Fix InternalCompilerException in LookupJoinITCas…

2022-09-22 Thread GitBox


SmirAlex opened a new pull request, #20890:
URL: https://github.com/apache/flink/pull/20890

   
   
   ## What is the purpose of the change
   
   Fixed the unstable LookupJoinITCase test with full cache
   
   
   ## Brief change log
   
 - Avoided concurrent creation of the `Projection` object from generated code
 - Fixed the closing logic, so now the `close` method waits until the current reload is over, and there can't be another reload after closing
 - Reset the resource counter in LookupJoin IT cases before each test, so if one fails, the others will not be affected
   
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as LookupJoinITCase.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   





[jira] [Commented] (FLINK-29373) DataStream to table not support BigDecimalTypeInfo

2022-09-22 Thread hk__lrzy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608567#comment-17608567
 ] 

hk__lrzy commented on FLINK-29373:
--

[~lzljs3620320] can you show more detail about this issue? Maybe I can work on it and try to fix it.

> DataStream to table not support BigDecimalTypeInfo
> --
>
> Key: FLINK-29373
> URL: https://issues.apache.org/jira/browse/FLINK-29373
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.0
>Reporter: hk__lrzy
>Priority: Major
> Attachments: image-2022-09-21-15-12-11-082.png, 
> image-2022-09-22-18-08-44-385.png
>
>
> When we try to transform a datastream to a table, *TypeInfoDataTypeConverter* 
> will try to convert *TypeInformation* to {*}DataType{*}, but if the datastream's 
> produced types contain {*}BigDecimalTypeInfo{*}, *TypeInfoDataTypeConverter* 
> will finally convert it to {*}RawDataType{*}. Then, when we want to transform the 
> table back to a datastream, an exception happens showing that the data types do not match.
> The Blink planner will also have this exception.
> !image-2022-09-22-18-08-44-385.png!
>  
> {code:java}
> Query schema: [f0: RAW('java.math.BigDecimal', '...')]
> Sink schema:  [f0: RAW('java.math.BigDecimal', ?)]{code}
> How to reproduce:
> {code:java}
> // code placeholder
> StreamExecutionEnvironment executionEnvironment = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> EnvironmentSettings.Builder envBuilder = EnvironmentSettings.newInstance()
> .inStreamingMode();
> StreamTableEnvironment streamTableEnvironment = 
> StreamTableEnvironment.create(executionEnvironment, envBuilder.build());
> FromElementsFunction fromElementsFunction = new FromElementsFunction(new 
> BigDecimal(1.11D));
> DataStreamSource dataStreamSource = 
> executionEnvironment.addSource(fromElementsFunction, new 
> BigDecimalTypeInfo(10, 8));
> streamTableEnvironment.createTemporaryView("tmp", dataStreamSource);
> Table table = streamTableEnvironment.sqlQuery("select * from tmp");
> streamTableEnvironment.toRetractStream(table, 
> table.getSchema().toRowType());{code}





[jira] [Closed] (FLINK-29144) Enable multiple jar entries for jarURI

2022-09-22 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-29144.
--
Resolution: Fixed

Resolved by 8f53441a4978eeb38dc5ef229c179cc60598ce87

> Enable multiple jar entries for jarURI
> --
>
> Key: FLINK-29144
> URL: https://issues.apache.org/jira/browse/FLINK-29144
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Arseniy Tashoyan
>Priority: Major
>
> The setting _job.jarURI_ accepts a string with the path to the jar-file:
> {code:yaml}
> job:
>   jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
> {code}
> This could be improved to accept a list of jars:
> {code:yaml}
> job:
>   jarURIs:
>   - local:///opt/flink/examples/streaming/StateMachineExample.jar
>   - local:///opt/common/scala-logging.jar
> {code}
> This could also be improved to accept one or more directories with jars:
> {code:yaml}
> job:
>   jarDirs:
>   - local:///opt/app/lib
>   - local:///opt/common/lib
> {code}
> The order of entries in the list defines the order of jars in the classpath.
> Internally, Flink Kubernetes Operator uses the property _pipeline.jars_ - see 
> [FlinkConfigBuilder.java 
> |https://github.com/apache/flink-kubernetes-operator/blob/main/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java#L259]:
> {code:java}
> effectiveConfig.set(PipelineOptions.JARS, 
> Collections.singletonList(uri.toString()));
> {code}
> The property _pipeline.jars_ allows passing more than one jar entry.
> This improvement makes it possible to avoid building a fat-jar; instead we 
> could provide directories with normal jars.
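The proposed `jarDirs` option would need to expand each directory into an ordered jar list before handing the result to _pipeline.jars_. A rough, hypothetical sketch of that expansion (helper name invented, not operator code): directory order follows the configuration, and jars within a directory are sorted by name so the classpath order is deterministic.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Main {
    // Hypothetical helper: expand jar directories into an ordered jar list.
    public static List<String> expandJarDirs(List<String> jarDirs) {
        final List<String> jars = new ArrayList<>();
        for (String dir : jarDirs) {
            final File[] files = new File(dir).listFiles((d, name) -> name.endsWith(".jar"));
            if (files == null) {
                continue; // not a directory (or unreadable); skipped in this sketch
            }
            Arrays.sort(files); // deterministic order within a directory
            for (File f : files) {
                jars.add(f.toURI().toString());
            }
        }
        return jars;
    }

    public static void main(String[] args) throws Exception {
        final java.nio.file.Path lib = java.nio.file.Files.createTempDirectory("lib");
        java.nio.file.Files.createFile(lib.resolve("b.jar"));
        java.nio.file.Files.createFile(lib.resolve("a.jar"));
        // the expanded list is what would replace the singletonList call
        // in FlinkConfigBuilder when setting pipeline.jars
        System.out.println(Main.expandJarDirs(List.of(lib.toString())).size()); // 2
    }
}
```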





[jira] [Updated] (FLINK-28181) Add support in FlinkSessionJob to submit job using jar available in jobManager's classpath

2022-09-22 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora updated FLINK-28181:
---
Fix Version/s: kubernetes-operator-1.2.0

> Add support in FlinkSessionJob to submit job using jar available in 
> jobManager's classpath
> --
>
> Key: FLINK-28181
> URL: https://issues.apache.org/jira/browse/FLINK-28181
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Jeesmon Jacob
>Assignee: Jeesmon Jacob
>Priority: Major
> Fix For: kubernetes-operator-1.2.0
>
>
> Currently FlinkSessionJob needs to download job jar from remote endpoint 
> (http/s3/etc.) and submit it to jobManager for starting the job. There is no 
> built-in support for starting a job using a jar that is already available in 
> jobManager's docker image. This ticket is created to support submitting a job 
> using a jar that is available in jobManager's classpath.
> We need team-specific session clusters where all jobs are bundled in a 
> docker image with many different configurations. Copying these job jars to 
> a remote endpoint creates additional maintenance and release overhead.
> Slack discussion thread: 
> https://apache-flink.slack.com/archives/C03G7LJTS2G/p1655495665285339?thread_ts=1655495313.473359&cid=C03G7LJTS2G





[jira] [Updated] (FLINK-29144) Enable multiple jar entries for jarURI

2022-09-22 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora updated FLINK-29144:
---
Fix Version/s: kubernetes-operator-1.2.0

> Enable multiple jar entries for jarURI
> --
>
> Key: FLINK-29144
> URL: https://issues.apache.org/jira/browse/FLINK-29144
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Arseniy Tashoyan
>Priority: Major
> Fix For: kubernetes-operator-1.2.0
>
>
> The setting _job.jarURI_ accepts a string with the path to the jar-file:
> {code:yaml}
> job:
>   jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
> {code}
> This could be improved to accept a list of jars:
> {code:yaml}
> job:
>   jarURIs:
>   - local:///opt/flink/examples/streaming/StateMachineExample.jar
>   - local:///opt/common/scala-logging.jar
> {code}
> This could also be improved to accept one or more directories with jars:
> {code:yaml}
> job:
>   jarDirs:
>   - local:///opt/app/lib
>   - local:///opt/common/lib
> {code}
> The order of entries in the list defines the order of jars in the classpath.
> Internally, Flink Kubernetes Operator uses the property _pipeline.jars_ - see 
> [FlinkConfigBuilder.java 
> |https://github.com/apache/flink-kubernetes-operator/blob/main/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java#L259]:
> {code:java}
> effectiveConfig.set(PipelineOptions.JARS, 
> Collections.singletonList(uri.toString()));
> {code}
> The property _pipeline.jars_ allows passing more than one jar entry.
> This improvement makes it possible to avoid building a fat-jar; instead we 
> could provide directories with normal jars.





[jira] [Closed] (FLINK-28181) Add support in FlinkSessionJob to submit job using jar available in jobManager's classpath

2022-09-22 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-28181.
--
Resolution: Fixed

Resolved by 8f53441a4978eeb38dc5ef229c179cc60598ce87

> Add support in FlinkSessionJob to submit job using jar available in 
> jobManager's classpath
> --
>
> Key: FLINK-28181
> URL: https://issues.apache.org/jira/browse/FLINK-28181
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Jeesmon Jacob
>Assignee: Jeesmon Jacob
>Priority: Major
>
> Currently FlinkSessionJob needs to download job jar from remote endpoint 
> (http/s3/etc.) and submit it to jobManager for starting the job. There is no 
> built-in support for starting a job using a jar that is already available in 
> jobManager's docker image. This ticket is created to support submitting a job 
> using a jar that is available in jobManager's classpath.
> We need team-specific session clusters where all jobs are bundled in a 
> docker image with many different configurations. Copying these job jars to 
> a remote endpoint creates additional maintenance and release overhead.
> Slack discussion thread: 
> https://apache-flink.slack.com/archives/C03G7LJTS2G/p1655495665285339?thread_ts=1655495313.473359&cid=C03G7LJTS2G





[jira] [Updated] (FLINK-29288) Can't start a job with a jar in the system classpath

2022-09-22 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora updated FLINK-29288:
---
Fix Version/s: kubernetes-operator-1.2.0

> Can't start a job with a jar in the system classpath
> 
>
> Key: FLINK-29288
> URL: https://issues.apache.org/jira/browse/FLINK-29288
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Yaroslav Tkachenko
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.2.0
>
>
> I'm using the latest (unreleased) version of the Kubernetes operator.
> It looks like currently, it's impossible to use it with a job jar file in the 
> system classpath (/opt/flink/lib). *jarURI* is required and it's always 
> passed as a *pipeline.jars* parameter to the Flink process. In practice, it 
> means that the same class is loaded twice: once by the system classloader and 
> another time by the user classloader. This leads to exceptions like this:
> {quote}java.lang.LinkageError: loader constraint violation: when resolving 
> method 'XXX' the class loader org.apache.flink.util.ChildFirstClassLoader 
> @47a5b70d of the current class, YYY, and the class loader 'app' for the 
> method's defining class, ZZZ, have different Class objects for the type AAA 
> used in the signature
> {quote}
> In my opinion, jarURI must be made optional even for the application mode. In 
> this case, it's assumed that it's already available in the system classpath.





[jira] [Closed] (FLINK-29288) Can't start a job with a jar in the system classpath

2022-09-22 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-29288.
--
Resolution: Fixed

merged to main 8f53441a4978eeb38dc5ef229c179cc60598ce87

> Can't start a job with a jar in the system classpath
> 
>
> Key: FLINK-29288
> URL: https://issues.apache.org/jira/browse/FLINK-29288
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Yaroslav Tkachenko
>Priority: Major
>  Labels: pull-request-available
>
> I'm using the latest (unreleased) version of the Kubernetes operator.
> It looks like currently, it's impossible to use it with a job jar file in the 
> system classpath (/opt/flink/lib). *jarURI* is required and it's always 
> passed as a *pipeline.jars* parameter to the Flink process. In practice, it 
> means that the same class is loaded twice: once by the system classloader and 
> another time by the user classloader. This leads to exceptions like this:
> {quote}java.lang.LinkageError: loader constraint violation: when resolving 
> method 'XXX' the class loader org.apache.flink.util.ChildFirstClassLoader 
> @47a5b70d of the current class, YYY, and the class loader 'app' for the 
> method's defining class, ZZZ, have different Class objects for the type AAA 
> used in the signature
> {quote}
> In my opinion, jarURI must be made optional even for the application mode. In 
> this case, it's assumed that it's already available in the system classpath.





[GitHub] [flink] PatrickRen commented on a diff in pull request #20884: [FLINK-29389][docs] Update documentation of JDBC and HBase lookup table for new caching options

2022-09-22 Thread GitBox


PatrickRen commented on code in PR #20884:
URL: https://github.com/apache/flink/pull/20884#discussion_r978302806


##
docs/content/docs/connectors/table/hbase.md:
##
@@ -188,20 +188,48 @@ Connector Options
   Whether async lookup are enabled. If true, the lookup will be async. Note, async only supports hbase-2.2 connector.
 
 
-  lookup.cache.max-rows
+  lookup.cache
   optional
   yes
-  -1
+  NONE
+  Enum. Possible values: NONE, PARTIAL
+  The cache strategy for the lookup table. Currently supports NONE (no caching) and PARTIAL (caching entries on lookup operation in external database).
+
+
+  lookup.partial-cache.max-rows
Review Comment:
   Thanks for the feedback! Of course users don't need to read the code to get the mapping; both the deprecated and new options are described well enough in the doc to translate between them. Anyhow, I get your point about making the migration smoother. I'll add a new section showing which deprecated option each new one maps to.






[GitHub] [flink-kubernetes-operator] gyfora merged pull request #370: [FLINK-29288] Make it possible to use job jars in the system classpath

2022-09-22 Thread GitBox


gyfora merged PR #370:
URL: https://github.com/apache/flink-kubernetes-operator/pull/370





[GitHub] [flink] flinkbot commented on pull request #20889: [BP-1.16][FLINK-28890][table] Fix semantic of latestLoadTime in caching lookup function

2022-09-22 Thread GitBox


flinkbot commented on PR #20889:
URL: https://github.com/apache/flink/pull/20889#issuecomment-1255840516

   
   ## CI report:
   
   * c5c0cf0f99c3d23d388e0c24da4e726227ddd3bc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] PatrickRen opened a new pull request, #20889: [BP-1.16][FLINK-28890][table] Fix semantic of latestLoadTime in caching lookup function

2022-09-22 Thread GitBox


PatrickRen opened a new pull request, #20889:
URL: https://github.com/apache/flink/pull/20889

   Unchanged backport of #20885 





[GitHub] [flink-ml] yunfengzhou-hub commented on a diff in pull request #156: [FLINK-29323] Refine Transformer for VectorAssembler

2022-09-22 Thread GitBox


yunfengzhou-hub commented on code in PR #156:
URL: https://github.com/apache/flink-ml/pull/156#discussion_r978291706


##
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/vectorassembler/VectorAssemblerParams.java:
##
@@ -21,11 +21,31 @@
 import org.apache.flink.ml.common.param.HasHandleInvalid;
 import org.apache.flink.ml.common.param.HasInputCols;
 import org.apache.flink.ml.common.param.HasOutputCol;
+import org.apache.flink.ml.param.IntArrayParam;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+
+import java.util.Arrays;
 
 /**
  * Params of {@link VectorAssembler}.
  *
 * @param <T> The class type of this instance.
 */
 public interface VectorAssemblerParams<T>
-        extends HasInputCols<T>, HasOutputCol<T>, HasHandleInvalid<T> {}
+        extends HasInputCols<T>, HasOutputCol<T>, HasHandleInvalid<T> {
+    Param<Integer[]> SIZES =

Review Comment:
   In reference to the `splitsArray` parameter in Bucketizer and `VectorSizeHint` in Spark's VectorAssembler, do you think it would be better to rename this parameter to `vectorSizeArray` or `elementSizeArray`?



##
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/vectorassembler/VectorAssembler.java:
##
@@ -74,38 +74,65 @@ public Table[] transform(Table... inputs) {
         DataStream<Row> output =
                 tEnv.toDataStream(inputs[0])
                         .flatMap(
-                                new AssemblerFunc(getInputCols(), getHandleInvalid()),
+                                new AssemblerFunction(
+                                        getInputCols(), getHandleInvalid(), getSizes()),
                                 outputTypeInfo);
         Table outputTable = tEnv.fromDataStream(output);
         return new Table[] {outputTable};
     }
 
-    private static class AssemblerFunc implements FlatMapFunction<Row, Row> {
+    private static class AssemblerFunction implements FlatMapFunction<Row, Row> {
         private final String[] inputCols;
         private final String handleInvalid;
+        private final int[] sizeArray;
 
-        public AssemblerFunc(String[] inputCols, String handleInvalid) {
+        public AssemblerFunction(String[] inputCols, String handleInvalid, int[] sizeArray) {
             this.inputCols = inputCols;
             this.handleInvalid = handleInvalid;
+            this.sizeArray = sizeArray;
         }
 
         @Override
         public void flatMap(Row value, Collector<Row> out) {
             int nnz = 0;
             int vectorSize = 0;
             try {
-                for (String inputCol : inputCols) {
+                for (int i = 0; i < inputCols.length; ++i) {
+                    String inputCol = inputCols[i];
                     Object object = value.getField(inputCol);
                     Preconditions.checkNotNull(object, "Input column value should not be null.");
                     if (object instanceof Number) {
+                        Preconditions.checkArgument(

Review Comment:
   There might be performance issues if we perform these checks for each record. Can we try to avoid this? For example, can the assembling process fail if the input data size does not match what is expected, so that no explicit check is needed?
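One way to realize the reviewer's suggestion, as a hypothetical sketch (class and method names invented, not the PR's code): validate the configured sizes once at construction time, keeping the per-record path free of `Preconditions` checks.

```java
public class Main {
    // Hypothetical assembler: structural validation happens once in the
    // constructor instead of on every record in flatMap().
    static final class SizedAssembler {
        private final int[] sizeArray;

        SizedAssembler(String[] inputCols, int[] sizeArray) {
            // one-time check, off the per-record hot path
            if (sizeArray.length != inputCols.length) {
                throw new IllegalArgumentException(
                        "Expected one size per input column, got "
                                + sizeArray.length + " sizes for "
                                + inputCols.length + " columns.");
            }
            this.sizeArray = sizeArray;
        }

        // Per-record path: just sum the declared sizes; a record whose actual
        // element count disagrees would fail later when the dense vector is
        // filled, rather than via an explicit check here.
        int assembledVectorSize() {
            int total = 0;
            for (int s : sizeArray) {
                total += s;
            }
            return total;
        }
    }

    public static void main(String[] args) {
        SizedAssembler a =
                new SizedAssembler(new String[] {"vec", "num", "sparseVec"}, new int[] {2, 1, 5});
        System.out.println(a.assembledVectorSize()); // 8
    }
}
```

Whether deferring the mismatch failure to assembly time gives an error message as clear as the explicit check is a trade-off the PR discussion would need to settle.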



##
flink-ml-examples/src/main/java/org/apache/flink/ml/examples/feature/VectorAssemblerExample.java:
##
@@ -56,7 +56,8 @@ public static void main(String[] args) {
         VectorAssembler vectorAssembler =
                 new VectorAssembler()
                         .setInputCols("vec", "num", "sparseVec")
-                        .setOutputCol("assembledVec");
+                        .setOutputCol("assembledVec")
+                        .setSizes(2, 1, 5);

Review Comment:
   Let's update VectorAssembler's markdown document accordingly, including its 
parameter list, examples and possibly descriptions.



##
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/vectorassembler/VectorAssemblerParams.java:
##
@@ -21,11 +21,31 @@
 import org.apache.flink.ml.common.param.HasHandleInvalid;
 import org.apache.flink.ml.common.param.HasInputCols;
 import org.apache.flink.ml.common.param.HasOutputCol;
+import org.apache.flink.ml.param.IntArrayParam;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+
+import java.util.Arrays;
 
 /**

Review Comment:
   Would it be better to add some documentation describing how users should handle the sizes parameter? For example, Spark describes the role of VectorSizeHint in the JavaDoc of `VectorAssembler.handleInvalid`. Maybe we can add similar descriptions to the JavaDoc of `VectorAssembler`.
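One thing such documentation could pin down is how a size mismatch interacts with the handleInvalid policy. A self-contained sketch of one possible semantic (names and behavior are assumptions for illustration, not Flink ML's actual implementation):

```java
// Hypothetical sketch of handleInvalid semantics for size mismatches
// (names and behavior are assumptions, not Flink ML's actual implementation).
public class HandleInvalidSketch {
    enum HandleInvalid { ERROR, SKIP }

    /** Returns the column if its size matches; under SKIP returns null to drop the row. */
    static double[] checkSize(double[] column, int expectedSize, HandleInvalid policy) {
        if (column.length == expectedSize) {
            return column;
        }
        if (policy == HandleInvalid.ERROR) {
            throw new IllegalArgumentException(
                    "Column size " + column.length + " does not match expected " + expectedSize);
        }
        return null; // SKIP: silently drop the invalid row
    }
}
```

Spelling out which of these branches the sizes parameter triggers would answer most user questions up front.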



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-28890) Incorrect semantic of latestLoadTime in CachingLookupFunction and CachingAsyncLookupFunction

2022-09-22 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608559#comment-17608559
 ] 

Qingsheng Ren commented on FLINK-28890:
---

Fixed on master: 24c685a58ef72db4c64c90e37056a07eb562be15

> Incorrect semantic of latestLoadTime in CachingLookupFunction and 
> CachingAsyncLookupFunction
> 
>
> Key: FLINK-28890
> URL: https://issues.apache.org/jira/browse/FLINK-28890
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> The semantic of latestLoadTime in CachingLookupFunction and 
> CachingAsyncLookupFunction is not correct, which should be the time spent for 
> the latest load operation



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-elasticsearch] liyubin117 commented on pull request #39: [FLINK-29042][Connectors/ElasticSearch] Support lookup join for es connector

2022-09-22 Thread GitBox


liyubin117 commented on PR #39:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/39#issuecomment-1255836068

   @MartijnVisser Hi. Could you please give a review? Thanks!





[GitHub] [flink] PatrickRen merged pull request #20885: [FLINK-28890][table] Fix semantic of latestLoadTime in caching lookup function

2022-09-22 Thread GitBox


PatrickRen merged PR #20885:
URL: https://github.com/apache/flink/pull/20885





[GitHub] [flink] PatrickRen commented on pull request #20885: [FLINK-28890][table] Fix semantic of latestLoadTime in caching lookup function

2022-09-22 Thread GitBox


PatrickRen commented on PR #20885:
URL: https://github.com/apache/flink/pull/20885#issuecomment-1255835762

   CI has passed on my own Azure pipeline: 
https://dev.azure.com/renqs/Apache%20Flink/_build/results?buildId=403&view=results





[GitHub] [flink-ml] yunfengzhou-hub commented on a diff in pull request #157: [FLINK-28906] Support windowing in AgglomerativeClustering

2022-09-22 Thread GitBox


yunfengzhou-hub commented on code in PR #157:
URL: https://github.com/apache/flink-ml/pull/157#discussion_r978288171


##
docs/content/docs/operators/clustering/agglomerativeclustering.md:
##
@@ -49,15 +49,16 @@ format of the merging information is
 
 ### Parameters
 
-| Key               | Default        | Type    | Required | Description |
-|:------------------|:---------------|:--------|:---------|:------------|
-| numClusters       | `2`            | Integer | no       | The max number of clusters to create. |
-| distanceThreshold | `null`         | Double  | no       | Threshold to decide whether two clusters should be merged. |
-| linkage           | `"ward"`       | String  | no       | Criterion for computing distance between two clusters. Supported values: `'ward', 'complete', 'single', 'average'`. |
-| computeFullTree   | `false`        | Boolean | no       | Whether computes the full tree after convergence. |
-| distanceMeasure   | `"euclidean"`  | String  | no       | Distance measure. Supported values: `'euclidean', 'manhattan', 'cosine'`. |
-| featuresCol       | `"features"`   | String  | no       | Features column name. |
-| predictionCol     | `"prediction"` | String  | no       | Prediction column name. |
+| Key               | Default               | Type    | Required | Description |
+| :---------------- | :-------------------- | :------ | :------- | :---------- |
+| numClusters       | `2`                   | Integer | no       | The max number of clusters to create. |
+| distanceThreshold | `null`                | Double  | no       | Threshold to decide whether two clusters should be merged. |
+| linkage           | `"ward"`              | String  | no       | Criterion for computing distance between two clusters. Supported values: `'ward', 'complete', 'single', 'average'`. |
+| computeFullTree   | `false`               | Boolean | no       | Whether computes the full tree after convergence. |
+| distanceMeasure   | `"euclidean"`         | String  | no       | Distance measure. Supported values: `'euclidean', 'manhattan', 'cosine'`. |
+| window            | `BoundedWindow.get()` | Window  | no       | How elements would be sliced into batches and fed into the Stage. |

Review Comment:
   I have reformatted the doc with IntelliJ IDEA's auto-format function, matching how this doc was formatted before this PR.






[jira] [Commented] (FLINK-29315) HDFSTest#testBlobServerRecovery fails on CI

2022-09-22 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608552#comment-17608552
 ] 

Yang Wang commented on FLINK-29315:
---

[~chesnay] Do you have any other suggestions? I admit that replacing the built-in {{ls}} command is a temporary hack, but I have not found any other solution so far.

> HDFSTest#testBlobServerRecovery fails on CI
> ---
>
> Key: FLINK-29315
> URL: https://issues.apache.org/jira/browse/FLINK-29315
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.16.0, 1.15.2
>Reporter: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>
> The test started failing 2 days ago on different branches. I suspect 
> something's wrong with the CI infrastructure.
> {code:java}
> Sep 15 09:11:22 [ERROR] Failures: 
> Sep 15 09:11:22 [ERROR]   HDFSTest.testBlobServerRecovery Multiple Failures 
> (2 failures)
> Sep 15 09:11:22   java.lang.AssertionError: Test failed Error while 
> running command to get file permissions : java.io.IOException: Cannot run 
> program "ls": error=1, Operation not permitted
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.runCommand(Shell.java:913)
> Sep 15 09:11:22   at org.apache.hadoop.util.Shell.run(Shell.java:869)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1089)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:697)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2580)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2622)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1501)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:851)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:485)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> Sep 15 09:11:22   at 
> org.apache.flink.hdfstests.HDFSTest.createHDFS(HDFSTest.java:93)
> Sep 15 09:11:22   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 15 09:11:22   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 15 09:11:22   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> Sep 15 09:11:22   ... 67 more
> Sep 15 09:11:22 
> Sep 15 09:11:22   java.lang.NullPointerException: 
> {code}





[GitHub] [flink] fredia commented on pull request #20860: [FLINK-29347] [runtime] Fix ByteStreamStateHandle#read return -1 when read count is 0

2022-09-22 Thread GitBox


fredia commented on PR #20860:
URL: https://github.com/apache/flink/pull/20860#issuecomment-1255821002

   @Shenjiaqi Thanks for this PR, LGTM. 
   Could you please add the ticket id and module name `[FLINK-29347] [runtime]` 
to the commit message?
   
   @Myasuka Would you like to take a look?
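For context on the contract being fixed here: `java.io.InputStream.read(b, off, len)` specifies that a request for zero bytes returns 0, and that -1 is reserved for end-of-stream on a non-zero request. A minimal sketch of a read method honoring that contract (a placeholder class, not the actual ByteStreamStateHandle code):

```java
// Minimal sketch of the InputStream read contract the PR fixes (placeholder class,
// not the actual ByteStreamStateHandle code): a zero-length request returns 0;
// -1 is reserved for end-of-stream on a non-zero request.
public class ContractSketch {
    private final byte[] data;
    private int pos;

    ContractSketch(byte[] data) {
        this.data = data;
    }

    int read(byte[] b, int off, int len) {
        if (len == 0) {
            return 0; // zero bytes requested: never signal EOF here
        }
        if (pos >= data.length) {
            return -1; // genuine end of stream
        }
        int n = Math.min(len, data.length - pos);
        System.arraycopy(data, pos, b, off, n);
        pos += n;
        return n;
    }
}
```

Checking the `len == 0` case before the EOF case is exactly what distinguishes "read 0 bytes" from "stream exhausted".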





[GitHub] [flink] ijuma commented on pull request #20526: [FLINK-28060][kafka] Bump Kafka to 3.2.1

2022-09-22 Thread GitBox


ijuma commented on PR #20526:
URL: https://github.com/apache/flink/pull/20526#issuecomment-1255795806

   Fyi, Kafka 3.2.3 was released. We recommend upgrading to that.





[GitHub] [flink] Tartarus0zm closed pull request #20858: [FLINK-29219][table]fix CREATE TABLE AS statement blocks SQL client's execution

2022-09-22 Thread GitBox


Tartarus0zm closed pull request #20858: [FLINK-29219][table]fix CREATE TABLE AS 
statement blocks SQL client's execution
URL: https://github.com/apache/flink/pull/20858





[GitHub] [flink] Tartarus0zm commented on pull request #20858: [FLINK-29219][table]fix CREATE TABLE AS statement blocks SQL client's execution

2022-09-22 Thread GitBox


Tartarus0zm commented on PR #20858:
URL: https://github.com/apache/flink/pull/20858#issuecomment-1255770862

   @lsyldliu  ok, thanks for your work





[GitHub] [flink] lsyldliu commented on a diff in pull request #20884: [FLINK-29389][docs] Update documentation of JDBC and HBase lookup table for new caching options

2022-09-22 Thread GitBox


lsyldliu commented on code in PR #20884:
URL: https://github.com/apache/flink/pull/20884#discussion_r978244951


##
docs/content/docs/connectors/table/hbase.md:
##
@@ -188,20 +188,48 @@ Connector Options
   Whether async lookup are enabled. If true, the lookup will be async. 
Note, async only supports hbase-2.2 connector.
 
 
-  lookup.cache.max-rows
+  lookup.cache
   optional
   yes
-  -1
+  NONE
+  Enum. Possible values: NONE, PARTIAL
+  The cache strategy for the lookup table. Currently supports NONE (no caching) and PARTIAL (caching entries on lookup operation in external database).
+
+
+  lookup.partial-cache.max-rows

Review Comment:
If users don't know the alternatives to the old options, how do we push them to migrate to the new ones? By reading the code? That would be expensive, because users can't easily see the mapping between old and new options. Before we drop the old options from the code, I think we should make that mapping easy to find, so the migration is smoother. Maybe refer to https://nightlies.apache.org/flink/flink-docs-master/zh/docs/deployment/config/#deprecated-options?
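On the code side, Flink's ConfigOption can carry deprecated keys (via `withDeprecatedKeys(...)`), which keeps old option keys working while the documentation steers users to the new names. A rough, self-contained sketch of that resolution order in plain Java (not Flink's actual implementation; the option keys are taken from this diff):

```java
import java.util.Map;

// Rough sketch of deprecated-key fallback resolution (plain Java; not Flink's
// actual ConfigOption implementation, which provides withDeprecatedKeys(...)).
public class OptionFallbackSketch {
    static String resolve(Map<String, String> conf, String key, String... deprecatedKeys) {
        if (conf.containsKey(key)) {
            return conf.get(key); // the new key always wins
        }
        for (String old : deprecatedKeys) {
            if (conf.containsKey(old)) {
                return conf.get(old); // legacy key still honored during migration
            }
        }
        return null;
    }
}
```

With this in place, an old-to-new mapping table in the docs only needs to list which deprecated key feeds which new option.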









[jira] [Comment Edited] (FLINK-29315) HDFSTest#testBlobServerRecovery fails on CI

2022-09-22 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608486#comment-17608486
 ] 

Qingsheng Ren edited comment on FLINK-29315 at 9/23/22 3:20 AM:


Another two instances: 

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41259&view=logs&j=4eda0b4a-bd0d-521a-0916-8285b9be9bb5&t=2ff6d5fa-53a6-53ac-bff7-fa524ea361a9&l=46735]

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41258&view=logs&j=4eda0b4a-bd0d-521a-0916-8285b9be9bb5&t=2ff6d5fa-53a6-53ac-bff7-fa524ea361a9&l=46342]


was (Author: renqs):
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41259&view=logs&j=4eda0b4a-bd0d-521a-0916-8285b9be9bb5&t=2ff6d5fa-53a6-53ac-bff7-fa524ea361a9&l=46735]



[jira] [Commented] (FLINK-29315) HDFSTest#testBlobServerRecovery fails on CI

2022-09-22 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608486#comment-17608486
 ] 

Qingsheng Ren commented on FLINK-29315:
---

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41259&view=logs&j=4eda0b4a-bd0d-521a-0916-8285b9be9bb5&t=2ff6d5fa-53a6-53ac-bff7-fa524ea361a9&l=46735]



[jira] [Commented] (FLINK-29315) HDFSTest#testBlobServerRecovery fails on CI

2022-09-22 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608487#comment-17608487
 ] 

Qingsheng Ren commented on FLINK-29315:
---

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41259&view=logs&j=4eda0b4a-bd0d-521a-0916-8285b9be9bb5&t=2ff6d5fa-53a6-53ac-bff7-fa524ea361a9&l=46735]



[jira] (FLINK-29315) HDFSTest#testBlobServerRecovery fails on CI

2022-09-22 Thread Qingsheng Ren (Jira)


[ https://issues.apache.org/jira/browse/FLINK-29315 ]


Qingsheng Ren deleted comment on FLINK-29315:
---

was (Author: renqs):
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41259&view=logs&j=4eda0b4a-bd0d-521a-0916-8285b9be9bb5&t=2ff6d5fa-53a6-53ac-bff7-fa524ea361a9&l=46735]

> HDFSTest#testBlobServerRecovery fails on CI
> ---
>
> Key: FLINK-29315
> URL: https://issues.apache.org/jira/browse/FLINK-29315
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.16.0, 1.15.2
>Reporter: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>
> The test started failing 2 days ago on different branches. I suspect 
> something's wrong with the CI infrastructure.
> {code:java}
> Sep 15 09:11:22 [ERROR] Failures: 
> Sep 15 09:11:22 [ERROR]   HDFSTest.testBlobServerRecovery Multiple Failures 
> (2 failures)
> Sep 15 09:11:22   java.lang.AssertionError: Test failed Error while 
> running command to get file permissions : java.io.IOException: Cannot run 
> program "ls": error=1, Operation not permitted
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.runCommand(Shell.java:913)
> Sep 15 09:11:22   at org.apache.hadoop.util.Shell.run(Shell.java:869)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1089)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:697)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2580)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2622)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1501)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:851)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:485)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> Sep 15 09:11:22   at 
> org.apache.flink.hdfstests.HDFSTest.createHDFS(HDFSTest.java:93)
> Sep 15 09:11:22   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 15 09:11:22   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 15 09:11:22   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> Sep 15 09:11:22   ... 67 more
> Sep 15 09:11:22 
> Sep 15 09:11:22   java.lang.NullPointerException: 
> {code}





[GitHub] [flink] PatrickRen commented on a diff in pull request #20884: [FLINK-29389][docs] Update documentation of JDBC and HBase lookup table for new caching options

2022-09-22 Thread GitBox


PatrickRen commented on code in PR #20884:
URL: https://github.com/apache/flink/pull/20884#discussion_r978237058


##
docs/content/docs/connectors/table/hbase.md:
##
@@ -188,20 +188,48 @@ Connector Options
   Whether async lookup are enabled. If true, the lookup will be async. 
Note, async only supports hbase-2.2 connector.
 
 
-  lookup.cache.max-rows
+  lookup.cache
   optional
   yes
-  -1
+  NONE
+  EnumPossible values: NONE, PARTIAL
+  The cache strategy for the lookup table. Currently supports NONE (no 
caching) and PARTIAL (caching entries on lookup operation in external 
database).
+
+
+  lookup.partial-cache.max-rows

Review Comment:
   I think keeping some deprecated options in the doc is a bit weird too, as we 
should "push" users to switch to the new options as soon as possible. We keep these 
deprecated options in the code just for backward compatibility. Users can always refer 
to the docs of previous versions for the definitions of those deprecated options. 
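The PARTIAL strategy described in the diff above (caching entries on lookup against the external database, bounded by `lookup.partial-cache.max-rows`) can be sketched with a self-contained LRU cache. The class and method names below are illustrative assumptions, not Flink's actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Self-contained sketch of a PARTIAL lookup cache (illustrative names, not
// Flink's real implementation): entries are cached only when a lookup misses
// and hits the external database, bounded by a max-rows limit with LRU
// eviction, which is the role of lookup.partial-cache.max-rows.
public class PartialLookupCacheSketch<K, V> {
    private final Function<K, V> externalLookup; // stands in for the DB query
    private final LinkedHashMap<K, V> cache;

    public PartialLookupCacheSketch(int maxRows, Function<K, V> externalLookup) {
        this.externalLookup = externalLookup;
        // accessOrder=true gives LRU iteration order for eviction
        this.cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxRows;
            }
        };
    }

    public V lookup(K key) {
        V value = cache.get(key);
        if (value == null) {
            value = externalLookup.apply(key); // query external DB on miss
            cache.put(key, value);             // put triggers the eviction check
        }
        return value;
    }

    public int cachedRows() {
        return cache.size();
    }
}
```

With `maxRows = 2`, looking up three distinct keys evicts the least-recently-used entry, so only two rows stay cached.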



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] PatrickRen commented on a diff in pull request #20884: [FLINK-29389][docs] Update documentation of JDBC and HBase lookup table for new caching options

2022-09-22 Thread GitBox


PatrickRen commented on code in PR #20884:
URL: https://github.com/apache/flink/pull/20884#discussion_r978232542


##
docs/content/docs/connectors/table/jdbc.md:
##
@@ -211,30 +211,48 @@ Connector Options
   https://jdbc.postgresql.org/documentation/head/query.html#query-with-cursor";>Postgres,
 may require this to be set to false in order to stream results.
 
 
-  lookup.cache.max-rows
+  lookup.cache
+  optional
+  yes
+  NONE
+  EnumPossible values: NONE, PARTIAL

Review Comment:
   I'm afraid listing an unavailable option in the doc could confuse users, 
which increases the cost of explanation.






[jira] [Commented] (FLINK-29373) DataStream to table not support BigDecimalTypeInfo

2022-09-22 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608478#comment-17608478
 ] 

Jingsong Lee commented on FLINK-29373:
--

{code:java}
/**
 * Converter from {@link TypeInformation} to {@link DataType}.
 *
 * {@link DataType} is richer than {@link TypeInformation} as it also 
includes details about the
 * {@link LogicalType}. Therefore, some details will be added implicitly during 
the conversion. The
 * conversion from {@link DataType} to {@link TypeInformation} is provided by 
the planner.
 *
 * The behavior of this converter can be summarized as follows:
 *
 * 
 *   All subclasses of {@link TypeInformation} are mapped to {@link 
LogicalType} including
 *   nullability that is aligned with serializers.
 *   {@link TupleTypeInfoBase} is translated into {@link RowType} or {@link 
StructuredType}.
 *   {@link BigDecimal} is converted to {@code DECIMAL(38, 18)} by default.
 *   The order of {@link PojoTypeInfo} fields is determined by the 
converter.
 *   {@link GenericTypeInfo} and other type information that cannot be 
mapped to a logical type
 *   is converted to {@link RawType} by considering the current 
configuration.
 *   {@link TypeInformation} that originated from Table API will keep its 
{@link DataType}
 *   information when implementing {@link DataTypeQueryable}.
 * 
 */
@Internal
public final class TypeInfoDataTypeConverter
{code}

The validation in DynamicSinkUtils should be adjusted, and the alignment of 
this information between DataType and TypeInformation should be ensured.

> DataStream to table not support BigDecimalTypeInfo
> --
>
> Key: FLINK-29373
> URL: https://issues.apache.org/jira/browse/FLINK-29373
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.0
>Reporter: hk__lrzy
>Priority: Major
> Attachments: image-2022-09-21-15-12-11-082.png, 
> image-2022-09-22-18-08-44-385.png
>
>
> When we try to transform a datastream to a table, *TypeInfoDataTypeConverter* 
> will try to convert *TypeInformation* to {*}DataType{*}, but if the datastream's 
> produced types contain {*}BigDecimalTypeInfo{*}, *TypeInfoDataTypeConverter* 
> will finally convert it to {*}RawDataType{*}; then, when we want to transform the 
> table back to a datastream, an exception will happen showing that the data types do not match.
> The Blink planner also has this exception.
> !image-2022-09-22-18-08-44-385.png!
>  
> {code:java}
> Query schema: [f0: RAW('java.math.BigDecimal', '...')]
> Sink schema:  [f0: RAW('java.math.BigDecimal', ?)]{code}
> How to reproduce:
> {code:java}
> // code placeholder
> StreamExecutionEnvironment executionEnvironment = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> EnvironmentSettings.Builder envBuilder = EnvironmentSettings.newInstance()
> .inStreamingMode();
> StreamTableEnvironment streamTableEnvironment = 
> StreamTableEnvironment.create(executionEnvironment, envBuilder.build());
> FromElementsFunction fromElementsFunction = new FromElementsFunction(new 
> BigDecimal(1.11D));
> DataStreamSource dataStreamSource = 
> executionEnvironment.addSource(fromElementsFunction, new 
> BigDecimalTypeInfo(10, 8));
> streamTableEnvironment.createTemporaryView("tmp", dataStreamSource);
> Table table = streamTableEnvironment.sqlQuery("select * from tmp");
> streamTableEnvironment.toRetractStream(table, 
> table.getSchema().toRowType());{code}
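A minimal, self-contained sketch of the fallback described in this issue. The type names and the mapping below are simplified assumptions, not Flink's actual TypeInfoDataTypeConverter logic; it only illustrates how an unrecognized TypeInformation subclass such as BigDecimalTypeInfo degrades to an opaque RAW type, which then fails the query-schema / sink-schema match on the way back to a DataStream:

```java
// Illustrative sketch, NOT Flink's real TypeInfoDataTypeConverter: a converter
// that knows a few type names maps everything else to RAW.
public class RawFallbackSketch {

    public static String toLogicalType(String typeInfoName) {
        switch (typeInfoName) {
            case "BigDecimal":
                // plain BigDecimal gets the documented default precision/scale
                return "DECIMAL(38, 18)";
            case "String":
                return "VARCHAR";
            default:
                // anything the converter does not recognize becomes RAW
                return "RAW('" + typeInfoName + "', '...')";
        }
    }

    public static void main(String[] args) {
        // BigDecimalTypeInfo(10, 8) is not in the known set, so it degrades to RAW
        System.out.println(toLogicalType("BigDecimalTypeInfo(10, 8)"));
        System.out.println(toLogicalType("BigDecimal"));
    }
}
```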





[jira] [Commented] (FLINK-29349) Use state ttl instead of timer to clean up state in proctime unbounded over aggregate

2022-09-22 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608479#comment-17608479
 ] 

Jingsong Lee commented on FLINK-29349:
--

[~lincoln.86xy] yes, you can

> Use state ttl instead of timer to clean up state in proctime unbounded over 
> aggregate
> -
>
> Key: FLINK-29349
> URL: https://issues.apache.org/jira/browse/FLINK-29349
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0, 1.15.2
>Reporter: lincoln lee
>Priority: Major
> Fix For: 1.17.0
>
>
> Currently we rely on timer-based state cleanup in proctime unbounded over 
> aggregate; this can be optimized to use state TTL, which is more efficient.
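The improvement described above (replacing per-key cleanup timers with state TTL) can be sketched with a minimal, self-contained example. This is an assumed simplification, not Flink's actual StateTtlConfig mechanism:

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained sketch (assumed simplification, not Flink's StateTtlConfig):
// instead of registering one cleanup timer per key, TTL-based state stores an
// expiration timestamp per entry and drops expired entries lazily on access.
public class TtlStateSketch {
    private final long ttlMillis;
    private final Map<String, Long> expireAt = new HashMap<>();
    private final Map<String, Long> values = new HashMap<>();

    public TtlStateSketch(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public void put(String key, long value, long nowMillis) {
        values.put(key, value);
        expireAt.put(key, nowMillis + ttlMillis);
    }

    // returns null for absent or expired entries, cleaning up expired state
    public Long get(String key, long nowMillis) {
        Long deadline = expireAt.get(key);
        if (deadline == null) {
            return null;
        }
        if (nowMillis >= deadline) { // expired: clean up lazily, no timer fired
            expireAt.remove(key);
            values.remove(key);
            return null;
        }
        return values.get(key);
    }
}
```

In real Flink, the equivalent is enabling a StateTtlConfig on the state descriptor instead of registering a processing-time timer per key.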





[jira] [Updated] (FLINK-29042) Support lookup join for es connector

2022-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29042:
---
Labels: pull-request-available  (was: )

> Support lookup join for es connector
> 
>
> Key: FLINK-29042
> URL: https://issues.apache.org/jira/browse/FLINK-29042
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.16.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
>
> Currently the es connector can only be used as a sink, but in many business 
> scenarios we treat es as an index database, so we should support making it 
> lookupable in Flink.





[GitHub] [flink-connector-elasticsearch] boring-cyborg[bot] commented on pull request #39: [FLINK-29042][Connectors/ElasticSearch] Support lookup join for es connector

2022-09-22 Thread GitBox


boring-cyborg[bot] commented on PR #39:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/39#issuecomment-1255746950

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   





[GitHub] [flink-connector-elasticsearch] liyubin117 opened a new pull request, #39: [FLINK-29042][Connectors/ElasticSearch] Support lookup join for es connector

2022-09-22 Thread GitBox


liyubin117 opened a new pull request, #39:
URL: https://github.com/apache/flink-connector-elasticsearch/pull/39

   Currently the es connector can only be used as a sink, but in many business 
scenarios we treat es as an index database, so we should support making it 
lookupable in Flink.





[GitHub] [flink] yuzelin commented on a diff in pull request #20887: [FLINK-29229][hive] Fix ObjectStore leak when different users has dif…

2022-09-22 Thread GitBox


yuzelin commented on code in PR #20887:
URL: https://github.com/apache/flink/pull/20887#discussion_r978214084


##
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/endpoint/hive/HiveServer2Endpoint.java:
##
@@ -296,22 +296,27 @@ public TOpenSessionResp OpenSession(TOpenSessionReq 
tOpenSessionReq) throws TExc
 tOpenSessionReq.getConfiguration() == null
 ? Collections.emptyMap()
 : tOpenSessionReq.getConfiguration();
-Map sessionConfig = new HashMap<>();
-sessionConfig.put(TABLE_SQL_DIALECT.key(), SqlDialect.HIVE.name());
-sessionConfig.put(RUNTIME_MODE.key(), 
RuntimeExecutionMode.BATCH.name());
-sessionConfig.put(TABLE_DML_SYNC.key(), "true");
-HiveConf conf = HiveCatalog.createHiveConf(hiveConfPath, null);
-// set variables to HiveConf or Session's conf
-setVariables(conf, sessionConfig, originSessionConf);
 
+HiveConf conf = new HiveConf(hiveConf);
 Catalog hiveCatalog =
 new HiveCatalog(
 catalogName,
 
getUsedDefaultDatabase(originSessionConf).orElse(defaultDatabase),
 conf,
 HiveShimLoader.getHiveVersion(),
 allowEmbedded);
+// Trigger the creation of the HiveMetaStoreClient to use the same 
HiveConf. If the
+// initial HiveConf is different, it will trigger the 
PersistenceManagerFactory to close
+// all the alive PersistenceManager in the ObjectStore, which may 
get error like
+// "PersistenceManager is closed" in the later connection.

Review Comment:
   The exact exception message is “Persistence Manager has been closed”



##
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/endpoint/hive/HiveServer2Endpoint.java:
##
@@ -321,6 +326,7 @@ public TOpenSessionResp OpenSession(TOpenSessionReq 
tOpenSessionReq) throws TExc
 .setDefaultCatalog(catalogName)
 .addSessionConfig(sessionConfig)
 .build());
+// set variables to HiveConf or Session's conf

Review Comment:
   I think this comment line (329) should be placed before line 315 or 319. 






[jira] [Commented] (FLINK-29315) HDFSTest#testBlobServerRecovery fails on CI

2022-09-22 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608471#comment-17608471
 ] 

Huang Xingbo commented on FLINK-29315:
--

I think it may be very difficult to find the root cause of this problem. +1 for 
temporarily solving it this way, considering we haven't had a passing CI for a 
week.

> HDFSTest#testBlobServerRecovery fails on CI
> ---
>
> Key: FLINK-29315
> URL: https://issues.apache.org/jira/browse/FLINK-29315
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.16.0, 1.15.2
>Reporter: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0
>
>
> The test started failing 2 days ago on different branches. I suspect 
> something's wrong with the CI infrastructure.
> {code:java}
> Sep 15 09:11:22 [ERROR] Failures: 
> Sep 15 09:11:22 [ERROR]   HDFSTest.testBlobServerRecovery Multiple Failures 
> (2 failures)
> Sep 15 09:11:22   java.lang.AssertionError: Test failed Error while 
> running command to get file permissions : java.io.IOException: Cannot run 
> program "ls": error=1, Operation not permitted
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.runCommand(Shell.java:913)
> Sep 15 09:11:22   at org.apache.hadoop.util.Shell.run(Shell.java:869)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1089)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:697)
> Sep 15 09:11:22   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
> Sep 15 09:11:22   at 
> org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2580)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2622)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1501)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:851)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:485)
> Sep 15 09:11:22   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> Sep 15 09:11:22   at 
> org.apache.flink.hdfstests.HDFSTest.createHDFS(HDFSTest.java:93)
> Sep 15 09:11:22   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Sep 15 09:11:22   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 15 09:11:22   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 15 09:11:22   at 
> java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> Sep 15 09:11:22   ... 67 more
> Sep 15 09:11:22 
> Sep 15 09:11:22   java.lang.NullPointerException: 
> {code}





[GitHub] [flink] wuchong commented on pull request #20653: [FLINK-29020][docs] add document for CTAS feature

2022-09-22 Thread GitBox


wuchong commented on PR #20653:
URL: https://github.com/apache/flink/pull/20653#issuecomment-1255730398

   @flinkbot run azure





[GitHub] [flink] wuchong commented on pull request #20869: [FLINK-29219][table] Fix CREATE TABLE AS statement blocks SQL client's execution

2022-09-22 Thread GitBox


wuchong commented on PR #20869:
URL: https://github.com/apache/flink/pull/20869#issuecomment-1255730305

   @flinkbot run azure





[GitHub] [flink] wuchong commented on pull request #20880: [FLINK-29219][table] Fix CREATE TABLE AS statement blocks SQL client's execution

2022-09-22 Thread GitBox


wuchong commented on PR #20880:
URL: https://github.com/apache/flink/pull/20880#issuecomment-1255730186

   @flinkbot run azure





[jira] [Commented] (FLINK-29400) Default Value of env.log.max in documentation is incorrect

2022-09-22 Thread Hang HOU (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608467#comment-17608467
 ] 

Hang HOU commented on FLINK-29400:
--

Hi, if "env.log.max" is not modified in conf/flink-conf.yaml, I think the 
default is 10, because "DEFAULT_ENV_LOG_MAX" in config.sh is in use. This then 
influences "appender.rolling.strategy.max" or "appender.main.strategy.max" in 
the log4j* config files.
So I think the document should be updated :D

> Default Value of env.log.max in documentation is incorrect
> --
>
> Key: FLINK-29400
> URL: https://issues.apache.org/jira/browse/FLINK-29400
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Dhruv Patel
>Priority: Minor
>
> The default value of env.log.max is 10 as per the code in master 
> ([https://github.com/apache/flink/blob/master/flink-dist/src/main/flink-bin/bin/config.sh#L137).]
> However the Flink Documentation says the default value is 5 
> (https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#env-log-max)
>  which is incorrect 
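The resolution order discussed in this thread (the flink-conf.yaml value if present, otherwise the built-in default of 10) can be sketched in a few lines. The real logic lives in the shell script flink-bin/bin/config.sh, so this Java sketch is only an illustration of the fallback behavior:

```java
import java.util.Map;

// Illustrative sketch of env.log.max resolution (assumed simplification of
// config.sh, which is a shell script): the value from flink-conf.yaml wins
// when set, otherwise DEFAULT_ENV_LOG_MAX = 10 applies, not 5 as the
// documentation currently claims.
public class LogMaxDefaultSketch {
    static final int DEFAULT_ENV_LOG_MAX = 10;

    public static int resolveLogMax(Map<String, String> flinkConf) {
        String configured = flinkConf.get("env.log.max");
        return configured == null
                ? DEFAULT_ENV_LOG_MAX
                : Integer.parseInt(configured.trim());
    }

    public static void main(String[] args) {
        System.out.println(resolveLogMax(Map.of()));                   // 10 (default)
        System.out.println(resolveLogMax(Map.of("env.log.max", "5"))); // 5 (explicit)
    }
}
```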





[GitHub] [flink-web] HuangXingBo commented on a diff in pull request #569: Release Flink 1.14.6

2022-09-22 Thread GitBox


HuangXingBo commented on code in PR #569:
URL: https://github.com/apache/flink-web/pull/569#discussion_r978215248


##
_posts/2022-09-08-release-1.14.6.md:
##
@@ -0,0 +1,105 @@
+---
+layout: post
+title:  "Apache Flink 1.14.6 Release Announcement"
+date: 2022-09-08T00:00:00.000Z

Review Comment:
   Good catch. When the PR is merged, all the timestamps need to be changed 
uniformly, so I will modify them at that time.
   






[jira] [Commented] (FLINK-18346) Support partition pruning for lookup table source

2022-09-22 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608466#comment-17608466
 ] 

Jingsong Lee commented on FLINK-18346:
--

cc: [~godfreyhe]


> Support partition pruning for lookup table source
> -
>
> Key: FLINK-18346
> URL: https://issues.apache.org/jira/browse/FLINK-18346
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Priority: Major
>
> Especially for the Filesystem lookup table source, which stores all records in 
> memory: with partition pruning, memory usage can be reduced effectively for 
> partitioned lookup tables.





[GitHub] [flink] yuzelin closed pull request #20861: [FLINK-29274][hive] Fix unstable tests in HiveServer2EndpointITCase

2022-09-22 Thread GitBox


yuzelin closed pull request #20861: [FLINK-29274][hive] Fix unstable tests in 
HiveServer2EndpointITCase
URL: https://github.com/apache/flink/pull/20861





[GitHub] [flink] lsyldliu commented on pull request #20858: [FLINK-29219][table]fix CREATE TABLE AS statement blocks SQL client's execution

2022-09-22 Thread GitBox


lsyldliu commented on PR #20858:
URL: https://github.com/apache/flink/pull/20858#issuecomment-1255719282

   @Tartarus0zm close this pr?





[GitHub] [flink] lsyldliu commented on a diff in pull request #20884: [FLINK-29389][docs] Update documentation of JDBC and HBase lookup table for new caching options

2022-09-22 Thread GitBox


lsyldliu commented on code in PR #20884:
URL: https://github.com/apache/flink/pull/20884#discussion_r978203393


##
docs/content/docs/connectors/table/jdbc.md:
##
@@ -211,30 +211,48 @@ Connector Options
   https://jdbc.postgresql.org/documentation/head/query.html#query-with-cursor";>Postgres,
 may require this to be set to false in order to stream results.
 
 
-  lookup.cache.max-rows
+  lookup.cache
+  optional
+  yes
+  NONE
+  EnumPossible values: NONE, PARTIAL

Review Comment:
   I think here we should list all three enum values, but explain that only the 
first two strategies are supported. WDYT?



##
docs/content/docs/connectors/table/hbase.md:
##
@@ -188,20 +188,48 @@ Connector Options
   Whether async lookup are enabled. If true, the lookup will be async. 
Note, async only supports hbase-2.2 connector.
 
 
-  lookup.cache.max-rows
+  lookup.cache
   optional
   yes
-  -1
+  NONE
+  EnumPossible values: NONE, PARTIAL

Review Comment:
   ditto



##
docs/content/docs/connectors/table/hbase.md:
##
@@ -188,20 +188,48 @@ Connector Options
   Whether async lookup are enabled. If true, the lookup will be async. 
Note, async only supports hbase-2.2 connector.
 
 
-  lookup.cache.max-rows
+  lookup.cache
   optional
   yes
-  -1
+  NONE
+  EnumPossible values: NONE, PARTIAL
+  The cache strategy for the lookup table. Currently supports NONE (no 
caching) and PARTIAL (caching entries on lookup operation in external 
database).
+
+
+  lookup.partial-cache.max-rows

Review Comment:
   Referring to 
https://github.com/apache/flink/blob/bf81768ff564c5bf4a57cb33c6f5126b83b28fb5/flink-connectors/flink-connector-hbase-base/src/main/java/org/apache/flink/connector/hbase/table/HBaseConnectorOptions.java#L98,
 I think we should also explain the old compatible options instead of dropping 
them directly. The other deprecated options are similar.






[GitHub] [flink] luoyuxia commented on pull request #20883: [WIP] validate for https://github.com/apache/flink/pull/20882

2022-09-22 Thread GitBox


luoyuxia commented on PR #20883:
URL: https://github.com/apache/flink/pull/20883#issuecomment-1255702462

   @flinkbot run azure





[GitHub] [flink-kubernetes-operator] zezaeoh commented on pull request #377: [FLINK-28979] add owner reference to flink deployment object

2022-09-22 Thread GitBox


zezaeoh commented on PR #377:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/377#issuecomment-1255669662

   It looks like the e2e test failed because of a connection timeout from the 
Maven repo 😢 
   Could you give it one more try? @gyfora 
   
   
![image](https://user-images.githubusercontent.com/41815516/191870677-7fb597a8-b916-4a74-8169-f82f8229bb97.png)
   





[GitHub] [flink] zentol commented on pull request #20870: [FLINK-29375][rpc] Move getSelfGateway() into RpcService

2022-09-22 Thread GitBox


zentol commented on PR #20870:
URL: https://github.com/apache/flink/pull/20870#issuecomment-1255654652

   > I came up with another diagram (see 
[gist](https://gist.github.com/XComp/1ee7e935e38209afd774cd7d23af0833)) that 
visualizes the relationships between the different classes of the RPC system to 
get into that topic again :-D
   
   I'm not too sold on that particular diagram and tried to create a revision, 
but _dear god_ these online UML renderers are _horrible_.





[jira] [Updated] (FLINK-29340) ResourceManagerTest#testProcessResourceRequirementsWhenRecoveryFinished prone to race condition

2022-09-22 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-29340:
-
Priority: Minor  (was: Major)

> ResourceManagerTest#testProcessResourceRequirementsWhenRecoveryFinished prone 
> to race condition
> ---
>
> Key: FLINK-29340
> URL: https://issues.apache.org/jira/browse/FLINK-29340
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.16.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> The test incorrectly assumes that the {{declareRequiredResources}} has 
> already been run when calling {{runInMainThread}}, while the RPC could still 
> be in flight.
> This can result in the test failing because within runInMainThread the test 
> assumes that completing the readyToServeFuture will immediately result in the 
> processing of resources, due to this workflow having been set up within 
> declareRequiredResources. Without it, the test will just fail because the completion 
> of the future has in practice no effect.





[jira] [Closed] (FLINK-29340) ResourceManagerTest#testProcessResourceRequirementsWhenRecoveryFinished prone to race condition

2022-09-22 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-29340.

Fix Version/s: 1.17.0
   Resolution: Fixed

master: a54b2a8674e4df6345968e538636a7960112bb9e

> ResourceManagerTest#testProcessResourceRequirementsWhenRecoveryFinished prone 
> to race condition
> ---
>
> Key: FLINK-29340
> URL: https://issues.apache.org/jira/browse/FLINK-29340
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.16.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> The test incorrectly assumes that {{declareRequiredResources}} has already 
> been run when calling {{runInMainThread}}, while the RPC could still be in 
> flight.
> This can result in the test failing, because within runInMainThread the test 
> assumes that completing the readyToServeFuture will immediately result in the 
> processing of resources, due to this workflow having been set up within 
> declareRequiredResources. Without that setup, the test will simply fail because 
> completing the future has, in practice, no effect.





[GitHub] [flink] zentol merged pull request #20871: [FLINK-29340][coordination][tests] Avoid selfGateway implementation d…

2022-09-22 Thread GitBox


zentol merged PR #20871:
URL: https://github.com/apache/flink/pull/20871


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-29397) Race condition in StreamTask can lead to NPE if changelog is disabled

2022-09-22 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608399#comment-17608399
 ] 

Chesnay Schepler edited comment on FLINK-29397 at 9/22/22 11:17 PM:


master: 162db046e1c63e4610393d14cd9843962321915e
1.16: a8979b29e084641a5160768f48400682a5d79bbb
1.15: a5ae2fa25522084c1616a8d11e3c1d1152f08b29


was (Author: zentol):
master: 162db046e1c63e4610393d14cd9843962321915e
1.16: a8979b29e084641a5160768f48400682a5d79bbb
1.15: TBD

> Race condition in StreamTask can lead to NPE if changelog is disabled
> -
>
> Key: FLINK-29397
> URL: https://issues.apache.org/jira/browse/FLINK-29397
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.15.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.15.3
>
>
> {{StreamTask#processInput}} contains a branch where the 
> changelogWriterAvailabilityProvider is accessed without a null check; this 
> field however is nullable in case the changelog is disabled.
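The fix is a null guard on the nullable field. A minimal sketch of the pattern, with stand-in names rather than the actual StreamTask internals:

```java
import java.util.function.Supplier;

public class ChangelogGuardSketch {
    // Stand-in for StreamTask's nullable field: the provider is only set
    // when the changelog state backend is enabled.
    static Supplier<Boolean> changelogWriterAvailabilityProvider;

    // The fix pattern: check the field for null before dereferencing it,
    // treating "no changelog" as "not available" on this branch.
    static boolean changelogAvailable() {
        return changelogWriterAvailabilityProvider != null
                && changelogWriterAvailabilityProvider.get();
    }
}
```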





[jira] [Closed] (FLINK-29397) Race condition in StreamTask can lead to NPE if changelog is disabled

2022-09-22 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-29397.

Resolution: Fixed

> Race condition in StreamTask can lead to NPE if changelog is disabled
> -
>
> Key: FLINK-29397
> URL: https://issues.apache.org/jira/browse/FLINK-29397
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.15.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.15.3
>
>
> {{StreamTask#processInput}} contains a branch where the 
> changelogWriterAvailabilityProvider is accessed without a null check; this 
> field however is nullable in case the changelog is disabled.





[jira] [Comment Edited] (FLINK-29397) Race condition in StreamTask can lead to NPE if changelog is disabled

2022-09-22 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608399#comment-17608399
 ] 

Chesnay Schepler edited comment on FLINK-29397 at 9/22/22 11:14 PM:


master: 162db046e1c63e4610393d14cd9843962321915e
1.16: a8979b29e084641a5160768f48400682a5d79bbb
1.15: TBD


was (Author: zentol):
master: 162db046e1c63e4610393d14cd9843962321915e
1.16: TBD
1.15: TBD

> Race condition in StreamTask can lead to NPE if changelog is disabled
> -
>
> Key: FLINK-29397
> URL: https://issues.apache.org/jira/browse/FLINK-29397
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.15.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.15.3
>
>
> {{StreamTask#processInput}} contains a branch where the 
> changelogWriterAvailabilityProvider is accessed without a null check; this 
> field however is nullable in case the changelog is disabled.





[jira] [Comment Edited] (FLINK-29378) Misleading logging in Execution for failed state transitions

2022-09-22 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608264#comment-17608264
 ] 

Chesnay Schepler edited comment on FLINK-29378 at 9/22/22 11:14 PM:


master: 5766d50dc1401b1269ec83e670c2d21257e20fc5
1.16: 5ba6525db731aa62a770a9a7971b8a1cbf12d9fb


was (Author: zentol):
master: 5766d50dc1401b1269ec83e670c2d21257e20fc5

> Misleading logging in Execution for failed state transitions
> -
>
> Key: FLINK-29378
> URL: https://issues.apache.org/jira/browse/FLINK-29378
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> {code}
> String.format(
> "Concurrent unexpected state transition of task %s to %s while deployment 
> was in progress.",
>  getVertexWithAttempt(), currentState);
> {code}
> {{to}} is not the target state.
> This whole line needs improvement; it should log the current, expected, and 
> target state. Additionally, I'd suggest logging the attempt ID, which is much 
> easier to correlate with other messages (like what the TM actually submits).





[jira] [Closed] (FLINK-29378) Misleading logging in Execution for failed state transitions

2022-09-22 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-29378.

Resolution: Fixed

> Misleading logging in Execution for failed state transitions
> -
>
> Key: FLINK-29378
> URL: https://issues.apache.org/jira/browse/FLINK-29378
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> {code}
> String.format(
> "Concurrent unexpected state transition of task %s to %s while deployment 
> was in progress.",
>  getVertexWithAttempt(), currentState);
> {code}
> {{to}} is not the target state.
> This whole line needs improvement; it should log the current, expected, and 
> target state. Additionally, I'd suggest logging the attempt ID, which is much 
> easier to correlate with other messages (like what the TM actually submits).





[jira] [Created] (FLINK-29400) Default Value of env.log.max in documentation is incorrect

2022-09-22 Thread Dhruv Patel (Jira)
Dhruv Patel created FLINK-29400:
---

 Summary: Default Value of env.log.max in documentation is incorrect
 Key: FLINK-29400
 URL: https://issues.apache.org/jira/browse/FLINK-29400
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Dhruv Patel


The default value of env.log.max is 10, as per the code on master 
([https://github.com/apache/flink/blob/master/flink-dist/src/main/flink-bin/bin/config.sh#L137]).

However, the Flink documentation says the default value is 5 
(https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#env-log-max),
 which is incorrect.





[jira] [Created] (FLINK-29399) TableITCase is unstable

2022-09-22 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-29399:


 Summary: TableITCase is unstable
 Key: FLINK-29399
 URL: https://issues.apache.org/jira/browse/FLINK-29399
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner, Tests
Affects Versions: 1.16.0
Reporter: Chesnay Schepler




{code:java}
val it = tableResult.collect()
it.close()
val jobStatus =
  try {
    Some(tableResult.getJobClient.get().getJobStatus.get())
  } catch {
    // ignore the exception, because the MiniCluster may already have been
    // shut down when getting the job status
    case _: Throwable => None
  }
if (jobStatus.isDefined) {
  assertNotEquals(jobStatus.get, JobStatus.RUNNING)
}
{code}

There's no guarantee that the cancellation has already gone through. The test 
should periodically poll the job status until another state is reached.
Or, even better, use the new collect API: call execute in a separate thread, 
close the iterator, and wait for the thread to terminate.
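The suggested polling approach can be sketched as follows. The method and status supplier below are illustrative, not Flink test utilities; in the real test the status would come from `JobClient#getJobStatus`:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class PollStatusSketch {
    enum JobStatus { RUNNING, CANCELED }

    // Poll the status supplier until the job leaves RUNNING or the timeout
    // expires; returns the last observed status.
    static JobStatus awaitNotRunning(Supplier<JobStatus> status, long timeoutMs) {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        JobStatus current = status.get();
        while (current == JobStatus.RUNNING && System.nanoTime() < deadline) {
            try {
                Thread.sleep(10); // back off briefly between polls
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            current = status.get();
        }
        return current;
    }
}
```

The test would then assert on the returned status instead of on a single snapshot taken immediately after `close()`.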





[jira] [Closed] (FLINK-29393) Upgrade Kubernetes operator examples to use the latest Flink base image

2022-09-22 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-29393.
--
Resolution: Fixed

merged to main 66987fb9b4d6f7a315024cff27dac12886b1ee88

> Upgrade Kubernetes operator examples to use the latest Flink base image
> ---
>
> Key: FLINK-29393
> URL: https://issues.apache.org/jira/browse/FLINK-29393
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Gabor Somogyi
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.2.0
>
>
> We should update all the examples to refer to the latest Flink base image 
> (1.15.2) before the release





[GitHub] [flink-kubernetes-operator] gyfora merged pull request #381: [FLINK-29393] Upgrade Kubernetes operator examples to use 1.15.2 Flink base image

2022-09-22 Thread GitBox


gyfora merged PR #381:
URL: https://github.com/apache/flink-kubernetes-operator/pull/381





[jira] [Closed] (FLINK-28966) Check backward compatibility against both 1.1.0 and 1.0.0 (all released versions)

2022-09-22 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-28966.
--
Resolution: Fixed

merged to main f41025da98d21c4e8404b6554e9c3d6a6ce0a4fb

> Check backward compatibility against both 1.1.0 and 1.0.0 (all released 
> versions)
> -
>
> Key: FLINK-28966
> URL: https://issues.apache.org/jira/browse/FLINK-28966
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Matyas Orhidi
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.2.0
>
>
> We should check CRD compatibility not only against 1.0.0 but against all 
> released versions. We could also think of a way to add this check 
> automatically when we release a new version.





[GitHub] [flink-kubernetes-operator] gyfora merged pull request #380: [FLINK-28966] Check backward compatibility against both 1.1.0 and 1.0…

2022-09-22 Thread GitBox


gyfora merged PR #380:
URL: https://github.com/apache/flink-kubernetes-operator/pull/380





[jira] [Commented] (FLINK-29398) Utilize Rack Awareness in Flink Consumer

2022-09-22 Thread Jeremy DeGroot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608412#comment-17608412
 ] 

Jeremy DeGroot commented on FLINK-29398:


I'll provide a little further justification and background for this in a 
comment. At my job we were tasked with reducing our AWS spend, and one place we 
found that could be improved was Inter-AZ bandwidth. We implemented something 
similar to what I describe above, and realized significant savings (bringing 
our billable bandwidth from 60% of the total down to 20%). It seems likely 
other people would also like to save money in this fashion. If this gets taken 
up, we'd also be willing to provide our implementation as a basis for 
development.

> Utilize Rack Awareness in Flink Consumer
> 
>
> Key: FLINK-29398
> URL: https://issues.apache.org/jira/browse/FLINK-29398
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Jeremy DeGroot
>Priority: Major
>
> [KIP-708|https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams]
>  was implemented some time ago in Kafka. This allows brokers and consumers to 
> communicate about the rack (or AWS Availability Zone) they're located in. 
> Reading from a local broker can save money in bandwidth and improve latency 
> for your consumers.
> Flink Kafka consumers currently cannot easily use rack awareness if they're 
> deployed across multiple racks or availability zones, because they have no 
> control over which rack the Task Manager they're assigned to is in. 
> This improvement proposes that a Kafka Consumer could be configured with a 
> callback or Future to be run while it's being configured on the task 
> manager, which will set the appropriate value at runtime if a value is 
> provided. 





[jira] [Created] (FLINK-29398) Utilize Rack Awareness in Flink Consumer

2022-09-22 Thread Jeremy DeGroot (Jira)
Jeremy DeGroot created FLINK-29398:
--

 Summary: Utilize Rack Awareness in Flink Consumer
 Key: FLINK-29398
 URL: https://issues.apache.org/jira/browse/FLINK-29398
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Reporter: Jeremy DeGroot


[KIP-708|https://cwiki.apache.org/confluence/display/KAFKA/KIP-708%3A+Rack+awareness+for+Kafka+Streams]
 was implemented some time ago in Kafka. This allows brokers and consumers to 
communicate about the rack (or AWS Availability Zone) they're located in. 
Reading from a local broker can save money in bandwidth and improve latency for 
your consumers.

Flink Kafka consumers currently cannot easily use rack awareness if they're 
deployed across multiple racks or availability zones, because they have no 
control over which rack the Task Manager they're assigned to is in. 

This improvement proposes that a Kafka Consumer could be configured with a 
callback or Future to be run while it's being configured on the task manager, 
which will set the appropriate value at runtime if a value is provided. 
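A minimal sketch of the proposed hook, assuming the rack is resolved into the standard Kafka consumer setting `client.rack` (the config key the broker uses to serve fetches from a nearby replica). The `applyRack` helper and its signature are hypothetical, not an existing Flink API:

```java
import java.util.Properties;
import java.util.function.Supplier;

public class RackAwareConfigSketch {
    // "client.rack" is the standard Kafka consumer config key for rack
    // awareness; brokers use it to pick a nearby replica for fetches.
    static final String CLIENT_RACK = "client.rack";

    // Sketch of the proposed callback: the supplier runs on the Task Manager
    // at configuration time, so it can resolve the local rack / AZ at runtime.
    static Properties applyRack(Properties consumerProps, Supplier<String> rackSupplier) {
        String rack = rackSupplier == null ? null : rackSupplier.get();
        if (rack != null && !rack.isEmpty()) {
            consumerProps.setProperty(CLIENT_RACK, rack);
        }
        return consumerProps;
    }
}
```

On AWS, the supplier could, for example, read the availability zone from instance metadata; if no value is resolved, the consumer config is left untouched.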





[jira] [Updated] (FLINK-29364) Root cause of Exceptions thrown in the SourceReader start() method gets "swallowed".

2022-09-22 Thread Alexander Fedulov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Fedulov updated FLINK-29364:
--
Description: 
If an exception is thrown in the {_}SourceReader{_}'s _start()_ method, its 
root cause does not get captured.

The details are still available here: 
[Task.java#L758|https://github.com/apache/flink/blob/bccecc23067eb7f18e20bade814be73393401be5/flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java#L758]

But the execution falls through to 
[Task.java#L780|https://github.com/apache/flink/blob/bccecc23067eb7f18e20bade814be73393401be5/flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java#L780]
  and discards the root cause of
canceling the source invokable without recording the actual reason.

 

How to reproduce: 
[DataGeneratorSourceITCase.java#L117|https://github.com/afedulov/flink/blob/3df7669fcc6ba08c5147195b80cc97ac1481ec8c/flink-tests/src/test/java/org/apache/flink/api/connector/source/lib/DataGeneratorSourceITCase.java#L117]
 

  was:
If an exception is thrown in the {_}SourceReader{_}'s _start()_ method, its 
root cause does not get captured.

The details are still available here: 
[Task.java#L758|https://github.com/apache/flink/blob/bccecc23067eb7f18e20bade814be73393401be5/flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java#L758]

But the execution falls through to 
[Task.java#L780|https://github.com/apache/flink/blob/bccecc23067eb7f18e20bade814be73393401be5/flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java#L780]
  and discards the root cause of
canceling the source invokable without recording the actual reason.

 

How to reproduce: 
[DataGeneratorSourceITCase.java#L117|https://github.com/apache/flink/blob/3df7669fcc6ba08c5147195b80cc97ac1481ec8c/flink-tests/src/test/java/org/apache/flink/api/connector/source/lib/DataGeneratorSourceITCase.java#L117]
 


> Root cause of Exceptions thrown in the SourceReader start() method gets 
> "swallowed".
> 
>
> Key: FLINK-29364
> URL: https://issues.apache.org/jira/browse/FLINK-29364
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.15.2
>Reporter: Alexander Fedulov
>Priority: Major
>
> If an exception is thrown in the {_}SourceReader{_}'s _start()_ method, its 
> root cause does not get captured.
> The details are still available here: 
> [Task.java#L758|https://github.com/apache/flink/blob/bccecc23067eb7f18e20bade814be73393401be5/flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java#L758]
> But the execution falls through to 
> [Task.java#L780|https://github.com/apache/flink/blob/bccecc23067eb7f18e20bade814be73393401be5/flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java#L780]
>   and discards the root cause of
> canceling the source invokable without recording the actual reason.
>  
> How to reproduce: 
> [DataGeneratorSourceITCase.java#L117|https://github.com/afedulov/flink/blob/3df7669fcc6ba08c5147195b80cc97ac1481ec8c/flink-tests/src/test/java/org/apache/flink/api/connector/source/lib/DataGeneratorSourceITCase.java#L117]
>  
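The general pattern for keeping the original failure visible when a later failure (such as cancellation) would otherwise mask it is to attach it as a suppressed exception. The helper below is a hypothetical sketch of that pattern, not the actual Task.java code:

```java
public class RootCauseSketch {
    // When a follow-up failure (e.g. cancellation of the invokable) would
    // otherwise mask the original error, attach the original as a suppressed
    // exception instead of discarding it.
    static RuntimeException preserveRootCause(Throwable original, RuntimeException followUp) {
        followUp.addSuppressed(original);
        return followUp;
    }
}
```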





[jira] [Commented] (FLINK-29397) Race condition in StreamTask can lead to NPE if changelog is disabled

2022-09-22 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608399#comment-17608399
 ] 

Chesnay Schepler commented on FLINK-29397:
--

master: 162db046e1c63e4610393d14cd9843962321915e
1.16: TBD
1.15: TBD

> Race condition in StreamTask can lead to NPE if changelog is disabled
> -
>
> Key: FLINK-29397
> URL: https://issues.apache.org/jira/browse/FLINK-29397
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.15.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.15.3
>
>
> {{StreamTask#processInput}} contains a branch where the 
> changelogWriterAvailabilityProvider is accessed without a null check; this 
> field however is nullable in case the changelog is disabled.





[GitHub] [flink] zentol merged pull request #20888: [FLINK-29397][runtime] Check if changelog provider is null

2022-09-22 Thread GitBox


zentol merged PR #20888:
URL: https://github.com/apache/flink/pull/20888





[jira] [Updated] (FLINK-29397) Race condition in StreamTask can lead to NPE if changelog is disabled

2022-09-22 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-29397:
-
Affects Version/s: 1.15.0
   (was: 1.16.0)

> Race condition in StreamTask can lead to NPE if changelog is disabled
> -
>
> Key: FLINK-29397
> URL: https://issues.apache.org/jira/browse/FLINK-29397
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.15.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> {{StreamTask#processInput}} contains a branch where the 
> changelogWriterAvailabilityProvider is accessed without a null check; this 
> field however is nullable in case the changelog is disabled.





[jira] [Updated] (FLINK-29397) Race condition in StreamTask can lead to NPE if changelog is disabled

2022-09-22 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-29397:
-
Fix Version/s: 1.15.3

> Race condition in StreamTask can lead to NPE if changelog is disabled
> -
>
> Key: FLINK-29397
> URL: https://issues.apache.org/jira/browse/FLINK-29397
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.15.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.15.3
>
>
> {{StreamTask#processInput}} contains a branch where the 
> changelogWriterAvailabilityProvider is accessed without a null check; this 
> field however is nullable in case the changelog is disabled.





[jira] [Updated] (FLINK-29394) Flink k8s operator observe Flink job restart count

2022-09-22 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora updated FLINK-29394:
---
Affects Version/s: kubernetes-operator-1.1.0
   (was: 1.15.2)

> Flink k8s operator observe Flink job restart count
> --
>
> Key: FLINK-29394
> URL: https://issues.apache.org/jira/browse/FLINK-29394
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Gabor Somogyi
>Assignee: Gabor Somogyi
>Priority: Major
>






[jira] [Updated] (FLINK-29394) Flink k8s operator observe Flink job restart count

2022-09-22 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora updated FLINK-29394:
---
Issue Type: New Feature  (was: Improvement)

> Flink k8s operator observe Flink job restart count
> --
>
> Key: FLINK-29394
> URL: https://issues.apache.org/jira/browse/FLINK-29394
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Gabor Somogyi
>Assignee: Gabor Somogyi
>Priority: Major
>






[GitHub] [flink-kubernetes-operator] HuangZhenQiu commented on a diff in pull request #375: [FLINK-29327][operator] remove operator config from job runtime config before d…

2022-09-22 Thread GitBox


HuangZhenQiu commented on code in PR #375:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/375#discussion_r977872055


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java:
##
@@ -624,7 +626,7 @@ protected ClusterClient 
getClusterClient(Configuration conf) throws Exce
 conf, clusterId, (c, e) -> new 
StandaloneClientHAServices(restServerAddress));
 }
 
-private JarRunResponseBody runJar(
+protected JarRunResponseBody runJar(

Review Comment:
   Done



##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java:
##
@@ -666,7 +668,7 @@ private JarRunResponseBody runJar(
 }
 }
 
-private JarUploadResponseBody uploadJar(
+protected JarUploadResponseBody uploadJar(

Review Comment:
   Done






[GitHub] [flink-kubernetes-operator] gyfora commented on pull request #378: [FLINK-29383] Add PrinterColumn annotation for status fields

2022-09-22 Thread GitBox


gyfora commented on PR #378:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/378#issuecomment-1255253489

   > No objections. How would we accommodate another request that prefers some 
different columns though :)
   
   We could then start to prioritize some fields and limit it to a fairly small 
number of total extra fields. But so far people did not seem to be too 
interested in these print columns :) 





[GitHub] [flink-kubernetes-operator] gaborgsomogyi commented on a diff in pull request #381: [FLINK-29393] Upgrade Kubernetes operator examples to use 1.15.2 Flink base image

2022-09-22 Thread GitBox


gaborgsomogyi commented on code in PR #381:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/381#discussion_r977845730


##
examples/flink-sql-runner-example/Dockerfile:
##
@@ -16,7 +16,7 @@
 # limitations under the License.
 

 
-FROM flink:1.15.1
+FROM flink:1.15.2

Review Comment:
   Fixed.






[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #381: [FLINK-29393] Upgrade Kubernetes operator examples to use 1.15.2 Flink base image

2022-09-22 Thread GitBox


gyfora commented on code in PR #381:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/381#discussion_r977844254


##
examples/flink-sql-runner-example/Dockerfile:
##
@@ -16,7 +16,7 @@
 # limitations under the License.
 

 
-FROM flink:1.15.1
+FROM flink:1.15.2

Review Comment:
   I think this should be `flink:1.15` for simplicity?






[GitHub] [flink-kubernetes-operator] tweise commented on pull request #378: [FLINK-29383] Add PrinterColumn annotation for status fields

2022-09-22 Thread GitBox


tweise commented on PR #378:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/378#issuecomment-1255242895

   No objections. How would we accommodate another request that prefers some 
different columns though :)





[GitHub] [flink-kubernetes-operator] gyfora commented on pull request #370: [FLINK-29288] Make it possible to use job jars in the system classpath

2022-09-22 Thread GitBox


gyfora commented on PR #370:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/370#issuecomment-1255241160

   Thank you for the great work @sap1ens , I will merge this once the CI passes!





[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #375: [FLINK-29327][operator] remove operator config from job runtime config before d…

2022-09-22 Thread GitBox


gyfora commented on code in PR #375:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/375#discussion_r977837521


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java:
##
@@ -666,7 +668,7 @@ private JarRunResponseBody runJar(
 }
 }
 
-private JarUploadResponseBody uploadJar(
+protected JarUploadResponseBody uploadJar(

Review Comment:
   please undo this change I think it's not necessary anymore



##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java:
##
@@ -624,7 +626,7 @@ protected ClusterClient 
getClusterClient(Configuration conf) throws Exce
 conf, clusterId, (c, e) -> new 
StandaloneClientHAServices(restServerAddress));
 }
 
-private JarRunResponseBody runJar(
+protected JarRunResponseBody runJar(

Review Comment:
   please undo this change I think it's not necessary anymore






[GitHub] [flink-kubernetes-operator] gyfora commented on pull request #378: [FLINK-29383] Add PrinterColumn annotation for status fields

2022-09-22 Thread GitBox


gyfora commented on PR #378:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/378#issuecomment-1255232775

   @tweise any objections against merging this based on the additional input?





[GitHub] [flink-kubernetes-operator] tedhtchang closed pull request #313: [FLINK-27852][docs] OLM installation and development documentation

2022-09-22 Thread GitBox


tedhtchang closed pull request #313: [FLINK-27852][docs] OLM installation and  
development documentation
URL: https://github.com/apache/flink-kubernetes-operator/pull/313





[GitHub] [flink-kubernetes-operator] sap1ens commented on pull request #370: [FLINK-29288] Make it possible to use job jars in the system classpath

2022-09-22 Thread GitBox


sap1ens commented on PR #370:
URL: https://github.com/apache/flink-kubernetes-operator/pull/370#issuecomment-1255216631

   @gyfora I squashed the commits, this is ready to be merged. 





[GitHub] [flink-kubernetes-operator] sap1ens commented on a diff in pull request #370: [FLINK-29288] Make it possible to use job jars in the system classpath

2022-09-22 Thread GitBox


sap1ens commented on code in PR #370:
URL: https://github.com/apache/flink-kubernetes-operator/pull/370#discussion_r977817257


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/validation/DefaultValidator.java:
##
@@ -180,10 +180,6 @@ private Optional validateJobSpec(
 return Optional.empty();
 }
 
-if (StringUtils.isNullOrWhitespaceOnly(job.getJarURI())) {
-return Optional.of("Jar URI must be defined");
-}

Review Comment:
   Alright, I added the jar after starting the container, thanks @jeesmon for 
telling me that :) 
   
   Thanks for checking!
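   With the hard jar-URI requirement removed, a validator could still guard the degenerate case where neither a jar nor an entry point is given. A hedged sketch follows; the class name, method signature, and the rule itself (require an entry class when no jar URI is set) are illustrative assumptions for this discussion, not the PR's actual logic:

```java
import java.util.Optional;

public class JobSpecCheck {
    /** Illustrative check (assumed rule, not the operator's real validator):
     *  with no jar URI, require an explicit entry class so the job's main()
     *  can still be located on the system classpath. */
    static Optional<String> validateJob(String jarUri, String entryClass) {
        boolean noJar = jarUri == null || jarUri.trim().isEmpty();
        boolean noEntry = entryClass == null || entryClass.trim().isEmpty();
        if (noJar && noEntry) {
            return Optional.of("Either jarURI or entryClass must be defined");
        }
        return Optional.empty(); // empty means the spec passed validation
    }

    public static void main(String[] args) {
        // A classpath-based job with an entry class but no jar passes.
        System.out.println(validateJob(null, "com.example.Main").isPresent()); // prints "false"
    }
}
```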






[GitHub] [flink-kubernetes-operator] gaborgsomogyi commented on pull request #381: [FLINK-29393] Upgrade Kubernetes operator examples to use 1.15.2 Flink base image

2022-09-22 Thread GitBox


gaborgsomogyi commented on PR #381:
URL: https://github.com/apache/flink-kubernetes-operator/pull/381#issuecomment-1255195133

   cc @gyfora 





[GitHub] [flink-kubernetes-operator] gaborgsomogyi opened a new pull request, #381: [FLINK-29393] Upgrade Kubernetes operator examples to use 1.15.2 Flink base image

2022-09-22 Thread GitBox


gaborgsomogyi opened a new pull request, #381:
URL: https://github.com/apache/flink-kubernetes-operator/pull/381

   ## What is the purpose of the change
   
   There were several places where 1.15.1 was hardcoded. In this PR I've 
changed them to the latest 1.15.2.
   
   ## Brief change log
   
   Changed version from 1.15.1 to 1.15.2.
   
   ## Verifying this change
   
   * Manually checked jar URIs.
   * Existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): yes
 - The public API, i.e., is any changes to the `CustomResourceDescriptors`: 
no
 - Core observer or reconciler logic that is regularly executed: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   





[jira] [Updated] (FLINK-29393) Upgrade Kubernetes operator examples to use the latest Flink base image

2022-09-22 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/FLINK-29393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-29393:
---
Labels: pull-request-available  (was: )

> Upgrade Kubernetes operator examples to use the latest Flink base image
> ---
>
> Key: FLINK-29393
> URL: https://issues.apache.org/jira/browse/FLINK-29393
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Gabor Somogyi
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.2.0
>
>
> We should update all the examples to refer to the latest Flink base image 
> (1.15.2) before the release



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] HuangZhenQiu commented on a diff in pull request #375: [FLINK-29327][operator] remove operator config from job runtime config before d…

2022-09-22 Thread GitBox


HuangZhenQiu commented on code in PR #375:
URL: https://github.com/apache/flink-kubernetes-operator/pull/375#discussion_r977795734


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java:
##
@@ -182,7 +184,13 @@ public JobID submitJobToSessionCluster(
 throws Exception {
 // we generate jobID in advance to help deduplicate job submission.
 var jobID = FlinkUtils.generateSessionJobFixedJobID(meta);
-runJar(spec.getJob(), jobID, uploadJar(meta, spec, conf), conf, savepoint);
+Configuration runtimeConfig = removeOperatorConfigs(conf);
+runJar(
+spec.getJob(),
+jobID,
+uploadJar(meta, spec, runtimeConfig),
+runtimeConfig,
+savepoint);

Review Comment:
   Thanks for the suggestion. Revised accordingly.
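   For context, a minimal sketch of what a `removeOperatorConfigs` helper could look like, assuming operator-only options share a common key prefix. The class name, the plain-`Map` signature, and the exact prefix constant are illustrative assumptions, not the operator's actual API:

```java
import java.util.Map;
import java.util.stream.Collectors;

public class ConfigFilter {
    // Assumed prefix for operator-internal options; the real key set may differ.
    static final String OPERATOR_PREFIX = "kubernetes.operator.";

    /** Returns a copy of the configuration without operator-internal keys,
     *  so they are not forwarded into the submitted job's runtime config. */
    static Map<String, String> removeOperatorConfigs(Map<String, String> conf) {
        return conf.entrySet().stream()
                .filter(e -> !e.getKey().startsWith(OPERATOR_PREFIX))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        Map<String, String> conf = Map.of(
                "kubernetes.operator.reconcile.interval", "60 s",
                "parallelism.default", "2");
        // Only the non-operator key survives the filter.
        System.out.println(removeOperatorConfigs(conf).keySet());
    }
}
```

   Filtering by prefix keeps the helper oblivious to individual option names, so new operator options are excluded automatically as long as they follow the prefix convention.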






[GitHub] [flink] afedulov commented on pull request #20865: [FLINK-14896][connectors/kinesis] Shade and relocate Jackson dependen…

2022-09-22 Thread GitBox


afedulov commented on PR #20865:
URL: https://github.com/apache/flink/pull/20865#issuecomment-1255169612

   @flinkbot run azure





[GitHub] [flink] HuangZhenQiu commented on a diff in pull request #20875: [FLINK-29363][runtime-web] allow fully redirection in web dashboard

2022-09-22 Thread GitBox


HuangZhenQiu commented on code in PR #20875:
URL: https://github.com/apache/flink/pull/20875#discussion_r976801184


##
flink-runtime-web/web-dashboard/src/app/app.interceptor.ts:
##
@@ -39,6 +46,16 @@ export class AppInterceptor implements HttpInterceptor {
 
 return next.handle(req.clone({ withCredentials: true })).pipe(
   catchError(res => {
+if (
+  res instanceof HttpResponseBase &&
+  (res.status == HttpStatusCode.MovedPermanently ||
+res.status == HttpStatusCode.TemporaryRedirect ||
+res.status == HttpStatusCode.SeeOther) &&

Review Comment:
   This code path is mainly for fetching job metadata. Multiple Choices, Use Proxy, and Unused don't fit those scenarios or the data type, but I am open to adding more status codes to make it more robust.
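   The status codes the interceptor handles above reduce to a small predicate. A plain-Java sketch of the same check (class and method names are illustrative; the Angular code uses `HttpStatusCode` constants for the same numeric values):

```java
public class RedirectCheck {
    /** True for the redirect codes the interceptor follows:
     *  301 Moved Permanently, 303 See Other, 307 Temporary Redirect. */
    static boolean isHandledRedirect(int status) {
        return status == 301 || status == 303 || status == 307;
    }

    public static void main(String[] args) {
        System.out.println(isHandledRedirect(301)); // prints "true"
        // 300 Multiple Choices is deliberately not followed, per the comment above.
        System.out.println(isHandledRedirect(300)); // prints "false"
    }
}
```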





