[GitHub] [flink] flinkbot edited a comment on pull request #18150: [hotfix][docs] Fix incorrect dependency and markdown format

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18150:
URL: https://github.com/apache/flink/pull/18150#issuecomment-997636068


   
   ## CI report:
   
   * 44d3cf49dbf308fbd1fba510542a9be8c994417d Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28509)
 
   * 0a637dcc5b6d1fc7f127124746bfe252cb946b0a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28515)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25394) [Flink-ML] Upgrade log4j to 2.17.0 to address CVE-2021-45105

2021-12-22 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25394:

Component/s: Library / Machine Learning

> [Flink-ML] Upgrade log4j to 2.17.0 to address CVE-2021-45105
> 
>
> Key: FLINK-25394
> URL: https://issues.apache.org/jira/browse/FLINK-25394
> Project: Flink
>  Issue Type: Improvement
>  Components: Library / Machine Learning
>Affects Versions: ml-2.0.0
>Reporter: Abdelrahman
>Assignee: Abdelrahman
>Priority: Critical
>  Labels: pull-request-available
> Fix For: ml-2.0.0
>
>
> Apache Log4j2 versions 2.0-alpha1 through 2.16.0 did not protect from 
> uncontrolled recursion from self-referential lookups. When the logging 
> configuration uses a non-default Pattern Layout with a Context Lookup (for 
> example, $${ctx:loginId}), attackers with control over Thread Context Map 
> (MDC) input data can craft malicious input data that contains a recursive 
> lookup, resulting in a StackOverflowError that will terminate the process. 
> This is also known as a DOS (Denial of Service) attack.
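
As a hedged illustration (not taken from the ticket) of the recursion described above: assuming a vulnerable Log4j2 version (2.0-alpha1 through 2.16.0) and a PatternLayout whose pattern contains a Context Lookup such as ${ctx:loginId}, a self-referential Thread Context value is enough to trigger the StackOverflowError. The class and key names below are illustrative.

{code:java}
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class SelfReferentialLookupSketch {

    private static final Logger LOG = LogManager.getLogger(SelfReferentialLookupSketch.class);

    public static void main(String[] args) {
        // Attacker-controlled MDC / Thread Context value that refers to itself.
        ThreadContext.put("loginId", "${ctx:loginId}");

        // With a layout pattern containing ${ctx:loginId}, resolving the value
        // recurses until a StackOverflowError terminates the process, i.e. the
        // DoS described above. Upgrading to log4j 2.17.0 removes the issue.
        LOG.info("user logged in");
    }
}
{code}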



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-24481) Translate buffer debloat documenation to chinese

2021-12-22 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-24481.

Fix Version/s: 1.15.0
   1.14.3
   Resolution: Done

Merged via
- master (1.15): e0c26760bc359211a504f02962419da85b0db502
- release-1.14: a9d43eec627d10799d71cf0a82a2f0a183402dda

> Translate buffer debloat documenation to chinese
> 
>
> Key: FLINK-24481
> URL: https://issues.apache.org/jira/browse/FLINK-24481
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation, Runtime / Network
>Affects Versions: 1.14.0
>Reporter: Anton Kalashnikov
>Assignee: Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.14.3
>
>
> The buffer debloat documentation needs to be translated to Chinese. The
> original documentation was introduced here:
> https://issues.apache.org/jira/browse/FLINK-23458



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] xintongsong closed pull request #17953: [FLINK-24481][docs] Translate buffer debloat documenation to chinese

2021-12-22 Thread GitBox


xintongsong closed pull request #17953:
URL: https://github.com/apache/flink/pull/17953


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xintongsong commented on pull request #17953: [FLINK-24481][docs] Translate buffer debloat documenation to chinese

2021-12-22 Thread GitBox


xintongsong commented on pull request #17953:
URL: https://github.com/apache/flink/pull/17953#issuecomment-1000103787


   Merging as the PR passes compilation and the doc_404_check in CI. The remaining 
failures are unrelated.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18176: [FLINK-20286][connector-files] Support directory watching in filesystem table source

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18176:
URL: https://github.com/apache/flink/pull/18176#issuecomment-999490583


   
   ## CI report:
   
   * a27a76f5f93e5ec6bf436d14cd62312b278f54f4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28477)
 
   * 4296115e87ccfbff7a2e4e7b88d5e5250f941dee Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28517)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-25261) Changelog not truncated on materialization

2021-12-22 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464286#comment-17464286
 ] 

Yuan Mei edited comment on FLINK-25261 at 12/23/21, 7:31 AM:
-

{quote}1. Correct (as I wrote above in the context of this ticket I mean 
in-memory data only)
2. As I wrote above in the context of this ticket I mean in-memory data only
{quote}
If truncating only means truncating the in-memory part, would this API be 
general enough for other implementations as well? (Like in-memory 
implementation, Kafka based implementation)
{quote}3. In-memory means any in-memory object not needed for checkpointing 
anymore, depending on DSTL implementation (reference to file or an in-memory 
byte array to-be -flushed); To discard already flushed but not included into 
any checkpoint changes, we have three options: a) shared/private state 
ownership and TM-side registry (FLINK-23139); b) TM-side registry only for 
changelog; c) rely on FLINK-24852 (which will likely be needed by FLINK-25395). 
I propose to postpone this decision until we decide on FLINK-25395 and state 
ownership. Note that this only happens if pre-emptive upload is enabled 
(otherwise, state changes are always associated with some checkpoint)
{quote}
I am fine to postpone this decision.
{quote}4. Materialization is independent, but handling its result is 
"synchronized" with checkpointing by using Task mailbox; so it's only the 
writer.truncate() method that should take ongoing checkpoints into account; and 
this is the case with the current FS writer.
{quote}
Let's consider this:

New CP
----------|--------------------|----------> (In Mem Log)
Materialization up to (but not complete yet)

1). Materialization triggered

2). CP triggered (the uploading part is not in the task thread)

3). Materialization finished and put truncating action into the mailbox

4). task thread truncate the log

5). checkpoint complete

=

Also, if I am understanding correctly, now we put clean-up into three different 
places:

1). State Cleanup upon checkpoint subsumption

2). In-memory part clean-up when materialization completes

3). DFS files cleanup (not included in any state) TBD.

Again, would this abstraction be general enough to support other 
implementations? It seems fragile to me.


was (Author: ym):
{quote}1. Correct (as I wrote above in the context of this ticket I mean 
in-memory data only)
2. As I wrote above in the context of this ticket I mean in-memory data only
{quote}
If truncating only means truncating the in-memory part, would this API be 
general enough for other implementations as well? (Like in-memory 
implementation, Kafka based implementation)
{quote}3. In-memory means any in-memory object not needed for checkpointing 
anymore, depending on DSTL implementation (reference to file or an in-memory 
byte array to-be -flushed); To discard already flushed but not included into 
any checkpoint changes, we have three options: a) shared/private state 
ownership and TM-side registry (FLINK-23139); b) TM-side registry only for 
changelog; c) rely on FLINK-24852 (which will likely be needed by FLINK-25395). 
I propose to postpone this decision until we decide on FLINK-25395 and state 
ownership. Note that this only happens if pre-emptive upload is enabled 
(otherwise, state changes are always associated with some checkpoint)
{quote}
I am fine to postpone this decision.
{quote}4. Materialization is independent, but handling its result is 
"synchronized" with checkpointing by using Task mailbox; so it's only the 
writer.truncate() method that should take ongoing checkpoints into account; and 
this is the case with the current FS writer.
{quote}
Let's consider this:

New CP
----------|--------------------|----------> (In Mem Log)
Materialization up to (but not complete yet)

1). Materialization triggered

2). CP triggered (the uploading part is not in the task thread)

3). Materialization finished and put truncating action into the mailbox

4). task thread truncate the log

5). checkpoint complete

=

Also, if I am understanding correctly, now we put clean-up into three different 
places:

1). State Cleanup upon checkpoint subsumption

2). In-memory part clean-up when materialization completes

3). DFS files cleanup (not included in any state) TBD.

Again, would this abstraction be general enough to support other 
implementations? It seems a bit fragile to me.

> Changelog not truncated on materialization
> --
>
> Key: FLINK-25261
> URL: https://issues.apache.org/jira/browse/FLINK-25261
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: 

[jira] [Comment Edited] (FLINK-25261) Changelog not truncated on materialization

2021-12-22 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464286#comment-17464286
 ] 

Yuan Mei edited comment on FLINK-25261 at 12/23/21, 7:30 AM:
-

{quote}1. Correct (as I wrote above in the context of this ticket I mean 
in-memory data only)
2. As I wrote above in the context of this ticket I mean in-memory data only
{quote}
If truncating only means truncating the in-memory part, would this API be 
general enough for other implementations as well? (Like in-memory 
implementation, Kafka based implementation)
{quote}3. In-memory means any in-memory object not needed for checkpointing 
anymore, depending on DSTL implementation (reference to file or an in-memory 
byte array to-be -flushed); To discard already flushed but not included into 
any checkpoint changes, we have three options: a) shared/private state 
ownership and TM-side registry (FLINK-23139); b) TM-side registry only for 
changelog; c) rely on FLINK-24852 (which will likely be needed by FLINK-25395). 
I propose to postpone this decision until we decide on FLINK-25395 and state 
ownership. Note that this only happens if pre-emptive upload is enabled 
(otherwise, state changes are always associated with some checkpoint)
{quote}
I am fine to postpone this decision.
{quote}4. Materialization is independent, but handling its result is 
"synchronized" with checkpointing by using Task mailbox; so it's only the 
writer.truncate() method that should take ongoing checkpoints into account; and 
this is the case with the current FS writer.
{quote}
Let's consider this:

New CP
----------|--------------------|----------> (In Mem Log)
Materialization up to (but not complete yet)

1). Materialization triggered

2). CP triggered (the uploading part is not in the task thread)

3). Materialization finished and put truncating action into the mailbox

4). task thread truncate the log

5). checkpoint complete

=

Also, if I am understanding correctly, now we put clean-up into three different 
places:

1). State Cleanup upon checkpoint subsumption

2). In-memory part clean-up when materialization completes

3). DFS files cleanup (not included in any state) TBD.

Again, would this abstraction be general enough to support other 
implementations? It seems a bit fragile to me.


was (Author: ym):
{quote}1. Correct (as I wrote above in the context of this ticket I mean 
in-memory data only)
2. As I wrote above in the context of this ticket I mean in-memory data only
{quote}
If truncating only means truncating the in-memory part, would this API be 
general enough for other implementations as well? (Like in-memory 
implementation, Kafka based implementation)
{quote}3. In-memory means any in-memory object not needed for checkpointing 
anymore, depending on DSTL implementation (reference to file or an in-memory 
byte array to-be -flushed); To discard already flushed but not included into 
any checkpoint changes, we have three options: a) shared/private state 
ownership and TM-side registry (FLINK-23139); b) TM-side registry only for 
changelog; c) rely on FLINK-24852 (which will likely be needed by FLINK-25395). 
I propose to postpone this decision until we decide on FLINK-25395 and state 
ownership. Note that this only happens if pre-emptive upload is enabled 
(otherwise, state changes are always associated with some checkpoint)
{quote}
I am fine to postpone this decision.
{quote}4. Materialization is independent, but handling its result is 
"synchronized" with checkpointing by using Task mailbox; so it's only the 
writer.truncate() method that should take ongoing checkpoints into account; and 
this is the case with the current FS writer.
{quote}
Let's consider this:

New CP
----------|--------------------|----------> (In Mem Log)
Materialization up to (but not complete yet)

1). Materialization triggered

2). CP triggered (the uploading part is not in the task thread)

3). Materialization finished and put truncating action into the mailbox

4). task thread truncate the log

5). checkpoint complete

=

Also, if I am understanding correctly, now we put clean-up into three different 
places:

1). State Cleanup upon checkpoint subsumption

2). In-memory part clean-up when materialization completes

3). DFS files cleanup (not included in any state) TBD.

Again, would this abstraction be general enough to support other 
implementations? It seems a bit fragile to me.

> Changelog not truncated on materialization
> --
>
> Key: FLINK-25261
> URL: https://issues.apache.org/jira/browse/FLINK-25261
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>   

[GitHub] [flink] flinkbot edited a comment on pull request #18176: [FLINK-20286][connector-files] Support directory watching in filesystem table source

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18176:
URL: https://github.com/apache/flink/pull/18176#issuecomment-999490583


   
   ## CI report:
   
   * a27a76f5f93e5ec6bf436d14cd62312b278f54f4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28477)
 
   * 4296115e87ccfbff7a2e4e7b88d5e5250f941dee UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18172: [hotfix][doc] Change TableEnvironment attribute reference error

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18172:
URL: https://github.com/apache/flink/pull/18172#issuecomment-999323746


   
   ## CI report:
   
   * edab20c2aa2b2d60566d2504509055575c730ba6 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28506)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25411) JsonRowSerializationSchema unable to parse TIMESTAMP_LTZ fields

2021-12-22 Thread Surendra Lalwani (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464332#comment-17464332
 ] 

Surendra Lalwani commented on FLINK-25411:
--

[~wenlong.lwl] JsonRowSerializationSchema is deprecated in 1.15.0, but we are 
currently using 1.13.3; is it not expected to work in 1.13.3? I will try 
JsonRowDataSerializationSchema to see if it suffices for our needs, and I will 
also attach an example for your reference.

> JsonRowSerializationSchema unable to parse TIMESTAMP_LTZ fields
> ---
>
> Key: FLINK-25411
> URL: https://issues.apache.org/jira/browse/FLINK-25411
> Project: Flink
>  Issue Type: New Feature
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Surendra Lalwani
>Priority: Minor
>
> When I try to run a simple query, Select current_timestamp from table_name, it 
> gives an error that the row could not be serialized and asks me to add the 
> shaded Flink dependency for jsr-310. It seems the JavaTimeModule is not 
> registered in the serializer.
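
For context, the JavaTimeModule mentioned above is Jackson's jsr-310 module (jackson-datatype-jsr310). Below is a minimal, hedged sketch of registering it on a plain Jackson ObjectMapper; whether and where Flink's (shaded) serializer should register it is exactly what this ticket is about, so the snippet is only illustrative.

{code:java}
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;

import java.time.Instant;

public class JavaTimeModuleSketch {

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Without this registration, serializing java.time values (such as the
        // Instant behind TIMESTAMP_LTZ) fails and Jackson asks for the jsr-310
        // module; with it, the value serializes normally.
        mapper.registerModule(new JavaTimeModule());

        System.out.println(mapper.writeValueAsString(Instant.now()));
    }
}
{code}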



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18148: [FLINK-25372] Add thread dump feature for jobmanager

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18148:
URL: https://github.com/apache/flink/pull/18148#issuecomment-997327937


   
   ## CI report:
   
   * bbd01131c7f4a96245fc030b135d2b0033211331 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28504)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18068: [FLINK-25105][checkpoint] Enables final checkpoint by default

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18068:
URL: https://github.com/apache/flink/pull/18068#issuecomment-989975508


   
   ## CI report:
   
   * f0b1ef0bd1e2babc2093a57b0b6b619c99cb551e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28457)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii commented on pull request #18068: [FLINK-25105][checkpoint] Enables final checkpoint by default

2021-12-22 Thread GitBox


gaoyunhaii commented on pull request #18068:
URL: https://github.com/apache/flink/pull/18068#issuecomment-190817


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Myasuka commented on a change in pull request #17833: [FLINK-24785][runtime] Relocate RocksDB's log under flink log directory by default

2021-12-22 Thread GitBox


Myasuka commented on a change in pull request #17833:
URL: https://github.com/apache/flink/pull/17833#discussion_r774361782



##
File path: 
flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/RocksDBStateBackendConfigTest.java
##
@@ -97,6 +97,18 @@ public void testDefaultsInSync() throws Exception {
 assertEquals(defaultIncremental, 
backend.isIncrementalCheckpointsEnabled());
 }
 
+@Test
+public void testDefaultDbLogDir() throws Exception {
+final EmbeddedRocksDBStateBackend backend = new 
EmbeddedRocksDBStateBackend();
+final File logFile = File.createTempFile(getClass().getSimpleName() + 
"-", ".log");
+// set the environment variable 'log.file' with the Flink log file 
location
+System.setProperty("log.file", logFile.getPath());
+assertEquals(
+logFile.getParent(),
+
backend.createOptionsAndResourceContainer().getDbOptions().dbLogDir());

Review comment:
   The created resource container holds native objects, so we must make sure to 
close the `RocksDBResourceContainer` at the end of the test to avoid unexpected 
exceptions.
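
   A hedged sketch of the suggestion, reusing the names from the diff above (the exact signatures in the code base may differ), written as a fragment of the test method, which already declares `throws Exception`:

```java
final EmbeddedRocksDBStateBackend backend = new EmbeddedRocksDBStateBackend();
final File logFile = File.createTempFile(getClass().getSimpleName() + "-", ".log");
// set the environment variable 'log.file' with the Flink log file location
System.setProperty("log.file", logFile.getPath());
final RocksDBResourceContainer container = backend.createOptionsAndResourceContainer();
try {
    assertEquals(logFile.getParent(), container.getDbOptions().dbLogDir());
} finally {
    // release the native objects even if the assertion fails
    container.close();
}
```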




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25424) Checkpointing is currently not supported for operators that implement InputSelectable

2021-12-22 Thread Zhipeng Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhipeng Zhang updated FLINK-25424:
--
Description: 
Flink DataStream runtime does not support checkpointing for operators that 
implement the `InputSelectable` interface.

It throws an UnsupportedOperationException when checking the StreamJobGraph — 
"Checkpointing is currently not supported for operators that implement 
InputSelectable".

 

The machine learning use case is as follows:

We have a two-input operator A. The first input of A is the model parameters, 
the second input of A is the training data. 

In this case, we have to read the model parameters first, then we read the 
training data. That is we need to finish consuming the first input before we 
consume the second.

  was:
Flink DataStream runtime does not support checkpointing for operators that 
implement the `InputSelectable` interface.

It throws an UnsupportedOperationException when checking the StreamJobGraph — 
"Checkpointing is currently not supported for operators that implement 
InputSelectable".

 

The use case is as follows:

We have a two-input operator A. The first input of A is the model parameters, 
the second input of A is the training data. 

In this case, we have to read the model parameters first, then we read the 
training data. That is we need to finish consuming the first input before we 
consume the second.


> Checkpointing is currently not supported for operators that implement 
> InputSelectable
> -
>
> Key: FLINK-25424
> URL: https://issues.apache.org/jira/browse/FLINK-25424
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing
>Reporter: Zhipeng Zhang
>Priority: Major
>
> Flink DataStream runtime does not support checkpointing for operators that 
> implement the `InputSelectable` interface.
> It throws an UnsupportedOperationException when checking the StreamJobGraph — 
> "Checkpointing is currently not supported for operators that implement 
> InputSelectable".
>  
> The machine learning use case is as follows:
> We have a two-input operator A. The first input of A is the model parameters, 
> the second input of A is the training data. 
> In this case, we have to read the model parameters first, then we read the 
> training data. That is we need to finish consuming the first input before we 
> consume the second.
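
A partial, hedged sketch of the use case described above (illustrative names, not from the ticket): only the input-selection part is shown; the actual operator would also implement TwoInputStreamOperator, and enabling checkpointing for such an operator is exactly what currently fails with the quoted exception.

{code:java}
import org.apache.flink.streaming.api.operators.BoundedMultiInput;
import org.apache.flink.streaming.api.operators.InputSelectable;
import org.apache.flink.streaming.api.operators.InputSelection;

/** Reads input 1 (model parameters) to the end before reading input 2 (training data). */
public class ModelThenDataInputSelection implements InputSelectable, BoundedMultiInput {

    private boolean modelInputFinished = false;

    @Override
    public InputSelection nextSelection() {
        // Keep selecting the model input until it is exhausted, then switch
        // to the training data input.
        return modelInputFinished ? InputSelection.SECOND : InputSelection.FIRST;
    }

    @Override
    public void endInput(int inputId) {
        if (inputId == 1) {
            modelInputFinished = true;
        }
    }
}
{code}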



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25424) Checkpointing is currently not supported for operators that implement InputSelectable

2021-12-22 Thread Zhipeng Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhipeng Zhang updated FLINK-25424:
--
Description: 
Flink DataStream runtime does not support checkpointing for operators that 
implement the `InputSelectable` interface.

It throws an UnsupportedOperationException when checking the StreamJobGraph — 
"Checkpointing is currently not supported for operators that implement 
InputSelectable".

 

The use case is as follows:

We have a two-input operator A. The first input of A is the model parameters, 
the second input of A is the training data. 

In this case, we have to read the model parameters first, then we read the 
training data. That is we need to finish consuming the first input before we 
consume the second.

  was:
Flink DataStream runtime does not support checkpointing for operators that 
implement the `InputSelectable` interface.

 

It throws an UnsupportedOperationException when checking the StreamJobGraph — 
"Checkpointing is currently not supported for operators that implement 
InputSelectable".


> Checkpointing is currently not supported for operators that implement 
> InputSelectable
> -
>
> Key: FLINK-25424
> URL: https://issues.apache.org/jira/browse/FLINK-25424
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing
>Reporter: Zhipeng Zhang
>Priority: Major
>
> Flink DataStream runtime does not support checkpointing for operators that 
> implement the `InputSelectable` interface.
> It throws an UnsupportedOperationException when checking the StreamJobGraph — 
> "Checkpointing is currently not supported for operators that implement 
> InputSelectable".
>  
> The use case is as follows:
> We have a two-input operator A. The first input of A is the model parameters, 
> the second input of A is the training data. 
> In this case, we have to read the model parameters first, then we read the 
> training data. That is we need to finish consuming the first input before we 
> consume the second.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18186: [FLINK-25418][python] install dependencies completely offline

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18186:
URL: https://github.com/apache/flink/pull/18186#issuecomment-184952


   
   ## CI report:
   
   * 83454a55d776ac06f945c9c5e71bd5655e534d2a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28516)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18179: [FLINK-8518][Table SQL / API] Add support for extract Epoch

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18179:
URL: https://github.com/apache/flink/pull/18179#issuecomment-999539512


   
   ## CI report:
   
   * 180adc2e9c1adb543eb7406ea3b511570d0b5c67 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28481)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18179: [FLINK-8518][Table SQL / API] Add support for extract Epoch

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18179:
URL: https://github.com/apache/flink/pull/18179#issuecomment-999539512


   
   ## CI report:
   
   * 180adc2e9c1adb543eb7406ea3b511570d0b5c67 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28481)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18186: [FLINK-25418][python] install dependencies completely offline

2021-12-22 Thread GitBox


flinkbot commented on pull request #18186:
URL: https://github.com/apache/flink/pull/18186#issuecomment-184952


   
   ## CI report:
   
   * 83454a55d776ac06f945c9c5e71bd5655e534d2a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18186: [FLINK-25418][python] install dependencies completely offline

2021-12-22 Thread GitBox


flinkbot commented on pull request #18186:
URL: https://github.com/apache/flink/pull/18186#issuecomment-184421


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 83454a55d776ac06f945c9c5e71bd5655e534d2a (Thu Dec 23 
06:50:06 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25409) Add cache metric to LookupFunction

2021-12-22 Thread Jing Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464326#comment-17464326
 ] 

Jing Zhang commented on FLINK-25409:


[~straw] +1 on your proposal to introduce a common abstract class for those that 
have cache implementations.

> Add cache metric to LookupFunction
> --
>
> Key: FLINK-25409
> URL: https://issues.apache.org/jira/browse/FLINK-25409
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Ecosystem
>Reporter: Yuan Zhu
>Priority: Major
>
> Since we frequently encounter performance problems with lookup joins in 
> production environments, adding metrics to monitor the lookup function cache is 
> very helpful for troubleshooting.
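
A hedged sketch (not the proposal from this ticket) of how a lookup TableFunction with an internal cache might expose hit/miss counters through the function's metric group; the class, metric, and helper names are illustrative.

{code:java}
import org.apache.flink.metrics.Counter;
import org.apache.flink.table.functions.FunctionContext;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

import java.util.HashMap;
import java.util.Map;

public class CachedLookupFunctionSketch extends TableFunction<Row> {

    private transient Counter cacheHitCounter;
    private transient Counter cacheMissCounter;
    private transient Map<String, Row> cache;

    @Override
    public void open(FunctionContext context) {
        cacheHitCounter = context.getMetricGroup().counter("lookupCacheHitCount");
        cacheMissCounter = context.getMetricGroup().counter("lookupCacheMissCount");
        cache = new HashMap<>();
    }

    public void eval(String key) {
        Row cached = cache.get(key);
        if (cached != null) {
            cacheHitCounter.inc();
            collect(cached);
            return;
        }
        cacheMissCounter.inc();
        Row loaded = lookupFromExternalSystem(key); // placeholder for the real external lookup
        cache.put(key, loaded);
        collect(loaded);
    }

    private Row lookupFromExternalSystem(String key) {
        return Row.of(key); // illustrative stub
    }
}
{code}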



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] snuyanzin commented on pull request #18179: [FLINK-8518][Table SQL / API] Add support for extract Epoch

2021-12-22 Thread GitBox


snuyanzin commented on pull request #18179:
URL: https://github.com/apache/flink/pull/18179#issuecomment-183709


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25418) The dir_cache is specified in the flink task. When there is no network, you will still download the python third-party library

2021-12-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25418:
---
Labels: pull-request-available  (was: )

> The dir_cache is specified in the flink task. When there is no network, you 
> will still download the python third-party library
> --
>
> Key: FLINK-25418
> URL: https://issues.apache.org/jira/browse/FLINK-25418
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.13.0, 1.14.0
> Environment: python3.7
> flink1.13.1
>Reporter: yangcai
>Assignee: Ada Wong
>Priority: Major
>  Labels: pull-request-available
>
> set_python_requirements(requirements_cache_dir=dir_cache) is specified in the 
> Python code. During task execution, priority is given to downloading Python 
> third-party packages from the network. When I specify the cache directory, can 
> I use the Python packages from that cache directly, instead of having the task 
> download them from the network?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] deadwind4 opened a new pull request #18186: [FLINK-25418][python] install dependencies completely offline

2021-12-22 Thread GitBox


deadwind4 opened a new pull request #18186:
URL: https://github.com/apache/flink/pull/18186


   ## What is the purpose of the change
   
   When we invoke set_python_requirements(requirements_cache_dir=dir_cache) to 
set the cache directory, the dependencies are still downloaded from the network 
first. We should install the dependencies completely offline.
   
   ## Brief change log
   
 - *add the '--no-index' option in 
PythonEnvironmentManagerUtils#pipInstallRequirements method*
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23687) Introduce partitioned lookup join to enforce input of LookupJoin to hash shuffle by lookup keys

2021-12-22 Thread Jing Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464325#comment-17464325
 ] 

Jing Zhang commented on FLINK-23687:


[~lincoln.86xy] and [~lzljs3620320] Thanks a lot for your advice. I will 
create a FLIP for those improvements ASAP.

> Introduce partitioned lookup join to enforce input of LookupJoin to hash 
> shuffle by lookup keys
> ---
>
> Key: FLINK-23687
> URL: https://issues.apache.org/jira/browse/FLINK-23687
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Major
>  Labels: pull-request-available
>
> Add a SQL query hint to enable LookupJoin to shuffle by the join key of the left input



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-23944) PulsarSourceITCase.testTaskManagerFailure is instable

2021-12-22 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464323#comment-17464323
 ] 

Huang Xingbo commented on FLINK-23944:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28452&view=logs&j=298e20ef-7951-5965-0e79-ea664ddc435e&t=d4c90338-c843-57b0-3232-10ae74f00347

> PulsarSourceITCase.testTaskManagerFailure is instable
> -
>
> Key: FLINK-23944
> URL: https://issues.apache.org/jira/browse/FLINK-23944
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.14.0
>Reporter: Dian Fu
>Assignee: Yufan Sheng
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0, 1.14.3
>
>
> [https://dev.azure.com/dianfu/Flink/_build/results?buildId=430&view=logs&j=f3dc9b18-b77a-55c1-591e-264c46fe44d1&t=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d]
> It's from my personal azure pipeline, however, I'm pretty sure that I have 
> not touched any code related to this. 
> {code:java}
> Aug 24 10:44:13 [ERROR] testTaskManagerFailure{TestEnvironment, 
> ExternalContext, ClusterControllable}[1] Time elapsed: 258.397 s <<< FAILURE! 
> Aug 24 10:44:13 java.lang.AssertionError: Aug 24 10:44:13 Aug 24 10:44:13 
> Expected: Records consumed by Flink should be identical to test data and 
> preserve the order in split Aug 24 10:44:13 but: Mismatched record at 
> position 7: Expected '0W6SzacX7MNL4xLL3BZ8C3ljho4iCydbvxIl' but was 
> 'wVi5JaJpNvgkDEOBRC775qHgw0LyRW2HBxwLmfONeEmr' Aug 24 10:44:13 at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) Aug 24 10:44:13 
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8) Aug 24 
> 10:44:13 at 
> org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testTaskManagerFailure(SourceTestSuiteBase.java:271)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18150: [hotfix][docs] Fix incorrect dependency and markdown format

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18150:
URL: https://github.com/apache/flink/pull/18150#issuecomment-997636068


   
   ## CI report:
   
   * 6f3c5bb56f88431f4b62fe443b2e1506e4b86a43 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28488)
 
   * 44d3cf49dbf308fbd1fba510542a9be8c994417d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28509)
 
   * 0a637dcc5b6d1fc7f127124746bfe252cb946b0a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28515)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18150: [hotfix][docs] Fix incorrect dependency and markdown format

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18150:
URL: https://github.com/apache/flink/pull/18150#issuecomment-997636068


   
   ## CI report:
   
   * 6f3c5bb56f88431f4b62fe443b2e1506e4b86a43 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28488)
 
   * 44d3cf49dbf308fbd1fba510542a9be8c994417d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28509)
 
   * 0a637dcc5b6d1fc7f127124746bfe252cb946b0a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18173: [FLINK-25407][network] Fix the deadlock issue caused by LocalBufferPool#reserveSegments

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18173:
URL: https://github.com/apache/flink/pull/18173#issuecomment-999423914


   
   ## CI report:
   
   * e6ce82f112650272bd718f24a8815f5a2db40f9b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28471)
 
   * 8fc8c93037dc8d6049456c62c95b217e76df4c2f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28514)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25422) Azure pipelines are failing due to Python tests unable to install dependencies

2021-12-22 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-25422:
-
Component/s: API / Python

> Azure pipelines are failing due to Python tests unable to install dependencies
> --
>
> Key: FLINK-25422
> URL: https://issues.apache.org/jira/browse/FLINK-25422
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.12.8, 1.13.6, 1.14.3
>Reporter: Martijn Visser
>Assignee: Huang Xingbo
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.8, 1.13.6, 1.14.3
>
>
> {code:java}
> Dec 22 16:10:02 Command "/__w/1/s/flink-python/.tox/py38/bin/python -u -c 
> "import setuptools, 
> tokenize;__file__='/tmp/pip-install-zwy7_7or/numpy/setup.py';f=getattr(tokenize,
>  'open', open)(__file__);code=f.read().replace('\r\n', 
> '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record 
> /tmp/pip-record-emfnsngu/install-record.txt 
> --single-version-externally-managed --compile --install-headers 
> /__w/1/s/flink-python/.tox/py38/include/site/python3.8/numpy" failed with 
> error code 1 in /tmp/pip-install-zwy7_7or/numpy/
> Dec 22 16:10:02 You are using pip version 10.0.1, however version 21.3.1 is 
> available.
> Dec 22 16:10:02 You should consider upgrading via the 'pip install --upgrade 
> pip' command.
> Dec 22 16:10:02 
> Dec 22 16:10:02 === log end 
> 
> Dec 22 16:10:02 ERROR: could not install deps [pytest, apache-beam==2.27.0, 
> cython==0.29.16, grpcio>=1.17.0,<=1.26.0, grpcio-tools>=1.3.5,<=1.14.2, 
> apache-flink-libraries]; v = 
> InvocationError("/__w/1/s/flink-python/dev/install_command.sh pytest 
> apache-beam==2.27.0 cython==0.29.16 'grpcio>=1.17.0,<=1.26.0' 
> 'grpcio-tools>=1.3.5,<=1.14.2' apache-flink-libraries", 1)
> Dec 22 16:10:02 ___ summary 
> 
> Dec 22 16:10:02 ERROR:   py38: could not install deps [pytest, 
> apache-beam==2.27.0, cython==0.29.16, grpcio>=1.17.0,<=1.26.0, 
> grpcio-tools>=1.3.5,<=1.14.2, apache-flink-libraries]; v = 
> InvocationError("/__w/1/s/flink-python/dev/install_command.sh pytest 
> apache-beam==2.27.0 cython==0.29.16 'grpcio>=1.17.0,<=1.26.0' 
> 'grpcio-tools>=1.3.5,<=1.14.2' apache-flink-libraries", 1)
> Dec 22 16:10:02 tox checks... [FAILED]
> Dec 22 16:10:02 Process exited with EXIT CODE: 1.
> Dec 22 16:10:02 Trying to KILL watchdog (1195).
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28443&view=logs&j=161eb7af-7e37-5bda-031e-1dd139988f4b&t=1dd6a048-0e04-5036-8cea-768313805a09
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28444&view=logs&j=161eb7af-7e37-5bda-031e-1dd139988f4b&t=e489b367-f966-5d50-f73c-2caaa8549a1f
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28454&view=logs&j=dd7e7115-b4b1-5414-20ec-97b9411e0cfc&t=c759a57f-2774-59e9-f882-8e4d5d3fbb9f



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (FLINK-25422) Azure pipelines are failing due to Python tests unable to install dependencies

2021-12-22 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo resolved FLINK-25422.
--
Fix Version/s: 1.12.8
   1.13.6
   1.14.3
   Resolution: Fixed

Merged into master via 8a3d033bdf12b9894c81aa3073f84c238d8a8f87
Merged into release-1.14 via ae4856a3b4fad75cf58dfdb070add040f3e5eeb5
Merged into release-1.13 via 365715ddcf9214e71b5c6f52c1b73793c6baa443
Merged into release-1.12 via 97513b247f98f559a5028d2b22bd43f7ca25f853

> Azure pipelines are failing due to Python tests unable to install dependencies
> --
>
> Key: FLINK-25422
> URL: https://issues.apache.org/jira/browse/FLINK-25422
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.8, 1.13.6, 1.14.3
>Reporter: Martijn Visser
>Assignee: Huang Xingbo
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.8, 1.13.6, 1.14.3
>
>
> {code:java}
> Dec 22 16:10:02 Command "/__w/1/s/flink-python/.tox/py38/bin/python -u -c 
> "import setuptools, 
> tokenize;__file__='/tmp/pip-install-zwy7_7or/numpy/setup.py';f=getattr(tokenize,
>  'open', open)(__file__);code=f.read().replace('\r\n', 
> '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record 
> /tmp/pip-record-emfnsngu/install-record.txt 
> --single-version-externally-managed --compile --install-headers 
> /__w/1/s/flink-python/.tox/py38/include/site/python3.8/numpy" failed with 
> error code 1 in /tmp/pip-install-zwy7_7or/numpy/
> Dec 22 16:10:02 You are using pip version 10.0.1, however version 21.3.1 is 
> available.
> Dec 22 16:10:02 You should consider upgrading via the 'pip install --upgrade 
> pip' command.
> Dec 22 16:10:02 
> Dec 22 16:10:02 === log end 
> 
> Dec 22 16:10:02 ERROR: could not install deps [pytest, apache-beam==2.27.0, 
> cython==0.29.16, grpcio>=1.17.0,<=1.26.0, grpcio-tools>=1.3.5,<=1.14.2, 
> apache-flink-libraries]; v = 
> InvocationError("/__w/1/s/flink-python/dev/install_command.sh pytest 
> apache-beam==2.27.0 cython==0.29.16 'grpcio>=1.17.0,<=1.26.0' 
> 'grpcio-tools>=1.3.5,<=1.14.2' apache-flink-libraries", 1)
> Dec 22 16:10:02 ___ summary 
> 
> Dec 22 16:10:02 ERROR:   py38: could not install deps [pytest, 
> apache-beam==2.27.0, cython==0.29.16, grpcio>=1.17.0,<=1.26.0, 
> grpcio-tools>=1.3.5,<=1.14.2, apache-flink-libraries]; v = 
> InvocationError("/__w/1/s/flink-python/dev/install_command.sh pytest 
> apache-beam==2.27.0 cython==0.29.16 'grpcio>=1.17.0,<=1.26.0' 
> 'grpcio-tools>=1.3.5,<=1.14.2' apache-flink-libraries", 1)
> Dec 22 16:10:02 tox checks... [FAILED]
> Dec 22 16:10:02 Process exited with EXIT CODE: 1.
> Dec 22 16:10:02 Trying to KILL watchdog (1195).
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28443&view=logs&j=161eb7af-7e37-5bda-031e-1dd139988f4b&t=1dd6a048-0e04-5036-8cea-768313805a09
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28444&view=logs&j=161eb7af-7e37-5bda-031e-1dd139988f4b&t=e489b367-f966-5d50-f73c-2caaa8549a1f
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28454&view=logs&j=dd7e7115-b4b1-5414-20ec-97b9411e0cfc&t=c759a57f-2774-59e9-f882-8e4d5d3fbb9f



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25273) Some doubts about the FLINK-22848

2021-12-22 Thread Jianhui Dong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464320#comment-17464320
 ] 

Jianhui Dong commented on FLINK-25273:
--

Hi, I sent the email a week ago, but there has been no response yet.

> Some doubts about the FLINK-22848
> -
>
> Key: FLINK-25273
> URL: https://issues.apache.org/jira/browse/FLINK-25273
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Jianhui Dong
>Priority: Major
>
> I have been working with Flink and Calcite for a while, and I have some 
> questions about this issue: https://issues.apache.org/jira/browse/FLINK-22848.
> First of all, the discussion about that issue mentioned that regular 
> expressions were used because Calcite did not support parsing SET a=b (without 
> quotes). However, I checked the commit history a few days ago and found that 
> Calcite has already supported SET syntax parsing (see SqlSetOption) since v1.14 
> or even earlier. Its problem is that it recognizes the `true/false/unknown/null` 
> tokens as keywords, which makes the parsing worse than expected, but this can be 
> solved by restricting the syntax, e.g. requiring '' as in FLINK-22848.
> Then I investigated the earliest Flink version that introduced Calcite: Flink 
> introduced Calcite 1.16 in 1.5 at the earliest. At that time, Calcite should 
> already have supported the SET a=b syntax (without quotes), so I would like to 
> find out what exactly the "not supported" mentioned in FLINK-22848 means. Maybe 
> you can give a more specific case.
> In addition, I also have some doubts about the outcome of the discussion on that 
> issue. Using the SQL parser for the analysis is indeed the more elegant 
> solution, but when Calcite has built-in support for the SET grammar, why do we 
> need to extend the SET grammar to re-support it? This change may even be 
> backward-incompatible.
> In my personal opinion, we can solve this problem by adding special restrictions 
> on the above tokens on top of Calcite's native parsing, for example documenting 
> them in the user manual, because values such as `unknown` and `null` are 
> meaningless in a production environment.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18173: [FLINK-25407][network] Fix the deadlock issue caused by LocalBufferPool#reserveSegments

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18173:
URL: https://github.com/apache/flink/pull/18173#issuecomment-999423914


   
   ## CI report:
   
   * e6ce82f112650272bd718f24a8815f5a2db40f9b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28471)
 
   * 8fc8c93037dc8d6049456c62c95b217e76df4c2f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18150: [hotfix][docs] Fix incorrect dependency and markdown format

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18150:
URL: https://github.com/apache/flink/pull/18150#issuecomment-997636068


   
   ## CI report:
   
   * 6f3c5bb56f88431f4b62fe443b2e1506e4b86a43 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28488)
 
   * 44d3cf49dbf308fbd1fba510542a9be8c994417d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28509)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18150: [hotfix][docs] Fix incorrect dependency and markdown format

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18150:
URL: https://github.com/apache/flink/pull/18150#issuecomment-997636068


   
   ## CI report:
   
   * 6f3c5bb56f88431f4b62fe443b2e1506e4b86a43 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28488)
 
   * 44d3cf49dbf308fbd1fba510542a9be8c994417d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28509)
 
   * 0a637dcc5b6d1fc7f127124746bfe252cb946b0a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wsry commented on a change in pull request #18173: [FLINK-25407][network] Fix the deadlock issue caused by LocalBufferPool#reserveSegments

2021-12-22 Thread GitBox


wsry commented on a change in pull request #18173:
URL: https://github.com/apache/flink/pull/18173#discussion_r774349921



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/NetworkBufferPool.java
##
@@ -158,7 +159,8 @@ public MemorySegment requestMemorySegment() {
 
 public List<MemorySegment> requestMemorySegmentsBlocking(int numberOfSegmentsToRequest)
 throws IOException {
-return internalRequestMemorySegments(numberOfSegmentsToRequest);
+return internalRequestMemorySegments(
+numberOfSegmentsToRequest, this::internalRecycleMemorySegments);

Review comment:
   Changing the name of ```requestMemorySegments``` would also require changing the 
corresponding name in the MemorySegmentProvider interface. Another option may be to 
change the method names used by ```LocalBufferPool``` to something like 
```pollSegment``` or ```pollSegmentsBlocking```.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] HuangXingBo closed pull request #18185: [FLINK-25422][python] Specify requirements in dev-requirements.txt

2021-12-22 Thread GitBox


HuangXingBo closed pull request #18185:
URL: https://github.com/apache/flink/pull/18185


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18148: [FLINK-25372] Add thread dump feature for jobmanager

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18148:
URL: https://github.com/apache/flink/pull/18148#issuecomment-997327937


   
   ## CI report:
   
   * e3fb7d75eb3329c5e10dd68b148d7f137efcc08d Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28503)
 
   * bbd01131c7f4a96245fc030b135d2b0033211331 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28504)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-25424) Checkpointing is currently not supported for operators that implement InputSelectable

2021-12-22 Thread Zhipeng Zhang (Jira)
Zhipeng Zhang created FLINK-25424:
-

 Summary: Checkpointing is currently not supported for operators 
that implement InputSelectable
 Key: FLINK-25424
 URL: https://issues.apache.org/jira/browse/FLINK-25424
 Project: Flink
  Issue Type: New Feature
  Components: Runtime / Checkpointing
Reporter: Zhipeng Zhang


Flink DataStream runtime does not support checkpointing for operators that 
implement the `InputSelectable` interface.

 

It throws an UnsupportedOperationException when checking the StreamJobGraph: 
"Checkpointing is currently not supported for operators that implement 
InputSelectable".



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-25337) Check whether the target table is valid when SqlToOperationConverter.convertSqlInsert

2021-12-22 Thread vim-wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vim-wang closed FLINK-25337.

Resolution: Not A Bug

> Check whether the target table is valid when 
> SqlToOperationConverter.convertSqlInsert
> -
>
> Key: FLINK-25337
> URL: https://issues.apache.org/jira/browse/FLINK-25337
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.14.0
>Reporter: vim-wang
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> When I execute an insert SQL statement like "insert into t1 select ...", 
> if t1 is not defined, the SQL will not throw an exception after 
> SqlToOperationConverter.convertSqlInsert(). I think this is unreasonable; why 
> not use catalogManager to check whether the target table is valid?
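
For illustration, a minimal sketch of the scenario (placeholder table and connector names; this assumes the Table API's TableEnvironment.executeSql, and per the discussion the missing target table is reported later rather than during SqlToOperationConverter.convertSqlInsert):

{code:java}
// Sketch only: placeholder names; validation timing follows the discussion above.
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class MissingInsertTargetSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.executeSql("CREATE TABLE src (id INT, name STRING) WITH ('connector' = 'datagen')");
        // t1 was never created; the error is not raised while converting the INSERT itself,
        // but surfaces later during planning/execution.
        tEnv.executeSql("INSERT INTO t1 SELECT id, name FROM src");
    }
}
{code}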



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25337) Check whether the target table is valid when SqlToOperationConverter.convertSqlInsert

2021-12-22 Thread vim-wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464311#comment-17464311
 ] 

vim-wang commented on FLINK-25337:
--

[~wenlong.lwl], ok, thanks.

> Check whether the target table is valid when 
> SqlToOperationConverter.convertSqlInsert
> -
>
> Key: FLINK-25337
> URL: https://issues.apache.org/jira/browse/FLINK-25337
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.14.0
>Reporter: vim-wang
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> When I execute an insert SQL statement like "insert into t1 select ...", 
> if t1 is not defined, the SQL will not throw an exception after 
> SqlToOperationConverter.convertSqlInsert(). I think this is unreasonable; why 
> not use catalogManager to check whether the target table is valid?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25026) UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint fails on AZP

2021-12-22 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464309#comment-17464309
 ] 

Huang Xingbo commented on FLINK-25026:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28505&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba


> UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint fails on AZP
> --
>
> Key: FLINK-25026
> URL: https://issues.apache.org/jira/browse/FLINK-25026
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.0
>Reporter: Till Rohrmann
>Priority: Major
>  Labels: test-stability
> Fix For: 1.14.3
>
>
> {{UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint}} fails 
> on AZP with
> {code}
> 2021-11-23T00:58:03.8286352Z Nov 23 00:58:03 [ERROR] Tests run: 72, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 716.362 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase
> 2021-11-23T00:58:03.8288790Z Nov 23 00:58:03 [ERROR] 
> shouldRescaleUnalignedCheckpoint[downscale union from 3 to 2, 
> buffersPerChannel = 1]  Time elapsed: 4.051 s  <<< ERROR!
> 2021-11-23T00:58:03.8289953Z Nov 23 00:58:03 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2021-11-23T00:58:03.8291473Z Nov 23 00:58:03  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2021-11-23T00:58:03.8292776Z Nov 23 00:58:03  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointTestBase.execute(UnalignedCheckpointTestBase.java:168)
> 2021-11-23T00:58:03.8294520Z Nov 23 00:58:03  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint(UnalignedCheckpointRescaleITCase.java:534)
> 2021-11-23T00:58:03.8295909Z Nov 23 00:58:03  at 
> jdk.internal.reflect.GeneratedMethodAccessor123.invoke(Unknown Source)
> 2021-11-23T00:58:03.8297310Z Nov 23 00:58:03  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-11-23T00:58:03.8298922Z Nov 23 00:58:03  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2021-11-23T00:58:03.8300298Z Nov 23 00:58:03  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2021-11-23T00:58:03.8301741Z Nov 23 00:58:03  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-11-23T00:58:03.8303233Z Nov 23 00:58:03  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2021-11-23T00:58:03.8304514Z Nov 23 00:58:03  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-11-23T00:58:03.8305736Z Nov 23 00:58:03  at 
> org.junit.rules.Verifier$1.evaluate(Verifier.java:35)
> 2021-11-23T00:58:03.8306856Z Nov 23 00:58:03  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2021-11-23T00:58:03.8308218Z Nov 23 00:58:03  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2021-11-23T00:58:03.8309532Z Nov 23 00:58:03  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-11-23T00:58:03.8310780Z Nov 23 00:58:03  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2021-11-23T00:58:03.8312026Z Nov 23 00:58:03  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2021-11-23T00:58:03.8313515Z Nov 23 00:58:03  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2021-11-23T00:58:03.8314842Z Nov 23 00:58:03  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2021-11-23T00:58:03.8316116Z Nov 23 00:58:03  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2021-11-23T00:58:03.8317538Z Nov 23 00:58:03  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2021-11-23T00:58:03.8320044Z Nov 23 00:58:03  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2021-11-23T00:58:03.8321044Z Nov 23 00:58:03  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2021-11-23T00:58:03.8321978Z Nov 23 00:58:03  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2021-11-23T00:58:03.8322915Z Nov 23 00:58:03  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2021-11-23T00:58:03.8323848Z Nov 23 00:58:03  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2021-11-23T00:58:03.8325330Z Nov 23 00:58:03  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2021-11-23

[GitHub] [flink] flinkbot edited a comment on pull request #17953: [FLINK-24481][docs] Translate buffer debloat documenation to chinese

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #17953:
URL: https://github.com/apache/flink/pull/17953#issuecomment-982382670


   
   ## CI report:
   
   * f6ec665d9637d03fa33c19689ec11674e4f6094b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28470)
 
   * eef9717a5d530c8e0b7d3b43705ca18c9ec5b12d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28510)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wsry commented on pull request #18173: [FLINK-25407][network] Fix the deadlock issue caused by LocalBufferPool#reserveSegments

2021-12-22 Thread GitBox


wsry commented on pull request #18173:
URL: https://github.com/apache/flink/pull/18173#issuecomment-166327


   > I wonder if we should actually clean up the lock acquisition in the local 
and global buffer pools? For example enforce a strict contract, that if the 
global lock has to be acquired, it has to be acquired always before taking any 
local buffer pool lock?
   
   I think it is a good idea. Maybe we can pass the local pool instance or some 
abstract interface to ```NetworkBufferPool``` when calling any 
```NetworkBufferPool``` method, and then check whether the corresponding lock is 
already held before acquiring the ```NetworkBufferPool``` lock, though maybe in a 
different PR. WDYT, any suggestions?
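
   For illustration only (hypothetical names, not Flink code), a tiny sketch of the contract being discussed: whenever both locks are needed, the global pool lock is always taken before any local pool lock, which rules out the lock-order inversion behind the reported deadlock.

   ```java
   // Toy sketch of a fixed lock-acquisition order; class and method names are made up.
   public class LockOrderSketch {
       private final Object globalPoolLock = new Object();
       private final Object localPoolLock = new Object();

       /** Any path that needs both locks takes the global lock first, then the local one. */
       void moveSegmentsBetweenPools() {
           synchronized (globalPoolLock) {
               synchronized (localPoolLock) {
                   // transfer buffers between the global and the local pool
               }
           }
       }

       /** Purely local operations may take only the local lock, never global after local. */
       void localOnlyOperation() {
           synchronized (localPoolLock) {
               // update local pool bookkeeping
           }
       }
   }
   ```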


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wsry commented on a change in pull request #18173: [FLINK-25407][network] Fix the deadlock issue caused by LocalBufferPool#reserveSegments

2021-12-22 Thread GitBox


wsry commented on a change in pull request #18173:
URL: https://github.com/apache/flink/pull/18173#discussion_r774335993



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/NetworkBufferPool.java
##
@@ -158,7 +159,8 @@ public MemorySegment requestMemorySegment() {
 
 public List<MemorySegment> requestMemorySegmentsBlocking(int numberOfSegmentsToRequest)
 throws IOException {
-return internalRequestMemorySegments(numberOfSegmentsToRequest);
+return internalRequestMemorySegments(
+numberOfSegmentsToRequest, this::internalRecycleMemorySegments);

Review comment:
   I agree that the method name is misleading. Currently, the 
```requestMemorySegments``` method is only used by InputChannel for exclusive 
buffer requesting, and it adds the number of required buffers to the total 
required buffers. If the total required buffers is larger than the total number 
of buffers, the "Insufficient number of network buffers" exception will be 
thrown. Because the logical number of available buffers 
(totalNumberOfMemorySegments - numTotalRequiredBuffers) changes, buffer 
redistribution is needed. Different from the ```requestMemorySegments``` 
method, the ```requestMemorySegment``` and ```requestMemorySegmentsBlocking``` 
methods are only used by ```LocalBufferPool```. Because ```LocalBufferPool``` 
has already reserved the required buffers logically when it was created, the 
logical number of available buffers does not change when requesting buffers, so 
buffer redistribution is not needed. What do you think if I change the method 
name of ```requestMemorySegments``` to ```requestExclusiveSegments```? Any suggestions?
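
   To make the difference concrete, here is a toy model (hypothetical names and deliberately simplified logic, not the actual ```NetworkBufferPool```): the exclusive path adjusts the global required-buffer accounting and can fail with the insufficiency error, while the local-pool path only draws from a quota that was reserved when the pool was created.

   ```java
   import java.util.ArrayDeque;
   import java.util.ArrayList;
   import java.util.List;

   /** Toy model only; names and logic are simplified and are not Flink's implementation. */
   class ToyGlobalBufferPool {
       private final ArrayDeque<Object> availableSegments = new ArrayDeque<>();
       private final int totalNumberOfSegments;
       private int numTotalRequiredBuffers;

       ToyGlobalBufferPool(int totalNumberOfSegments) {
           this.totalNumberOfSegments = totalNumberOfSegments;
           for (int i = 0; i < totalNumberOfSegments; i++) {
               availableSegments.add(new Object()); // stand-in for a MemorySegment
           }
       }

       /** Exclusive-buffer path (InputChannel): changes the required-buffer accounting. */
       synchronized List<Object> requestExclusiveSegments(int n) {
           if (numTotalRequiredBuffers + n > totalNumberOfSegments) {
               throw new IllegalStateException("Insufficient number of network buffers");
           }
           numTotalRequiredBuffers += n; // the real pool would also redistribute buffers here
           return take(n);
       }

       /** Local-pool path: the quota was reserved at pool creation, so no re-accounting. */
       synchronized List<Object> pollSegmentsForLocalPool(int n) {
           return take(n);
       }

       private List<Object> take(int n) {
           List<Object> result = new ArrayList<>();
           for (int i = 0; i < n && !availableSegments.isEmpty(); i++) {
               result.add(availableSegments.poll());
           }
           return result;
       }
   }
   ```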




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17953: [FLINK-24481][docs] Translate buffer debloat documenation to chinese

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #17953:
URL: https://github.com/apache/flink/pull/17953#issuecomment-982382670


   
   ## CI report:
   
   * f6ec665d9637d03fa33c19689ec11674e4f6094b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28470)
 
   * eef9717a5d530c8e0b7d3b43705ca18c9ec5b12d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18175: [hotfix][doc] Remove duplicate dot in generating_watermarks.md

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18175:
URL: https://github.com/apache/flink/pull/18175#issuecomment-999463191


   
   ## CI report:
   
   * 7aca07ab60e3225168a172dace0c08593ea79c90 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28507)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] WTZ468071157 commented on pull request #18183: [hotfix][doc] There is a link redirection problem in the Chinese document of Checkpointing

2021-12-22 Thread GitBox


WTZ468071157 commented on pull request #18183:
URL: https://github.com/apache/flink/pull/18183#issuecomment-158911


   @flinkbot  run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25360) Add State Desc to CheckpointMetadata

2021-12-22 Thread Yue Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464302#comment-17464302
 ] 

Yue Ma commented on FLINK-25360:



> Could we write meta state info in savepoint meta?

[~yunta] [~liufangqi] I'm really glad to see the community having this kind of 
discussion. Actually, we also did the same thing (adding meta information such as 
the state descriptor to the savepoint) in the internal Flink version at ByteDance. 
The reason we did this in the first place was that users currently need to 
re-register the state using the state-processor-api to query it, which we think is 
too difficult for users. So we did some simple work to make it easier for users to 
query state (such as adding state meta to the savepoint, or using Flink batch SQL 
for state queries), so I personally agree with this idea.

> Add State Desc to CheckpointMetadata
> 
>
> Key: FLINK-25360
> URL: https://issues.apache.org/jira/browse/FLINK-25360
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: 刘方奇
>Priority: Major
> Attachments: image-2021-12-17-20-01-42-423.png
>
>
> Now we can't get the State Descriptor info in the checkpoint meta. Like the 
> case if we use state-processor-api to load state then rewrite state, we can't 
> flexible use the state. 
> Maybe there are other cases we need the State Descriptor, so can we add this 
> info?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] aihai edited a comment on pull request #18150: [hotfix][docs] Fix incorrect dependency and markdown format

2021-12-22 Thread GitBox


aihai edited a comment on pull request #18150:
URL: https://github.com/apache/flink/pull/18150#issuecomment-129965


   Thanks very much for your advice.
   The commits fix different typos in different docs and were committed at different 
times, so there are two commits in this PR.
   
   I will change it as you advised.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18150: [hotfix][docs] Fix incorrect dependency and markdown format

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18150:
URL: https://github.com/apache/flink/pull/18150#issuecomment-997636068


   
   ## CI report:
   
   * 6f3c5bb56f88431f4b62fe443b2e1506e4b86a43 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28488)
 
   * 44d3cf49dbf308fbd1fba510542a9be8c994417d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25293) Option to let fail if KafkaSource keeps failing to commit offset

2021-12-22 Thread rerorero (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464297#comment-17464297
 ] 

rerorero commented on FLINK-25293:
--

Thank you for taking a look.

 

> since it doesn't break the correctness of Flink job. 

I got it.

 

>  Is there any cases that the offset commit failure can only be recovered by 
> restarting the Flink job

I guess my case is an issue that is not supposed to happen, so it might be a 
bug. I use KafkaSource with Confluent Cloud for a simple streaming job, which just 
does some simple transforms and routes messages to another storage system; no 
window processing is used either. But this problem happens once or twice a 
week. I contacted the Confluent support team and found that a rolling restart of 
the brokers was happening every time I faced the issue.

According to the log, after the client is disconnected from the broker, the 
group coordinator never recovers and remains unavailable. This seems to be a 
problem that should be solved by client retries, but no luck in my case. All the 
props I specified for the Kafka consumer are the following:

 
{code:java}
    props.setProperty("bootstrap.servers", bootstrapServers)
    props.setProperty("group.id", consumerGroup)
    props.setProperty("request.timeout.ms", "2")
    props.setProperty("retry.backoff.ms", "500")
    props.setProperty("partition.discovery.interval.ms", "1")
    props.setProperty("ssl.endpoint.identification.algorithm", "https")
    props.setProperty("security.protocol", "SASL_SSL")
    props.setProperty("sasl.mechanism", "PLAIN")
    props.setProperty(
      "sasl.jaas.config",
      s"""org.apache.kafka.common.security.plain.PlainLoginModule required 
username="$username" password="$password";"""
    )
{code}
I'm not arguing whether this is a bug or not, but I'd just ask whether it makes 
sense to give developers more options when KafkaSource keeps failing to commit. 
At least I'm facing this unexpected case...
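
For completeness, a sketch of how such settings can be passed through the KafkaSource builder (the topic name and property values below are placeholders, and this assumes the Flink 1.14 KafkaSource builder API):
{code:java}
// Sketch only: placeholder topic and values; assumes the Flink 1.14 KafkaSource API.
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;

public class KafkaSourceConfigSketch {
    public static KafkaSource<String> build(String bootstrapServers, String consumerGroup) {
        return KafkaSource.<String>builder()
                .setBootstrapServers(bootstrapServers)
                .setGroupId(consumerGroup)
                .setTopics("input-topic") // placeholder topic
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .setProperty("request.timeout.ms", "20000")              // placeholder value
                .setProperty("retry.backoff.ms", "500")                  // placeholder value
                .setProperty("partition.discovery.interval.ms", "10000") // placeholder value
                .setProperty("security.protocol", "SASL_SSL")
                .setProperty("sasl.mechanism", "PLAIN")
                .build();
    }
}
{code}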

 

 

 

> Option to let fail if KafkaSource keeps failing to commit offset
> 
>
> Key: FLINK-25293
> URL: https://issues.apache.org/jira/browse/FLINK-25293
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
> Environment: Flink 1.14.0
>Reporter: rerorero
>Priority: Major
>
> Is it possible to let KafkaSource fail if it keeps failing to commit offset?
>  
> I faced an issue where KafkaSource keeps failing and never recover, while 
> it's logging like these logs:
> {code:java}
> 2021-12-08 22:18:34,155 INFO  
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator [] - 
> [Consumer clientId=dbz-cg-1, groupId=dbz-cg] Group coordinator 
> b4-pkc-x.asia-northeast1.gcp.confluent.cloud:9092 (id: 2147483643 rack: 
> null) is unavailable or invalid due to cause: null.isDisconnected: true. 
> Rediscovery will be attempted.
> 2021-12-08 22:18:34,157 WARN  
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReader [] - Failed 
> to commit consumer offsets for checkpoint 13 {code}
> This is happening not just once, but a couple of times a week (it happens 
> when the Kafka broker performs rolling restart). It can be recovered by 
> restarting the Flink Job.
> I found other people reporting the similar thing: 
> [https://lists.apache.org/thread/8l4f2yb4qwysdn1cj1wjk99tfb79kgs2]. This 
> could possibly be a problem with the Kafka client, and of course, the problem 
> should be fixed on Kafka side if so.
> However, Flink Kafka connector doesn't provide an automatic way to save this 
> situation. KafkaSource keeps retrying forever when a retriable error occurs, 
> even if it is not retriable actually: 
> [https://github.com/apache/flink/blob/afb29d92c4e76ec6a453459c3d8a08304efec549/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/KafkaSourceReader.java#L144-L148]
> Since it sends metrics of the number of times a commit fails, it could be 
> automated by monitoring it and restarting the job, but that would mean we 
> need to have a new process to be managed.
> Does it make sense to have KafkaSource have the option like, let the source 
> task fail if it keeps failing to commit an offset more than X times?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17953: [FLINK-24481][docs] Translate buffer debloat documenation to chinese

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #17953:
URL: https://github.com/apache/flink/pull/17953#issuecomment-982382670


   
   ## CI report:
   
   * f6ec665d9637d03fa33c19689ec11674e4f6094b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28470)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal commented on a change in pull request #15140: [FLINK-20628][connectors/rabbitmq2] RabbitMQ connector using new connector API

2021-12-22 Thread GitBox


RocMarshal commented on a change in pull request #15140:
URL: https://github.com/apache/flink/pull/15140#discussion_r774280620



##
File path: flink-connectors/flink-connector-rabbitmq2/README.md
##
@@ -0,0 +1,24 @@
+# License of the Rabbit MQ Connector

Review comment:
   nit: What about merging `README.md` of RMQSource and `README.md` of 
RMQSink into this `README.md` ?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-25261) Changelog not truncated on materialization

2021-12-22 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464286#comment-17464286
 ] 

Yuan Mei edited comment on FLINK-25261 at 12/23/21, 4:54 AM:
-

{quote}1. Correct (as I wrote above in the context of this ticket I mean 
in-memory data only)
2. As I wrote above in the context of this ticket I mean in-memory data only
{quote}
If truncating only means truncating the in-memory part, would this API be 
general enough for other implementations as well? (Like in-memory 
implementation, Kafka based implementation)
{quote}3. In-memory means any in-memory object not needed for checkpointing 
anymore, depending on DSTL implementation (reference to file or an in-memory 
byte array to-be -flushed); To discard already flushed but not included into 
any checkpoint changes, we have three options: a) shared/private state 
ownership and TM-side registry (FLINK-23139); b) TM-side registry only for 
changelog; c) rely on FLINK-24852 (which will likely be needed by FLINK-25395). 
I propose to postpone this decision until we decide on FLINK-25395 and state 
ownership. Note that this only happens if pre-emptive upload is enabled 
(otherwise, state changes are always associated with some checkpoint)
{quote}
I am fine to postpone this decision.
{quote}4. Materialization is independent, but handling its result is 
"synchronized" with checkpointing by using the Task mailbox; so it's only the 
writer.truncate() method that should take ongoing checkpoints into account, and 
this is the case with the current FS writer.
{quote}
Let's consider this:

New CP
--|---|--> (In Mem Log)
Materialization up to (but not complete yet)

1). Materialization triggered

2). CP triggered (the uploading part is not in the task thread)

3). Materialization finished and put truncating action into the mailbox

4). task thread truncate the log

5). checkpoint complete

=

Also, if I am understanding correctly, we now put clean-up into three different 
places:

1). State Cleanup upon checkpoint subsumption

2). In-memory part clean-up upon materialization completes

3). DFS files cleanup (not included in any state) TBD.

Again, would this abstraction be general enough to support other 
implementations? It seems a bit fragile to me.


was (Author: ym):
{quote}1. Correct (as I wrote above in the context of this ticket I mean 
in-memory data only)
2. As I wrote above in the context of this ticket I mean in-memory data only
{quote}
If truncating only means truncating the in-memory part, would this API be 
general enough for other implementations as well? (Like in-memory 
implementation, Kafka based implementation)
{quote}3. In-memory means any in-memory object not needed for checkpointing 
anymore, depending on DSTL implementation (reference to file or an in-memory 
byte array to-be -flushed); To discard already flushed but not included into 
any checkpoint changes, we have three options: a) shared/private state 
ownership and TM-side registry (FLINK-23139); b) TM-side registry only for 
changelog; c) rely on FLINK-24852 (which will likely be needed by FLINK-25395). 
I propose to postpone this decision until we decide on FLINK-25395 and state 
ownership. Note that this only happens if pre-emptive upload is enabled 
(otherwise, state changes are always associated with some checkpoint)
{quote}
I am fine to postpone this decision.
{quote}Materialization is independent, but handling its result is 
"synchronized" with checkpointing by using Task mailbox; so it's only the 
writer.truncate() method that should take ongoing checkpoints into account; and 
this is the with the current FS writer.
{quote}
Let's consider this:

New CP
--|---|--> (In Mem Log)
Materialization up to (but not complete yet)

1). Materialization triggered

2). CP triggered (the uploading part is not in the task thread)

3). Materialization finished and put truncating action into the mailbox

4). task thread truncate the log

5). checkpoint complete

=

Also, if I am understanding correctly, now we put clean-up into three different 
places now:

1). State Cleanup upon checkpoint subsumption

2). In-memory part clean-up upon materialization completes

3). DFS files cleanup (not included in any state) TBD.

Again, would this abstraction general enough to support using other 
implmentation? It is a bit fragile to me.

> Changelog not truncated on materialization
> --
>
> Key: FLINK-25261
> URL: https://issues.apache.org/jira/browse/FLINK-25261
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-req

[jira] [Comment Edited] (FLINK-25261) Changelog not truncated on materialization

2021-12-22 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464286#comment-17464286
 ] 

Yuan Mei edited comment on FLINK-25261 at 12/23/21, 4:52 AM:
-

{quote}1. Correct (as I wrote above in the context of this ticket I mean 
in-memory data only)
2. As I wrote above in the context of this ticket I mean in-memory data only
{quote}
If truncating only means truncating the in-memory part, would this API be 
general enough for other implementations as well? (Like in-memory 
implementation, Kafka based implementation)
{quote}3. In-memory means any in-memory object not needed for checkpointing 
anymore, depending on DSTL implementation (reference to file or an in-memory 
byte array to-be -flushed); To discard already flushed but not included into 
any checkpoint changes, we have three options: a) shared/private state 
ownership and TM-side registry (FLINK-23139); b) TM-side registry only for 
changelog; c) rely on FLINK-24852 (which will likely be needed by FLINK-25395). 
I propose to postpone this decision until we decide on FLINK-25395 and state 
ownership. Note that this only happens if pre-emptive upload is enabled 
(otherwise, state changes are always associated with some checkpoint)
{quote}
I am fine to postpone this decision.
{quote}Materialization is independent, but handling its result is 
"synchronized" with checkpointing by using the Task mailbox; so it's only the 
writer.truncate() method that should take ongoing checkpoints into account, and 
this is the case with the current FS writer.
{quote}
Let's consider this:

New CP
--|---|--> (In Mem Log)
Materialization up to (but not complete yet)

1). Materialization triggered

2). CP triggered (the uploading part is not in the task thread)

3). Materialization finished and put truncating action into the mailbox

4). task thread truncate the log

5). checkpoint complete

=

Also, if I am understanding correctly, we now put clean-up into three different 
places:

1). State Cleanup upon checkpoint subsumption

2). In-memory part clean-up upon materialization completes

3). DFS files cleanup (not included in any state) TBD.

Again, would this abstraction be general enough to support other 
implementations? It seems a bit fragile to me.


was (Author: ym):
{quote}1. Correct (as I wrote above in the context of this ticket I mean 
in-memory data only)
2. As I wrote above in the context of this ticket I mean in-memory data only
{quote}
If truncating only means truncating the in-memory part, would this API be 
general enough for other implementations as well? (Like in-memory 
implementation, Kafka based implementation)
{quote}3. In-memory means any in-memory object not needed for checkpointing 
anymore, depending on DSTL implementation (reference to file or an in-memory 
byte array to-be -flushed); To discard already flushed but not included into 
any checkpoint changes, we have three options: a) shared/private state 
ownership and TM-side registry (FLINK-23139); b) TM-side registry only for 
changelog; c) rely on FLINK-24852 (which will likely be needed by FLINK-25395). 
I propose to postpone this decision until we decide on FLINK-25395 and state 
ownership. Note that this only happens if pre-emptive upload is enabled 
(otherwise, state changes are always associated with some checkpoint)
{quote}
I am fine to postpone this decision.
{quote}Materialization is independent, but handling its result is 
"synchronized" with checkpointing by using Task mailbox; so it's only the 
writer.truncate() method that should take ongoing checkpoints into account; and 
this is the with the current FS writer.
{quote}
Let's consider this:

New CP
--|-----|--> (In Mem Log)
Materialization up to (but not complete yet)

1). Materialization triggered

2). CP triggered (the uploading part is not in the task thread)

3). Materialization finished and put truncating action into the mailbox

4). task thread truncate the log

5). checkpoint complete

=

Also, if I am understanding correctly, now we put clean-up into three different 
places now:

1). State Cleanup upon checkpoint subsumption

2). In-memory part clean-up upon materialization completes

3). DFS files cleanup (not included in any state) TBD.

Again, would this abstraction general enough to support using other 
implmentation? It is a bit fragile to me.

 

 

 

 

 

 

> Changelog not truncated on materialization
> --
>
> Key: FLINK-25261
> URL: https://issues.apache.org/jira/browse/FLINK-25261
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
> 

[jira] [Commented] (FLINK-25261) Changelog not truncated on materialization

2021-12-22 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464286#comment-17464286
 ] 

Yuan Mei commented on FLINK-25261:
--

{quote}1. Correct (as I wrote above in the context of this ticket I mean 
in-memory data only)
2. As I wrote above in the context of this ticket I mean in-memory data only
{quote}
If truncating only means truncating the in-memory part, would this API be 
general enough for other implementations as well (e.g., a purely in-memory 
implementation or a Kafka-based one)?
{quote}3. In-memory means any in-memory object not needed for checkpointing 
anymore, depending on DSTL implementation (reference to file or an in-memory 
byte array to-be-flushed); To discard already flushed but not included into 
any checkpoint changes, we have three options: a) shared/private state 
ownership and TM-side registry (FLINK-23139); b) TM-side registry only for 
changelog; c) rely on FLINK-24852 (which will likely be needed by FLINK-25395). 
I propose to postpone this decision until we decide on FLINK-25395 and state 
ownership. Note that this only happens if pre-emptive upload is enabled 
(otherwise, state changes are always associated with some checkpoint)
{quote}
I am fine to postpone this decision.
{quote}Materialization is independent, but handling its result is 
"synchronized" with checkpointing by using Task mailbox; so it's only the 
writer.truncate() method that should take ongoing checkpoints into account; and 
this is the case with the current FS writer.
{quote}
Let's consider this:

              New CP
                |
---|------------|------>  (In-Mem Log)
   |
   Materialization up to (but not complete yet)

1). Materialization triggered

2). CP triggered (the uploading part is not in the task thread)

3). Materialization finished and put truncating action into the mailbox

4). task thread truncates the log

5). checkpoint complete

=

Also, if I am understanding correctly, we now put clean-up into three different 
places:

1). State Cleanup upon checkpoint subsumption

2). In-memory part clean-up upon materialization completes

3). DFS files cleanup (not included in any state) TBD.

Again, would this abstraction be general enough to support other 
implementations? It seems a bit fragile to me.


> Changelog not truncated on materialization
> --
>
> Key: FLINK-25261
> URL: https://issues.apache.org/jira/browse/FLINK-25261
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> [https://github.com/apache/flink/blob/dcc4d43e413b20f70036e73c61d52e2e1c5afee7/flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/ChangelogKeyedStateBackend.java#L640]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] sky-walking commented on pull request #18172: [hotfix][doc] Change TableEnvironment attribute reference error

2021-12-22 Thread GitBox


sky-walking commented on pull request #18172:
URL: https://github.com/apache/flink/pull/18172#issuecomment-139840


   > Please update your commit message to follow the Code Style & Quality 
Guidelines, which can be found at 
https://flink.apache.org/contributing/code-style-and-quality-preamble.html.
   
   Could you be more specific, please? I don't know what the problem is. Thank 
you very much.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] X-czh commented on pull request #18148: [FLINK-25372] Add thread dump feature for jobmanager

2021-12-22 Thread GitBox


X-czh commented on pull request #18148:
URL: https://github.com/apache/flink/pull/18148#issuecomment-136994


   @xintongsong Thanks for the advice and sorry for the inconvenience. I'll try 
restoring the commits. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24736) Non-vulnerable jar files for Apache Flink 1.14.0

2021-12-22 Thread Parag Somani (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464278#comment-17464278
 ] 

Parag Somani commented on FLINK-24736:
--

Updated the ticket for Flink 1.14.2 due to the log4j vulnerability upgrade.

> Non-vulnerable jar files for Apache Flink 1.14.0
> -
>
> Key: FLINK-24736
> URL: https://issues.apache.org/jira/browse/FLINK-24736
> Project: Flink
>  Issue Type: Bug
>Reporter: Parag Somani
>Priority: Major
>
> Hello,
> We are using Apache Flink 1.14.0 as one of the base images in our production. 
> Due to a recent upgrade, we have many container security defects. 
> I am using "flink-1.14.0-bin-scala_2.12" in our k8s env.
> Please assist with a Flink version having non-vulnerable libraries. The list of 
> vulnerable libs is as follows: 
> [7.5] [sonatype-2020-0029] [flink-runtime] [1.14.2]
> [9.1] [CVE-2019-20445] [flink-runtime] [1.14.2]
> [9.1] [CVE-2019-20444] [flink-runtime] [1.14.2]
> [7.5] [CVE-2019-16869] [flink-runtime] [1.14.2]
> [7.5] [sonatype-2020-0029] [flink-rpc-akka] [1.14.2]
> [9.1] [CVE-2019-20445] [flink-rpc-akka] [1.14.2]
> [9.1] [CVE-2019-20444] [flink-rpc-akka] [1.14.2]
> [7.5] [CVE-2019-16869] [flink-rpc-akka] [1.14.2]
> [7.5] [sonatype-2020-0029] [flink-rpc-akka-loader] [1.14.2]
> [9.1] [CVE-2019-20445] [flink-rpc-akka-loader] [1.14.2]
> [9.1] [CVE-2019-20444] [flink-rpc-akka-loader] [1.14.2]
> [7.5] [CVE-2019-16869] [flink-rpc-akka-loader] [1.14.2]
> Can you assist with this ?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24736) Non-vulnerable jar files for Apache Flink 1.14.0

2021-12-22 Thread Parag Somani (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parag Somani updated FLINK-24736:
-
Description: 
Hello,

We are using Apache Flink 1.14.0 as one of the base images in our production. 
Due to a recent upgrade, we have many container security defects. 

I am using "flink-1.14.0-bin-scala_2.12" in our k8s env.

Please assist with a Flink version having non-vulnerable libraries. The list of 
vulnerable libs is as follows: 

[7.5] [sonatype-2020-0029] [flink-runtime] [1.14.2]
[9.1] [CVE-2019-20445] [flink-runtime] [1.14.2]
[9.1] [CVE-2019-20444] [flink-runtime] [1.14.2]
[7.5] [CVE-2019-16869] [flink-runtime] [1.14.2]
[7.5] [sonatype-2020-0029] [flink-rpc-akka] [1.14.2]
[9.1] [CVE-2019-20445] [flink-rpc-akka] [1.14.2]
[9.1] [CVE-2019-20444] [flink-rpc-akka] [1.14.2]
[7.5] [CVE-2019-16869] [flink-rpc-akka] [1.14.2]
[7.5] [sonatype-2020-0029] [flink-rpc-akka-loader] [1.14.2]
[9.1] [CVE-2019-20445] [flink-rpc-akka-loader] [1.14.2]
[9.1] [CVE-2019-20444] [flink-rpc-akka-loader] [1.14.2]
[7.5] [CVE-2019-16869] [flink-rpc-akka-loader] [1.14.2]


Can you assist with this ?


  was:
Hello,

We are using Apache Flink 1.14.0 as one of the base images in our production. 
Due to a recent upgrade, we have many container security defects. 

I am using "flink-1.14.0-bin-scala_2.12" in our k8s env.

Please assist with a Flink version having non-vulnerable libraries. The list of 
vulnerable libs is as follows: 

# [7.5] [sonatype-2020-0029] [flink-rpc-akka-loader] [1.14.0]
# [7.5] [sonatype-2019-0115] [flink-rpc-akka-loader] [1.14.0]
# [9.1] [CVE-2019-20445] [flink-rpc-akka-loader] [1.14.0]
# [9.1] [CVE-2019-20444] [flink-rpc-akka-loader] [1.14.0]
# [7.5] [CVE-2019-16869] [flink-rpc-akka-loader] [1.14.0]
# [7.5] [sonatype-2019-0115] [scala-compiler] [2.12.7]
# [7.5] [sonatype-2019-0115] [jquery] [1.8.2]
# [7.5] [sonatype-2020-0029] [flink-runtime] [1.14.0]
# [7.5] [sonatype-2019-0115] [flink-runtime] [1.14.0]
# [9.1] [CVE-2019-20445] [flink-runtime] [1.14.0]
# [9.1] [CVE-2019-20444] [flink-runtime] [1.14.0]
# [7.5] [CVE-2019-16869] [flink-runtime] [1.14.0]
# [7.5] [sonatype-2020-0029] [flink-rpc-akka] [1.14.0]
# [7.5] [sonatype-2019-0115] [flink-rpc-akka] [1.14.0]
# [9.1] [CVE-2019-20445] [flink-rpc-akka] [1.14.0]
# [9.1] [CVE-2019-20444] [flink-rpc-akka] [1.14.0]
# [7.5] [CVE-2019-16869] [flink-rpc-akka] [1.14.0]


Can you assist with this ?



> Non-vulnerable jar files for Apache Flink 1.14.0
> -
>
> Key: FLINK-24736
> URL: https://issues.apache.org/jira/browse/FLINK-24736
> Project: Flink
>  Issue Type: Bug
>Reporter: Parag Somani
>Priority: Major
>
> Hello,
> We are using Apache Flink 1.14.0 as one of the base images in our production. 
> Due to a recent upgrade, we have many container security defects. 
> I am using "flink-1.14.0-bin-scala_2.12" in our k8s env.
> Please assist with a Flink version having non-vulnerable libraries. The list of 
> vulnerable libs is as follows: 
> [7.5] [sonatype-2020-0029] [flink-runtime] [1.14.2]
> [9.1] [CVE-2019-20445] [flink-runtime] [1.14.2]
> [9.1] [CVE-2019-20444] [flink-runtime] [1.14.2]
> [7.5] [CVE-2019-16869] [flink-runtime] [1.14.2]
> [7.5] [sonatype-2020-0029] [flink-rpc-akka] [1.14.2]
> [9.1] [CVE-2019-20445] [flink-rpc-akka] [1.14.2]
> [9.1] [CVE-2019-20444] [flink-rpc-akka] [1.14.2]
> [7.5] [CVE-2019-16869] [flink-rpc-akka] [1.14.2]
> [7.5] [sonatype-2020-0029] [flink-rpc-akka-loader] [1.14.2]
> [9.1] [CVE-2019-20445] [flink-rpc-akka-loader] [1.14.2]
> [9.1] [CVE-2019-20444] [flink-rpc-akka-loader] [1.14.2]
> [7.5] [CVE-2019-16869] [flink-rpc-akka-loader] [1.14.2]
> Can you assist with this ?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] aihai commented on pull request #18150: [hotfix][docs] Fix typo in connectors doc

2021-12-22 Thread GitBox


aihai commented on pull request #18150:
URL: https://github.com/apache/flink/pull/18150#issuecomment-129965


   Thanks very much for your advice.
   The commits fix different typos in different docs and were committed at 
different times, so there are two commits in this PR.
   
   I will change the git log as you advised.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24736) Non-vulnerable jar files for Apache Flink 1.14.0

2021-12-22 Thread Parag Somani (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parag Somani updated FLINK-24736:
-
Description: 
Hello,

We are using Apache Flink 1.14.0 as one of the base images in our production. 
Due to a recent upgrade, we have many container security defects. 

I am using "flink-1.14.0-bin-scala_2.12" in our k8s env.

Please assist with a Flink version having non-vulnerable libraries. The list of 
vulnerable libs is as follows: 

# [7.5] [sonatype-2020-0029] [flink-rpc-akka-loader] [1.14.0]
# [7.5] [sonatype-2019-0115] [flink-rpc-akka-loader] [1.14.0]
# [9.1] [CVE-2019-20445] [flink-rpc-akka-loader] [1.14.0]
# [9.1] [CVE-2019-20444] [flink-rpc-akka-loader] [1.14.0]
# [7.5] [CVE-2019-16869] [flink-rpc-akka-loader] [1.14.0]
# [7.5] [sonatype-2019-0115] [scala-compiler] [2.12.7]
# [7.5] [sonatype-2019-0115] [jquery] [1.8.2]
# [7.5] [sonatype-2020-0029] [flink-runtime] [1.14.0]
# [7.5] [sonatype-2019-0115] [flink-runtime] [1.14.0]
# [9.1] [CVE-2019-20445] [flink-runtime] [1.14.0]
# [9.1] [CVE-2019-20444] [flink-runtime] [1.14.0]
# [7.5] [CVE-2019-16869] [flink-runtime] [1.14.0]
# [7.5] [sonatype-2020-0029] [flink-rpc-akka] [1.14.0]
# [7.5] [sonatype-2019-0115] [flink-rpc-akka] [1.14.0]
# [9.1] [CVE-2019-20445] [flink-rpc-akka] [1.14.0]
# [9.1] [CVE-2019-20444] [flink-rpc-akka] [1.14.0]
# [7.5] [CVE-2019-16869] [flink-rpc-akka] [1.14.0]


Can you assist with this ?


  was:
Hello,

We are using Apache Flink 1.14.0 as one of the base images in our production. 
Due to a recent upgrade, we have many container security defects. 

I am using "flink-1.14.0-bin-scala_2.12" in our k8s env.

Please assist with a Flink version having non-vulnerable libraries. The list of 
vulnerable libs is as follows: 

# [7.5] [sonatype-2020-0029] [flink-rpc-akka-loader] [1.14.0]
# [7.5] [sonatype-2019-0115] [flink-rpc-akka-loader] [1.14.0]
# [9.1] [CVE-2019-20445] [flink-rpc-akka-loader] [1.14.0]
# [9.1] [CVE-2019-20444] [flink-rpc-akka-loader] [1.14.0]
# [7.5] [CVE-2019-16869] [flink-rpc-akka-loader] [1.14.0]
# [7.5] [sonatype-2019-0115] [scala-compiler] [2.12.7]
# [7.5] [sonatype-2019-0115] [jquery] [1.8.2]
# [7.5] [sonatype-2020-0029] [flink-runtime] [1.14.0]
# [7.5] [sonatype-2019-0115] [flink-runtime] [1.14.0]
# [9.1] [CVE-2019-20445] [flink-runtime] [1.14.0]
# [9.1] [CVE-2019-20444] [flink-runtime] [1.14.0]
# [7.5] [CVE-2019-16869] [flink-runtime] [1.14.0]
# [7.5] [sonatype-2020-0029] [flink-rpc-akka] [1.14.0]
# [7.5] [sonatype-2019-0115] [flink-rpc-akka] [1.14.0]
# [9.1] [CVE-2019-20445] [flink-rpc-akka] [1.14.0]
# [9.1] [CVE-2019-20444] [flink-rpc-akka] [1.14.0]
# [7.5] [CVE-2019-16869] [flink-rpc-akka] [1.14.0]
# [9.8] [CVE-2019-17571] [log4j] [1.2.17] 

Can you assist with this ?



> Non-vulnerable jar files for Apache Flink 1.14.0
> -
>
> Key: FLINK-24736
> URL: https://issues.apache.org/jira/browse/FLINK-24736
> Project: Flink
>  Issue Type: Bug
>Reporter: Parag Somani
>Priority: Major
>
> Hello,
> We are using Apache Flink 1.14.0 as one of the base images in our production. 
> Due to a recent upgrade, we have many container security defects. 
> I am using "flink-1.14.0-bin-scala_2.12" in our k8s env.
> Please assist with a Flink version having non-vulnerable libraries. The list of 
> vulnerable libs is as follows: 
> # [7.5] [sonatype-2020-0029] [flink-rpc-akka-loader] [1.14.0]
> # [7.5] [sonatype-2019-0115] [flink-rpc-akka-loader] [1.14.0]
> # [9.1] [CVE-2019-20445] [flink-rpc-akka-loader] [1.14.0]
> # [9.1] [CVE-2019-20444] [flink-rpc-akka-loader] [1.14.0]
> # [7.5] [CVE-2019-16869] [flink-rpc-akka-loader] [1.14.0]
> # [7.5] [sonatype-2019-0115] [scala-compiler] [2.12.7]
> # [7.5] [sonatype-2019-0115] [jquery] [1.8.2]
> # [7.5] [sonatype-2020-0029] [flink-runtime] [1.14.0]
> # [7.5] [sonatype-2019-0115] [flink-runtime] [1.14.0]
> # [9.1] [CVE-2019-20445] [flink-runtime] [1.14.0]
> # [9.1] [CVE-2019-20444] [flink-runtime] [1.14.0]
> # [7.5] [CVE-2019-16869] [flink-runtime] [1.14.0]
> # [7.5] [sonatype-2020-0029] [flink-rpc-akka] [1.14.0]
> # [7.5] [sonatype-2019-0115] [flink-rpc-akka] [1.14.0]
> # [9.1] [CVE-2019-20445] [flink-rpc-akka] [1.14.0]
> # [9.1] [CVE-2019-20444] [flink-rpc-akka] [1.14.0]
> # [7.5] [CVE-2019-16869] [flink-rpc-akka] [1.14.0]
> Can you assist with this ?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25423) Enable loading state backend via configuration in state processor api

2021-12-22 Thread Yun Tang (Jira)
Yun Tang created FLINK-25423:


 Summary: Enable loading state backend via configuration in state 
processor api
 Key: FLINK-25423
 URL: https://issues.apache.org/jira/browse/FLINK-25423
 Project: Flink
  Issue Type: Improvement
  Components: API / State Processor, Runtime / State Backends
Reporter: Yun Tang
 Fix For: 1.15.0


Currently, the state processor API loads a savepoint via an explicitly initialized 
state backend on the client side, similar to 
{{StreamExecutionEnvironment#setStateBackend(stateBackend)}}:

{code:java}
Savepoint.load(bEnv, "hdfs://path/", new HashMapStateBackend());
{code}

As we all know, the stream environment also supports loading the state backend via 
configuration, which gives the flexibility to load state backends, especially 
customized ones. The state processor API could benefit from a similar ability.
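
As a rough sketch of the proposed ability (not the current API), the backend could 
be resolved from configuration on the client side and then passed to the existing 
{{Savepoint.load}} call. {{StateBackendLoader}} is a Flink-internal class and its 
exact signature differs between versions, so treat this only as an illustration of 
the idea:

{code:java}
// Sketch only: resolve the backend from flink-conf.yaml (e.g. "state.backend: hashmap")
// instead of hard-coding it; StateBackendLoader is internal and may return null or
// throw checked exceptions depending on the Flink version.
Configuration configuration = GlobalConfiguration.loadConfiguration();
StateBackend stateBackend =
        StateBackendLoader.loadStateBackendFromConfig(
                configuration, Thread.currentThread().getContextClassLoader(), null);

Savepoint.load(bEnv, "hdfs://path/", stateBackend);
{code}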



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17937: [FLINK-25044][testing][Pulsar Connector] Add More Unit Test For Pulsar Source

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #17937:
URL: https://github.com/apache/flink/pull/17937#issuecomment-981287170


   
   ## CI report:
   
   * c1728e5765b33b6ba1140a2f313687eb3bbbaf5f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28460)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18185: [FLINK-25422][python] Specify requirements in dev-requirements.txt

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18185:
URL: https://github.com/apache/flink/pull/18185#issuecomment-95564


   
   ## CI report:
   
   * b367abdaa24ab8d7e51bf98e3eb8d39c08fed00a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28505)
 
   * f57ac3a67f9dee469816fe1d2ebd78a56d9b9980 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28508)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangzhuoz commented on pull request #18175: [hotfix][docs] Remove duplicate dot in generating_watermarks.md

2021-12-22 Thread GitBox


wangzhuoz commented on pull request #18175:
URL: https://github.com/apache/flink/pull/18175#issuecomment-124877


   Thank you. I changed it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18185: [FLINK-25422][python] Specify requirements in dev-requirements.txt

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18185:
URL: https://github.com/apache/flink/pull/18185#issuecomment-95564


   
   ## CI report:
   
   * b367abdaa24ab8d7e51bf98e3eb8d39c08fed00a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28505)
 
   * f57ac3a67f9dee469816fe1d2ebd78a56d9b9980 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18175: Update generating_watermarks.md

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18175:
URL: https://github.com/apache/flink/pull/18175#issuecomment-999463191


   
   ## CI report:
   
   * b2e2037f26b06d81f8fb556602699a2dd9246567 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28475)
 
   * 7aca07ab60e3225168a172dace0c08593ea79c90 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28507)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25188) Cannot install PyFlink on MacOS with M1 chip

2021-12-22 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464262#comment-17464262
 ] 

Dian Fu commented on FLINK-25188:
-

[~ana4] Thanks for sharing this information. 

> Cannot install PyFlink on MacOS with M1 chip
> 
>
> Key: FLINK-25188
> URL: https://issues.apache.org/jira/browse/FLINK-25188
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Affects Versions: 1.14.0
>Reporter: Ada Wong
>Priority: Major
> Fix For: 1.15.0
>
>
> Need to update dependencies: numpy>=1.20.3, pyarrow>=5.0.0, pandas>=1.3.0, 
> apache-beam==2.36.0
> The following is some information on dependencies adapted to the M1 chip
> Numpy version:
> [https://stackoverflow.com/questions/65336789/numpy-build-fail-in-m1-big-sur-11-1]
> [https://github.com/numpy/numpy/releases/tag/v1.21.4]
> pyarrow version:
> [https://stackoverflow.com/questions/68385728/installing-pyarrow-cant-copy-build-lib-macosx-11-arm64-3-9-pyarrow-include-ar]
> pandas version:
> [https://github.com/pandas-dev/pandas/issues/40611#issuecomment-901569655]
> Apache beam:
> https://issues.apache.org/jira/browse/BEAM-12957
> https://issues.apache.org/jira/browse/BEAM-11703
> The following is the dependency tree after a successful install. 
> Although Beam needs numpy<1.21.0 and M1 needs numpy>=1.21.4, the install 
> succeeded on the M1 chip when using numpy 1.20.3.
> {code:java}
> apache-flink==1.14.dev0
>   - apache-beam [required: ==2.34.0, installed: 2.34.0]
>     - avro-python3 [required: >=1.8.1,<1.10.0,!=1.9.2, installed: 1.9.2.1]
>     - crcmod [required: >=1.7,<2.0, installed: 1.7]
>     - dill [required: >=0.3.1.1,<0.3.2, installed: 0.3.1.1]
>     - fastavro [required: >=0.21.4,<2, installed: 0.23.6]
>       - pytz [required: Any, installed: 2021.3]
>     - future [required: >=0.18.2,<1.0.0, installed: 0.18.2]
>     - grpcio [required: >=1.29.0,<2, installed: 1.42.0]
>       - six [required: >=1.5.2, installed: 1.16.0]
>     - hdfs [required: >=2.1.0,<3.0.0, installed: 2.6.0]
>       - docopt [required: Any, installed: 0.6.2]
>       - requests [required: >=2.7.0, installed: 2.26.0]
>         - certifi [required: >=2017.4.17, installed: 2021.10.8]
>         - charset-normalizer [required: ~=2.0.0, installed: 2.0.9]
>         - idna [required: >=2.5,<4, installed: 3.3]
>         - urllib3 [required: >=1.21.1,<1.27, installed: 1.26.7]
>       - six [required: >=1.9.0, installed: 1.16.0]
>     - httplib2 [required: >=0.8,<0.20.0, installed: 0.19.1]
>       - pyparsing [required: >=2.4.2,<3, installed: 2.4.7]
>     - numpy [required: >=1.14.3,<1.21.0, installed: 1.20.3]
>     - oauth2client [required: >=2.0.1,<5, installed: 4.1.3]
>       - httplib2 [required: >=0.9.1, installed: 0.19.1]
>         - pyparsing [required: >=2.4.2,<3, installed: 2.4.7]
>       - pyasn1 [required: >=0.1.7, installed: 0.4.8]
>       - pyasn1-modules [required: >=0.0.5, installed: 0.2.8]
>         - pyasn1 [required: >=0.4.6,<0.5.0, installed: 0.4.8]
>       - rsa [required: >=3.1.4, installed: 4.8]
>         - pyasn1 [required: >=0.1.3, installed: 0.4.8]
>       - six [required: >=1.6.1, installed: 1.16.0]
>     - orjson [required: <4.0, installed: 3.6.5]
>     - protobuf [required: >=3.12.2,<4, installed: 3.17.3]
>       - six [required: >=1.9, installed: 1.16.0]
>     - pyarrow [required: >=0.15.1,<6.0.0, installed: 5.0.0]
>       - numpy [required: >=1.16.6, installed: 1.20.3]
>     - pydot [required: >=1.2.0,<2, installed: 1.4.2]
>       - pyparsing [required: >=2.1.4, installed: 2.4.7]
>     - pymongo [required: >=3.8.0,<4.0.0, installed: 3.12.2]
>     - python-dateutil [required: >=2.8.0,<3, installed: 2.8.0]
>       - six [required: >=1.5, installed: 1.16.0]
>     - pytz [required: >=2018.3, installed: 2021.3]
>     - requests [required: >=2.24.0,<3.0.0, installed: 2.26.0]
>       - certifi [required: >=2017.4.17, installed: 2021.10.8]
>       - charset-normalizer [required: ~=2.0.0, installed: 2.0.9]
>       - idna [required: >=2.5,<4, installed: 3.3]
>       - urllib3 [required: >=1.21.1,<1.27, installed: 1.26.7]
>     - typing-extensions [required: >=3.7.0,<4, installed: 3.10.0.2]
>   - apache-flink-libraries [required: ==1.14.dev0, installed: 1.14.dev0]
>   - avro-python3 [required: >=1.8.1,<1.10.0,!=1.9.2, installed: 1.9.2.1]
>   - cloudpickle [required: ==1.2.2, installed: 1.2.2]
>   - fastavro [required: >=0.21.4,<0.24, installed: 0.23.6]
>     - pytz [required: Any, installed: 2021.3]
>   - numpy [required: >=1.20.3, installed: 1.20.3]
>   - pandas [required: >=1.3.0, installed: 1.3.0]
>     - numpy [required: >=1.17.3, installed: 1.20.3]
>     - python-dateutil [required: >=2.7.3, installed: 2.8.0]
>       - six [required: >=1.5, installed: 1.16.0]
>     - pytz [required: >=2017.3, installed: 2021.3]

[GitHub] [flink] imaffe commented on pull request #18014: [FLINK-24857][test][Kafka] Upgrade SourceReaderTestBase t…

2021-12-22 Thread GitBox


imaffe commented on pull request #18014:
URL: https://github.com/apache/flink/pull/18014#issuecomment-123030


   > > @PatrickRen Updated ~
   > 
   > @imaffe
   > 
   > The text used in the old JUnit4 should be more like an error msg than 
description before the test. It should be better to use withFailMessage(..) in 
this case. I think what you have done before this change was actually the right 
one. @PatrickRen WDYT?
   > 
   > Reference: 
https://junit.org/junit4/javadoc/4.8/org/junit/Assert.html#assertEquals(java.lang.String,%20long,%20long).
 The message is used for AssertionError.
   
   Yeah, I know it is like an error message, but based on @PatrickRen's comment I 
think as() is the right way to show the description on failure, in addition to the 
original "Expected xxx but got xxx". (When the test passes, the description can be 
ignored.)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18175: Update generating_watermarks.md

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18175:
URL: https://github.com/apache/flink/pull/18175#issuecomment-999463191


   
   ## CI report:
   
   * b2e2037f26b06d81f8fb556602699a2dd9246567 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28475)
 
   * 7aca07ab60e3225168a172dace0c08593ea79c90 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25360) Add State Desc to CheckpointMetadata

2021-12-22 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-25360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464259#comment-17464259
 ] 

刘方奇 commented on FLINK-25360:
-

[~yunta] Thanks for your reply. Actually, I don't understand why RocksDB 
incremental checkpoints would already have enough information in the checkpoint 
meta. As far as I can see, some info is still missing from the RocksDB incremental 
checkpoint meta, such as the UDF.

Reply to your two points:
 # Does it mean that all state meta info will be stored in the checkpoint / 
savepoint meta, if the discussion reaches agreement?
 # How do I start a discussion thread on the dev mailing list? Could you help with 
a short guide?

Wishing all of you a happy new year and a merry Christmas in advance.
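
To make the state-processor-api case from the issue description concrete, here is 
a hedged sketch (the state name, types and reader class below are hypothetical): 
today, reading keyed state back requires re-declaring the exact StateDescriptor 
the job used, because the checkpoint meta does not carry it.

{code:java}
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
import org.apache.flink.util.Collector;

// Hypothetical reader: the descriptor (name, type, serializer) must match what the
// original job registered, since it cannot be recovered from the checkpoint meta.
public class BalanceReader extends KeyedStateReaderFunction<Long, Long> {

    private ValueState<Long> balance;

    @Override
    public void open(Configuration parameters) {
        balance = getRuntimeContext().getState(
                new ValueStateDescriptor<>("balance", Types.LONG));
    }

    @Override
    public void readKey(Long key, Context ctx, Collector<Long> out) throws Exception {
        out.collect(balance.value());
    }
}
{code}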

> Add State Desc to CheckpointMetadata
> 
>
> Key: FLINK-25360
> URL: https://issues.apache.org/jira/browse/FLINK-25360
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: 刘方奇
>Priority: Major
> Attachments: image-2021-12-17-20-01-42-423.png
>
>
> Now we can't get the State Descriptor info from the checkpoint meta. For example, 
> if we use the state-processor-api to load state and then rewrite it, we can't use 
> the state flexibly. 
> Maybe there are other cases where we need the State Descriptor, so can we add 
> this info?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18172: [hotfix][doc] Change TableEnvironment attribute reference error

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18172:
URL: https://github.com/apache/flink/pull/18172#issuecomment-999323746


   
   ## CI report:
   
   * 20f4eea5c51af444b31a4535e7751af021a9041c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28462)
 
   * edab20c2aa2b2d60566d2504509055575c730ba6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28506)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25418) The dir_cache is specified in the flink task. When there is no network, you will still download the python third-party library

2021-12-22 Thread yangcai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yangcai updated FLINK-25418:

Summary: The dir_cache is specified in the flink task. When there is no 
network, you will still download the python third-party library  (was: The 
dir_cache is specified in the flick task. When there is no network, you will 
still download the python third-party library)

> The dir_cache is specified in the flink task. When there is no network, you 
> will still download the python third-party library
> --
>
> Key: FLINK-25418
> URL: https://issues.apache.org/jira/browse/FLINK-25418
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.13.0, 1.14.0
> Environment: python3.7
> flink1.13.1
>Reporter: yangcai
>Assignee: Ada Wong
>Priority: Major
>
> Specified in Python code: 
> set_python_requirements(requirements_cache_dir=dir_cache)
> During task execution, priority is given to downloading Python third-party 
> packages from the network. When I specify the cache directory and don't want the 
> task to download packages from the network, can it directly use the Python 
> packages in the cache I specified?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18172: [hotfix][doc] Change TableEnvironment attribute reference error

2021-12-22 Thread GitBox


flinkbot edited a comment on pull request #18172:
URL: https://github.com/apache/flink/pull/18172#issuecomment-999323746


   
   ## CI report:
   
   * 20f4eea5c51af444b31a4535e7751af021a9041c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28462)
 
   * edab20c2aa2b2d60566d2504509055575c730ba6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25413) Use append dfs.nameservices hadoop config to replace override

2021-12-22 Thread qiunan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qiunan updated FLINK-25413:
---
Summary: Use append dfs.nameservices hadoop config  to replace override  
(was: Use append dfs.nameservices hadoop config  to replace overwrite)

> Use append dfs.nameservices hadoop config  to replace override
> --
>
> Key: FLINK-25413
> URL: https://issues.apache.org/jira/browse/FLINK-25413
> Project: Flink
>  Issue Type: Improvement
>Reporter: qiunan
>Priority: Major
>
> In FLINK-16005[flink-yarn] Support yarn and hadoop config override.
> In flink-conf.yaml
> flink.hadoop.dfs.namenode.rpc-address.nameservice1.nn1: bigdata1:8020
> flink.hadoop.dfs.namenode.rpc-address.nameservice1.nn2: bigdata2:8020
> flink.hadoop.dfs.client.failover.proxy.provider.nameservice1: 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> flink.hadoop.dfs.ha.namenodes.nameservice1: nn1,nn2
> flink.hadoop.dfs.nameservices: nameservice1
> code:
> {code:java}
> // Approach 4: Flink configuration
> // add all configuration key with prefix 'flink.hadoop.' in flink conf to 
> hadoop conf
> for (String key : flinkConfiguration.keySet()) {
> for (String prefix : FLINK_CONFIG_PREFIXES) {
> if (key.startsWith(prefix)) {
> String newKey = key.substring(prefix.length());
> String value = flinkConfiguration.getString(key, null);
> result.set(newKey, value);
> LOG.debug(
> "Adding Flink config entry for {} as {}={} to Hadoop 
> config",
> key,
> newKey,
> value);
> foundHadoopConfiguration = true;
> }
> }
> } {code}
> If my HADOOP_CONF_DIR hdfs-site.xml has dfs.nameservices: nameservice2, then per 
> the code logic this config will be overridden. I think this config should not be 
> overridden; appending would be better. If it is overridden, we would have to add 
> all the config entries, but we have many clusters in production, and it is 
> impossible to configure everything in flink-conf.yaml.
>  
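
As a rough sketch of the appending behaviour proposed above (not the current Flink 
code; it reuses the variable names from the snippet in the issue), the copy loop 
could merge comma-separated dfs.nameservices values instead of replacing them:

{code:java}
// Sketch only - append rather than override dfs.nameservices when copying
// flink.hadoop.* entries from the Flink configuration into the Hadoop configuration.
// 'result' is the org.apache.hadoop.conf.Configuration being assembled.
String newKey = key.substring(prefix.length());
String value = flinkConfiguration.getString(key, null);

if ("dfs.nameservices".equals(newKey)) {
    String existing = result.get(newKey);
    if (existing != null && !existing.isEmpty()) {
        // keep the nameservices from HADOOP_CONF_DIR and append the flink-conf.yaml ones
        value = existing + "," + value;
    }
}
result.set(newKey, value);
{code}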



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25418) The dir_cache is specified in the flick task. When there is no network, you will still download the python third-party library

2021-12-22 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-25418:

Affects Version/s: 1.14.0
   1.13.0
   (was: 1.13.1)

> The dir_cache is specified in the flick task. When there is no network, you 
> will still download the python third-party library
> --
>
> Key: FLINK-25418
> URL: https://issues.apache.org/jira/browse/FLINK-25418
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.13.0, 1.14.0
> Environment: python3.7
> flink1.13.1
>Reporter: yangcai
>Assignee: Ada Wong
>Priority: Major
>
> Specified in Python code: 
> set_python_requirements(requirements_cache_dir=dir_cache)
> During task execution, priority is given to downloading Python third-party 
> packages from the network. When I specify the cache directory and don't want the 
> task to download packages from the network, can it directly use the Python 
> packages in the cache I specified?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25413) Use append hadoop config dfs.nameservices to replace overwrite

2021-12-22 Thread qiunan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qiunan updated FLINK-25413:
---
Summary: Use append hadoop config dfs.nameservices to replace overwrite  
(was: Use append dfs.nameservices hadoop config to replace overwrite)

> Use append hadoop config dfs.nameservices to replace overwrite
> --
>
> Key: FLINK-25413
> URL: https://issues.apache.org/jira/browse/FLINK-25413
> Project: Flink
>  Issue Type: Improvement
>Reporter: qiunan
>Priority: Major
>
> In FLINK-16005[flink-yarn] Support yarn and hadoop config override.
> In flink-conf.yaml
> flink.hadoop.dfs.namenode.rpc-address.nameservice1.nn1: bigdata1:8020
> flink.hadoop.dfs.namenode.rpc-address.nameservice1.nn2: bigdata2:8020
> flink.hadoop.dfs.client.failover.proxy.provider.nameservice1: 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> flink.hadoop.dfs.ha.namenodes.nameservice1: nn1,nn2
> flink.hadoop.dfs.nameservices: nameservice1
> code:
> {code:java}
> // Approach 4: Flink configuration
> // add all configuration key with prefix 'flink.hadoop.' in flink conf to 
> hadoop conf
> for (String key : flinkConfiguration.keySet()) {
> for (String prefix : FLINK_CONFIG_PREFIXES) {
> if (key.startsWith(prefix)) {
> String newKey = key.substring(prefix.length());
> String value = flinkConfiguration.getString(key, null);
> result.set(newKey, value);
> LOG.debug(
> "Adding Flink config entry for {} as {}={} to Hadoop 
> config",
> key,
> newKey,
> value);
> foundHadoopConfiguration = true;
> }
> }
> } {code}
> If my HADOOP_CONF_DIR hdfs-site.xml has dfs.nameservices: nameservice2, then per 
> the code logic this config will be overridden. I think this config should not be 
> overridden; appending would be better. If it is overridden, we would have to add 
> all the config entries, but we have many clusters in production, and it is 
> impossible to configure everything in flink-conf.yaml.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25418) The dir_cache is specified in the flick task. When there is no network, you will still download the python third-party library

2021-12-22 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464254#comment-17464254
 ] 

Dian Fu commented on FLINK-25418:
-

[~ana4] Thanks for taking care of this issue. I have assigned it to you. 
The solution makes sense to me and I'm looking forward to the PR~

> The dir_cache is specified in the flick task. When there is no network, you 
> will still download the python third-party library
> --
>
> Key: FLINK-25418
> URL: https://issues.apache.org/jira/browse/FLINK-25418
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.13.1
> Environment: python3.7
> flink1.13.1
>Reporter: yangcai
>Assignee: Ada Wong
>Priority: Major
>
> Specified in Python code: 
> set_python_requirements(requirements_cache_dir=dir_cache)
> During task execution, priority is given to downloading Python third-party 
> packages from the network. When I specify the cache directory and don't want the 
> task to download packages from the network, can it directly use the Python 
> packages in the cache I specified?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25413) Use append dfs.nameservices hadoop config to replace overwrite

2021-12-22 Thread qiunan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qiunan updated FLINK-25413:
---
Summary: Use append dfs.nameservices hadoop config  to replace overwrite  
(was: Use append hadoop config dfs.nameservices to replace overwrite)

> Use append dfs.nameservices hadoop config  to replace overwrite
> ---
>
> Key: FLINK-25413
> URL: https://issues.apache.org/jira/browse/FLINK-25413
> Project: Flink
>  Issue Type: Improvement
>Reporter: qiunan
>Priority: Major
>
> In FLINK-16005[flink-yarn] Support yarn and hadoop config override.
> In flink-conf.yaml
> flink.hadoop.dfs.namenode.rpc-address.nameservice1.nn1: bigdata1:8020
> flink.hadoop.dfs.namenode.rpc-address.nameservice1.nn2: bigdata2:8020
> flink.hadoop.dfs.client.failover.proxy.provider.nameservice1: 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> flink.hadoop.dfs.ha.namenodes.nameservice1: nn1,nn2
> flink.hadoop.dfs.nameservices: nameservice1
> code:
> {code:java}
> // Approach 4: Flink configuration
> // add all configuration key with prefix 'flink.hadoop.' in flink conf to 
> hadoop conf
> for (String key : flinkConfiguration.keySet()) {
> for (String prefix : FLINK_CONFIG_PREFIXES) {
> if (key.startsWith(prefix)) {
> String newKey = key.substring(prefix.length());
> String value = flinkConfiguration.getString(key, null);
> result.set(newKey, value);
> LOG.debug(
> "Adding Flink config entry for {} as {}={} to Hadoop 
> config",
> key,
> newKey,
> value);
> foundHadoopConfiguration = true;
> }
> }
> } {code}
> If my HADOOP_CONF_DIR hdfs-site.xml has dfs.nameservices: nameservice2, then per 
> the code logic this config will be overridden. I think this config should not be 
> overridden; appending would be better. If it is overridden, we would have to add 
> all the config entries, but we have many clusters in production, and it is 
> impossible to configure everything in flink-conf.yaml.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

