[jira] [Commented] (FLINK-26137) Create webhook REST api test

2022-03-07 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502781#comment-17502781
 ] 

Nicholas Jiang commented on FLINK-26137:


[~gyfora], I'm working on this ticket. Could you please assign it to me?

> Create webhook REST api test
> 
>
> Key: FLINK-26137
> URL: https://issues.apache.org/jira/browse/FLINK-26137
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
>
> Add test to validate the webhook rest endpoint and make sure it returns the 
> expected responses, status codes etc.
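
A minimal sketch of what such a test could look like, assuming the webhook is 
reachable over plain HTTP on a local port during the test (the URL, port, 
request body and the expected status for non-/validate paths below are 
placeholders, not the operator's actual test setup):

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class WebhookEndpointTest {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    @Test
    void validateEndpointAnswersWithOk() throws Exception {
        // Placeholder URL: point this at wherever the test starts the webhook.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8443/validate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{}"))
                .build();

        HttpResponse<String> response =
                CLIENT.send(request, HttpResponse.BodyHandlers.ofString());

        // A well-formed admission request should be answered with HTTP 200.
        assertEquals(200, response.statusCode());
    }

    @Test
    void unknownPathIsNotValidated() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8443/not-a-real-path"))
                .POST(HttpRequest.BodyPublishers.ofString("{}"))
                .build();

        HttpResponse<String> response =
                CLIENT.send(request, HttpResponse.BodyHandlers.ofString());

        // Assumed status code; the webhook may answer unknown paths differently.
        assertEquals(404, response.statusCode());
    }
}
{code}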



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26208) Introduce implementation of ManagedTableFactory

2022-03-07 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26208:
--
Description: Introduce an implementation of 
`org.apache.flink.table.factories.ManagedTableFactory` (#enrichOptions, 
#onCreateTable, #onDropTable) to support interaction with Flink's TableEnv via 
SQL  (was: Introduce an implementation of 
`org.apache.flink.table.factories.ManagedTableFactory` (#enrichOptions, 
#onCreateTable, #onDropTable and #onCompactTable) to support interaction with 
Flink's TableEnv)

> Introduce implementation of ManagedTableFactory
> ---
>
> Key: FLINK-26208
> URL: https://issues.apache.org/jira/browse/FLINK-26208
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.1.0
>
>
> Introduce an implementation of 
> `org.apache.flink.table.factories.ManagedTableFactory` (#enrichOptions, 
> #onCreateTable, #onDropTable) to support interaction with Flink's TableEnv 
> via SQL
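
A rough skeleton of the shape described above. The method names come from the 
description; the real `org.apache.flink.table.factories.ManagedTableFactory` 
interface defines the actual signatures and context types, which may differ 
from this free-standing sketch:

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch only, not the real factory interface. */
public class ManagedTableFactorySketch {

    /** Fill in managed-table specific options before the table is created. */
    public Map<String, String> enrichOptions(Map<String, String> catalogOptions) {
        Map<String, String> enriched = new HashMap<>(catalogOptions);
        enriched.putIfAbsent("path", "/tmp/table-store"); // placeholder default
        return enriched;
    }

    /** Create the underlying storage when CREATE TABLE runs through the TableEnv. */
    public void onCreateTable(Map<String, String> enrichedOptions, boolean ignoreIfExists) {
        // e.g. create the directory / metadata referenced by the enriched options
    }

    /** Clean up the underlying storage when DROP TABLE runs through the TableEnv. */
    public void onDropTable(Map<String, String> enrichedOptions, boolean ignoreIfNotExists) {
        // e.g. delete the directory / metadata referenced by the enriched options
    }
}
{code}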



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #19004: [FLINK-26490][checkpoint] Adjust the MaxParallelism when operator state don't contain keyed state.

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #19004:
URL: https://github.com/apache/flink/pull/19004#issuecomment-1061472524


   
   ## CI report:
   
   * b4fe404fd3eb4563d6e476c83c4d29c947752571 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32669)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #19004: [FLINK-26490][checkpoint] Adjust the MaxParallelism when operator state don't contain keyed state.

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #19004:
URL: https://github.com/apache/flink/pull/19004#issuecomment-1061472524


   
   ## CI report:
   
   * b4fe404fd3eb4563d6e476c83c4d29c947752571 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32669)
 
   * b4d2eac3584665d4e667ed60b5231a3850b18430 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18958: [FLINK-15854][hive] Use the new type inference for Hive UDTF

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18958:
URL: https://github.com/apache/flink/pull/18958#issuecomment-1056725576


   
   ## CI report:
   
   * d34eb21ae0bda39ec119ef18a9782fbaad2310bc Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32660)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-26534) shuffle by sink's primary key should cover the case that input changelog stream has a different parallelism

2022-03-07 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-26534:

Description: 
FLINK-20370 fixed the wrong result produced when the sink's primary key is not 
the same as the query's, and introduced a strategy that automatically keys by 
the sink's primary key for append streams when the sink's parallelism differs 
from the input stream's.

But one case still remains to be solved:
for a changelog stream whose upsert key equals the sink's primary key, the 
sink's parallelism can still be changed by the user (via sinks that implement 
the `ParallelismProvider` interface, e.g., KafkaDynamicSink); this case should 
be covered as well.

And a minor change: the keyBy can be omitted when the sink has a parallelism of 
one (with a single sink subtask no partitioner can cause worse disorder)



  was:
FLINK-20370 fixed the wrong result produced when the sink's primary key is not 
the same as the query's, and introduced a strategy that automatically keys by 
the sink's primary key for append streams when the sink's parallelism differs 
from the input stream's.
But one case still remains to be solved:
for a changelog stream whose upsert key equals the sink's primary key, the 
sink's parallelism can still be changed by the user (via sinks that implement 
the `ParallelismProvider` interface, e.g., KafkaDynamicSink); this case should 
be covered as well.
And a minor change: the keyBy can be omitted when the sink has a parallelism of 
one (with a single sink subtask no partitioner can cause worse disorder)




> shuffle by sink's primary key should cover the case that input changelog 
> stream has a different parallelism
> ---
>
> Key: FLINK-26534
> URL: https://issues.apache.org/jira/browse/FLINK-26534
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: lincoln lee
>Priority: Minor
>
> FLINK-20370 fixed the wrong result produced when the sink's primary key is not 
> the same as the query's, and introduced a strategy that automatically keys by 
> the sink's primary key for append streams when the sink's parallelism differs 
> from the input stream's.
> But one case still remains to be solved:
> for a changelog stream whose upsert key equals the sink's primary key, the 
> sink's parallelism can still be changed by the user (via sinks that implement 
> the `ParallelismProvider` interface, e.g., KafkaDynamicSink); this case should 
> be covered as well.
> And a minor change: the keyBy can be omitted when the sink has a parallelism 
> of one (with a single sink subtask no partitioner can cause worse disorder)
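
As an illustration of the scenario (table names and connector options are made 
up, and the WITH clauses are abbreviated; only the shape matters):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SinkParallelismExample {
    public static void main(String[] args) {
        TableEnvironment tableEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Changelog source whose upsert key equals the sink's primary key.
        tableEnv.executeSql(
                "CREATE TABLE orders_changelog ("
                        + "  order_id STRING, amount BIGINT,"
                        + "  PRIMARY KEY (order_id) NOT ENFORCED"
                        + ") WITH ('connector' = 'upsert-kafka' /* other options elided */)");

        // Same primary key, but the user sets an explicit sink parallelism
        // ('sink.parallelism' is honoured by sinks implementing ParallelismProvider,
        // e.g. the Kafka sink).
        tableEnv.executeSql(
                "CREATE TABLE orders_sink ("
                        + "  order_id STRING, amount BIGINT,"
                        + "  PRIMARY KEY (order_id) NOT ENFORCED"
                        + ") WITH ('connector' = 'upsert-kafka',"
                        + "        'sink.parallelism' = '4' /* other options elided */)");

        // With differing parallelisms, records for one order_id may land on different
        // sink subtasks unless the planner shuffles by the primary key first.
        tableEnv.executeSql(
                "INSERT INTO orders_sink SELECT order_id, amount FROM orders_changelog");
    }
}
{code}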



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26534) shuffle by sink's primary key should cover the case that input changelog stream has a different parallelism

2022-03-07 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-26534:

Description: 
FLINK-20370 fixed the wrong result produced when the sink's primary key is not 
the same as the query's, and introduced a strategy that automatically keys by 
the sink's primary key for append streams when the sink's parallelism differs 
from the input stream's.
But one case still remains to be solved:
for a changelog stream whose upsert key equals the sink's primary key, the 
sink's parallelism can still be changed by the user (via sinks that implement 
the `ParallelismProvider` interface, e.g., KafkaDynamicSink); this case should 
be covered as well.
And a minor change: the keyBy can be omitted when the sink has a parallelism of 
one (with a single sink subtask no partitioner can cause worse disorder)



  was:
FLINK-20370 fixed the wrong result produced when the sink's primary key is not 
the same as the query's, and introduced a strategy that automatically keys by 
the sink's primary key for append streams when the sink's parallelism differs 
from the input stream's.
But one case still remains to be solved:
for a changelog stream whose upsert key equals the sink's primary key, the 
sink's parallelism can still be changed by the user (via sinks that implement 
the `ParallelismProvider` interface, e.g., KafkaDynamicSink); this case should 
be covered as well.




> shuffle by sink's primary key should cover the case that input changelog 
> stream has a different parallelism
> ---
>
> Key: FLINK-26534
> URL: https://issues.apache.org/jira/browse/FLINK-26534
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: lincoln lee
>Priority: Minor
>
> FLINK-20370 fixed the wrong result produced when the sink's primary key is not 
> the same as the query's, and introduced a strategy that automatically keys by 
> the sink's primary key for append streams when the sink's parallelism differs 
> from the input stream's.
> But one case still remains to be solved:
> for a changelog stream whose upsert key equals the sink's primary key, the 
> sink's parallelism can still be changed by the user (via sinks that implement 
> the `ParallelismProvider` interface, e.g., KafkaDynamicSink); this case should 
> be covered as well.
> And a minor change: the keyBy can be omitted when the sink has a parallelism 
> of one (with a single sink subtask no partitioner can cause worse disorder)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18838: [FLINK-26177][Connector/pulsar] Use testcontainer pulsar runtime instead o…

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18838:
URL: https://github.com/apache/flink/pull/18838#issuecomment-1044141081


   
   ## CI report:
   
   * 2aa1d90060e534a17aa8f169d71cb0830178c183 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32658)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #19004: [FLINK-26490][checkpoint] Adjust the MaxParallelism when operator state don't contain keyed state.

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #19004:
URL: https://github.com/apache/flink/pull/19004#issuecomment-1061472524


   
   ## CI report:
   
   * b4fe404fd3eb4563d6e476c83c4d29c947752571 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32669)
 
   * b4d2eac3584665d4e667ed60b5231a3850b18430 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #19005: [FLINK-26531][kafka] KafkaWriterITCase.testMetadataPublisher failed on azure

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #19005:
URL: https://github.com/apache/flink/pull/19005#issuecomment-1061480820


   
   ## CI report:
   
   * a259c21a8693b3bc8b6732f3f5a9372dc846a658 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32671)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (FLINK-21352) FLIP-158: Generalized incremental checkpoints

2022-03-07 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan resolved FLINK-21352.
---
Resolution: Fixed

> FLIP-158: Generalized incremental checkpoints
> -
>
> Key: FLINK-21352
> URL: https://issues.apache.org/jira/browse/FLINK-21352
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
> Fix For: 1.15.0
>
>
> Umbrella ticket for [FLIP-158: Generalized incremental 
> checkpoints|https://cwiki.apache.org/confluence/display/FLINK/FLIP-158%3A+Generalized+incremental+checkpoints]
>  (v1).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25867) [ZH] Add ChangelogBackend documentation

2022-03-07 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan updated FLINK-25867:
--
Parent: FLINK-25842
Issue Type: Sub-task  (was: Improvement)

> [ZH] Add ChangelogBackend documentation
> ---
>
> Key: FLINK-25867
> URL: https://issues.apache.org/jira/browse/FLINK-25867
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> In FLINK-25024, documentation for Changelog was added.
> Chinese version is a copy of English one and needs translation.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25867) [ZH] Add ChangelogBackend documentation

2022-03-07 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan updated FLINK-25867:
--
Parent: (was: FLINK-21352)
Issue Type: Improvement  (was: Sub-task)

> [ZH] Add ChangelogBackend documentation
> ---
>
> Key: FLINK-25867
> URL: https://issues.apache.org/jira/browse/FLINK-25867
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> In FLINK-25024, documentation for Changelog was added.
> Chinese version is a copy of English one and needs translation.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot commented on pull request #19005: [FLINK-26531][kafka] KafkaWriterITCase.testMetadataPublisher failed on azure

2022-03-07 Thread GitBox


flinkbot commented on pull request #19005:
URL: https://github.com/apache/flink/pull/19005#issuecomment-1061480820


   
   ## CI report:
   
   * a259c21a8693b3bc8b6732f3f5a9372dc846a658 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18863: [FLINK-26033][flink-connector-kafka]Fix the problem that robin does not take effect due to upgrading kafka client to 2.4.1 since Flin

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18863:
URL: https://github.com/apache/flink/pull/18863#issuecomment-1046819200


   
   ## CI report:
   
   * 8dda8fdb80894c83abcaa8774f30e6f8388f2c68 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32659)
 
   * 8cda572f57f1ee4a969e84d51b05d1e1ba74887f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32670)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-26531) KafkaWriterITCase.testMetadataPublisher failed on azure

2022-03-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26531:
---
Labels: pull-request-available test-stability  (was: test-stability)

> KafkaWriterITCase.testMetadataPublisher  failed on azure
> 
>
> Key: FLINK-26531
> URL: https://issues.apache.org/jira/browse/FLINK-26531
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Assignee: Jingsong Lee
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> {code:java}
> 022-03-07T13:43:34.3882626Z Mar 07 13:43:34 [ERROR] 
> org.apache.flink.connector.kafka.sink.KafkaWriterITCase.testMetadataPublisher 
>  Time elapsed: 0.205 s  <<< FAILURE!
> 2022-03-07T13:43:34.3883743Z Mar 07 13:43:34 java.lang.AssertionError: 
> 2022-03-07T13:43:34.3884867Z Mar 07 13:43:34 
> 2022-03-07T13:43:34.3885412Z Mar 07 13:43:34 Expecting actual:
> 2022-03-07T13:43:34.3886464Z Mar 07 13:43:34   ["testMetadataPublisher-0@0",
> 2022-03-07T13:43:34.3887361Z Mar 07 13:43:34 "testMetadataPublisher-0@1",
> 2022-03-07T13:43:34.3888222Z Mar 07 13:43:34 "testMetadataPublisher-0@2",
> 2022-03-07T13:43:34.333Z Mar 07 13:43:34 "testMetadataPublisher-0@3",
> 2022-03-07T13:43:34.3892032Z Mar 07 13:43:34 "testMetadataPublisher-0@4",
> 2022-03-07T13:43:34.3893140Z Mar 07 13:43:34 "testMetadataPublisher-0@5",
> 2022-03-07T13:43:34.3893849Z Mar 07 13:43:34 "testMetadataPublisher-0@6",
> 2022-03-07T13:43:34.3895077Z Mar 07 13:43:34 "testMetadataPublisher-0@7",
> 2022-03-07T13:43:34.3895779Z Mar 07 13:43:34 "testMetadataPublisher-0@8",
> 2022-03-07T13:43:34.3896423Z Mar 07 13:43:34 "testMetadataPublisher-0@9",
> 2022-03-07T13:43:34.3897164Z Mar 07 13:43:34 "testMetadataPublisher-0@10",
> 2022-03-07T13:43:34.3897792Z Mar 07 13:43:34 "testMetadataPublisher-0@11",
> 2022-03-07T13:43:34.3949208Z Mar 07 13:43:34 "testMetadataPublisher-0@12",
> 2022-03-07T13:43:34.3950956Z Mar 07 13:43:34 "testMetadataPublisher-0@13",
> 2022-03-07T13:43:34.3952287Z Mar 07 13:43:34 "testMetadataPublisher-0@14",
> 2022-03-07T13:43:34.3954341Z Mar 07 13:43:34 "testMetadataPublisher-0@15",
> 2022-03-07T13:43:34.3955834Z Mar 07 13:43:34 "testMetadataPublisher-0@16",
> 2022-03-07T13:43:34.3957048Z Mar 07 13:43:34 "testMetadataPublisher-0@17",
> 2022-03-07T13:43:34.3958287Z Mar 07 13:43:34 "testMetadataPublisher-0@18",
> 2022-03-07T13:43:34.3959519Z Mar 07 13:43:34 "testMetadataPublisher-0@19",
> 2022-03-07T13:43:34.3960798Z Mar 07 13:43:34 "testMetadataPublisher-0@20",
> 2022-03-07T13:43:34.3961973Z Mar 07 13:43:34 "testMetadataPublisher-0@21",
> 2022-03-07T13:43:34.3963302Z Mar 07 13:43:34 "testMetadataPublisher-0@22",
> 2022-03-07T13:43:34.3964563Z Mar 07 13:43:34 "testMetadataPublisher-0@23",
> 2022-03-07T13:43:34.3966941Z Mar 07 13:43:34 "testMetadataPublisher-0@24",
> 2022-03-07T13:43:34.3968246Z Mar 07 13:43:34 "testMetadataPublisher-0@25",
> 2022-03-07T13:43:34.3969452Z Mar 07 13:43:34 "testMetadataPublisher-0@26",
> 2022-03-07T13:43:34.3970656Z Mar 07 13:43:34 "testMetadataPublisher-0@27",
> 2022-03-07T13:43:34.3971853Z Mar 07 13:43:34 "testMetadataPublisher-0@28",
> 2022-03-07T13:43:34.3974163Z Mar 07 13:43:34 "testMetadataPublisher-0@29",
> 2022-03-07T13:43:34.3975441Z Mar 07 13:43:34 "testMetadataPublisher-0@30",
> 2022-03-07T13:43:34.3976380Z Mar 07 13:43:34 "testMetadataPublisher-0@31",
> 2022-03-07T13:43:34.3977278Z Mar 07 13:43:34 "testMetadataPublisher-0@32",
> 2022-03-07T13:43:34.3978197Z Mar 07 13:43:34 "testMetadataPublisher-0@33",
> 2022-03-07T13:43:34.3979120Z Mar 07 13:43:34 "testMetadataPublisher-0@34",
> 2022-03-07T13:43:34.3980051Z Mar 07 13:43:34 "testMetadataPublisher-0@35",
> 2022-03-07T13:43:34.3981017Z Mar 07 13:43:34 "testMetadataPublisher-0@36",
> 2022-03-07T13:43:34.3981952Z Mar 07 13:43:34 "testMetadataPublisher-0@37",
> 2022-03-07T13:43:34.3982975Z Mar 07 13:43:34 "testMetadataPublisher-0@38",
> 2022-03-07T13:43:34.3983882Z Mar 07 13:43:34 "testMetadataPublisher-0@39",
> 2022-03-07T13:43:34.3984940Z Mar 07 13:43:34 "testMetadataPublisher-0@40",
> 2022-03-07T13:43:34.3985838Z Mar 07 13:43:34 "testMetadataPublisher-0@41",
> 2022-03-07T13:43:34.3986702Z Mar 07 13:43:34 "testMetadataPublisher-0@42",
> 2022-03-07T13:43:34.3987661Z Mar 07 13:43:34 "testMetadataPublisher-0@43",
> 2022-03-07T13:43:34.3988564Z Mar 07 13:43:34 "testMetadataPublisher-0@44",
> 2022-03-07T13:43:34.3989444Z Mar 07 13:43:34 "testMetadataPublisher-0@45",
> 2022-03-07T13:43:34.3990347Z Mar 07 13:43:34 "testMetadataPublisher-0@46",

[GitHub] [flink] JingsongLi opened a new pull request #19005: [FLINK-26531][kafka] KafkaWriterITCase.testMetadataPublisher failed on azure

2022-03-07 Thread GitBox


JingsongLi opened a new pull request #19005:
URL: https://github.com/apache/flink/pull/19005


   ## What is the purpose of the change
   
   Fix unstable case: `KafkaWriterITCase.testMetadataPublisher`
   
   ## Brief change log
   
   After the sink v2 refactoring, the pre-commit step is divided into two 
separate methods: `flush` and `precommit`.
   In this test, we should call `flush` instead of `precommit`.
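   
   As a toy illustration of the split (this is not the Kafka writer, just the 
general shape of the two separate steps):
   
```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

/**
 * Toy two-phase writer: after the sink v2 refactoring, "flush pending records"
 * and "collect committables" are separate steps, so a test that expects records
 * to be delivered must call flush() explicitly.
 */
class ToyTwoPhaseWriter {
    private final List<String> buffered = new ArrayList<>();
    private final List<String> delivered = new ArrayList<>();

    void write(String record) {
        buffered.add(record);
    }

    /** Step 1: actually push buffered records out. */
    void flush() {
        delivered.addAll(buffered);
        buffered.clear();
    }

    /** Step 2: only hands committables to the committer; no longer flushes. */
    Collection<String> prepareCommit() {
        return new ArrayList<>(delivered);
    }

    public static void main(String[] args) {
        ToyTwoPhaseWriter writer = new ToyTwoPhaseWriter();
        writer.write("record-0");
        writer.flush(); // without this call nothing would have been delivered
        System.out.println(writer.prepareCommit()); // prints [record-0]
    }
}
```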
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-26527) ClassCastException in TemporaryClassLoaderContext

2022-03-07 Thread shizhengchao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shizhengchao updated FLINK-26527:
-
Description: 
When I try to run SQL using Flink's classloader, I get the following exception:
{code:java}
Exception in thread "main" java.lang.ClassCastException: 
org.codehaus.janino.CompilerFactory cannot be cast to 
org.codehaus.commons.compiler.ICompilerFactory
    at 
org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
 
……{code}
My code is like this:
{code:java}
Configuration configuration = new Configuration();
configuration.set(CoreOptions.CLASSLOADER_RESOLVE_ORDER, "child-first");
// helper that collects the jar URLs under ${FLINK_HOME}/lib
List<URL> dependencies = FlinkClassLoader.getFlinkDependencies("${FLINK_HOME}/lib");
URLClassLoader classLoader = ClientUtils.buildUserCodeClassLoader(
        dependencies,
        Collections.emptyList(),
        SessionContext.class.getClassLoader(),
        configuration);
try (TemporaryClassLoaderContext ignored = TemporaryClassLoaderContext.of(classLoader)) {
    tableEnv.explainSql(sql);
    // CompilerFactoryFactory.getCompilerFactory("org.codehaus.janino.CompilerFactory");
} {code}
But if you change `classloader.resolve-order` to `parent-first`, everything 
works fine.

  was:
When I try to run sql using flink's classloader, I get the following exception:
{code:java}
Exception in thread "main" java.lang.ClassCastException: 
org.codehaus.janino.CompilerFactory cannot be cast to 
org.codehaus.commons.compiler.ICompilerFactory
    at 
org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
 
……{code}
my code is like this:
{code:java}
Configuration configuration = new Configuration();
configuration.set(CoreOptions.CLASSLOADER_RESOLVE_ORDER, "child-first");
List dependencies = FlinkClassLoader.getFlinkDependencies("${FLINK_HOME}/lib");
URLClassLoader classLoader = ClientUtils.buildUserCodeClassLoader(
dependencies,
Collections.emptyList(),
SessionContext.class.getClassLoader(),
configuration);
try (TemporaryClassLoaderContext ignored = 
TemporaryClassLoaderContext.of(classLoader)) {     
   tableEnv.explainSql(sql);
 
//CompilerFactoryFactory.getCompilerFactory("org.codehaus.janino.CompilerFactory");
} {code}
But, if you change `classloader.resolve-order` to `parent-first`, everything 
works fine


> ClassCastException in TemporaryClassLoaderContext
> -
>
> Key: FLINK-26527
> URL: https://issues.apache.org/jira/browse/FLINK-26527
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.13.5, 1.14.3
>Reporter: shizhengchao
>Priority: Major
>
> When I try to run sql using flink's classloader, I get the following 
> exception:
> {code:java}
> Exception in thread "main" java.lang.ClassCastException: 
> org.codehaus.janino.CompilerFactory cannot be cast to 
> org.codehaus.commons.compiler.ICompilerFactory
>     at 
> org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
>  
> ……{code}
> my code is like this:
> {code:java}
> Configuration configuration = new Configuration();
> configuration.set(CoreOptions.CLASSLOADER_RESOLVE_ORDER, "child-first");
> List dependencies = 
> FlinkClassLoader.getFlinkDependencies(FLINK_HOME/lib);
> URLClassLoader classLoader = ClientUtils.buildUserCodeClassLoader(
> dependencies,
> Collections.emptyList(),
> SessionContext.class.getClassLoader(),
> configuration);
> try (TemporaryClassLoaderContext ignored = 
> TemporaryClassLoaderContext.of(classLoader)) {     
>tableEnv.explainSql(sql);
>  
> //CompilerFactoryFactory.getCompilerFactory("org.codehaus.janino.CompilerFactory");
> } {code}
> But, if you change `classloader.resolve-order` to `parent-first`, everything 
> works fine



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18863: [FLINK-26033][flink-connector-kafka]Fix the problem that robin does not take effect due to upgrading kafka client to 2.4.1 since Flin

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18863:
URL: https://github.com/apache/flink/pull/18863#issuecomment-1046819200


   
   ## CI report:
   
   * 480fcced44131cc57105d254b1d7cbad7004fdce Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32603)
 
   * 8dda8fdb80894c83abcaa8774f30e6f8388f2c68 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32659)
 
   * 8cda572f57f1ee4a969e84d51b05d1e1ba74887f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-26527) ClassCastException in TemporaryClassLoaderContext

2022-03-07 Thread shizhengchao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shizhengchao updated FLINK-26527:
-
Description: 
When I try to run sql using flink's classloader, I get the following exception:
{code:java}
Exception in thread "main" java.lang.ClassCastException: 
org.codehaus.janino.CompilerFactory cannot be cast to 
org.codehaus.commons.compiler.ICompilerFactory
    at 
org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
 
……{code}
my code is like this:
{code:java}
Configuration configuration = new Configuration();
configuration.set(CoreOptions.CLASSLOADER_RESOLVE_ORDER, "child-first");
List dependencies = FlinkClassLoader.getFlinkDependencies("${FLINK_HOME}/lib");
URLClassLoader classLoader = ClientUtils.buildUserCodeClassLoader(
dependencies,
Collections.emptyList(),
SessionContext.class.getClassLoader(),
configuration);
try (TemporaryClassLoaderContext ignored = 
TemporaryClassLoaderContext.of(classLoader)) {     
   tableEnv.explainSql(sql);
 
//CompilerFactoryFactory.getCompilerFactory("org.codehaus.janino.CompilerFactory");
} {code}
But, if you change `classloader.resolve-order` to `parent-first`, everything 
works fine

  was:
When I try to run sql using flink's classloader, I get the following exception:
{code:java}
Exception in thread "main" java.lang.ClassCastException: 
org.codehaus.janino.CompilerFactory cannot be cast to 
org.codehaus.commons.compiler.ICompilerFactory
    at 
org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
 
……{code}
my code is like this:
{code:java}
Configuration configuration = new Configuration();
configuration.set(CoreOptions.CLASSLOADER_RESOLVE_ORDER, "child-first");
List dependencies = 
FlinkClassLoader.getFlinkDependencies(System.getenv(ConfigConstants.ENV_FLINK_HOME_DIR));
URLClassLoader classLoader = ClientUtils.buildUserCodeClassLoader(
dependencies,
Collections.emptyList(),
SessionContext.class.getClassLoader(),
configuration);
try (TemporaryClassLoaderContext ignored = 
TemporaryClassLoaderContext.of(classLoader)) {     
   tableEnv.explainSql(sql);
 
//CompilerFactoryFactory.getCompilerFactory("org.codehaus.janino.CompilerFactory");
} {code}
But, if you change `classloader.resolve-order` to `parent-first`, everything 
works fine


> ClassCastException in TemporaryClassLoaderContext
> -
>
> Key: FLINK-26527
> URL: https://issues.apache.org/jira/browse/FLINK-26527
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.13.5, 1.14.3
>Reporter: shizhengchao
>Priority: Major
>
> When I try to run sql using flink's classloader, I get the following 
> exception:
> {code:java}
> Exception in thread "main" java.lang.ClassCastException: 
> org.codehaus.janino.CompilerFactory cannot be cast to 
> org.codehaus.commons.compiler.ICompilerFactory
>     at 
> org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
>  
> ……{code}
> my code is like this:
> {code:java}
> Configuration configuration = new Configuration();
> configuration.set(CoreOptions.CLASSLOADER_RESOLVE_ORDER, "child-first");
> List dependencies = FlinkClassLoader.getFlinkDependencies("${FLINK_HOME}/lib");
> URLClassLoader classLoader = ClientUtils.buildUserCodeClassLoader(
> dependencies,
> Collections.emptyList(),
> SessionContext.class.getClassLoader(),
> configuration);
> try (TemporaryClassLoaderContext ignored = 
> TemporaryClassLoaderContext.of(classLoader)) {     
>tableEnv.explainSql(sql);
>  
> //CompilerFactoryFactory.getCompilerFactory("org.codehaus.janino.CompilerFactory");
> } {code}
> But, if you change `classloader.resolve-order` to `parent-first`, everything 
> works fine



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #19004: [FLINK-26490][checkpoint] Adjust the MaxParallelism when unnecessary.

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #19004:
URL: https://github.com/apache/flink/pull/19004#issuecomment-1061472524


   
   ## CI report:
   
   * b4fe404fd3eb4563d6e476c83c4d29c947752571 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32669)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] shizhengchao commented on pull request #18863: [FLINK-26033][flink-connector-kafka]Fix the problem that robin does not take effect due to upgrading kafka client to 2.4.1 since Flink1.

2022-03-07 Thread GitBox


shizhengchao commented on pull request #18863:
URL: https://github.com/apache/flink/pull/18863#issuecomment-1061474367


   > 
   
   done


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26490) Adjust the MaxParallelism or remove the MaxParallelism check when unnecessary.

2022-03-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-26490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502768#comment-17502768
 ] 

刘方奇 commented on FLINK-26490:
-

[https://github.com/apache/flink/pull/19004]

[~yunta] Hi, I did a prototype of what I have in mind, but it certainly needs 
more thought and polishing. Could you take a look?

> Adjust the MaxParallelism or remove the MaxParallelism check when unnecessary.
> --
>
> Key: FLINK-26490
> URL: https://issues.apache.org/jira/browse/FLINK-26490
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: 刘方奇
>Priority: Major
>  Labels: pull-request-available
>
> Since Flink introduced key groups and MaxParallelism, Flink can rescale at 
> lower cost.
> But when we want to raise the job parallelism above the MaxParallelism, it is 
> impossible, because many MaxParallelism checks require the new parallelism to 
> be no larger than the MaxParallelism.
> Actually, for an operator that does not contain keyed state, there should be 
> no problem with raising the parallelism above the MaxParallelism, because only 
> keyed state needs MaxParallelism and key groups.
> So should we remove this check, or automatically adjust the MaxParallelism, 
> when we restore operator state that does not contain keyed state?
> It would make restoring a job from a checkpoint easier.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] shizhengchao removed a comment on pull request #18863: [FLINK-26033][flink-connector-kafka]Fix the problem that robin does not take effect due to upgrading kafka client to 2.4.1 since

2022-03-07 Thread GitBox


shizhengchao removed a comment on pull request #18863:
URL: https://github.com/apache/flink/pull/18863#issuecomment-1061471664






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-26531) KafkaWriterITCase.testMetadataPublisher failed on azure

2022-03-07 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-26531:


Assignee: Jingsong Lee

> KafkaWriterITCase.testMetadataPublisher  failed on azure
> 
>
> Key: FLINK-26531
> URL: https://issues.apache.org/jira/browse/FLINK-26531
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Assignee: Jingsong Lee
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> {code:java}
> 022-03-07T13:43:34.3882626Z Mar 07 13:43:34 [ERROR] 
> org.apache.flink.connector.kafka.sink.KafkaWriterITCase.testMetadataPublisher 
>  Time elapsed: 0.205 s  <<< FAILURE!
> 2022-03-07T13:43:34.3883743Z Mar 07 13:43:34 java.lang.AssertionError: 
> 2022-03-07T13:43:34.3884867Z Mar 07 13:43:34 
> 2022-03-07T13:43:34.3885412Z Mar 07 13:43:34 Expecting actual:
> 2022-03-07T13:43:34.3886464Z Mar 07 13:43:34   ["testMetadataPublisher-0@0",
> 2022-03-07T13:43:34.3887361Z Mar 07 13:43:34 "testMetadataPublisher-0@1",
> 2022-03-07T13:43:34.3888222Z Mar 07 13:43:34 "testMetadataPublisher-0@2",
> 2022-03-07T13:43:34.333Z Mar 07 13:43:34 "testMetadataPublisher-0@3",
> 2022-03-07T13:43:34.3892032Z Mar 07 13:43:34 "testMetadataPublisher-0@4",
> 2022-03-07T13:43:34.3893140Z Mar 07 13:43:34 "testMetadataPublisher-0@5",
> 2022-03-07T13:43:34.3893849Z Mar 07 13:43:34 "testMetadataPublisher-0@6",
> 2022-03-07T13:43:34.3895077Z Mar 07 13:43:34 "testMetadataPublisher-0@7",
> 2022-03-07T13:43:34.3895779Z Mar 07 13:43:34 "testMetadataPublisher-0@8",
> 2022-03-07T13:43:34.3896423Z Mar 07 13:43:34 "testMetadataPublisher-0@9",
> 2022-03-07T13:43:34.3897164Z Mar 07 13:43:34 "testMetadataPublisher-0@10",
> 2022-03-07T13:43:34.3897792Z Mar 07 13:43:34 "testMetadataPublisher-0@11",
> 2022-03-07T13:43:34.3949208Z Mar 07 13:43:34 "testMetadataPublisher-0@12",
> 2022-03-07T13:43:34.3950956Z Mar 07 13:43:34 "testMetadataPublisher-0@13",
> 2022-03-07T13:43:34.3952287Z Mar 07 13:43:34 "testMetadataPublisher-0@14",
> 2022-03-07T13:43:34.3954341Z Mar 07 13:43:34 "testMetadataPublisher-0@15",
> 2022-03-07T13:43:34.3955834Z Mar 07 13:43:34 "testMetadataPublisher-0@16",
> 2022-03-07T13:43:34.3957048Z Mar 07 13:43:34 "testMetadataPublisher-0@17",
> 2022-03-07T13:43:34.3958287Z Mar 07 13:43:34 "testMetadataPublisher-0@18",
> 2022-03-07T13:43:34.3959519Z Mar 07 13:43:34 "testMetadataPublisher-0@19",
> 2022-03-07T13:43:34.3960798Z Mar 07 13:43:34 "testMetadataPublisher-0@20",
> 2022-03-07T13:43:34.3961973Z Mar 07 13:43:34 "testMetadataPublisher-0@21",
> 2022-03-07T13:43:34.3963302Z Mar 07 13:43:34 "testMetadataPublisher-0@22",
> 2022-03-07T13:43:34.3964563Z Mar 07 13:43:34 "testMetadataPublisher-0@23",
> 2022-03-07T13:43:34.3966941Z Mar 07 13:43:34 "testMetadataPublisher-0@24",
> 2022-03-07T13:43:34.3968246Z Mar 07 13:43:34 "testMetadataPublisher-0@25",
> 2022-03-07T13:43:34.3969452Z Mar 07 13:43:34 "testMetadataPublisher-0@26",
> 2022-03-07T13:43:34.3970656Z Mar 07 13:43:34 "testMetadataPublisher-0@27",
> 2022-03-07T13:43:34.3971853Z Mar 07 13:43:34 "testMetadataPublisher-0@28",
> 2022-03-07T13:43:34.3974163Z Mar 07 13:43:34 "testMetadataPublisher-0@29",
> 2022-03-07T13:43:34.3975441Z Mar 07 13:43:34 "testMetadataPublisher-0@30",
> 2022-03-07T13:43:34.3976380Z Mar 07 13:43:34 "testMetadataPublisher-0@31",
> 2022-03-07T13:43:34.3977278Z Mar 07 13:43:34 "testMetadataPublisher-0@32",
> 2022-03-07T13:43:34.3978197Z Mar 07 13:43:34 "testMetadataPublisher-0@33",
> 2022-03-07T13:43:34.3979120Z Mar 07 13:43:34 "testMetadataPublisher-0@34",
> 2022-03-07T13:43:34.3980051Z Mar 07 13:43:34 "testMetadataPublisher-0@35",
> 2022-03-07T13:43:34.3981017Z Mar 07 13:43:34 "testMetadataPublisher-0@36",
> 2022-03-07T13:43:34.3981952Z Mar 07 13:43:34 "testMetadataPublisher-0@37",
> 2022-03-07T13:43:34.3982975Z Mar 07 13:43:34 "testMetadataPublisher-0@38",
> 2022-03-07T13:43:34.3983882Z Mar 07 13:43:34 "testMetadataPublisher-0@39",
> 2022-03-07T13:43:34.3984940Z Mar 07 13:43:34 "testMetadataPublisher-0@40",
> 2022-03-07T13:43:34.3985838Z Mar 07 13:43:34 "testMetadataPublisher-0@41",
> 2022-03-07T13:43:34.3986702Z Mar 07 13:43:34 "testMetadataPublisher-0@42",
> 2022-03-07T13:43:34.3987661Z Mar 07 13:43:34 "testMetadataPublisher-0@43",
> 2022-03-07T13:43:34.3988564Z Mar 07 13:43:34 "testMetadataPublisher-0@44",
> 2022-03-07T13:43:34.3989444Z Mar 07 13:43:34 "testMetadataPublisher-0@45",
> 2022-03-07T13:43:34.3990347Z Mar 07 13:43:34 "testMetadataPublisher-0@46",
> 2022-03-07T13:43:34.3991206Z Mar 07 13:43:34 "testMetadataPub

[GitHub] [flink] flinkbot commented on pull request #19004: [FLINK-26490][checkpoint] Adjust the MaxParallelism when unnecessary.

2022-03-07 Thread GitBox


flinkbot commented on pull request #19004:
URL: https://github.com/apache/flink/pull/19004#issuecomment-1061472524


   
   ## CI report:
   
   * b4fe404fd3eb4563d6e476c83c4d29c947752571 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-26534) shuffle by sink's primary key should cover the case that input changelog stream has a different parallelism

2022-03-07 Thread lincoln lee (Jira)
lincoln lee created FLINK-26534:
---

 Summary: shuffle by sink's primary key should cover the case that 
input changelog stream has a different parallelism
 Key: FLINK-26534
 URL: https://issues.apache.org/jira/browse/FLINK-26534
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.15.0
Reporter: lincoln lee


FLINK-20370 fixed the wrong result produced when the sink's primary key is not 
the same as the query's, and introduced a strategy that automatically keys by 
the sink's primary key for append streams when the sink's parallelism differs 
from the input stream's.
But one case still remains to be solved:
for a changelog stream whose upsert key equals the sink's primary key, the 
sink's parallelism can still be changed by the user (via sinks that implement 
the `ParallelismProvider` interface, e.g., KafkaDynamicSink); this case should 
be covered as well.





--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] shizhengchao commented on pull request #18863: [FLINK-26033][flink-connector-kafka]Fix the problem that robin does not take effect due to upgrading kafka client to 2.4.1 since Flink1.

2022-03-07 Thread GitBox


shizhengchao commented on pull request #18863:
URL: https://github.com/apache/flink/pull/18863#issuecomment-1061471664


   use CRLF or LF?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-26490) Adjust the MaxParallelism or remove the MaxParallelism check when unnecessary.

2022-03-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26490:
---
Labels: pull-request-available  (was: )

> Adjust the MaxParallelism or remove the MaxParallelism check when unnecessary.
> --
>
> Key: FLINK-26490
> URL: https://issues.apache.org/jira/browse/FLINK-26490
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: 刘方奇
>Priority: Major
>  Labels: pull-request-available
>
> Since Flink introduced key groups and MaxParallelism, Flink can rescale at 
> lower cost.
> But when we want to raise the job parallelism above the MaxParallelism, it is 
> impossible, because many MaxParallelism checks require the new parallelism to 
> be no larger than the MaxParallelism.
> Actually, for an operator that does not contain keyed state, there should be 
> no problem with raising the parallelism above the MaxParallelism, because only 
> keyed state needs MaxParallelism and key groups.
> So should we remove this check, or automatically adjust the MaxParallelism, 
> when we restore operator state that does not contain keyed state?
> It would make restoring a job from a checkpoint easier.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] liufangqi opened a new pull request #19004: [FLINK-26490][checkpoint] Adjust the MaxParallelism when unnecessary.

2022-03-07 Thread GitBox


liufangqi opened a new pull request #19004:
URL: https://github.com/apache/flink/pull/19004


   
   
   ## What is the purpose of the change
   
   Since Flink introduced key groups and MaxParallelism, Flink can rescale at 
lower cost.
   But when we want to raise the job parallelism above the MaxParallelism, it is 
impossible, because many MaxParallelism checks require the new parallelism to be 
no larger than the MaxParallelism.
   
   Actually, for an operator that does not contain keyed state, there should be 
no problem with raising the parallelism above the MaxParallelism, because only 
keyed state needs MaxParallelism and key groups.
   
   So should we remove this check, or automatically adjust the MaxParallelism, 
when we restore operator state that does not contain keyed state?
   
   It would make restoring a job from a checkpoint easier.
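   
   A minimal sketch of the two knobs involved (this is not the patch itself; it 
only shows where parallelism and maxParallelism are configured, with a job that 
holds no keyed state):
   
```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MaxParallelismIllustration {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // A checkpoint taken with maxParallelism = 128 ...
        env.setMaxParallelism(128);

        // ... currently cannot be restored with parallelism = 256, even though this
        // pipeline has no keyed state and therefore no key groups to redistribute.
        env.setParallelism(256);

        env.fromElements(1, 2, 3).print();

        env.execute("max-parallelism illustration");
    }
}
```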
   
   
   ## Brief change log
   
   
   ## Verifying this change
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-25684) Support enhanced show databases syntax

2022-03-07 Thread Moses (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17477583#comment-17477583
 ] 

Moses edited comment on FLINK-25684 at 3/8/22, 6:59 AM:


Hi [~lzljs3620320] , could you please help to check this issue ~


was (Author: zhangchaoming):
[~jark]  Could you please help to check this issue ~

> Support enhanced show databases syntax
> --
>
> Key: FLINK-25684
> URL: https://issues.apache.org/jira/browse/FLINK-25684
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Moses
>Priority: Major
>  Labels: pull-request-available
>
> Enhanced `show databases` statements like `show databases from like 'db%'` 
> are broadly supported in many popular SQL engines such as Spark SQL/MySQL.
> We could use such a statement to easily show only the databases we want.
> h3. SHOW DATABASES [ LIKE regex_pattern ]
> Examples:
> {code:java}
> Flink SQL> create database db1;
> [INFO] Execute statement succeed.
> Flink SQL> create database db1_1;
> [INFO] Execute statement succeed.
> Flink SQL> create database pre_db;
> [INFO] Execute statement succeed.
> Flink SQL> show databases;
> +------------------+
> |    database name |
> +------------------+
> | default_database |
> |              db1 |
> |            db1_1 |
> |           pre_db |
> +------------------+
> 4 rows in set
> Flink SQL> show databases like 'db1';
> +---------------+
> | database name |
> +---------------+
> |           db1 |
> +---------------+
> 1 row in set
> Flink SQL> show databases like 'db%';
> +---------------+
> | database name |
> +---------------+
> |           db1 |
> |         db1_1 |
> +---------------+
> 2 rows in set
> Flink SQL> show databases like '%db%';
> +---------------+
> | database name |
> +---------------+
> |           db1 |
> |         db1_1 |
> |        pre_db |
> +---------------+
> 3 rows in set
> Flink SQL> show databases like '%db';
> +---------------+
> | database name |
> +---------------+
> |        pre_db |
> +---------------+
> 1 row in set
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] ZhangChaoming removed a comment on pull request #18386: [FLINK-25684][table] Support enhanced `show databases` syntax

2022-03-07 Thread GitBox


ZhangChaoming removed a comment on pull request #18386:
URL: https://github.com/apache/flink/pull/18386#issuecomment-1038939945


   @wuchong  Hi, Could you please help give a review ? Thanks very much !


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-26508) Webhook should only validate on /validate endpoint and log errors for others

2022-03-07 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-26508.
--
Resolution: Fixed

merged: b75c05807e43129016741277f16ca78634f1e423

> Webhook should only validate on /validate endpoint and log errors for others
> 
>
> Key: FLINK-26508
> URL: https://issues.apache.org/jira/browse/FLINK-26508
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
>
> The current webhook implementations accept requests on all paths and execute 
> validations. 
> We should restrict this to only /validate and log errors on other rest paths.
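
A toy sketch of the dispatch rule described above (not the operator's actual 
handler; the status codes chosen for rejected paths are just an assumption):

{code:java}
import java.util.logging.Logger;

class WebhookDispatchSketch {

    private static final Logger LOG = Logger.getLogger(WebhookDispatchSketch.class.getName());

    /** Returns the HTTP status code the webhook should answer with for a given path. */
    int handle(String path, String requestBody) {
        if ("/validate".equals(path)) {
            // Validation runs only for the /validate endpoint.
            return validate(requestBody) ? 200 : 422;
        }
        // Every other path is logged as an error and rejected instead of validated.
        LOG.severe("Unexpected request path: " + path);
        return 404;
    }

    private boolean validate(String requestBody) {
        // Placeholder validation logic.
        return requestBody != null && !requestBody.isEmpty();
    }
}
{code}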



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-26472) Introduce Savepoint object in JobStatus

2022-03-07 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-26472.
--
Resolution: Duplicate

I am closing this as this is already part of the manual savepoint trigger work

> Introduce Savepoint object in JobStatus
> ---
>
> Key: FLINK-26472
> URL: https://issues.apache.org/jira/browse/FLINK-26472
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Assignee: Matyas Orhidi
>Priority: Major
>
> We currently store only the `savepointLocation` as a String in the JobState. 
> It would be beneficial to introduce a Savepoint object with a few additional 
> fields instead:
>  * {{String location}}
>  * {{String timestamp}}
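
A sketch of the proposed object (field names are taken from the ticket; the 
constructor and accessors are just illustrative):

{code:java}
/** Proposed status object holding more than just the savepoint path. */
public class Savepoint {

    private String location;
    private String timestamp;

    public Savepoint() {
        // default constructor for (de)serialization of the CR status
    }

    public Savepoint(String location, String timestamp) {
        this.location = location;
        this.timestamp = timestamp;
    }

    public String getLocation() { return location; }

    public void setLocation(String location) { this.location = location; }

    public String getTimestamp() { return timestamp; }

    public void setTimestamp(String timestamp) { this.timestamp = timestamp; }
}
{code}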



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26137) Create webhook REST api test

2022-03-07 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-26137:
--

Assignee: (was: Nicholas Jiang)

> Create webhook REST api test
> 
>
> Key: FLINK-26137
> URL: https://issues.apache.org/jira/browse/FLINK-26137
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
>
> Add test to validate the webhook rest endpoint and make sure it returns the 
> expected responses, status codes etc.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26531) KafkaWriterITCase.testMetadataPublisher failed on azure

2022-03-07 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502758#comment-17502758
 ] 

Jingsong Lee commented on FLINK-26531:
--

Thanks for reporting. I will take a look~

> KafkaWriterITCase.testMetadataPublisher  failed on azure
> 
>
> Key: FLINK-26531
> URL: https://issues.apache.org/jira/browse/FLINK-26531
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> {code:java}
> 022-03-07T13:43:34.3882626Z Mar 07 13:43:34 [ERROR] 
> org.apache.flink.connector.kafka.sink.KafkaWriterITCase.testMetadataPublisher 
>  Time elapsed: 0.205 s  <<< FAILURE!
> 2022-03-07T13:43:34.3883743Z Mar 07 13:43:34 java.lang.AssertionError: 
> 2022-03-07T13:43:34.3884867Z Mar 07 13:43:34 
> 2022-03-07T13:43:34.3885412Z Mar 07 13:43:34 Expecting actual:
> 2022-03-07T13:43:34.3886464Z Mar 07 13:43:34   ["testMetadataPublisher-0@0",
> 2022-03-07T13:43:34.3887361Z Mar 07 13:43:34 "testMetadataPublisher-0@1",
> 2022-03-07T13:43:34.3888222Z Mar 07 13:43:34 "testMetadataPublisher-0@2",
> 2022-03-07T13:43:34.333Z Mar 07 13:43:34 "testMetadataPublisher-0@3",
> 2022-03-07T13:43:34.3892032Z Mar 07 13:43:34 "testMetadataPublisher-0@4",
> 2022-03-07T13:43:34.3893140Z Mar 07 13:43:34 "testMetadataPublisher-0@5",
> 2022-03-07T13:43:34.3893849Z Mar 07 13:43:34 "testMetadataPublisher-0@6",
> 2022-03-07T13:43:34.3895077Z Mar 07 13:43:34 "testMetadataPublisher-0@7",
> 2022-03-07T13:43:34.3895779Z Mar 07 13:43:34 "testMetadataPublisher-0@8",
> 2022-03-07T13:43:34.3896423Z Mar 07 13:43:34 "testMetadataPublisher-0@9",
> 2022-03-07T13:43:34.3897164Z Mar 07 13:43:34 "testMetadataPublisher-0@10",
> 2022-03-07T13:43:34.3897792Z Mar 07 13:43:34 "testMetadataPublisher-0@11",
> 2022-03-07T13:43:34.3949208Z Mar 07 13:43:34 "testMetadataPublisher-0@12",
> 2022-03-07T13:43:34.3950956Z Mar 07 13:43:34 "testMetadataPublisher-0@13",
> 2022-03-07T13:43:34.3952287Z Mar 07 13:43:34 "testMetadataPublisher-0@14",
> 2022-03-07T13:43:34.3954341Z Mar 07 13:43:34 "testMetadataPublisher-0@15",
> 2022-03-07T13:43:34.3955834Z Mar 07 13:43:34 "testMetadataPublisher-0@16",
> 2022-03-07T13:43:34.3957048Z Mar 07 13:43:34 "testMetadataPublisher-0@17",
> 2022-03-07T13:43:34.3958287Z Mar 07 13:43:34 "testMetadataPublisher-0@18",
> 2022-03-07T13:43:34.3959519Z Mar 07 13:43:34 "testMetadataPublisher-0@19",
> 2022-03-07T13:43:34.3960798Z Mar 07 13:43:34 "testMetadataPublisher-0@20",
> 2022-03-07T13:43:34.3961973Z Mar 07 13:43:34 "testMetadataPublisher-0@21",
> 2022-03-07T13:43:34.3963302Z Mar 07 13:43:34 "testMetadataPublisher-0@22",
> 2022-03-07T13:43:34.3964563Z Mar 07 13:43:34 "testMetadataPublisher-0@23",
> 2022-03-07T13:43:34.3966941Z Mar 07 13:43:34 "testMetadataPublisher-0@24",
> 2022-03-07T13:43:34.3968246Z Mar 07 13:43:34 "testMetadataPublisher-0@25",
> 2022-03-07T13:43:34.3969452Z Mar 07 13:43:34 "testMetadataPublisher-0@26",
> 2022-03-07T13:43:34.3970656Z Mar 07 13:43:34 "testMetadataPublisher-0@27",
> 2022-03-07T13:43:34.3971853Z Mar 07 13:43:34 "testMetadataPublisher-0@28",
> 2022-03-07T13:43:34.3974163Z Mar 07 13:43:34 "testMetadataPublisher-0@29",
> 2022-03-07T13:43:34.3975441Z Mar 07 13:43:34 "testMetadataPublisher-0@30",
> 2022-03-07T13:43:34.3976380Z Mar 07 13:43:34 "testMetadataPublisher-0@31",
> 2022-03-07T13:43:34.3977278Z Mar 07 13:43:34 "testMetadataPublisher-0@32",
> 2022-03-07T13:43:34.3978197Z Mar 07 13:43:34 "testMetadataPublisher-0@33",
> 2022-03-07T13:43:34.3979120Z Mar 07 13:43:34 "testMetadataPublisher-0@34",
> 2022-03-07T13:43:34.3980051Z Mar 07 13:43:34 "testMetadataPublisher-0@35",
> 2022-03-07T13:43:34.3981017Z Mar 07 13:43:34 "testMetadataPublisher-0@36",
> 2022-03-07T13:43:34.3981952Z Mar 07 13:43:34 "testMetadataPublisher-0@37",
> 2022-03-07T13:43:34.3982975Z Mar 07 13:43:34 "testMetadataPublisher-0@38",
> 2022-03-07T13:43:34.3983882Z Mar 07 13:43:34 "testMetadataPublisher-0@39",
> 2022-03-07T13:43:34.3984940Z Mar 07 13:43:34 "testMetadataPublisher-0@40",
> 2022-03-07T13:43:34.3985838Z Mar 07 13:43:34 "testMetadataPublisher-0@41",
> 2022-03-07T13:43:34.3986702Z Mar 07 13:43:34 "testMetadataPublisher-0@42",
> 2022-03-07T13:43:34.3987661Z Mar 07 13:43:34 "testMetadataPublisher-0@43",
> 2022-03-07T13:43:34.3988564Z Mar 07 13:43:34 "testMetadataPublisher-0@44",
> 2022-03-07T13:43:34.3989444Z Mar 07 13:43:34 "testMetadataPublisher-0@45",
> 2022-03-07T13:43:34.3990347Z Mar 07 13:43:34 "testMetadataPublisher-0@46",
> 2022-03-07T13:43:34.3991206Z Mar 07

[GitHub] [flink] shizhengchao commented on pull request #18863: [FLINK-26033][flink-connector-kafka]Fix the problem that robin does not take effect due to upgrading kafka client to 2.4.1 since Flink1.

2022-03-07 Thread GitBox


shizhengchao commented on pull request #18863:
URL: https://github.com/apache/flink/pull/18863#issuecomment-1061466075


   > 
   
   I tried it, but after running, all files show up as changed files.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-26137) Create webhook REST api test

2022-03-07 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-26137:
--

Assignee: Nicholas Jiang

> Create webhook REST api test
> 
>
> Key: FLINK-26137
> URL: https://issues.apache.org/jira/browse/FLINK-26137
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Nicholas Jiang
>Priority: Major
>
> Add test to validate the webhook rest endpoint and make sure it returns the 
> expected responses, status codes etc.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26533) KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee failed on azure due to delete topic timeout

2022-03-07 Thread Yun Gao (Jira)
Yun Gao created FLINK-26533:
---

 Summary: KafkaSinkITCase.testRecoveryWithExactlyOnceGuarantee 
failed on azure due to delete topic timeout
 Key: FLINK-26533
 URL: https://issues.apache.org/jira/browse/FLINK-26533
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.14.3
Reporter: Yun Gao



{code:java}
Mar 07 02:42:17 [ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 174.077 s <<< FAILURE! - in 
org.apache.flink.connector.kafka.sink.KafkaSinkITCase
Mar 07 02:42:17 [ERROR] testRecoveryWithAtLeastOnceGuarantee  Time elapsed: 
63.913 s  <<< ERROR!
Mar 07 02:42:17 java.util.concurrent.TimeoutException
Mar 07 02:42:17 at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
Mar 07 02:42:17 at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
Mar 07 02:42:17 at 
org.apache.flink.connector.kafka.sink.KafkaSinkITCase.deleteTestTopic(KafkaSinkITCase.java:429)
Mar 07 02:42:17 at 
org.apache.flink.connector.kafka.sink.KafkaSinkITCase.tearDown(KafkaSinkITCase.java:160)
Mar 07 02:42:17 at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Mar 07 02:42:17 at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Mar 07 02:42:17 at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Mar 07 02:42:17 at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
Mar 07 02:42:17 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Mar 07 02:42:17 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Mar 07 02:42:17 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Mar 07 02:42:17 at 
org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
Mar 07 02:42:17 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
Mar 07 02:42:17 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Mar 07 02:42:17 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Mar 07 02:42:17 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Mar 07 02:42:17 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
Mar 07 02:42:17 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Mar 07 02:42:17 at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
Mar 07 02:42:17 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
Mar 07 02:42:17 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Mar 07 02:42:17 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Mar 07 02:42:17 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Mar 07 02:42:17 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Mar 07 02:42:17 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Mar 07 02:42:17 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Mar 07 02:42:17 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Mar 07 02:42:17 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Mar 07 02:42:17 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Mar 07 02:42:17 at 
org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
Mar 07 02:42:17 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Mar 07 02:42:17 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)

{code}

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32582&view=logs&j=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f&t=918e890f-5ed9-5212-a25e-962628fb4bc5&l=7345



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24960) YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots ha

2022-03-07 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502757#comment-17502757
 ] 

Yun Gao commented on FLINK-24960:
-

1.14: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32582&view=logs&j=a5ef94ef-68c2-57fd-3794-dc108ed1c495&t=2c68b137-b01d-55c9-e603-3ff3f320364b&l=34954]

 

> YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots
>  hangs on azure
> ---
>
> Key: FLINK-24960
> URL: https://issues.apache.org/jira/browse/FLINK-24960
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> {code:java}
> Nov 18 22:37:08 
> 
> Nov 18 22:37:08 Test 
> testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots(org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase)
>  is running.
> Nov 18 22:37:08 
> 
> Nov 18 22:37:25 22:37:25,470 [main] INFO  
> org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase [] - Extracted 
> hostname:port: 5718b812c7ab:38622
> Nov 18 22:52:36 
> ==
> Nov 18 22:52:36 Process produced no output for 900 seconds.
> Nov 18 22:52:36 
> ==
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=26722&view=logs&j=f450c1a5-64b1-5955-e215-49cb1ad5ec88&t=cc452273-9efa-565d-9db8-ef62a38a0c10&l=36395



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24960) YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots hang

2022-03-07 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24960:

Affects Version/s: 1.14.3

> YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots
>  hangs on azure
> ---
>
> Key: FLINK-24960
> URL: https://issues.apache.org/jira/browse/FLINK-24960
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> {code:java}
> Nov 18 22:37:08 
> 
> Nov 18 22:37:08 Test 
> testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots(org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase)
>  is running.
> Nov 18 22:37:08 
> 
> Nov 18 22:37:25 22:37:25,470 [main] INFO  
> org.apache.flink.yarn.YARNSessionCapacitySchedulerITCase [] - Extracted 
> hostname:port: 5718b812c7ab:38622
> Nov 18 22:52:36 
> ==
> Nov 18 22:52:36 Process produced no output for 900 seconds.
> Nov 18 22:52:36 
> ==
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=26722&view=logs&j=f450c1a5-64b1-5955-e215-49cb1ad5ec88&t=cc452273-9efa-565d-9db8-ef62a38a0c10&l=36395



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26501) Quickstarts Scala nightly end-to-end test failed on azure due to checkponts failed and logs contains exceptions

2022-03-07 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26501:

Affects Version/s: 1.15.0

> Quickstarts Scala nightly end-to-end test failed on azure due to checkponts 
> failed and logs contains exceptions
> ---
>
> Key: FLINK-26501
> URL: https://issues.apache.org/jira/browse/FLINK-26501
> Project: Flink
>  Issue Type: Bug
>  Components: API / Scala, Runtime / Checkpointing
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Yun Gao
>Assignee: Anton Kalashnikov
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> 2022-03-05T02:35:36.4040037Z Mar 05 02:35:36 2022-03-05 02:35:34,334 INFO  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Triggering 
> checkpoint 1 (type=CHECKPOINT) @ 1646447734295 for job 
> b236087395260dc34648b84c2b86d6e8.
> 2022-03-05T02:35:36.4041701Z Mar 05 02:35:36 2022-03-05 02:35:34,387 INFO  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Decline 
> checkpoint 1 by task e8a324cae6bf452d32db6797bbbafad0 of job 
> b236087395260dc34648b84c2b86d6e8 at 127.0.0.1:45911-0a50f5 @ localhost 
> (dataPort=44047).
> 2022-03-05T02:35:36.4043279Z Mar 05 02:35:36 
> org.apache.flink.util.SerializedThrowable: Task name with subtask : Source: 
> Sequence Source (Deprecated) -> Map -> Sink: Unnamed (1/1)#0 Failure reason: 
> Checkpoint was declined (task is closing)
> 2022-03-05T02:35:36.4044531Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.taskmanager.Task.declineCheckpoint(Task.java:1389) 
> ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4045729Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.taskmanager.Task.declineCheckpoint(Task.java:1382) 
> ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4047172Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.taskmanager.Task.triggerCheckpointBarrier(Task.java:1348)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4049092Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.triggerCheckpoint(TaskExecutor.java:956)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4050158Z Mar 05 02:35:36  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_322]
> 2022-03-05T02:35:36.4050929Z Mar 05 02:35:36  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_322]
> 2022-03-05T02:35:36.4051776Z Mar 05 02:35:36  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_322]
> 2022-03-05T02:35:36.4052559Z Mar 05 02:35:36  at 
> java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_322]
> 2022-03-05T02:35:36.4053373Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:316)
>  ~[?:?]
> 2022-03-05T02:35:36.4054849Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
>  ~[?:?]
> 2022-03-05T02:35:36.4055685Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:314)
>  ~[?:?]
> 2022-03-05T02:35:36.4056461Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
>  ~[?:?]
> 2022-03-05T02:35:36.4057219Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
>  ~[?:?]
> 2022-03-05T02:35:36.4057899Z Mar 05 02:35:36  at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]
> 2022-03-05T02:35:36.4059666Z Mar 05 02:35:36  at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]
> 2022-03-05T02:35:36.4061005Z Mar 05 02:35:36  at 
> scala.PartialFunction.applyOrElse(PartialFunction.scala:123) 
> ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4062324Z Mar 05 02:35:36  at 
> scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) 
> ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4063941Z Mar 05 02:35:36  at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) ~[?:?]
> 2022-03-05T02:35:36.4065009Z Mar 05 02:35:36  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) 
> ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4066205Z Mar 05 02:35:36  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) 
> ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4067514Z Mar 05 02:35:36  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scal

[jira] [Commented] (FLINK-26501) Quickstarts Scala nightly end-to-end test failed on azure due to checkponts failed and logs contains exceptions

2022-03-07 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502756#comment-17502756
 ] 

Yun Gao commented on FLINK-26501:
-

1.15: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32594&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=070ff179-953e-5bda-71fa-d6599415701c&l=17640

> Quickstarts Scala nightly end-to-end test failed on azure due to checkponts 
> failed and logs contains exceptions
> ---
>
> Key: FLINK-26501
> URL: https://issues.apache.org/jira/browse/FLINK-26501
> Project: Flink
>  Issue Type: Bug
>  Components: API / Scala, Runtime / Checkpointing
>Affects Versions: 1.14.3
>Reporter: Yun Gao
>Assignee: Anton Kalashnikov
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> 2022-03-05T02:35:36.4040037Z Mar 05 02:35:36 2022-03-05 02:35:34,334 INFO  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Triggering 
> checkpoint 1 (type=CHECKPOINT) @ 1646447734295 for job 
> b236087395260dc34648b84c2b86d6e8.
> 2022-03-05T02:35:36.4041701Z Mar 05 02:35:36 2022-03-05 02:35:34,387 INFO  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Decline 
> checkpoint 1 by task e8a324cae6bf452d32db6797bbbafad0 of job 
> b236087395260dc34648b84c2b86d6e8 at 127.0.0.1:45911-0a50f5 @ localhost 
> (dataPort=44047).
> 2022-03-05T02:35:36.4043279Z Mar 05 02:35:36 
> org.apache.flink.util.SerializedThrowable: Task name with subtask : Source: 
> Sequence Source (Deprecated) -> Map -> Sink: Unnamed (1/1)#0 Failure reason: 
> Checkpoint was declined (task is closing)
> 2022-03-05T02:35:36.4044531Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.taskmanager.Task.declineCheckpoint(Task.java:1389) 
> ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4045729Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.taskmanager.Task.declineCheckpoint(Task.java:1382) 
> ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4047172Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.taskmanager.Task.triggerCheckpointBarrier(Task.java:1348)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4049092Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.triggerCheckpoint(TaskExecutor.java:956)
>  ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4050158Z Mar 05 02:35:36  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_322]
> 2022-03-05T02:35:36.4050929Z Mar 05 02:35:36  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_322]
> 2022-03-05T02:35:36.4051776Z Mar 05 02:35:36  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_322]
> 2022-03-05T02:35:36.4052559Z Mar 05 02:35:36  at 
> java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_322]
> 2022-03-05T02:35:36.4053373Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:316)
>  ~[?:?]
> 2022-03-05T02:35:36.4054849Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
>  ~[?:?]
> 2022-03-05T02:35:36.4055685Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:314)
>  ~[?:?]
> 2022-03-05T02:35:36.4056461Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
>  ~[?:?]
> 2022-03-05T02:35:36.4057219Z Mar 05 02:35:36  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
>  ~[?:?]
> 2022-03-05T02:35:36.4057899Z Mar 05 02:35:36  at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]
> 2022-03-05T02:35:36.4059666Z Mar 05 02:35:36  at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]
> 2022-03-05T02:35:36.4061005Z Mar 05 02:35:36  at 
> scala.PartialFunction.applyOrElse(PartialFunction.scala:123) 
> ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4062324Z Mar 05 02:35:36  at 
> scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) 
> ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4063941Z Mar 05 02:35:36  at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) ~[?:?]
> 2022-03-05T02:35:36.4065009Z Mar 05 02:35:36  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) 
> ~[flink-dist_2.11-1.14-SNAPSHOT.jar:1.14-SNAPSHOT]
> 2022-03-05T02:35:36.4066205Z Mar 05 02:35:36  at 
> scala.PartialFunction$OrElse.applyOrE

[jira] [Closed] (FLINK-26423) Integrate log store to StoreSink

2022-03-07 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-26423.

Resolution: Fixed

master: 7f91a853e70bb558fd2e2aa619c83558b9bccaa7

> Integrate log store to StoreSink
> 
>
> Key: FLINK-26423
> URL: https://issues.apache.org/jira/browse/FLINK-26423
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.1.0
>
>
> StoreSink is a hybrid sink. We need to:
>  * Introduce LocalCommitterOperator: Kafka can only commit with the same 
> producer id, because the Kafka server does not allow starting multiple 
> producer instances with a single producer id, so we have to reuse the 
> producer instance within the same process to commit (see the sketch below).
>  * The Committable can not be serialized: Kafka reuses the producer instance 
> inside the Committable, and that producer will not be serialized.
>  * Integrate the log store into StoreSink.
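As an illustration of the per-process reuse idea, a minimal, hypothetical sketch follows;
it is not the actual LocalCommitterOperator, and the registry name and factory wiring are
assumptions:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * Hypothetical per-process registry: every commit for the same transactional id
 * goes through the one cached producer instance instead of creating a new one.
 */
public final class ProducerRegistry<P> {

    private final Map<String, P> producersById = new ConcurrentHashMap<>();
    private final Function<String, P> producerFactory;

    public ProducerRegistry(Function<String, P> producerFactory) {
        this.producerFactory = producerFactory;
    }

    /** Returns the single producer for this id, creating it on first access. */
    public P getOrCreate(String transactionalId) {
        return producersById.computeIfAbsent(transactionalId, producerFactory);
    }
}
{code}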



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26526) Record hasNull and allNull instead of nullCount in FieldStats

2022-03-07 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-26526:

Summary: Record hasNull and allNull instead of nullCount in FieldStats  
(was: Record hasNull and allNull instead of nullCount in fieldStats)

> Record hasNull and allNull instead of nullCount in FieldStats
> -
>
> Key: FLINK-26526
> URL: https://issues.apache.org/jira/browse/FLINK-26526
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Caizhi Weng
>Priority: Major
>
> Currently we aren't strongly relying on {{nullCount}}. Also, some formats 
> (for example orc) do not support {{nullCount}} statistics. So we can record 
> {{hasNull}} and {{allNull}} instead of {{nullCount}}.
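A hedged sketch of the proposed change (this is not the actual table-store FieldStats
class; the min/max fields are only assumed to stay as they are today):

{code:java}
/** Hypothetical FieldStats recording hasNull/allNull instead of a null count. */
public class FieldStats {

    private final Object minValue;
    private final Object maxValue;
    private final boolean hasNull; // the field contains at least one null
    private final boolean allNull; // every value of the field is null

    public FieldStats(Object minValue, Object maxValue, boolean hasNull, boolean allNull) {
        this.minValue = minValue;
        this.maxValue = maxValue;
        this.hasNull = hasNull;
        this.allNull = allNull;
    }

    public Object minValue() { return minValue; }

    public Object maxValue() { return maxValue; }

    public boolean hasNull() { return hasNull; }

    public boolean allNull() { return allNull; }
}
{code}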



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-table-store] JingsongLi merged pull request #28: [FLINK-26423] Integrate log store to StoreSink

2022-03-07 Thread GitBox


JingsongLi merged pull request #28:
URL: https://github.com/apache/flink-table-store/pull/28


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25819) NetworkBufferPoolTest.testIsAvailableOrNotAfterRequestAndRecycleMultiSegments fails on AZP

2022-03-07 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502752#comment-17502752
 ] 

Yun Gao commented on FLINK-25819:
-

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32654&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=c2734c79-73b6-521c-e85a-67c7ecae9107&l=5932]
 The issue also happened on 1.13.

> NetworkBufferPoolTest.testIsAvailableOrNotAfterRequestAndRecycleMultiSegments 
> fails on AZP
> --
>
> Key: FLINK-25819
> URL: https://issues.apache.org/jira/browse/FLINK-25819
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.14.3
>Reporter: Till Rohrmann
>Assignee: Anton Kalashnikov
>Priority: Critical
>  Labels: pull-request-available, stale-critical, test-stability
> Fix For: 1.15.0, 1.14.5
>
>
> The 
> {{NetworkBufferPoolTest.testIsAvailableOrNotAfterRequestAndRecycleMultiSegments}}
>  fails on AZP with:
> {code}
> Jan 26 07:57:03 [ERROR] 
> testIsAvailableOrNotAfterRequestAndRecycleMultiSegments  Time elapsed: 10.028 
> s  <<< ERROR!
> Jan 26 07:57:03 org.junit.runners.model.TestTimedOutException: test timed out 
> after 10 seconds
> Jan 26 07:57:03   at sun.misc.Unsafe.park(Native Method)
> Jan 26 07:57:03   at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> Jan 26 07:57:03   at 
> java.util.concurrent.FutureTask.awaitDone(FutureTask.java:426)
> Jan 26 07:57:03   at 
> java.util.concurrent.FutureTask.get(FutureTask.java:204)
> Jan 26 07:57:03   at 
> org.junit.internal.runners.statements.FailOnTimeout.getResult(FailOnTimeout.java:167)
> Jan 26 07:57:03   at 
> org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:128)
> Jan 26 07:57:03   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
> Jan 26 07:57:03   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Jan 26 07:57:03   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Jan 26 07:57:03   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Jan 26 07:57:03   at java.lang.Thread.run(Thread.java:748)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=30187&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=7350



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26532) SinkMetricsITCase.testMetrics failed on azure

2022-03-07 Thread Yun Gao (Jira)
Yun Gao created FLINK-26532:
---

 Summary: SinkMetricsITCase.testMetrics failed on azure
 Key: FLINK-26532
 URL: https://issues.apache.org/jira/browse/FLINK-26532
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Affects Versions: 1.15.0
Reporter: Yun Gao


{code:java}
Mar 08 05:38:35 [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 6.512 s <<< FAILURE! - in 
org.apache.flink.test.streaming.runtime.SinkMetricsITCase
Mar 08 05:38:35 [ERROR] 
org.apache.flink.test.streaming.runtime.SinkMetricsITCase.testMetrics  Time 
elapsed: 1.607 s  <<< FAILURE!
Mar 08 05:38:35 java.lang.AssertionError: 
Mar 08 05:38:35 
Mar 08 05:38:35 Expected: Counter with <4L>
Mar 08 05:38:35  but: Counter with was <0L>
Mar 08 05:38:35 at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
Mar 08 05:38:35 at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
Mar 08 05:38:35 at 
org.apache.flink.test.streaming.runtime.SinkMetricsITCase.assertSinkMetrics(SinkMetricsITCase.java:139)
Mar 08 05:38:35 at 
org.apache.flink.test.streaming.runtime.SinkMetricsITCase.testMetrics(SinkMetricsITCase.java:113)
Mar 08 05:38:35 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Mar 08 05:38:35 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Mar 08 05:38:35 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Mar 08 05:38:35 at java.lang.reflect.Method.invoke(Method.java:498)
Mar 08 05:38:35 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Mar 08 05:38:35 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Mar 08 05:38:35 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Mar 08 05:38:35 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Mar 08 05:38:35 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Mar 08 05:38:35 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Mar 08 05:38:35 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
Mar 08 05:38:35 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Mar 08 05:38:35 at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
Mar 08 05:38:35 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
Mar 08 05:38:35 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Mar 08 05:38:35 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Mar 08 05:38:35 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Mar 08 05:38:35 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Mar 08 05:38:35 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Mar 08 05:38:35 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Mar 08 05:38:35 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Mar 08 05:38:35 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Mar 08 05:38:35 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Mar 08 05:38:35 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Mar 08 05:38:35 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Mar 08 05:38:35 at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
Mar 08 05:38:35 at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
 {code}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32655&view=logs&j=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3&t=0c010d0c-3dec-5bf1-d408-7b18988b1b2b&l=6032



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #19003: [FLINK-26517][runtime] Normalize the decided parallelism to power of 2 when using adaptive batch scheduler

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #19003:
URL: https://github.com/apache/flink/pull/19003#issuecomment-1061377385


   
   ## CI report:
   
   * fb78d6887a37ac8c242333159011831bab65dcb4 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32665)
 
   * d412278f22b0f79c0af6481398cb697a82da0ad5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32668)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #19003: [FLINK-26517][runtime] Normalize the decided parallelism to power of 2 when using adaptive batch scheduler

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #19003:
URL: https://github.com/apache/flink/pull/19003#issuecomment-1061377385


   
   ## CI report:
   
   * fb78d6887a37ac8c242333159011831bab65dcb4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32665)
 
   * d412278f22b0f79c0af6481398cb697a82da0ad5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24538) ZooKeeperLeaderElectionTest.testLeaderShouldBeCorrectedWhenOverwritten fails with NPE

2022-03-07 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502750#comment-17502750
 ] 

Yun Gao commented on FLINK-24538:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32649&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=7254

> ZooKeeperLeaderElectionTest.testLeaderShouldBeCorrectedWhenOverwritten fails 
> with NPE
> -
>
> Key: FLINK-24538
> URL: https://issues.apache.org/jira/browse/FLINK-24538
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Xintong Song
>Assignee: xmarker
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25020&view=logs&j=f2b08047-82c3-520f-51ee-a30fd6254285&t=3810d23d-4df2-586c-103c-ec14ede6af00&l=7573
> {code}
> Oct 13 22:26:04 [ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 12.355 s <<< FAILURE! - in 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest
> Oct 13 22:26:04 [ERROR] testLeaderShouldBeCorrectedWhenOverwritten  Time 
> elapsed: 1.138 s  <<< ERROR!
> Oct 13 22:26:04 java.lang.NullPointerException
> Oct 13 22:26:04   at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest.testLeaderShouldBeCorrectedWhenOverwritten(ZooKeeperLeaderElectionTest.java:434)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24538) ZooKeeperLeaderElectionTest.testLeaderShouldBeCorrectedWhenOverwritten fails with NPE

2022-03-07 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24538:

Priority: Critical  (was: Major)

> ZooKeeperLeaderElectionTest.testLeaderShouldBeCorrectedWhenOverwritten fails 
> with NPE
> -
>
> Key: FLINK-24538
> URL: https://issues.apache.org/jira/browse/FLINK-24538
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Xintong Song
>Assignee: xmarker
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25020&view=logs&j=f2b08047-82c3-520f-51ee-a30fd6254285&t=3810d23d-4df2-586c-103c-ec14ede6af00&l=7573
> {code}
> Oct 13 22:26:04 [ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 12.355 s <<< FAILURE! - in 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest
> Oct 13 22:26:04 [ERROR] testLeaderShouldBeCorrectedWhenOverwritten  Time 
> elapsed: 1.138 s  <<< ERROR!
> Oct 13 22:26:04 java.lang.NullPointerException
> Oct 13 22:26:04   at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest.testLeaderShouldBeCorrectedWhenOverwritten(ZooKeeperLeaderElectionTest.java:434)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24538) ZooKeeperLeaderElectionTest.testLeaderShouldBeCorrectedWhenOverwritten fails with NPE

2022-03-07 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-24538:

Affects Version/s: 1.15.0

> ZooKeeperLeaderElectionTest.testLeaderShouldBeCorrectedWhenOverwritten fails 
> with NPE
> -
>
> Key: FLINK-24538
> URL: https://issues.apache.org/jira/browse/FLINK-24538
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Xintong Song
>Assignee: xmarker
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25020&view=logs&j=f2b08047-82c3-520f-51ee-a30fd6254285&t=3810d23d-4df2-586c-103c-ec14ede6af00&l=7573
> {code}
> Oct 13 22:26:04 [ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 12.355 s <<< FAILURE! - in 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest
> Oct 13 22:26:04 [ERROR] testLeaderShouldBeCorrectedWhenOverwritten  Time 
> elapsed: 1.138 s  <<< ERROR!
> Oct 13 22:26:04 java.lang.NullPointerException
> Oct 13 22:26:04   at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest.testLeaderShouldBeCorrectedWhenOverwritten(ZooKeeperLeaderElectionTest.java:434)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26531) KafkaWriterITCase.testMetadataPublisher failed on azure

2022-03-07 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502749#comment-17502749
 ] 

Yun Gao commented on FLINK-26531:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32628&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=36037

> KafkaWriterITCase.testMetadataPublisher  failed on azure
> 
>
> Key: FLINK-26531
> URL: https://issues.apache.org/jira/browse/FLINK-26531
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> {code:java}
> 022-03-07T13:43:34.3882626Z Mar 07 13:43:34 [ERROR] 
> org.apache.flink.connector.kafka.sink.KafkaWriterITCase.testMetadataPublisher 
>  Time elapsed: 0.205 s  <<< FAILURE!
> 2022-03-07T13:43:34.3883743Z Mar 07 13:43:34 java.lang.AssertionError: 
> 2022-03-07T13:43:34.3884867Z Mar 07 13:43:34 
> 2022-03-07T13:43:34.3885412Z Mar 07 13:43:34 Expecting actual:
> 2022-03-07T13:43:34.3886464Z Mar 07 13:43:34   ["testMetadataPublisher-0@0",
> 2022-03-07T13:43:34.3887361Z Mar 07 13:43:34 "testMetadataPublisher-0@1",
> 2022-03-07T13:43:34.3888222Z Mar 07 13:43:34 "testMetadataPublisher-0@2",
> 2022-03-07T13:43:34.333Z Mar 07 13:43:34 "testMetadataPublisher-0@3",
> 2022-03-07T13:43:34.3892032Z Mar 07 13:43:34 "testMetadataPublisher-0@4",
> 2022-03-07T13:43:34.3893140Z Mar 07 13:43:34 "testMetadataPublisher-0@5",
> 2022-03-07T13:43:34.3893849Z Mar 07 13:43:34 "testMetadataPublisher-0@6",
> 2022-03-07T13:43:34.3895077Z Mar 07 13:43:34 "testMetadataPublisher-0@7",
> 2022-03-07T13:43:34.3895779Z Mar 07 13:43:34 "testMetadataPublisher-0@8",
> 2022-03-07T13:43:34.3896423Z Mar 07 13:43:34 "testMetadataPublisher-0@9",
> 2022-03-07T13:43:34.3897164Z Mar 07 13:43:34 "testMetadataPublisher-0@10",
> 2022-03-07T13:43:34.3897792Z Mar 07 13:43:34 "testMetadataPublisher-0@11",
> 2022-03-07T13:43:34.3949208Z Mar 07 13:43:34 "testMetadataPublisher-0@12",
> 2022-03-07T13:43:34.3950956Z Mar 07 13:43:34 "testMetadataPublisher-0@13",
> 2022-03-07T13:43:34.3952287Z Mar 07 13:43:34 "testMetadataPublisher-0@14",
> 2022-03-07T13:43:34.3954341Z Mar 07 13:43:34 "testMetadataPublisher-0@15",
> 2022-03-07T13:43:34.3955834Z Mar 07 13:43:34 "testMetadataPublisher-0@16",
> 2022-03-07T13:43:34.3957048Z Mar 07 13:43:34 "testMetadataPublisher-0@17",
> 2022-03-07T13:43:34.3958287Z Mar 07 13:43:34 "testMetadataPublisher-0@18",
> 2022-03-07T13:43:34.3959519Z Mar 07 13:43:34 "testMetadataPublisher-0@19",
> 2022-03-07T13:43:34.3960798Z Mar 07 13:43:34 "testMetadataPublisher-0@20",
> 2022-03-07T13:43:34.3961973Z Mar 07 13:43:34 "testMetadataPublisher-0@21",
> 2022-03-07T13:43:34.3963302Z Mar 07 13:43:34 "testMetadataPublisher-0@22",
> 2022-03-07T13:43:34.3964563Z Mar 07 13:43:34 "testMetadataPublisher-0@23",
> 2022-03-07T13:43:34.3966941Z Mar 07 13:43:34 "testMetadataPublisher-0@24",
> 2022-03-07T13:43:34.3968246Z Mar 07 13:43:34 "testMetadataPublisher-0@25",
> 2022-03-07T13:43:34.3969452Z Mar 07 13:43:34 "testMetadataPublisher-0@26",
> 2022-03-07T13:43:34.3970656Z Mar 07 13:43:34 "testMetadataPublisher-0@27",
> 2022-03-07T13:43:34.3971853Z Mar 07 13:43:34 "testMetadataPublisher-0@28",
> 2022-03-07T13:43:34.3974163Z Mar 07 13:43:34 "testMetadataPublisher-0@29",
> 2022-03-07T13:43:34.3975441Z Mar 07 13:43:34 "testMetadataPublisher-0@30",
> 2022-03-07T13:43:34.3976380Z Mar 07 13:43:34 "testMetadataPublisher-0@31",
> 2022-03-07T13:43:34.3977278Z Mar 07 13:43:34 "testMetadataPublisher-0@32",
> 2022-03-07T13:43:34.3978197Z Mar 07 13:43:34 "testMetadataPublisher-0@33",
> 2022-03-07T13:43:34.3979120Z Mar 07 13:43:34 "testMetadataPublisher-0@34",
> 2022-03-07T13:43:34.3980051Z Mar 07 13:43:34 "testMetadataPublisher-0@35",
> 2022-03-07T13:43:34.3981017Z Mar 07 13:43:34 "testMetadataPublisher-0@36",
> 2022-03-07T13:43:34.3981952Z Mar 07 13:43:34 "testMetadataPublisher-0@37",
> 2022-03-07T13:43:34.3982975Z Mar 07 13:43:34 "testMetadataPublisher-0@38",
> 2022-03-07T13:43:34.3983882Z Mar 07 13:43:34 "testMetadataPublisher-0@39",
> 2022-03-07T13:43:34.3984940Z Mar 07 13:43:34 "testMetadataPublisher-0@40",
> 2022-03-07T13:43:34.3985838Z Mar 07 13:43:34 "testMetadataPublisher-0@41",
> 2022-03-07T13:43:34.3986702Z Mar 07 13:43:34 "testMetadataPublisher-0@42",
> 2022-03-07T13:43:34.3987661Z Mar 07 13:43:34 "testMetadataPublisher-0@43",
> 2022-03-07T13:43:34.3988564Z Mar 07 13:43:34 "testMetadataPublisher-0@44",
> 2022-03-07T13:43:34.3989444Z Mar 07 13:43:34 "testMetadataPublisher-0@45

[jira] [Commented] (FLINK-25771) CassandraConnectorITCase.testRetrialAndDropTables timeouts on AZP

2022-03-07 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502748#comment-17502748
 ] 

Yun Gao commented on FLINK-25771:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32633&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d&l=12572

> CassandraConnectorITCase.testRetrialAndDropTables timeouts on AZP
> -
>
> Key: FLINK-25771
> URL: https://issues.apache.org/jira/browse/FLINK-25771
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.15.0, 1.13.5, 1.14.3
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0, 1.13.7, 1.14.5
>
>
> The test {{CassandraConnectorITCase.testRetrialAndDropTables}} fails on AZP 
> with
> {code}
> Jan 23 01:02:52 com.datastax.driver.core.exceptions.NoHostAvailableException: 
> All host(s) tried for query failed (tried: /172.17.0.1:59220 
> (com.datastax.driver.core.exceptions.OperationTimedOutException: 
> [/172.17.0.1] Timed out waiting for server response))
> Jan 23 01:02:52   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
> Jan 23 01:02:52   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
> Jan 23 01:02:52   at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.testRetrialAndDropTables(CassandraConnectorITCase.java:554)
> Jan 23 01:02:52   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jan 23 01:02:52   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jan 23 01:02:52   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jan 23 01:02:52   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 23 01:02:52   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jan 23 01:02:52   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jan 23 01:02:52   at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnExceptionStatement.evaluate(RetryRule.java:196)
> Jan 23 01:02:52   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jan 23 01:02:52   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> Jan 23 01:02:52   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Jan 23 01:02:52   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Jan 23 01:02:52   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Jan 23 01:02:52   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Jan 23 01:02:52   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Jan 23 01:02:52   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> Jan 23 01:02:52   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jan 23 01:02:52   at 
> org.junit.r

[jira] [Updated] (FLINK-26531) KafkaWriterITCase.testMetadataPublisher failed on azure

2022-03-07 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26531:

Priority: Critical  (was: Major)

> KafkaWriterITCase.testMetadataPublisher  failed on azure
> 
>
> Key: FLINK-26531
> URL: https://issues.apache.org/jira/browse/FLINK-26531
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 022-03-07T13:43:34.3882626Z Mar 07 13:43:34 [ERROR] 
> org.apache.flink.connector.kafka.sink.KafkaWriterITCase.testMetadataPublisher 
>  Time elapsed: 0.205 s  <<< FAILURE!
> 2022-03-07T13:43:34.3883743Z Mar 07 13:43:34 java.lang.AssertionError: 
> 2022-03-07T13:43:34.3884867Z Mar 07 13:43:34 
> 2022-03-07T13:43:34.3885412Z Mar 07 13:43:34 Expecting actual:
> 2022-03-07T13:43:34.3886464Z Mar 07 13:43:34   ["testMetadataPublisher-0@0",
> 2022-03-07T13:43:34.3887361Z Mar 07 13:43:34 "testMetadataPublisher-0@1",
> 2022-03-07T13:43:34.3888222Z Mar 07 13:43:34 "testMetadataPublisher-0@2",
> 2022-03-07T13:43:34.333Z Mar 07 13:43:34 "testMetadataPublisher-0@3",
> 2022-03-07T13:43:34.3892032Z Mar 07 13:43:34 "testMetadataPublisher-0@4",
> 2022-03-07T13:43:34.3893140Z Mar 07 13:43:34 "testMetadataPublisher-0@5",
> 2022-03-07T13:43:34.3893849Z Mar 07 13:43:34 "testMetadataPublisher-0@6",
> 2022-03-07T13:43:34.3895077Z Mar 07 13:43:34 "testMetadataPublisher-0@7",
> 2022-03-07T13:43:34.3895779Z Mar 07 13:43:34 "testMetadataPublisher-0@8",
> 2022-03-07T13:43:34.3896423Z Mar 07 13:43:34 "testMetadataPublisher-0@9",
> 2022-03-07T13:43:34.3897164Z Mar 07 13:43:34 "testMetadataPublisher-0@10",
> 2022-03-07T13:43:34.3897792Z Mar 07 13:43:34 "testMetadataPublisher-0@11",
> 2022-03-07T13:43:34.3949208Z Mar 07 13:43:34 "testMetadataPublisher-0@12",
> 2022-03-07T13:43:34.3950956Z Mar 07 13:43:34 "testMetadataPublisher-0@13",
> 2022-03-07T13:43:34.3952287Z Mar 07 13:43:34 "testMetadataPublisher-0@14",
> 2022-03-07T13:43:34.3954341Z Mar 07 13:43:34 "testMetadataPublisher-0@15",
> 2022-03-07T13:43:34.3955834Z Mar 07 13:43:34 "testMetadataPublisher-0@16",
> 2022-03-07T13:43:34.3957048Z Mar 07 13:43:34 "testMetadataPublisher-0@17",
> 2022-03-07T13:43:34.3958287Z Mar 07 13:43:34 "testMetadataPublisher-0@18",
> 2022-03-07T13:43:34.3959519Z Mar 07 13:43:34 "testMetadataPublisher-0@19",
> 2022-03-07T13:43:34.3960798Z Mar 07 13:43:34 "testMetadataPublisher-0@20",
> 2022-03-07T13:43:34.3961973Z Mar 07 13:43:34 "testMetadataPublisher-0@21",
> 2022-03-07T13:43:34.3963302Z Mar 07 13:43:34 "testMetadataPublisher-0@22",
> 2022-03-07T13:43:34.3964563Z Mar 07 13:43:34 "testMetadataPublisher-0@23",
> 2022-03-07T13:43:34.3966941Z Mar 07 13:43:34 "testMetadataPublisher-0@24",
> 2022-03-07T13:43:34.3968246Z Mar 07 13:43:34 "testMetadataPublisher-0@25",
> 2022-03-07T13:43:34.3969452Z Mar 07 13:43:34 "testMetadataPublisher-0@26",
> 2022-03-07T13:43:34.3970656Z Mar 07 13:43:34 "testMetadataPublisher-0@27",
> 2022-03-07T13:43:34.3971853Z Mar 07 13:43:34 "testMetadataPublisher-0@28",
> 2022-03-07T13:43:34.3974163Z Mar 07 13:43:34 "testMetadataPublisher-0@29",
> 2022-03-07T13:43:34.3975441Z Mar 07 13:43:34 "testMetadataPublisher-0@30",
> 2022-03-07T13:43:34.3976380Z Mar 07 13:43:34 "testMetadataPublisher-0@31",
> 2022-03-07T13:43:34.3977278Z Mar 07 13:43:34 "testMetadataPublisher-0@32",
> 2022-03-07T13:43:34.3978197Z Mar 07 13:43:34 "testMetadataPublisher-0@33",
> 2022-03-07T13:43:34.3979120Z Mar 07 13:43:34 "testMetadataPublisher-0@34",
> 2022-03-07T13:43:34.3980051Z Mar 07 13:43:34 "testMetadataPublisher-0@35",
> 2022-03-07T13:43:34.3981017Z Mar 07 13:43:34 "testMetadataPublisher-0@36",
> 2022-03-07T13:43:34.3981952Z Mar 07 13:43:34 "testMetadataPublisher-0@37",
> 2022-03-07T13:43:34.3982975Z Mar 07 13:43:34 "testMetadataPublisher-0@38",
> 2022-03-07T13:43:34.3983882Z Mar 07 13:43:34 "testMetadataPublisher-0@39",
> 2022-03-07T13:43:34.3984940Z Mar 07 13:43:34 "testMetadataPublisher-0@40",
> 2022-03-07T13:43:34.3985838Z Mar 07 13:43:34 "testMetadataPublisher-0@41",
> 2022-03-07T13:43:34.3986702Z Mar 07 13:43:34 "testMetadataPublisher-0@42",
> 2022-03-07T13:43:34.3987661Z Mar 07 13:43:34 "testMetadataPublisher-0@43",
> 2022-03-07T13:43:34.3988564Z Mar 07 13:43:34 "testMetadataPublisher-0@44",
> 2022-03-07T13:43:34.3989444Z Mar 07 13:43:34 "testMetadataPublisher-0@45",
> 2022-03-07T13:43:34.3990347Z Mar 07 13:43:34 "testMetadataPublisher-0@46",
> 2022-03-07T13:43:34.3991206Z Mar 07 13:43:34 "testMetadataPublisher-0@47",
> 2022-03-07T13:43:34.3992100Z Mar 07 13:43:34 "testMetad

[jira] [Commented] (FLINK-26531) KafkaWriterITCase.testMetadataPublisher failed on azure

2022-03-07 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502746#comment-17502746
 ] 

Yun Gao commented on FLINK-26531:
-

Hi [~lzljs3620320]  could you have a look at this test~?

> KafkaWriterITCase.testMetadataPublisher  failed on azure
> 
>
> Key: FLINK-26531
> URL: https://issues.apache.org/jira/browse/FLINK-26531
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> 022-03-07T13:43:34.3882626Z Mar 07 13:43:34 [ERROR] 
> org.apache.flink.connector.kafka.sink.KafkaWriterITCase.testMetadataPublisher 
>  Time elapsed: 0.205 s  <<< FAILURE!
> 2022-03-07T13:43:34.3883743Z Mar 07 13:43:34 java.lang.AssertionError: 
> 2022-03-07T13:43:34.3884867Z Mar 07 13:43:34 
> 2022-03-07T13:43:34.3885412Z Mar 07 13:43:34 Expecting actual:
> 2022-03-07T13:43:34.3886464Z Mar 07 13:43:34   ["testMetadataPublisher-0@0",
> 2022-03-07T13:43:34.3887361Z Mar 07 13:43:34 "testMetadataPublisher-0@1",
> 2022-03-07T13:43:34.3888222Z Mar 07 13:43:34 "testMetadataPublisher-0@2",
> 2022-03-07T13:43:34.333Z Mar 07 13:43:34 "testMetadataPublisher-0@3",
> 2022-03-07T13:43:34.3892032Z Mar 07 13:43:34 "testMetadataPublisher-0@4",
> 2022-03-07T13:43:34.3893140Z Mar 07 13:43:34 "testMetadataPublisher-0@5",
> 2022-03-07T13:43:34.3893849Z Mar 07 13:43:34 "testMetadataPublisher-0@6",
> 2022-03-07T13:43:34.3895077Z Mar 07 13:43:34 "testMetadataPublisher-0@7",
> 2022-03-07T13:43:34.3895779Z Mar 07 13:43:34 "testMetadataPublisher-0@8",
> 2022-03-07T13:43:34.3896423Z Mar 07 13:43:34 "testMetadataPublisher-0@9",
> 2022-03-07T13:43:34.3897164Z Mar 07 13:43:34 "testMetadataPublisher-0@10",
> 2022-03-07T13:43:34.3897792Z Mar 07 13:43:34 "testMetadataPublisher-0@11",
> 2022-03-07T13:43:34.3949208Z Mar 07 13:43:34 "testMetadataPublisher-0@12",
> 2022-03-07T13:43:34.3950956Z Mar 07 13:43:34 "testMetadataPublisher-0@13",
> 2022-03-07T13:43:34.3952287Z Mar 07 13:43:34 "testMetadataPublisher-0@14",
> 2022-03-07T13:43:34.3954341Z Mar 07 13:43:34 "testMetadataPublisher-0@15",
> 2022-03-07T13:43:34.3955834Z Mar 07 13:43:34 "testMetadataPublisher-0@16",
> 2022-03-07T13:43:34.3957048Z Mar 07 13:43:34 "testMetadataPublisher-0@17",
> 2022-03-07T13:43:34.3958287Z Mar 07 13:43:34 "testMetadataPublisher-0@18",
> 2022-03-07T13:43:34.3959519Z Mar 07 13:43:34 "testMetadataPublisher-0@19",
> 2022-03-07T13:43:34.3960798Z Mar 07 13:43:34 "testMetadataPublisher-0@20",
> 2022-03-07T13:43:34.3961973Z Mar 07 13:43:34 "testMetadataPublisher-0@21",
> 2022-03-07T13:43:34.3963302Z Mar 07 13:43:34 "testMetadataPublisher-0@22",
> 2022-03-07T13:43:34.3964563Z Mar 07 13:43:34 "testMetadataPublisher-0@23",
> 2022-03-07T13:43:34.3966941Z Mar 07 13:43:34 "testMetadataPublisher-0@24",
> 2022-03-07T13:43:34.3968246Z Mar 07 13:43:34 "testMetadataPublisher-0@25",
> 2022-03-07T13:43:34.3969452Z Mar 07 13:43:34 "testMetadataPublisher-0@26",
> 2022-03-07T13:43:34.3970656Z Mar 07 13:43:34 "testMetadataPublisher-0@27",
> 2022-03-07T13:43:34.3971853Z Mar 07 13:43:34 "testMetadataPublisher-0@28",
> 2022-03-07T13:43:34.3974163Z Mar 07 13:43:34 "testMetadataPublisher-0@29",
> 2022-03-07T13:43:34.3975441Z Mar 07 13:43:34 "testMetadataPublisher-0@30",
> 2022-03-07T13:43:34.3976380Z Mar 07 13:43:34 "testMetadataPublisher-0@31",
> 2022-03-07T13:43:34.3977278Z Mar 07 13:43:34 "testMetadataPublisher-0@32",
> 2022-03-07T13:43:34.3978197Z Mar 07 13:43:34 "testMetadataPublisher-0@33",
> 2022-03-07T13:43:34.3979120Z Mar 07 13:43:34 "testMetadataPublisher-0@34",
> 2022-03-07T13:43:34.3980051Z Mar 07 13:43:34 "testMetadataPublisher-0@35",
> 2022-03-07T13:43:34.3981017Z Mar 07 13:43:34 "testMetadataPublisher-0@36",
> 2022-03-07T13:43:34.3981952Z Mar 07 13:43:34 "testMetadataPublisher-0@37",
> 2022-03-07T13:43:34.3982975Z Mar 07 13:43:34 "testMetadataPublisher-0@38",
> 2022-03-07T13:43:34.3983882Z Mar 07 13:43:34 "testMetadataPublisher-0@39",
> 2022-03-07T13:43:34.3984940Z Mar 07 13:43:34 "testMetadataPublisher-0@40",
> 2022-03-07T13:43:34.3985838Z Mar 07 13:43:34 "testMetadataPublisher-0@41",
> 2022-03-07T13:43:34.3986702Z Mar 07 13:43:34 "testMetadataPublisher-0@42",
> 2022-03-07T13:43:34.3987661Z Mar 07 13:43:34 "testMetadataPublisher-0@43",
> 2022-03-07T13:43:34.3988564Z Mar 07 13:43:34 "testMetadataPublisher-0@44",
> 2022-03-07T13:43:34.3989444Z Mar 07 13:43:34 "testMetadataPublisher-0@45",
> 2022-03-07T13:43:34.3990347Z Mar 07 13:43:34 "testMetadataPublisher-0@46",
> 2022-03-07T13:43:34.3991206Z Mar 07 13:43:34 "testMetadataPub

[jira] [Updated] (FLINK-26531) KafkaWriterITCase.testMetadataPublisher failed on azure

2022-03-07 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26531:

Fix Version/s: 1.15.0

> KafkaWriterITCase.testMetadataPublisher  failed on azure
> 
>
> Key: FLINK-26531
> URL: https://issues.apache.org/jira/browse/FLINK-26531
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> {code:java}
> 022-03-07T13:43:34.3882626Z Mar 07 13:43:34 [ERROR] 
> org.apache.flink.connector.kafka.sink.KafkaWriterITCase.testMetadataPublisher 
>  Time elapsed: 0.205 s  <<< FAILURE!
> 2022-03-07T13:43:34.3883743Z Mar 07 13:43:34 java.lang.AssertionError: 
> 2022-03-07T13:43:34.3884867Z Mar 07 13:43:34 
> 2022-03-07T13:43:34.3885412Z Mar 07 13:43:34 Expecting actual:
> 2022-03-07T13:43:34.3886464Z Mar 07 13:43:34   ["testMetadataPublisher-0@0",
> 2022-03-07T13:43:34.3887361Z Mar 07 13:43:34 "testMetadataPublisher-0@1",
> 2022-03-07T13:43:34.3888222Z Mar 07 13:43:34 "testMetadataPublisher-0@2",
> 2022-03-07T13:43:34.333Z Mar 07 13:43:34 "testMetadataPublisher-0@3",
> 2022-03-07T13:43:34.3892032Z Mar 07 13:43:34 "testMetadataPublisher-0@4",
> 2022-03-07T13:43:34.3893140Z Mar 07 13:43:34 "testMetadataPublisher-0@5",
> 2022-03-07T13:43:34.3893849Z Mar 07 13:43:34 "testMetadataPublisher-0@6",
> 2022-03-07T13:43:34.3895077Z Mar 07 13:43:34 "testMetadataPublisher-0@7",
> 2022-03-07T13:43:34.3895779Z Mar 07 13:43:34 "testMetadataPublisher-0@8",
> 2022-03-07T13:43:34.3896423Z Mar 07 13:43:34 "testMetadataPublisher-0@9",
> 2022-03-07T13:43:34.3897164Z Mar 07 13:43:34 "testMetadataPublisher-0@10",
> 2022-03-07T13:43:34.3897792Z Mar 07 13:43:34 "testMetadataPublisher-0@11",
> 2022-03-07T13:43:34.3949208Z Mar 07 13:43:34 "testMetadataPublisher-0@12",
> 2022-03-07T13:43:34.3950956Z Mar 07 13:43:34 "testMetadataPublisher-0@13",
> 2022-03-07T13:43:34.3952287Z Mar 07 13:43:34 "testMetadataPublisher-0@14",
> 2022-03-07T13:43:34.3954341Z Mar 07 13:43:34 "testMetadataPublisher-0@15",
> 2022-03-07T13:43:34.3955834Z Mar 07 13:43:34 "testMetadataPublisher-0@16",
> 2022-03-07T13:43:34.3957048Z Mar 07 13:43:34 "testMetadataPublisher-0@17",
> 2022-03-07T13:43:34.3958287Z Mar 07 13:43:34 "testMetadataPublisher-0@18",
> 2022-03-07T13:43:34.3959519Z Mar 07 13:43:34 "testMetadataPublisher-0@19",
> 2022-03-07T13:43:34.3960798Z Mar 07 13:43:34 "testMetadataPublisher-0@20",
> 2022-03-07T13:43:34.3961973Z Mar 07 13:43:34 "testMetadataPublisher-0@21",
> 2022-03-07T13:43:34.3963302Z Mar 07 13:43:34 "testMetadataPublisher-0@22",
> 2022-03-07T13:43:34.3964563Z Mar 07 13:43:34 "testMetadataPublisher-0@23",
> 2022-03-07T13:43:34.3966941Z Mar 07 13:43:34 "testMetadataPublisher-0@24",
> 2022-03-07T13:43:34.3968246Z Mar 07 13:43:34 "testMetadataPublisher-0@25",
> 2022-03-07T13:43:34.3969452Z Mar 07 13:43:34 "testMetadataPublisher-0@26",
> 2022-03-07T13:43:34.3970656Z Mar 07 13:43:34 "testMetadataPublisher-0@27",
> 2022-03-07T13:43:34.3971853Z Mar 07 13:43:34 "testMetadataPublisher-0@28",
> 2022-03-07T13:43:34.3974163Z Mar 07 13:43:34 "testMetadataPublisher-0@29",
> 2022-03-07T13:43:34.3975441Z Mar 07 13:43:34 "testMetadataPublisher-0@30",
> 2022-03-07T13:43:34.3976380Z Mar 07 13:43:34 "testMetadataPublisher-0@31",
> 2022-03-07T13:43:34.3977278Z Mar 07 13:43:34 "testMetadataPublisher-0@32",
> 2022-03-07T13:43:34.3978197Z Mar 07 13:43:34 "testMetadataPublisher-0@33",
> 2022-03-07T13:43:34.3979120Z Mar 07 13:43:34 "testMetadataPublisher-0@34",
> 2022-03-07T13:43:34.3980051Z Mar 07 13:43:34 "testMetadataPublisher-0@35",
> 2022-03-07T13:43:34.3981017Z Mar 07 13:43:34 "testMetadataPublisher-0@36",
> 2022-03-07T13:43:34.3981952Z Mar 07 13:43:34 "testMetadataPublisher-0@37",
> 2022-03-07T13:43:34.3982975Z Mar 07 13:43:34 "testMetadataPublisher-0@38",
> 2022-03-07T13:43:34.3983882Z Mar 07 13:43:34 "testMetadataPublisher-0@39",
> 2022-03-07T13:43:34.3984940Z Mar 07 13:43:34 "testMetadataPublisher-0@40",
> 2022-03-07T13:43:34.3985838Z Mar 07 13:43:34 "testMetadataPublisher-0@41",
> 2022-03-07T13:43:34.3986702Z Mar 07 13:43:34 "testMetadataPublisher-0@42",
> 2022-03-07T13:43:34.3987661Z Mar 07 13:43:34 "testMetadataPublisher-0@43",
> 2022-03-07T13:43:34.3988564Z Mar 07 13:43:34 "testMetadataPublisher-0@44",
> 2022-03-07T13:43:34.3989444Z Mar 07 13:43:34 "testMetadataPublisher-0@45",
> 2022-03-07T13:43:34.3990347Z Mar 07 13:43:34 "testMetadataPublisher-0@46",
> 2022-03-07T13:43:34.3991206Z Mar 07 13:43:34 "testMetadataPublisher-0@47",
> 2022-03-07T13:43:34.3992100Z Mar 07 13

[jira] [Created] (FLINK-26531) KafkaWriterITCase.testMetadataPublisher failed on azure

2022-03-07 Thread Yun Gao (Jira)
Yun Gao created FLINK-26531:
---

 Summary: KafkaWriterITCase.testMetadataPublisher  failed on azure
 Key: FLINK-26531
 URL: https://issues.apache.org/jira/browse/FLINK-26531
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.15.0
Reporter: Yun Gao


{code:java}
022-03-07T13:43:34.3882626Z Mar 07 13:43:34 [ERROR] 
org.apache.flink.connector.kafka.sink.KafkaWriterITCase.testMetadataPublisher  
Time elapsed: 0.205 s  <<< FAILURE!
2022-03-07T13:43:34.3883743Z Mar 07 13:43:34 java.lang.AssertionError: 
2022-03-07T13:43:34.3884867Z Mar 07 13:43:34 
2022-03-07T13:43:34.3885412Z Mar 07 13:43:34 Expecting actual:
2022-03-07T13:43:34.3886464Z Mar 07 13:43:34   ["testMetadataPublisher-0@0",
2022-03-07T13:43:34.3887361Z Mar 07 13:43:34 "testMetadataPublisher-0@1",
2022-03-07T13:43:34.3888222Z Mar 07 13:43:34 "testMetadataPublisher-0@2",
2022-03-07T13:43:34.333Z Mar 07 13:43:34 "testMetadataPublisher-0@3",
2022-03-07T13:43:34.3892032Z Mar 07 13:43:34 "testMetadataPublisher-0@4",
2022-03-07T13:43:34.3893140Z Mar 07 13:43:34 "testMetadataPublisher-0@5",
2022-03-07T13:43:34.3893849Z Mar 07 13:43:34 "testMetadataPublisher-0@6",
2022-03-07T13:43:34.3895077Z Mar 07 13:43:34 "testMetadataPublisher-0@7",
2022-03-07T13:43:34.3895779Z Mar 07 13:43:34 "testMetadataPublisher-0@8",
2022-03-07T13:43:34.3896423Z Mar 07 13:43:34 "testMetadataPublisher-0@9",
2022-03-07T13:43:34.3897164Z Mar 07 13:43:34 "testMetadataPublisher-0@10",
2022-03-07T13:43:34.3897792Z Mar 07 13:43:34 "testMetadataPublisher-0@11",
2022-03-07T13:43:34.3949208Z Mar 07 13:43:34 "testMetadataPublisher-0@12",
2022-03-07T13:43:34.3950956Z Mar 07 13:43:34 "testMetadataPublisher-0@13",
2022-03-07T13:43:34.3952287Z Mar 07 13:43:34 "testMetadataPublisher-0@14",
2022-03-07T13:43:34.3954341Z Mar 07 13:43:34 "testMetadataPublisher-0@15",
2022-03-07T13:43:34.3955834Z Mar 07 13:43:34 "testMetadataPublisher-0@16",
2022-03-07T13:43:34.3957048Z Mar 07 13:43:34 "testMetadataPublisher-0@17",
2022-03-07T13:43:34.3958287Z Mar 07 13:43:34 "testMetadataPublisher-0@18",
2022-03-07T13:43:34.3959519Z Mar 07 13:43:34 "testMetadataPublisher-0@19",
2022-03-07T13:43:34.3960798Z Mar 07 13:43:34 "testMetadataPublisher-0@20",
2022-03-07T13:43:34.3961973Z Mar 07 13:43:34 "testMetadataPublisher-0@21",
2022-03-07T13:43:34.3963302Z Mar 07 13:43:34 "testMetadataPublisher-0@22",
2022-03-07T13:43:34.3964563Z Mar 07 13:43:34 "testMetadataPublisher-0@23",
2022-03-07T13:43:34.3966941Z Mar 07 13:43:34 "testMetadataPublisher-0@24",
2022-03-07T13:43:34.3968246Z Mar 07 13:43:34 "testMetadataPublisher-0@25",
2022-03-07T13:43:34.3969452Z Mar 07 13:43:34 "testMetadataPublisher-0@26",
2022-03-07T13:43:34.3970656Z Mar 07 13:43:34 "testMetadataPublisher-0@27",
2022-03-07T13:43:34.3971853Z Mar 07 13:43:34 "testMetadataPublisher-0@28",
2022-03-07T13:43:34.3974163Z Mar 07 13:43:34 "testMetadataPublisher-0@29",
2022-03-07T13:43:34.3975441Z Mar 07 13:43:34 "testMetadataPublisher-0@30",
2022-03-07T13:43:34.3976380Z Mar 07 13:43:34 "testMetadataPublisher-0@31",
2022-03-07T13:43:34.3977278Z Mar 07 13:43:34 "testMetadataPublisher-0@32",
2022-03-07T13:43:34.3978197Z Mar 07 13:43:34 "testMetadataPublisher-0@33",
2022-03-07T13:43:34.3979120Z Mar 07 13:43:34 "testMetadataPublisher-0@34",
2022-03-07T13:43:34.3980051Z Mar 07 13:43:34 "testMetadataPublisher-0@35",
2022-03-07T13:43:34.3981017Z Mar 07 13:43:34 "testMetadataPublisher-0@36",
2022-03-07T13:43:34.3981952Z Mar 07 13:43:34 "testMetadataPublisher-0@37",
2022-03-07T13:43:34.3982975Z Mar 07 13:43:34 "testMetadataPublisher-0@38",
2022-03-07T13:43:34.3983882Z Mar 07 13:43:34 "testMetadataPublisher-0@39",
2022-03-07T13:43:34.3984940Z Mar 07 13:43:34 "testMetadataPublisher-0@40",
2022-03-07T13:43:34.3985838Z Mar 07 13:43:34 "testMetadataPublisher-0@41",
2022-03-07T13:43:34.3986702Z Mar 07 13:43:34 "testMetadataPublisher-0@42",
2022-03-07T13:43:34.3987661Z Mar 07 13:43:34 "testMetadataPublisher-0@43",
2022-03-07T13:43:34.3988564Z Mar 07 13:43:34 "testMetadataPublisher-0@44",
2022-03-07T13:43:34.3989444Z Mar 07 13:43:34 "testMetadataPublisher-0@45",
2022-03-07T13:43:34.3990347Z Mar 07 13:43:34 "testMetadataPublisher-0@46",
2022-03-07T13:43:34.3991206Z Mar 07 13:43:34 "testMetadataPublisher-0@47",
2022-03-07T13:43:34.3992100Z Mar 07 13:43:34 "testMetadataPublisher-0@48",
2022-03-07T13:43:34.3993091Z Mar 07 13:43:34 "testMetadataPublisher-0@49",
2022-03-07T13:43:34.3994383Z Mar 07 13:43:34 "testMetadataPublisher-0@50",
2022-03-07T13:43:34.3995399Z Mar 07 13:43:34 "testMetadataPublisher-0@51",
2022-03-07T13:43:34.3996287Z Mar 07 13:43:34 "testMetadataPublisher-0@52",
2022-03-07T13:43:34.3997270Z Mar 07 13:43:34 "testMeta

[jira] [Assigned] (FLINK-26507) Last state upgrade mode should allow reconciliation regardless of job and deployment status

2022-03-07 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-26507:
--

Assignee: Gyula Fora

> Last state upgrade mode should allow reconciliation regardless of job and 
> deployment status
> ---
>
> Key: FLINK-26507
> URL: https://issues.apache.org/jira/browse/FLINK-26507
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Major
>
> Currently there is a strict check for both deployment readiness and 
> successful listing of jobs before we allow any reconciliation.
> While the status should still be updated, we should allow reconciliation of 
> jobs with last-state upgrade mode regardless of the deployment/job status, as 
> this mode does not require cluster interactions to execute upgrade and suspend 
> operations.
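> A minimal sketch of the proposed guard is shown below. This is an illustration 
> only, not the actual operator code; the helper name and the accessor chain are 
> assumptions.
> {code:java}
> // Hypothetical sketch: LAST_STATE upgrades need no cluster interaction to
> // suspend or upgrade, so they may be reconciled even when the deployment or
> // job status cannot be observed.
> private boolean shouldReconcile(
>         FlinkDeployment flinkApp, boolean deploymentReady, boolean jobsListable) {
>     UpgradeMode upgradeMode = flinkApp.getSpec().getJob().getUpgradeMode();
>     if (upgradeMode == UpgradeMode.LAST_STATE) {
>         return true;
>     }
>     return deploymentReady && jobsListable;
> }
> {code}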



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26521) Reconsider setting generationAwareEventProcessing = true

2022-03-07 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-26521:
--

Assignee: Gyula Fora

> Reconsider setting generationAwareEventProcessing = true
> 
>
> Key: FLINK-26521
> URL: https://issues.apache.org/jira/browse/FLINK-26521
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Major
>
> At the moment (if I understand correctly) FlinkDeployment status changes 
> automatically trigger an immediate reconcile step due to having 
> generationAwareEventProcessing = false
> on the FlinkDeploymentController.
> This causes a weird behaviour where reschedule delays and the logic around them 
> are not respected properly in many cases.
> This might cause issues such as timeout exceptions. We should consider whether 
> we need to keep this flag set to false.
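> For reference, a hedged sketch of what enabling generation awareness would look 
> like, assuming the flag is carried by the java-operator-sdk 
> @ControllerConfiguration annotation (exact JOSDK signatures may differ by 
> version; this is not the actual controller class):
> {code:java}
> import io.javaoperatorsdk.operator.api.reconciler.Context;
> import io.javaoperatorsdk.operator.api.reconciler.ControllerConfiguration;
> import io.javaoperatorsdk.operator.api.reconciler.Reconciler;
> import io.javaoperatorsdk.operator.api.reconciler.UpdateControl;
> 
> // With generation awareness enabled, events that only touch .status (and thus
> // do not bump metadata.generation) no longer trigger an immediate reconcile,
> // so reschedule delays are respected.
> @ControllerConfiguration(generationAwareEventProcessing = true)
> public class GenerationAwareController implements Reconciler<FlinkDeployment> {
> 
>     @Override
>     public UpdateControl<FlinkDeployment> reconcile(FlinkDeployment flinkApp, Context context) {
>         // ... existing reconciliation logic would go here ...
>         return UpdateControl.noUpdate();
>     }
> }
> {code}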



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] schumiyi removed a comment on pull request #18914: [FLINK-26259][table-planner]Partial insert and partition insert canno…

2022-03-07 Thread GitBox


schumiyi removed a comment on pull request #18914:
URL: https://github.com/apache/flink/pull/18914#issuecomment-1050529549


   @wuchong @JingsongLi Would you like to review this pr? Thanks for your time.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] schumiyi commented on pull request #18914: [FLINK-26259][table-planner]Partial insert and partition insert canno…

2022-03-07 Thread GitBox


schumiyi commented on pull request #18914:
URL: https://github.com/apache/flink/pull/18914#issuecomment-1061444884


@twalthr I'd appreciate it if you could help take a look.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-26314) StreamingCompactingFileSinkITCase.testFileSink failed on azure

2022-03-07 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao closed FLINK-26314.
---
Fix Version/s: 1.15.0
   Resolution: Fixed

> StreamingCompactingFileSinkITCase.testFileSink failed on azure
> --
>
> Key: FLINK-26314
> URL: https://issues.apache.org/jira/browse/FLINK-26314
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Assignee: Gen Luo
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> {code:java}
> Feb 22 13:34:32 [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 12.735 s <<< FAILURE! - in 
> org.apache.flink.connector.file.sink.StreamingCompactingFileSinkITCase
> Feb 22 13:34:32 [ERROR] StreamingCompactingFileSinkITCase.testFileSink  Time 
> elapsed: 3.311 s  <<< FAILURE!
> Feb 22 13:34:32 java.lang.AssertionError: The record 6788 should occur 4 
> times,  but only occurs 3time expected:<4> but was:<3>
> Feb 22 13:34:32   at org.junit.Assert.fail(Assert.java:89)
> Feb 22 13:34:32   at org.junit.Assert.failNotEquals(Assert.java:835)
> Feb 22 13:34:32   at org.junit.Assert.assertEquals(Assert.java:647)
> Feb 22 13:34:32   at 
> org.apache.flink.connector.file.sink.utils.IntegerFileSinkTestDataUtils.checkIntegerSequenceSinkOutput(IntegerFileSinkTestDataUtils.java:155)
> Feb 22 13:34:32   at 
> org.apache.flink.connector.file.sink.FileSinkITBase.testFileSink(FileSinkITBase.java:84)
> Feb 22 13:34:32   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Feb 22 13:34:32   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Feb 22 13:34:32   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Feb 22 13:34:32   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 22 13:34:32   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Feb 22 13:34:32   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Feb 22 13:34:32   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Feb 22 13:34:32   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Feb 22 13:34:32   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Feb 22 13:34:32   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Feb 22 13:34:32   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 22 13:34:32   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Feb 22 13:34:32   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Feb 22 13:34:32   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Feb 22 13:34:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Feb 22 13:34:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Feb 22 13:34:32   at org.junit.runners.Suite.runChild(Suite.java:128)
> Feb 22 13:34:32   at org.junit.runners.Suite.runChild(Suite.java:27)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32023&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d&l=111

[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * 8e46affb05226eeed6b8eb20df971445139654a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32535)
 
   * 610f23f625a4559bef6f000916a2de37a5cb3b38 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32666)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26314) StreamingCompactingFileSinkITCase.testFileSink failed on azure

2022-03-07 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502739#comment-17502739
 ] 

Yun Gao commented on FLINK-26314:
-

I'll close this issue for now since the two root-cause issues are both fixed. We 
can reopen it if the failure reproduces.

> StreamingCompactingFileSinkITCase.testFileSink failed on azure
> --
>
> Key: FLINK-26314
> URL: https://issues.apache.org/jira/browse/FLINK-26314
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Assignee: Gen Luo
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> {code:java}
> Feb 22 13:34:32 [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 12.735 s <<< FAILURE! - in 
> org.apache.flink.connector.file.sink.StreamingCompactingFileSinkITCase
> Feb 22 13:34:32 [ERROR] StreamingCompactingFileSinkITCase.testFileSink  Time 
> elapsed: 3.311 s  <<< FAILURE!
> Feb 22 13:34:32 java.lang.AssertionError: The record 6788 should occur 4 
> times,  but only occurs 3time expected:<4> but was:<3>
> Feb 22 13:34:32   at org.junit.Assert.fail(Assert.java:89)
> Feb 22 13:34:32   at org.junit.Assert.failNotEquals(Assert.java:835)
> Feb 22 13:34:32   at org.junit.Assert.assertEquals(Assert.java:647)
> Feb 22 13:34:32   at 
> org.apache.flink.connector.file.sink.utils.IntegerFileSinkTestDataUtils.checkIntegerSequenceSinkOutput(IntegerFileSinkTestDataUtils.java:155)
> Feb 22 13:34:32   at 
> org.apache.flink.connector.file.sink.FileSinkITBase.testFileSink(FileSinkITBase.java:84)
> Feb 22 13:34:32   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Feb 22 13:34:32   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Feb 22 13:34:32   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Feb 22 13:34:32   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 22 13:34:32   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Feb 22 13:34:32   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Feb 22 13:34:32   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Feb 22 13:34:32   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Feb 22 13:34:32   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Feb 22 13:34:32   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Feb 22 13:34:32   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Feb 22 13:34:32   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Feb 22 13:34:32   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Feb 22 13:34:32   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Feb 22 13:34:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Feb 22 13:34:32   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Feb 22 13:34:32   at org.junit.runners.Suite.runChild(Suite.java:128)
> Feb 22 13:34:32   at org.junit.runners.Suite.runChild(Suite.java:27)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Feb 22 13:34:32   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3

[jira] [Closed] (FLINK-26440) CompactorOperatorStateHandler can not work with unaligned checkpoint

2022-03-07 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao closed FLINK-26440.
---
Resolution: Fixed

> CompactorOperatorStateHandler can not work with unaligned checkpoint
> 
>
> Key: FLINK-26440
> URL: https://issues.apache.org/jira/browse/FLINK-26440
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Gen Luo
>Assignee: Gen Luo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> As mentioned in FLINK-26314, CompactorOperatorStateHandler can not work with 
> unaligned checkpoint currently. Though FLINK-26314 is actually caused by 
> another issue in the writer, we should still fix this issue.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26440) CompactorOperatorStateHandler can not work with unaligned checkpoint

2022-03-07 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502738#comment-17502738
 ] 

Yun Gao commented on FLINK-26440:
-

Fixed on master via 13b203fef748bdbe9b1d14ba01f23ca6c6b24b7e.

> CompactorOperatorStateHandler can not work with unaligned checkpoint
> 
>
> Key: FLINK-26440
> URL: https://issues.apache.org/jira/browse/FLINK-26440
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Gen Luo
>Assignee: Gen Luo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> As mentioned in FLINK-26314, CompactorOperatorStateHandler can not work with 
> unaligned checkpoint currently. Though FLINK-26314 is actually caused by 
> another issue in the writer, we should still fix this issue.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] gaoyunhaii closed pull request #18955: [FLINK-26440] CompactorOperatorStateHandler can not work with unaligned checkpoint.

2022-03-07 Thread GitBox


gaoyunhaii closed pull request #18955:
URL: https://github.com/apache/flink/pull/18955


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * 8e46affb05226eeed6b8eb20df971445139654a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32535)
 
   * 610f23f625a4559bef6f000916a2de37a5cb3b38 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-26440) CompactorOperatorStateHandler can not work with unaligned checkpoint

2022-03-07 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26440:

Affects Version/s: 1.15.0

> CompactorOperatorStateHandler can not work with unaligned checkpoint
> 
>
> Key: FLINK-26440
> URL: https://issues.apache.org/jira/browse/FLINK-26440
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Gen Luo
>Assignee: Gen Luo
>Priority: Major
>  Labels: pull-request-available
>
> As mentioned in FLINK-26314, CompactorOperatorStateHandler can not work with 
> unaligned checkpoint currently. Though FLINK-26314 is actually caused by 
> another issue in the writer, we should still fix this issue.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26440) CompactorOperatorStateHandler can not work with unaligned checkpoint

2022-03-07 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-26440:

Fix Version/s: 1.15.0

> CompactorOperatorStateHandler can not work with unaligned checkpoint
> 
>
> Key: FLINK-26440
> URL: https://issues.apache.org/jira/browse/FLINK-26440
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Gen Luo
>Assignee: Gen Luo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> As mentioned in FLINK-26314, CompactorOperatorStateHandler can not work with 
> unaligned checkpoint currently. Though FLINK-26314 is actually caused by 
> another issue in the writer, we should still fix this issue.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * 8e46affb05226eeed6b8eb20df971445139654a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32535)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * 8e46affb05226eeed6b8eb20df971445139654a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32535)
 
   * 610f23f625a4559bef6f000916a2de37a5cb3b38 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] mumuhhh edited a comment on pull request #68: [FLINK-26404] support non-local file systems

2022-03-07 Thread GitBox


mumuhhh edited a comment on pull request #68:
URL: https://github.com/apache/flink-ml/pull/68#issuecomment-1061426210


   `mvn checkstyle:check`
   Sorry, I found it was an import problem; IDEA optimized the imports and 
replaced them.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * 8e46affb05226eeed6b8eb20df971445139654a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32535)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * 8e46affb05226eeed6b8eb20df971445139654a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32535)
 
   * 610f23f625a4559bef6f000916a2de37a5cb3b38 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * 8e46affb05226eeed6b8eb20df971445139654a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32535)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * 8e46affb05226eeed6b8eb20df971445139654a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32535)
 
   * 610f23f625a4559bef6f000916a2de37a5cb3b38 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18989: [FLINK-26306][state/changelog] Randomly offset materialization

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18989:
URL: https://github.com/apache/flink/pull/18989#issuecomment-1060121028


   
   ## CI report:
   
   * 1da5d40f751ebb3d9bc2382fb7ef5ab52c69bf4f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32656)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * 8e46affb05226eeed6b8eb20df971445139654a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32535)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] mumuhhh commented on pull request #68: [FLINK-26404] support non-local file systems

2022-03-07 Thread GitBox


mumuhhh commented on pull request #68:
URL: https://github.com/apache/flink-ml/pull/68#issuecomment-1061426210


   Sorry, I found it was an import problem; IDEA optimized the imports and 
replaced them.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * 8e46affb05226eeed6b8eb20df971445139654a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32535)
 
   * 610f23f625a4559bef6f000916a2de37a5cb3b38 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * 8e46affb05226eeed6b8eb20df971445139654a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32535)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * 8e46affb05226eeed6b8eb20df971445139654a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32535)
 
   * 610f23f625a4559bef6f000916a2de37a5cb3b38 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * 8e46affb05226eeed6b8eb20df971445139654a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32535)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18957: [FLINK-26444][python]Window allocator supporting pyflink datastream API

2022-03-07 Thread GitBox


flinkbot edited a comment on pull request #18957:
URL: https://github.com/apache/flink/pull/18957#issuecomment-1056202149


   
   ## CI report:
   
   * ec1e0a435186082e5ac1481bc093f9bdd9d94d70 UNKNOWN
   * 8e46affb05226eeed6b8eb20df971445139654a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=32535)
 
   * 610f23f625a4559bef6f000916a2de37a5cb3b38 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24349) Support customized Catalogs via JDBC

2022-03-07 Thread RocMarshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502720#comment-17502720
 ] 

RocMarshal commented on FLINK-24349:


 [~martijnvisser] [~Bo Cui] Is there any progress? May I help move it forward?

> Support customized Catalogs via JDBC
> 
>
> Key: FLINK-24349
> URL: https://issues.apache.org/jira/browse/FLINK-24349
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.15.0
>Reporter: Bo Cui
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> Support customized catalogs in flink-connector-jdbc



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] syhily commented on a change in pull request #18838: [FLINK-26177][Connector/pulsar] Use testcontainer pulsar runtime instead o…

2022-03-07 Thread GitBox


syhily commented on a change in pull request #18838:
URL: https://github.com/apache/flink/pull/18838#discussion_r821323308



##
File path: 
flink-connectors/flink-connector-pulsar/archunit-violations/stored.rules
##
@@ -0,0 +1,4 @@
+#
+#Thu Mar 03 12:42:13 CST 2022
+Tests\ inheriting\ from\ AbstractTestBase\ should\ have\ name\ ending\ with\ 
ITCase=3ac3a1dc-681f-4213-9990-b7b3298a20bc
+ITCASE\ tests\ should\ use\ a\ MiniCluster\ resource\ or\ 
extension=f4d91193-72ba-4ce4-ad83-98f780dce581

Review comment:
   Thanks, this helps a lot for the tests based on the connector testing tools.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-26530) Refactor the FileStoreITCase and StoreSink

2022-03-07 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-26530:


 Summary: Refactor the FileStoreITCase and StoreSink
 Key: FLINK-26530
 URL: https://issues.apache.org/jira/browse/FLINK-26530
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Reporter: Jingsong Lee
 Fix For: table-store-0.1.0


We need to refactor FileStoreITCase, and even the Sink interface itself, which 
is a DataStream-layer class that is more complex to build than what a simple 
SQL statement can accomplish.
We also need to think through what kind of API StoreSink should expose; the 
current handling of keyed data is rather confusing.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #28: [FLINK-26423] Integrate log store to StoreSink

2022-03-07 Thread GitBox


JingsongLi commented on a change in pull request #28:
URL: https://github.com/apache/flink-table-store/pull/28#discussion_r821318987



##
File path: 
flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/sink/LogStoreSinkITCase.java
##
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.connector.sink;
+
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.factories.DynamicTableFactory;
+import org.apache.flink.table.store.connector.FileStoreITCase;
+import org.apache.flink.table.store.file.FileStore;
+import org.apache.flink.table.store.kafka.KafkaLogSinkProvider;
+import org.apache.flink.table.store.kafka.KafkaLogSourceProvider;
+import org.apache.flink.table.store.kafka.KafkaLogStoreFactory;
+import org.apache.flink.table.store.kafka.KafkaLogTestUtils;
+import org.apache.flink.table.store.kafka.KafkaTableTestBase;
+import org.apache.flink.table.store.log.LogOptions;
+import org.apache.flink.types.Row;
+
+import org.junit.Test;
+
+import java.util.List;
+import java.util.stream.Collectors;
+
+import static 
org.apache.flink.table.store.connector.FileStoreITCase.buildBatchEnv;
+import static 
org.apache.flink.table.store.connector.FileStoreITCase.buildConfiguration;
+import static 
org.apache.flink.table.store.connector.FileStoreITCase.buildFileStore;
+import static 
org.apache.flink.table.store.connector.FileStoreITCase.buildStreamEnv;
+import static 
org.apache.flink.table.store.connector.FileStoreITCase.buildTestSource;
+import static 
org.apache.flink.table.store.kafka.KafkaLogTestUtils.SINK_CONTEXT;
+import static 
org.apache.flink.table.store.kafka.KafkaLogTestUtils.SOURCE_CONTEXT;
+import static 
org.apache.flink.table.store.kafka.KafkaLogTestUtils.discoverKafkaLogFactory;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** Test for table store with kafka. */
+public class LogStoreSinkITCase extends KafkaTableTestBase {
+
+@Test
+public void testStreamingPartitioned() throws Exception {
+innerTest("testStreamingPartitioned", false, true, true);
+}
+
+@Test
+public void testStreamingNonPartitioned() throws Exception {
+innerTest("testStreamingNonPartitioned", false, false, true);
+}
+
+@Test
+public void testBatchPartitioned() throws Exception {
+innerTest("testBatchPartitioned", true, true, true);
+}
+
+@Test
+public void testStreamingEventual() throws Exception {
+innerTest("testStreamingEventual", false, true, false);
+}
+
+private void innerTest(String name, boolean isBatch, boolean partitioned, 
boolean transaction)
+throws Exception {
+StreamExecutionEnvironment env = isBatch ? buildBatchEnv() : 
buildStreamEnv();
+
+// in eventual mode, failure will result in duplicate data
+FileStore fileStore =
+buildFileStore(

Review comment:
   https://issues.apache.org/jira/browse/FLINK-26530




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-26461) Throw CannotPlanException in TableFunction

2022-03-07 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502711#comment-17502711
 ] 

lincoln lee commented on FLINK-26461:
-

[~SpongebobZ] Very likely some changes since 1.14.2 have taken effect, but I 
haven't had time to dig into it. cc [~godfreyhe] in case he has some time to 
help take a look.
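
For reference, a minimal repro sketch reconstructed from the optimizer plan in the 
quoted description below. The registration call and the table/column names are 
assumptions, not taken from the reporter's actual job:

{code:java}
// Hypothetical wiring of the non-deterministic table function into a lateral left join.
TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
tEnv.createTemporarySystemFunction("GET_SWITCH", new GetDayTimeEtlSwitch());
tEnv.executeSql(
        "SELECT s.STUNAME, s.SUBJECT, s.SCORE, PROCTIME() AS PROC_TIME, t.flag "
                + "FROM DAILY_SCORE_CDC AS s "
                + "LEFT JOIN LATERAL TABLE(GET_SWITCH()) AS t(flag) ON TRUE");
{code}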

> Throw CannotPlanException in TableFunction
> --
>
> Key: FLINK-26461
> URL: https://issues.apache.org/jira/browse/FLINK-26461
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.14.3
>Reporter: Spongebob
>Priority: Major
>
> I got an CannotPlanException when change the isDeterministic option to false. 
> For detail see this code:
> {code:java}
> // code placeholder
> public class GetDayTimeEtlSwitch extends TableFunction {
> private boolean status = false;
> @Override
> public boolean isDeterministic() {
> return false;
> }
> public void eval() {
> if (status) {
> collect(1);
> } else {
> if (System.currentTimeMillis() > 1646298908000L) {
> status = true;
> collect(1);
> } else {
> collect(0);
> }
> }
> }
> } {code}
> Exception stack...
> {code:java}
> // code placeholder
> Exception in thread "main" org.apache.flink.table.api.TableException: Cannot 
> generate a valid execution plan for the given query: 
> FlinkLogicalSink(table=[default_catalog.default_database.Unregistered_Collect_Sink_1],
>  fields=[STUNAME, SUBJECT, SCORE, PROC_TIME, EXPR$0])
> +- FlinkLogicalJoin(condition=[true], joinType=[left])
>    :- FlinkLogicalCalc(select=[STUNAME, SUBJECT, SCORE, 
> PROCTIME_MATERIALIZE(PROCTIME()) AS PROC_TIME])
>    :  +- FlinkLogicalTableSourceScan(table=[[default_catalog, 
> default_database, DAILY_SCORE_CDC]], fields=[STUNAME, SUBJECT, SCORE])
>    +- FlinkLogicalTableFunctionScan(invocation=[GET_SWITCH()], 
> rowType=[RecordType(INTEGER EXPR$0)])This exception indicates that the query 
> uses an unsupported SQL feature.
> Please check the documentation for the set of currently supported SQL 
> features.
>     at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:76)
>     at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:62)
>     at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
>     at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
>     at scala.collection.Iterator.foreach(Iterator.scala:937)
>     at scala.collection.Iterator.foreach$(Iterator.scala:937)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>     at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>     at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
>     at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
>     at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>     at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:58)
>     at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:163)
>     at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:81)
>     at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
>     at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:300)
>     at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:183)
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1665)
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:805)
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1274)
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:742)
>     at TestSwitch.main(TestSwitch.java:33)
> Caused by: org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There 
> are not enough rules to produce a node with desired properties: 
> convention=STREAM_PHYSICAL, FlinkRelDistributionTraitDef=any, 
> MiniBatchIntervalTraitDef=None: 0, ModifyKindSetTraitDef=[NONE], 
> Update

[jira] [Commented] (FLINK-26521) Reconsider setting generationAwareEventProcessing = true

2022-03-07 Thread Thomas Weise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502710#comment-17502710
 ] 

Thomas Weise commented on FLINK-26521:
--

It would be great to eliminate the redundant reconciliations that result from 
this.

> Reconsider setting generationAwareEventProcessing = true
> 
>
> Key: FLINK-26521
> URL: https://issues.apache.org/jira/browse/FLINK-26521
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Priority: Major
>
> At the moment (if I understand correctly) FlinkDeployment status changes 
> automatically trigger an immediate reconcile step due to having 
> generationAwareEventProcessing = false
> on the FlinkDeploymentController.
> This causes a weird behaviour where reschedule delays and the logic around them 
> are not respected properly in many cases.
> This might cause issues such as timeout exceptions. We should consider whether 
> we need this flag set to false.
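
For illustration, a minimal sketch of where this flag would be flipped, assuming the controller is built on the java-operator-sdk @ControllerConfiguration annotation (the class below is hypothetical, not the operator's actual reconciler). With generation-aware processing enabled, status-only updates, which do not bump metadata.generation, would no longer trigger an immediate reconcile:

{code:java}
import io.javaoperatorsdk.operator.api.reconciler.Context;
import io.javaoperatorsdk.operator.api.reconciler.ControllerConfiguration;
import io.javaoperatorsdk.operator.api.reconciler.Reconciler;
import io.javaoperatorsdk.operator.api.reconciler.UpdateControl;

// Hypothetical reconciler; the import of the FlinkDeployment CRD class from the
// operator project is assumed, and the exact Reconciler/Context signature can
// differ slightly between java-operator-sdk versions.
@ControllerConfiguration(generationAwareEventProcessing = true)
public class ExampleFlinkDeploymentController implements Reconciler<FlinkDeployment> {

    @Override
    public UpdateControl<FlinkDeployment> reconcile(FlinkDeployment resource, Context context) {
        // Reconciliation logic would go here; with the flag above set to true,
        // this method is not re-invoked for events that only change .status.
        return UpdateControl.noUpdate();
    }
}
{code}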



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-table-store] JingsongLi commented on a change in pull request #28: [FLINK-26423] Integrate log store to StoreSink

2022-03-07 Thread GitBox


JingsongLi commented on a change in pull request #28:
URL: https://github.com/apache/flink-table-store/pull/28#discussion_r821313001



##
File path: 
flink-table-store-connector/src/test/java/org/apache/flink/table/store/connector/sink/LogStoreSinkITCase.java
##
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.connector.sink;
+
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.streaming.api.datastream.DataStreamSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.factories.DynamicTableFactory;
+import org.apache.flink.table.store.connector.FileStoreITCase;
+import org.apache.flink.table.store.file.FileStore;
+import org.apache.flink.table.store.kafka.KafkaLogSinkProvider;
+import org.apache.flink.table.store.kafka.KafkaLogSourceProvider;
+import org.apache.flink.table.store.kafka.KafkaLogStoreFactory;
+import org.apache.flink.table.store.kafka.KafkaLogTestUtils;
+import org.apache.flink.table.store.kafka.KafkaTableTestBase;
+import org.apache.flink.table.store.log.LogOptions;
+import org.apache.flink.types.Row;
+
+import org.junit.Test;
+
+import java.util.List;
+import java.util.stream.Collectors;
+
+import static 
org.apache.flink.table.store.connector.FileStoreITCase.buildBatchEnv;
+import static 
org.apache.flink.table.store.connector.FileStoreITCase.buildConfiguration;
+import static 
org.apache.flink.table.store.connector.FileStoreITCase.buildFileStore;
+import static 
org.apache.flink.table.store.connector.FileStoreITCase.buildStreamEnv;
+import static 
org.apache.flink.table.store.connector.FileStoreITCase.buildTestSource;
+import static 
org.apache.flink.table.store.kafka.KafkaLogTestUtils.SINK_CONTEXT;
+import static 
org.apache.flink.table.store.kafka.KafkaLogTestUtils.SOURCE_CONTEXT;
+import static 
org.apache.flink.table.store.kafka.KafkaLogTestUtils.discoverKafkaLogFactory;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** Test for table store with kafka. */
+public class LogStoreSinkITCase extends KafkaTableTestBase {
+
+@Test
+public void testStreamingPartitioned() throws Exception {
+innerTest("testStreamingPartitioned", false, true, true);
+}
+
+@Test
+public void testStreamingNonPartitioned() throws Exception {
+innerTest("testStreamingNonPartitioned", false, false, true);
+}
+
+@Test
+public void testBatchPartitioned() throws Exception {
+innerTest("testBatchPartitioned", true, true, true);
+}
+
+@Test
+public void testStreamingEventual() throws Exception {
+innerTest("testStreamingEventual", false, true, false);
+}
+
+private void innerTest(String name, boolean isBatch, boolean partitioned, 
boolean transaction)
+throws Exception {
+StreamExecutionEnvironment env = isBatch ? buildBatchEnv() : 
buildStreamEnv();
+
+// in eventual mode, failure will result in duplicate data
+FileStore fileStore =
+buildFileStore(

Review comment:
   Yes, we can.
   We need to refactor FileStoreITCase, and even the Sink interface itself, which is a DataStream-layer class that is more complex to build than what a simple SQL statement can accomplish.
   I have created a jira; as a follow-up we need to think through what kind of API StoreSink should expose, since the current handling of keyed data is rather confusing.
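
To make the comparison concrete, here is a rough, hypothetical sketch of what a SQL-driven version of such a test could look like once the SQL integration is wired up; the table name and the absence of connector options are assumptions, not the current FileStoreITCase API:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;
import org.apache.flink.util.CloseableIterator;

public class SqlDrivenStoreSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // A managed table (declared without a 'connector' option) would be handled
        // by the table store's ManagedTableFactory in the envisioned setup.
        tEnv.executeSql("CREATE TABLE managed_t (k INT, v STRING)");
        tEnv.executeSql("INSERT INTO managed_t VALUES (1, 'a'), (2, 'b')").await();

        // Read the data back through SQL instead of assembling a DataStream source.
        try (CloseableIterator<Row> it = tEnv.executeSql("SELECT * FROM managed_t").collect()) {
            it.forEachRemaining(System.out::println);
        }
    }
}
```

This avoids constructing StoreSink / FileStore objects by hand, which is what makes the DataStream-level test setup comparatively heavy.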




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] weibozhao commented on a change in pull request #67: [FLINK-26383] Create model stream sink demo

2022-03-07 Thread GitBox


weibozhao commented on a change in pull request #67:
URL: https://github.com/apache/flink-ml/pull/67#discussion_r821309252



##
File path: 
flink-ml-lib/src/test/java/org/apache/flink/ml/classification/OnlineModelSaveLoadTest.java
##
@@ -0,0 +1,178 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.classification;
+
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.serialization.Encoder;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.file.sink.FileSink;
+import org.apache.flink.connector.file.src.FileSource;
+import org.apache.flink.connector.file.src.reader.SimpleStreamFormat;
+import 
org.apache.flink.ml.classification.logisticregression.LogisticRegressionModelData;
+import org.apache.flink.ml.linalg.DenseVector;
+import org.apache.flink.ml.linalg.Vectors;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import 
org.apache.flink.streaming.api.environment.ExecutionCheckpointingOptions;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import 
org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.BasePathBucketAssigner;
+import 
org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.api.Schema;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.types.Row;
+
+import org.apache.commons.collections.IteratorUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+/** Tests online LogisticRegression model save and load. */
+public class OnlineModelSaveLoadTest {
+@Rule public final TemporaryFolder tempFolder = new TemporaryFolder();
+private StreamTableEnvironment tEnv;
+private StreamExecutionEnvironment env;
+private static final List<Row> modelRows =
+new ArrayList<>(
+Arrays.asList(
+Row.of(Vectors.dense(2.0, 4.5, 3.0)),
+Row.of(Vectors.dense(2.1, 4.6, 3.1)),
+Row.of(Vectors.dense(20.1, 5.6, 3.1)),
+Row.of(Vectors.dense(2.1, 4.7, 3.1))));
+private String tmpPath;
+
+@Before
+public void before() {
+Configuration config = new Configuration();
+
config.set(ExecutionCheckpointingOptions.ENABLE_CHECKPOINTS_AFTER_TASKS_FINISH, 
true);
+env = StreamExecutionEnvironment.getExecutionEnvironment(config);
+env.enableCheckpointing(100);
+env.setRestartStrategy(RestartStrategies.noRestart());
+env.setParallelism(1);
+tEnv = StreamTableEnvironment.create(env);
+Schema modelSchema =
+Schema.newBuilder().column("f0", 
DataTypes.of(DenseVector.class)).build();
+Table modelDataTable = 
tEnv.fromDataStream(env.fromCollection(modelRows), modelSchema);
+try {
+tmpPath = tempFolder.newFolder().getAbsolutePath();

Review comment:
   I think meta and model data save/load are two different things. Here we 
just give a demo for model data save and load.
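
As a complement to that point, a minimal, self-contained sketch of the "save" half such a demo exercises, assuming the model data has already been serialized to Strings upstream; the class and method names below are made up, not part of the PR:

```java
import org.apache.flink.api.common.serialization.Encoder;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.BasePathBucketAssigner;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy;

import java.nio.charset.StandardCharsets;

class ModelDataSaveSketch {

    /** Writes serialized model data rows to files, rolling on every checkpoint. */
    static void saveModelData(DataStream<String> serializedModelData, String basePath) {
        FileSink<String> sink =
                FileSink.forRowFormat(
                                new Path(basePath),
                                (Encoder<String>)
                                        (element, stream) -> {
                                            stream.write(element.getBytes(StandardCharsets.UTF_8));
                                            stream.write('\n');
                                        })
                        .withBucketAssigner(new BasePathBucketAssigner<>())
                        .withRollingPolicy(OnCheckpointRollingPolicy.build())
                        .build();
        serializedModelData.sinkTo(sink);
    }
}
```

The metadata save/load mentioned above would then be handled separately (e.g. via ReadWriteUtils), consistent with treating the two as different concerns.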




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-26529) PyFlink 'tuple' object has no attribute '_values'

2022-03-07 Thread James Schulte (Jira)
James Schulte created FLINK-26529:
-

 Summary: PyFlink 'tuple' object has no attribute '_values'
 Key: FLINK-26529
 URL: https://issues.apache.org/jira/browse/FLINK-26529
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.14.3
 Environment: JAVA_VERSION=8
SCALA_VERSION=2.12
FLINK_VERSION=1.14.3
PYTHON_VERSION=3.7.9

 

Running in Kubernetes using spotify/flink-on-kubernetes-operator
Reporter: James Schulte
 Attachments: flink_operators.py, main.py

 
{code:java}
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
Error received from SDK harness for instruction 4: Traceback (most recent call 
last):  File 
"/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
 line 289, in _executeresponse = task()  File 
"/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
 line 362, in lambda: self.create_worker().do_instruction(request), 
request)  File 
"/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
 line 607, in do_instructiongetattr(request, request_type), 
request.instruction_id)  File 
"/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py",
 line 644, in process_bundle
bundle_processor.process_bundle(instruction_id))  File 
"/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py",
 line 1000, in process_bundleelement.data)  File 
"/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py",
 line 228, in process_encodedself.output(decoded_value)  File 
"apache_beam/runners/worker/operations.py", line 357, in 
apache_beam.runners.worker.operations.Operation.output  File 
"apache_beam/runners/worker/operations.py", line 359, in 
apache_beam.runners.worker.operations.Operation.output  File 
"apache_beam/runners/worker/operations.py", line 221, in 
apache_beam.runners.worker.operations.SingletonConsumerSet.receive  File 
"pyflink/fn_execution/beam/beam_operations_fast.pyx", line 158, in 
pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process  File 
"pyflink/fn_execution/beam/beam_operations_fast.pyx", line 174, in 
pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process  File 
"pyflink/fn_execution/beam/beam_operations_fast.pyx", line 104, in 
pyflink.fn_execution.beam.beam_operations_fast.IntermediateOutputProcessor.process_outputs
  File "pyflink/fn_execution/beam/beam_operations_fast.pyx", line 158, in 
pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process  File 
"pyflink/fn_execution/beam/beam_operations_fast.pyx", line 174, in 
pyflink.fn_execution.beam.beam_operations_fast.FunctionOperation.process  File 
"pyflink/fn_execution/beam/beam_operations_fast.pyx", line 92, in 
pyflink.fn_execution.beam.beam_operations_fast.NetworkOutputProcessor.process_outputs
  File "pyflink/fn_execution/beam/beam_coder_impl_fast.pyx", line 101, in 
pyflink.fn_execution.beam.beam_coder_impl_fast.FlinkLengthPrefixCoderBeamWrapper.encode_to_stream
  File "pyflink/fn_execution/coder_impl_fast.pyx", line 271, in 
pyflink.fn_execution.coder_impl_fast.IterableCoderImpl.encode_to_stream  File 
"pyflink/fn_execution/coder_impl_fast.pyx", line 399, in 
pyflink.fn_execution.coder_impl_fast.RowCoderImpl.encode_to_stream  File 
"pyflink/fn_execution/coder_impl_fast.pyx", line 389, in 
pyflink.fn_execution.coder_impl_fast.RowCoderImpl.encode_to_streamAttributeError:
 'tuple' object has no attribute '_values'
 {code}
Received this error after upgrading from Flink 1.13.1 -> 1.14.3 - no other 
changes.

 

I've reviewed the release notes - can't see anything highlighting why this 
might be the case.
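
The failure surfaces in RowCoderImpl.encode_to_stream, which expects Row objects (they carry the internal _values field) rather than plain Python tuples. A minimal, hypothetical sketch of the kind of mismatch that can trigger this on 1.14, and the Row-based variant that keeps the declared type and the produced value consistent; the pipeline below is made up, not taken from the attached scripts:

{code:python}
from pyflink.common import Row
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
row_type = Types.ROW([Types.STRING(), Types.INT()])

ds = env.from_collection([("a", 1), ("b", 2)])

# Returning a plain tuple while declaring a ROW output type is the kind of
# mismatch that can end in "'tuple' object has no attribute '_values'":
# ds = ds.map(lambda e: (e[0], e[1] + 1), output_type=row_type)

# Returning a Row instance matches the declared ROW output type:
ds = ds.map(lambda e: Row(e[0], e[1] + 1), output_type=row_type)

ds.print()
env.execute("row_vs_tuple_sketch")
{code}

If the attached job builds row values as plain tuples anywhere on the 1.14 runtime, that would be the first place to look.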

 

 

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26495) Dynamic table options does not work for view

2022-03-07 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-26495:
--

Assignee: Jane Chan

> Dynamic table options does not work for view
> 
>
> Key: FLINK-26495
> URL: https://issues.apache.org/jira/browse/FLINK-26495
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>
> The dynamic table options (aka. table hints) syntax
> {code:java}
> table_identifier /*+ OPTIONS(key=val [, key=val]*) */ {code}
> does not work for the view without any exception thrown or suggestions to 
> users. It is not user-friendly and misleading. We should either throw a 
> meaningful exception or support this feature for view.
>  
> h4. How to reproduce
> Run the following statements in SQL CLI
> {code:java}
> Flink SQL> create table datagen (f0 int, f1 double) with ('connector' = 
> 'datagen', 'number-of-rows' = '5');
> [INFO] Execute statement succeed.
> Flink SQL> create view my_view as select * from datagen;
> [INFO] Execute statement succeed.
> Flink SQL> explain plan for select * from my_view /*+ 
> OPTIONS('number-of-rows' = '1') */;
> == Abstract Syntax Tree ==
> LogicalProject(f0=[$0], f1=[$1])
> +- LogicalTableScan(table=[[default_catalog, default_database, datagen]])
> == Optimized Physical Plan ==
> TableSourceScan(table=[[default_catalog, default_database, datagen]], 
> fields=[f0, f1])
> == Optimized Execution Plan ==
> TableSourceScan(table=[[default_catalog, default_database, datagen]], 
> fields=[f0, f1]) {code}
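
For contrast, a small hedged example (not taken from the ticket) of the behaviour difference: the same hint applied directly to the underlying table is picked up by the planner, which is what makes the silent no-op on the view surprising:

{code:sql}
-- Hint on the base table takes effect (number-of-rows is overridden to 1):
EXPLAIN PLAN FOR SELECT * FROM datagen /*+ OPTIONS('number-of-rows' = '1') */;

-- Hint on the view is currently ignored without any warning; the ticket asks to
-- either propagate it to the underlying scan or fail with a meaningful error:
EXPLAIN PLAN FOR SELECT * FROM my_view /*+ OPTIONS('number-of-rows' = '1') */;
{code}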



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

