[jira] [Commented] (FLINK-35968) Remove flink-cdc-runtime dependency from connectors

2024-08-02 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17870454#comment-17870454
 ] 

Jiabao Sun commented on FLINK-35968:


Thanks [~loserwang1024] for volunteering, assigned to you.

> Remove flink-cdc-runtime dependency from connectors
> --
>
> Key: FLINK-35968
> URL: https://issues.apache.org/jira/browse/FLINK-35968
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.1
>Reporter: Hongshun Wang
>Assignee: Hongshun Wang
>Priority: Major
> Fix For: cdc-3.2.0
>
>
> Currently, flink-cdc-source-connectors and flink-cdc-pipeline-connectors 
> depend on flink-cdc-runtime, which is not ideal from a design perspective and 
> is redundant.
> This issue aims to remove that dependency.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35968) Remove flink-cdc-runtime dependency from connectors

2024-08-02 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-35968:
--

Assignee: Hongshun Wang

> Remove flink-cdc-runtime dependency from connectors
> --
>
> Key: FLINK-35968
> URL: https://issues.apache.org/jira/browse/FLINK-35968
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.1
>Reporter: Hongshun Wang
>Assignee: Hongshun Wang
>Priority: Major
> Fix For: cdc-3.2.0
>
>
> Currently, flink-cdc-source-connectors and flink-cdc-pipeline-connectors 
> depend on flink-cdc-runtime, which is not ideal from a design perspective and 
> is redundant.
> This issue aims to remove that dependency.





[jira] [Assigned] (FLINK-35889) mongo cdc restore from expired resume token and job status still running but expect failed

2024-07-24 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-35889:
--

Assignee: yux

> mongo cdc restore from expired resume token and job status still running but 
> expect failed
> --
>
> Key: FLINK-35889
> URL: https://issues.apache.org/jira/browse/FLINK-35889
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Darren_Han
>Assignee: yux
>Priority: Blocker
>
> version
> mongodb: 3.6
> flink: 1.14.6
> mongo-cdc:2.4.2
>  
> When restarting a job from a savepoint/checkpoint that contains an expired 
> resume token/point, the job status stays RUNNING and the job does not capture 
> change data, while continuously printing logs.
> Here are some example logs:
> 2024-07-23 11:11:04,214 INFO  
> com.mongodb.kafka.connect.source.MongoSourceTask             [] - An 
> exception occurred when trying to get the next item from the Change Stream
> com.mongodb.MongoQueryException: Query failed with error code 280 and error 
> message 'resume of change stream was not possible, as the resume token was 
> not found. {_data: BinData(0, "xx")}'
> 2024-07-23 17:53:27,330 INFO  
> com.mongodb.kafka.connect.source.MongoSourceTask             [] - An 
> exception occurred when trying to get the next item from the Change Stream
> com.mongodb.MongoQueryException: Query failed with error code 280 and error 
> message 'resume of change notification was not possible, as the resume point 
> may no longer be in the oplog. ' on server
>  
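
If the job is expected to fail rather than stay RUNNING, one option is to detect this error class and rethrow it as fatal instead of retrying. A minimal, hypothetical sketch in plain Java (not connector code; the class and method names are illustrative), keyed on error code 280 and the message fragments from the logs above:

```java
public class ResumeTokenErrorClassifier {
    // Hypothetical helper: classify whether a change-stream failure indicates
    // an expired or otherwise unresumable resume token, in which case the job
    // should fail fast instead of retrying forever. Error code 280 and the
    // message fragments below come from the logs in this report.
    public static boolean isUnresumable(int errorCode, String message) {
        if (errorCode != 280 || message == null) {
            return false;
        }
        return message.contains("resume token was not found")
                || message.contains("may no longer be in the oplog");
    }
}
```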





[jira] [Comment Edited] (FLINK-35889) mongo cdc restore from expired resume token and job status still running but expect failed

2024-07-24 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868311#comment-17868311
 ] 

Jiabao Sun edited comment on FLINK-35889 at 7/24/24 9:36 AM:
-

Hi [~xiqian_yu], could you help investigate this?


was (Author: JIRAUSER304154):
Hi @yux, could you help investigate this?

> mongo cdc restore from expired resume token and job status still running but 
> expect failed
> --
>
> Key: FLINK-35889
> URL: https://issues.apache.org/jira/browse/FLINK-35889
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Darren_Han
>Priority: Blocker
>
> version
> mongodb: 3.6
> flink: 1.14.6
> mongo-cdc:2.4.2
>  
> When restarting a job from a savepoint/checkpoint that contains an expired 
> resume token/point, the job status stays RUNNING and the job does not capture 
> change data, while continuously printing logs.
> Here are some example logs:
> 2024-07-23 11:11:04,214 INFO  
> com.mongodb.kafka.connect.source.MongoSourceTask             [] - An 
> exception occurred when trying to get the next item from the Change Stream
> com.mongodb.MongoQueryException: Query failed with error code 280 and error 
> message 'resume of change stream was not possible, as the resume token was 
> not found. {_data: BinData(0, "xx")}'
> 2024-07-23 17:53:27,330 INFO  
> com.mongodb.kafka.connect.source.MongoSourceTask             [] - An 
> exception occurred when trying to get the next item from the Change Stream
> com.mongodb.MongoQueryException: Query failed with error code 280 and error 
> message 'resume of change notification was not possible, as the resume point 
> may no longer be in the oplog. ' on server
>  





[jira] [Commented] (FLINK-35889) mongo cdc restore from expired resume token and job status still running but expect failed

2024-07-24 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868311#comment-17868311
 ] 

Jiabao Sun commented on FLINK-35889:


Hi @yux, could you help investigate this?

> mongo cdc restore from expired resume token and job status still running but 
> expect failed
> --
>
> Key: FLINK-35889
> URL: https://issues.apache.org/jira/browse/FLINK-35889
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Darren_Han
>Priority: Blocker
>
> version
> mongodb: 3.6
> flink: 1.14.6
> mongo-cdc:2.4.2
>  
> When restarting a job from a savepoint/checkpoint that contains an expired 
> resume token/point, the job status stays RUNNING and the job does not capture 
> change data, while continuously printing logs.
> Here are some example logs:
> 2024-07-23 11:11:04,214 INFO  
> com.mongodb.kafka.connect.source.MongoSourceTask             [] - An 
> exception occurred when trying to get the next item from the Change Stream
> com.mongodb.MongoQueryException: Query failed with error code 280 and error 
> message 'resume of change stream was not possible, as the resume token was 
> not found. {_data: BinData(0, "xx")}'
> 2024-07-23 17:53:27,330 INFO  
> com.mongodb.kafka.connect.source.MongoSourceTask             [] - An 
> exception occurred when trying to get the next item from the Change Stream
> com.mongodb.MongoQueryException: Query failed with error code 280 and error 
> message 'resume of change notification was not possible, as the resume point 
> may no longer be in the oplog. ' on server
>  





[jira] [Resolved] (FLINK-35623) Bump mongo-driver version from 4.7.2 to 5.1.1 to support MongoDB 7.0

2024-07-23 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-35623.

Resolution: Implemented

main: a7551187d904ed819db085fc36c2cf735913ed5e

> Bump mongo-driver version from 4.7.2 to 5.1.1 to support MongoDB 7.0
> 
>
> Key: FLINK-35623
> URL: https://issues.apache.org/jira/browse/FLINK-35623
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / MongoDB
>Affects Versions: mongodb-1.2.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: mongodb-1.3.0
>
>
> Bump mongo-driver version from 4.7.2 to 5.1.1 to support MongoDB 7.0
>  
> [https://www.mongodb.com/docs/drivers/java/sync/current/compatibility/]





[jira] [Assigned] (FLINK-35868) Bump Mongo driver version to support Mongo 7.0+

2024-07-23 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-35868:
--

Assignee: yux

> Bump Mongo driver version to support Mongo 7.0+
> ---
>
> Key: FLINK-35868
> URL: https://issues.apache.org/jira/browse/FLINK-35868
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: yux
>Assignee: yux
>Priority: Major
>  Labels: pull-request-available
>
> Currently, the MongoDB CDC connector depends on mongodb-driver v4.9.1, which 
> doesn't support Mongo Server 7.0+ [1]. Upgrading the dependency version would 
> be nice since Mongo 7.0 was released nearly a year ago.
> [1] https://www.mongodb.com/docs/drivers/java/sync/current/compatibility/





[jira] [Resolved] (FLINK-35616) Support upsert into sharded collections for MongoRowDataSerializationSchema

2024-07-15 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-35616.

Fix Version/s: mongodb-1.3.0
   Resolution: Implemented

mongodb-connector main: a2f1083c2b0020cde626681e4ebcd0bec649547c

> Support upsert into sharded collections for MongoRowDataSerializationSchema
> ---
>
> Key: FLINK-35616
> URL: https://issues.apache.org/jira/browse/FLINK-35616
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / MongoDB
>Affects Versions: mongodb-1.2.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: mongodb-1.3.0
>
>
> {panel:}
> For a db.collection.update() operation that includes upsert: true and is on a 
> sharded collection, the full sharded key must be included in the filter:
> * For an update operation.
> * For a replace document operation (starting in MongoDB 4.2).
> {panel}
> https://www.mongodb.com/docs/manual/reference/method/db.collection.update/#upsert-on-a-sharded-collection
> We need to allow users to configure the full sharded key field names to 
> upsert into the sharded collection.
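
The constraint above can be sketched in plain Java (no MongoDB driver dependency; the `buildUpsertFilter` helper and the field names are hypothetical): the upsert filter must carry every configured shard-key field in addition to `_id`.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ShardedUpsertFilter {
    /**
     * Builds the filter document for an upsert against a sharded collection.
     * MongoDB requires the full shard key in the filter for upserts, so every
     * configured shard-key field is copied from the row in addition to _id.
     */
    public static Map<String, Object> buildUpsertFilter(
            Map<String, Object> row, List<String> shardKeyFields) {
        Map<String, Object> filter = new LinkedHashMap<>();
        filter.put("_id", row.get("_id"));
        for (String field : shardKeyFields) {
            if (!row.containsKey(field)) {
                throw new IllegalArgumentException(
                        "Row is missing shard-key field: " + field);
            }
            filter.put(field, row.get(field));
        }
        return filter;
    }
}
```

In the real serialization schema, a filter like this would back the driver's update-with-upsert call; the point of the sketch is only that user-configured shard-key field names flow into the filter.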





[jira] [Commented] (FLINK-35849) [flink-cdc] Use expose_snapshot to read snapshot splits of postgres cdc connector.

2024-07-15 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17866208#comment-17866208
 ] 

Jiabao Sun commented on FLINK-35849:


Thanks [~loserwang1024], assigned to you.

> [flink-cdc] Use expose_snapshot to read snapshot splits of postgres cdc 
> connector.
> --
>
> Key: FLINK-35849
> URL: https://issues.apache.org/jira/browse/FLINK-35849
> Project: Flink
>  Issue Type: New Feature
>  Components: Flink CDC
>Affects Versions: cdc-3.1.1
>Reporter: Hongshun Wang
>Assignee: Hongshun Wang
>Priority: Major
> Fix For: cdc-3.3.0
>
>
> In the current Postgres CDC connector, we use the incremental snapshot 
> framework to read data [1], which includes the following steps:
>  # Create a global slot so that the WAL log is not recycled.
>  # The enumerator splits the table into multiple chunks (named "snapshot 
> splits" in CDC), then assigns these snapshot splits to the readers.
>  # Each reader reads the snapshot data of its snapshot split plus the 
> backfill log; each reader needs a temporary slot to read the log.
>  # When all snapshot splits are finished, the enumerator sends a stream 
> split to one reader, which then reads the log.
>  
> However, reading the backfill log also increases the burden on the source 
> database. For example, the Postgres CDC connector establishes many logical 
> replication connections to the Postgres database, which can easily reach the 
> max_sender_num or max_slot_number limit. Assuming there are 10 Postgres CDC 
> sources and each runs 4 parallel processes, a total of 10*(4+1) = 50 
> replication connections will be created. In many situations the sink 
> databases provide idempotence, so we can also support at-least-once 
> semantics by skipping the backfill period, which reduces the load on the 
> source databases. Users can choose between at-least-once and exactly-once 
> based on their demands. [2]
>  
> The two methods make a tradeoff between semantics and performance. Is there 
> any method that does well in both?
> It seems expose_snapshot [3] can: when creating the global slot, we can save 
> the exported snapshot name and use it for snapshot split reading (thus no 
> need to read the backfill log), then read the WAL log based on the global 
> slot. This also provides exactly-once semantics.
> Exporting a snapshot is also the default behavior when creating a new 
> replication slot, so it will not incur other side effects.
>  
> [1] [https://github.com/apache/flink-cdc/pull/2216]
>  [2][https://github.com/apache/flink-cdc/issues/2553]
>  [3] [https://www.postgresql.org/docs/14/protocol-replication.html]
>  
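
A hedged sketch of the flow the proposal describes, with the commands modeled as plain Java strings (the class and method names are hypothetical; the SQL follows the PostgreSQL 14 replication-protocol documentation cited as [3]):

```java
public class ExportedSnapshotSketch {
    // On the replication connection (PostgreSQL 14 syntax): creating the
    // global logical slot with EXPORT_SNAPSHOT also exports a snapshot whose
    // name is returned in the snapshot_name column of the command's result.
    static String createSlot(String slotName) {
        return "CREATE_REPLICATION_SLOT " + slotName
                + " LOGICAL pgoutput EXPORT_SNAPSHOT";
    }

    // On each snapshot-split reader's SQL connection: pinning the exported
    // snapshot makes the SELECT see exactly the state the slot was created
    // at, which is what removes the need for a per-split backfill log read.
    static String[] readSplit(String snapshotName, String splitQuery) {
        return new String[] {
            "BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ",
            "SET TRANSACTION SNAPSHOT '" + snapshotName + "'",
            splitQuery,
            "COMMIT"
        };
    }
}
```

After all splits are read this way, streaming simply starts from the slot's confirmed position, so no change between snapshot and stream is lost or double-counted.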





[jira] [Assigned] (FLINK-35849) [flink-cdc] Use expose_snapshot to read snapshot splits of postgres cdc connector.

2024-07-15 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-35849:
--

Assignee: Hongshun Wang

> [flink-cdc] Use expose_snapshot to read snapshot splits of postgres cdc 
> connector.
> --
>
> Key: FLINK-35849
> URL: https://issues.apache.org/jira/browse/FLINK-35849
> Project: Flink
>  Issue Type: New Feature
>  Components: Flink CDC
>Affects Versions: cdc-3.1.1
>Reporter: Hongshun Wang
>Assignee: Hongshun Wang
>Priority: Major
> Fix For: cdc-3.3.0
>
>
> In the current Postgres CDC connector, we use the incremental snapshot 
> framework to read data [1], which includes the following steps:
>  # Create a global slot so that the WAL log is not recycled.
>  # The enumerator splits the table into multiple chunks (named "snapshot 
> splits" in CDC), then assigns these snapshot splits to the readers.
>  # Each reader reads the snapshot data of its snapshot split plus the 
> backfill log; each reader needs a temporary slot to read the log.
>  # When all snapshot splits are finished, the enumerator sends a stream 
> split to one reader, which then reads the log.
>  
> However, reading the backfill log also increases the burden on the source 
> database. For example, the Postgres CDC connector establishes many logical 
> replication connections to the Postgres database, which can easily reach the 
> max_sender_num or max_slot_number limit. Assuming there are 10 Postgres CDC 
> sources and each runs 4 parallel processes, a total of 10*(4+1) = 50 
> replication connections will be created. In many situations the sink 
> databases provide idempotence, so we can also support at-least-once 
> semantics by skipping the backfill period, which reduces the load on the 
> source databases. Users can choose between at-least-once and exactly-once 
> based on their demands. [2]
>  
> The two methods make a tradeoff between semantics and performance. Is there 
> any method that does well in both?
> It seems expose_snapshot [3] can: when creating the global slot, we can save 
> the exported snapshot name and use it for snapshot split reading (thus no 
> need to read the backfill log), then read the WAL log based on the global 
> slot. This also provides exactly-once semantics.
> Exporting a snapshot is also the default behavior when creating a new 
> replication slot, so it will not incur other side effects.
>  
> [1] [https://github.com/apache/flink-cdc/pull/2216]
>  [2][https://github.com/apache/flink-cdc/issues/2553]
>  [3] [https://www.postgresql.org/docs/14/protocol-replication.html]
>  





[jira] [Updated] (FLINK-35616) Support upsert into sharded collections for MongoRowDataSerializationSchema

2024-06-16 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-35616:
---
Summary: Support upsert into sharded collections for 
MongoRowDataSerializationSchema  (was: Support upsert into sharded collections)

> Support upsert into sharded collections for MongoRowDataSerializationSchema
> ---
>
> Key: FLINK-35616
> URL: https://issues.apache.org/jira/browse/FLINK-35616
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / MongoDB
>Affects Versions: mongodb-1.2.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Major
>
> {panel:}
> For a db.collection.update() operation that includes upsert: true and is on a 
> sharded collection, the full sharded key must be included in the filter:
> * For an update operation.
> * For a replace document operation (starting in MongoDB 4.2).
> {panel}
> https://www.mongodb.com/docs/manual/reference/method/db.collection.update/#upsert-on-a-sharded-collection
> We need to allow users to configure the full sharded key field names to 
> upsert into the sharded collection.





[jira] [Created] (FLINK-35623) Bump mongo-driver version from 4.7.2 to 5.1.1 to support MongoDB 7.0

2024-06-16 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-35623:
--

 Summary: Bump mongo-driver version from 4.7.2 to 5.1.1 to support 
MongoDB 7.0
 Key: FLINK-35623
 URL: https://issues.apache.org/jira/browse/FLINK-35623
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / MongoDB
Affects Versions: mongodb-1.2.0
Reporter: Jiabao Sun
Assignee: Jiabao Sun
 Fix For: mongodb-1.3.0


Bump mongo-driver version from 4.7.2 to 5.1.1 to support MongoDB 7.0

 

[https://www.mongodb.com/docs/drivers/java/sync/current/compatibility/]





[jira] [Created] (FLINK-35618) Flink CDC add MongoDB pipeline data sink connector

2024-06-14 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-35618:
--

 Summary: Flink CDC add MongoDB pipeline data sink connector
 Key: FLINK-35618
 URL: https://issues.apache.org/jira/browse/FLINK-35618
 Project: Flink
  Issue Type: New Feature
  Components: Flink CDC
Affects Versions: cdc-3.2.0
Reporter: Jiabao Sun
Assignee: Jiabao Sun


Flink CDC add MongoDB pipeline data sink connector





[jira] [Created] (FLINK-35616) Support upsert into sharded collections

2024-06-14 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-35616:
--

 Summary: Support upsert into sharded collections
 Key: FLINK-35616
 URL: https://issues.apache.org/jira/browse/FLINK-35616
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / MongoDB
Affects Versions: mongodb-1.2.0
Reporter: Jiabao Sun
Assignee: Jiabao Sun


{panel:}
For a db.collection.update() operation that includes upsert: true and is on a 
sharded collection, the full sharded key must be included in the filter:

* For an update operation.
* For a replace document operation (starting in MongoDB 4.2).
{panel}

https://www.mongodb.com/docs/manual/reference/method/db.collection.update/#upsert-on-a-sharded-collection

We need to allow users to configure the full sharded key field names to upsert 
into the sharded collection.





[jira] [Resolved] (FLINK-35121) CDC pipeline connector should verify requiredOptions and optionalOptions

2024-06-13 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-35121.

  Assignee: yux
Resolution: Implemented

Implemented by flink-cdc master: 2bd2e4ce24ec0cc6a11129e3e3b32af6a09dd977

> CDC pipeline connector should verify requiredOptions and optionalOptions
> 
>
> Key: FLINK-35121
> URL: https://issues.apache.org/jira/browse/FLINK-35121
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Hongshun Wang
>Assignee: yux
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
>
> At present, we provide 
> org.apache.flink.cdc.common.factories.Factory#requiredOptions and 
> org.apache.flink.cdc.common.factories.Factory#optionalOptions, but neither 
> is used anywhere, which means requiredOptions and optionalOptions are never 
> verified.
> Thus, like what DynamicTableFactory does, we should provide a FactoryHelper 
> to help verify requiredOptions and optionalOptions.
>  
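
The intended verification can be sketched in plain Java (the class and method names are hypothetical, not the actual FactoryHelper API): reject a configuration that misses a required option or passes an option that is neither required nor optional.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class FactoryOptionValidator {
    /**
     * Verifies that every required option is present and that no unknown
     * option was passed, mirroring what Flink's table factory helpers do
     * for DynamicTableFactory implementations.
     */
    public static void validate(
            Set<String> required, Set<String> optional, Map<String, String> config) {
        for (String key : required) {
            if (!config.containsKey(key)) {
                throw new IllegalArgumentException("Missing required option: " + key);
            }
        }
        // Any option outside required + optional is unsupported.
        Set<String> known = new HashSet<>(required);
        known.addAll(optional);
        for (String key : config.keySet()) {
            if (!known.contains(key)) {
                throw new IllegalArgumentException("Unsupported option: " + key);
            }
        }
    }
}
```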





[jira] [Updated] (FLINK-35545) Miss 3.1.0 version in snapshot flink-cdc doc version list

2024-06-07 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-35545:
---
Affects Version/s: cdc-3.1.0

> Miss 3.1.0 version in snapshot flink-cdc doc version list
> -
>
> Key: FLINK-35545
> URL: https://issues.apache.org/jira/browse/FLINK-35545
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Attachments: screenshot-1.png
>
>
> Link : [https://nightlies.apache.org/flink/flink-cdc-docs-master/]
> Miss 3.1.0 version in version list:
>  
> !screenshot-1.png!





[jira] [Updated] (FLINK-35545) Miss 3.1.0 version in snapshot flink-cdc doc version list

2024-06-07 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-35545:
---
Summary: Miss 3.1.0 version in snapshot flink-cdc doc version list  (was: 
Miss 3.0.1 version in snapshot flink-cdc doc version list)

> Miss 3.1.0 version in snapshot flink-cdc doc version list
> -
>
> Key: FLINK-35545
> URL: https://issues.apache.org/jira/browse/FLINK-35545
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Attachments: screenshot-1.png
>
>
> Link : [https://nightlies.apache.org/flink/flink-cdc-docs-master/]
> Miss 3.1.0 version in version list:
>  
> !screenshot-1.png!





[jira] [Assigned] (FLINK-35545) Miss 3.1.0 version in snapshot flink-cdc doc version list

2024-06-07 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-35545:
--

Assignee: Zhongqiang Gong

> Miss 3.1.0 version in snapshot flink-cdc doc version list
> -
>
> Key: FLINK-35545
> URL: https://issues.apache.org/jira/browse/FLINK-35545
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Attachments: screenshot-1.png
>
>
> Link : [https://nightlies.apache.org/flink/flink-cdc-docs-master/]
> Miss 3.1.0 version in version list:
>  
> !screenshot-1.png!





[jira] [Commented] (FLINK-35149) Fix DataSinkTranslator#sinkTo ignoring pre-write topology if not TwoPhaseCommittingSink

2024-06-06 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17852690#comment-17852690
 ] 

Jiabao Sun commented on FLINK-35149:


flink-cdc release-3.1: fa9fb0b1c49848e77c211a5913d7f28c33e04ff0

> Fix DataSinkTranslator#sinkTo ignoring pre-write topology if not 
> TwoPhaseCommittingSink
> ---
>
> Key: FLINK-35149
> URL: https://issues.apache.org/jira/browse/FLINK-35149
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Hongshun Wang
>Assignee: Hongshun Wang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.2.0, cdc-3.1.1
>
>
> Currently, when the sink is not an instance of TwoPhaseCommittingSink, 
> input.transform is used rather than stream.transform, which means the 
> pre-write topology will be ignored.
> {code:java}
> private void sinkTo(
>         DataStream<Event> input,
>         Sink<Event> sink,
>         String sinkName,
>         OperatorID schemaOperatorID) {
>     DataStream<Event> stream = input;
>     // Pre write topology
>     if (sink instanceof WithPreWriteTopology) {
>         stream = ((WithPreWriteTopology<Event>) sink).addPreWriteTopology(stream);
>     }
>     if (sink instanceof TwoPhaseCommittingSink) {
>         addCommittingTopology(sink, stream, sinkName, schemaOperatorID);
>     } else {
>         input.transform(
>                 SINK_WRITER_PREFIX + sinkName,
>                 CommittableMessageTypeInfo.noOutput(),
>                 new DataSinkWriterOperatorFactory<>(sink, schemaOperatorID));
>     }
> }
> {code}
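
The effect can be illustrated with a toy model in plain Java (no Flink dependency; the class and method names are hypothetical): chaining the writer onto `input` instead of `stream` drops whatever the pre-write topology added, so presumably the fix is to transform `stream` in the else branch.

```java
import java.util.ArrayList;
import java.util.List;

public class PreWriteTopologyToy {
    // Toy stand-in for DataStream.transform: the list records the chain of
    // operator names applied so far.
    static List<String> transform(List<String> ops, String name) {
        List<String> next = new ArrayList<>(ops);
        next.add(name);
        return next;
    }

    // Buggy wiring from the report: the writer is chained onto the original
    // input, so the "preWrite" stage added to `stream` is silently dropped.
    static List<String> buggySinkTo(List<String> input) {
        List<String> stream = transform(input, "preWrite");
        return transform(input, "sinkWriter"); // should use `stream`
    }

    // Fixed wiring: chain the writer onto `stream`, keeping "preWrite".
    static List<String> fixedSinkTo(List<String> input) {
        List<String> stream = transform(input, "preWrite");
        return transform(stream, "sinkWriter");
    }
}
```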





[jira] [Updated] (FLINK-35149) Fix DataSinkTranslator#sinkTo ignoring pre-write topology if not TwoPhaseCommittingSink

2024-06-06 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-35149:
---
Release Note:   (was: cdc release-3.1: 
fa9fb0b1c49848e77c211a5913d7f28c33e04ff0)

> Fix DataSinkTranslator#sinkTo ignoring pre-write topology if not 
> TwoPhaseCommittingSink
> ---
>
> Key: FLINK-35149
> URL: https://issues.apache.org/jira/browse/FLINK-35149
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Hongshun Wang
>Assignee: Hongshun Wang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.2.0, cdc-3.1.1
>
>
> Currently, when the sink is not an instance of TwoPhaseCommittingSink, 
> input.transform is used rather than stream.transform, which means the 
> pre-write topology will be ignored.
> {code:java}
> private void sinkTo(
>         DataStream<Event> input,
>         Sink<Event> sink,
>         String sinkName,
>         OperatorID schemaOperatorID) {
>     DataStream<Event> stream = input;
>     // Pre write topology
>     if (sink instanceof WithPreWriteTopology) {
>         stream = ((WithPreWriteTopology<Event>) sink).addPreWriteTopology(stream);
>     }
>     if (sink instanceof TwoPhaseCommittingSink) {
>         addCommittingTopology(sink, stream, sinkName, schemaOperatorID);
>     } else {
>         input.transform(
>                 SINK_WRITER_PREFIX + sinkName,
>                 CommittableMessageTypeInfo.noOutput(),
>                 new DataSinkWriterOperatorFactory<>(sink, schemaOperatorID));
>     }
> }
> {code}





[jira] [Resolved] (FLINK-35149) Fix DataSinkTranslator#sinkTo ignoring pre-write topology if not TwoPhaseCommittingSink

2024-06-06 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-35149.

Release Note: cdc release-3.1: fa9fb0b1c49848e77c211a5913d7f28c33e04ff0
  Resolution: Fixed

> Fix DataSinkTranslator#sinkTo ignoring pre-write topology if not 
> TwoPhaseCommittingSink
> ---
>
> Key: FLINK-35149
> URL: https://issues.apache.org/jira/browse/FLINK-35149
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Hongshun Wang
>Assignee: Hongshun Wang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.2.0, cdc-3.1.1
>
>
> Currently, when the sink is not an instance of TwoPhaseCommittingSink, 
> input.transform is used rather than stream.transform, which means the 
> pre-write topology will be ignored.
> {code:java}
> private void sinkTo(
>         DataStream<Event> input,
>         Sink<Event> sink,
>         String sinkName,
>         OperatorID schemaOperatorID) {
>     DataStream<Event> stream = input;
>     // Pre write topology
>     if (sink instanceof WithPreWriteTopology) {
>         stream = ((WithPreWriteTopology<Event>) sink).addPreWriteTopology(stream);
>     }
>     if (sink instanceof TwoPhaseCommittingSink) {
>         addCommittingTopology(sink, stream, sinkName, schemaOperatorID);
>     } else {
>         input.transform(
>                 SINK_WRITER_PREFIX + sinkName,
>                 CommittableMessageTypeInfo.noOutput(),
>                 new DataSinkWriterOperatorFactory<>(sink, schemaOperatorID));
>     }
> }
> {code}





[jira] [Resolved] (FLINK-35527) Polish quickstart guide & clean stale links in docs

2024-06-05 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-35527.

Fix Version/s: cdc-3.2.0
   Resolution: Fixed

cdc master: b1e157468a703e53c3217940182f5f1a021c3ea3
cdc release-3.1: dcf3966ec102c67f56d94790de2cf2ffa606d20f

> Polish quickstart guide & clean stale links in docs
> ---
>
> Key: FLINK-35527
> URL: https://issues.apache.org/jira/browse/FLINK-35527
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: yux
>Assignee: yux
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0, cdc-3.1.1
>
>
> Currently, there are still a lot of stale links in the Flink CDC docs, 
> including some download links pointing to Ververica Maven repositories. We 
> need to clean them up to avoid confusing users.





[jira] [Updated] (FLINK-35527) Polish quickstart guide & clean stale links in docs

2024-06-05 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-35527:
---
Fix Version/s: (was: cdc-3.2.0)

> Polish quickstart guide & clean stale links in docs
> ---
>
> Key: FLINK-35527
> URL: https://issues.apache.org/jira/browse/FLINK-35527
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: yux
>Assignee: yux
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.1.1
>
>
> Currently, there are still a lot of stale links in the Flink CDC docs, 
> including some download links pointing to Ververica Maven repositories. We 
> need to clean them up to avoid confusing users.





[jira] [Updated] (FLINK-35527) Polish quickstart guide & clean stale links in docs

2024-06-05 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-35527:
---
Fix Version/s: cdc-3.1.1

> Polish quickstart guide & clean stale links in docs
> ---
>
> Key: FLINK-35527
> URL: https://issues.apache.org/jira/browse/FLINK-35527
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: yux
>Assignee: yux
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0, cdc-3.1.1
>
>
> Currently, there are still a lot of stale links in the Flink CDC docs, 
> including some download links pointing to Ververica Maven repositories. We 
> need to clean them up to avoid confusing users.





[jira] [Assigned] (FLINK-35527) Polish quickstart guide & clean stale links in docs

2024-06-05 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-35527:
--

Assignee: yux

> Polish quickstart guide & clean stale links in docs
> ---
>
> Key: FLINK-35527
> URL: https://issues.apache.org/jira/browse/FLINK-35527
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: yux
>Assignee: yux
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
>
> Currently, there are still many stale links in the Flink CDC docs, including 
> some download links pointing to Ververica Maven repositories. They need to be 
> cleaned up to avoid misleading users.





[jira] [Commented] (FLINK-35295) Improve jdbc connection pool initialization failure message

2024-05-30 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850903#comment-17850903
 ] 

Jiabao Sun commented on FLINK-35295:


release-3.1: e18e7a2523ac1ea59471e5714eb60f544e9f4a04

> Improve jdbc connection pool initialization failure message
> ---
>
> Key: FLINK-35295
> URL: https://issues.apache.org/jira/browse/FLINK-35295
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Xiao Huang
>Assignee: Xiao Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0, cdc-3.1.1
>
>
> As described in ticket title.





[jira] [Updated] (FLINK-35295) Improve jdbc connection pool initialization failure message

2024-05-30 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-35295:
---
Fix Version/s: cdc-3.1.1

> Improve jdbc connection pool initialization failure message
> ---
>
> Key: FLINK-35295
> URL: https://issues.apache.org/jira/browse/FLINK-35295
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Xiao Huang
>Assignee: Xiao Huang
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0, cdc-3.1.1
>
>
> As described in ticket title.





[jira] [Commented] (FLINK-25537) [JUnit5 Migration] Module: flink-core

2024-05-07 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844226#comment-17844226
 ] 

Jiabao Sun commented on FLINK-25537:


master: ffa3869c48a68c1dd3126fa949adc6953979711f

> [JUnit5 Migration] Module: flink-core
> -
>
> Key: FLINK-25537
> URL: https://issues.apache.org/jira/browse/FLINK-25537
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core
>Reporter: Qingsheng Ren
>Assignee: Aiden Gong
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.20.0
>
>






[jira] [Closed] (FLINK-35245) Add metrics for flink-connector-tidb-cdc

2024-05-06 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun closed FLINK-35245.
--
Resolution: Implemented

Implemented via cdc-master: fa6e7ea51258dcd90f06036196618224156df367

> Add metrics for flink-connector-tidb-cdc
> 
>
> Key: FLINK-35245
> URL: https://issues.apache.org/jira/browse/FLINK-35245
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xie Yi
>Assignee: Xie Yi
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
>
> As [https://github.com/apache/flink-cdc/issues/985] was closed without being 
> resolved, this new issue is created to track it.





[jira] [Updated] (FLINK-35245) Add metrics for flink-connector-tidb-cdc

2024-05-06 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-35245:
---
Fix Version/s: cdc-3.2.0

> Add metrics for flink-connector-tidb-cdc
> 
>
> Key: FLINK-35245
> URL: https://issues.apache.org/jira/browse/FLINK-35245
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xie Yi
>Assignee: Xie Yi
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
>
> As [https://github.com/apache/flink-cdc/issues/985] was closed without being 
> resolved, this new issue is created to track it.





[jira] [Assigned] (FLINK-35245) Add metrics for flink-connector-tidb-cdc

2024-05-06 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-35245:
--

Assignee: Xie Yi

> Add metrics for flink-connector-tidb-cdc
> 
>
> Key: FLINK-35245
> URL: https://issues.apache.org/jira/browse/FLINK-35245
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xie Yi
>Assignee: Xie Yi
>Priority: Major
>  Labels: pull-request-available
>
> As [https://github.com/apache/flink-cdc/issues/985] was closed without being 
> resolved, this new issue is created to track it.





[jira] [Closed] (FLINK-35274) Occasional failure issue with Flink CDC Db2 UT

2024-05-05 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun closed FLINK-35274.
--

> Occasional failure issue with Flink CDC Db2 UT
> --
>
> Key: FLINK-35274
> URL: https://issues.apache.org/jira/browse/FLINK-35274
> Project: Flink
>  Issue Type: Bug
>Reporter: Xin Gong
>Assignee: Xin Gong
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.1.0
>
>
> Flink CDC Db2 UT fails occasionally. Because the Db2 redo log data's tableId 
> does not contain the database name, the table schema occasionally cannot be 
> found when the task restarts after an exception. I will fix it by supplementing 
> the database name.





[jira] [Resolved] (FLINK-35274) Occasional failure issue with Flink CDC Db2 UT

2024-05-05 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-35274.

Resolution: Fixed

Fixed via cdc
* master: a7cb46f7621568486a069a7ae01a7b86ebb0a801
* release-3.1: d556f29475a52234a98bcc65db959483a10beb52

> Occasional failure issue with Flink CDC Db2 UT
> --
>
> Key: FLINK-35274
> URL: https://issues.apache.org/jira/browse/FLINK-35274
> Project: Flink
>  Issue Type: Bug
>Reporter: Xin Gong
>Assignee: Xin Gong
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.1.0
>
>
> Flink CDC Db2 UT fails occasionally. Because the Db2 redo log data's tableId 
> does not contain the database name, the table schema occasionally cannot be 
> found when the task restarts after an exception. I will fix it by supplementing 
> the database name.





[jira] [Assigned] (FLINK-35274) Occasional failure issue with Flink CDC Db2 UT

2024-05-05 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-35274:
--

Assignee: Xin Gong

> Occasional failure issue with Flink CDC Db2 UT
> --
>
> Key: FLINK-35274
> URL: https://issues.apache.org/jira/browse/FLINK-35274
> Project: Flink
>  Issue Type: Bug
>Reporter: Xin Gong
>Assignee: Xin Gong
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.1.0
>
>
> Flink CDC Db2 UT fails occasionally. Because the Db2 redo log data's tableId 
> does not contain the database name, the table schema occasionally cannot be 
> found when the task restarts after an exception. I will fix it by supplementing 
> the database name.





[jira] [Closed] (FLINK-35244) Correct the package for flink-connector-tidb-cdc test

2024-05-05 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun closed FLINK-35244.
--

>  Correct the package for flink-connector-tidb-cdc test
> --
>
> Key: FLINK-35244
> URL: https://issues.apache.org/jira/browse/FLINK-35244
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xie Yi
>Assignee: Xie Yi
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
> Attachments: image-2024-04-26-16-19-39-297.png
>
>
> Test cases for flink-connector-tidb-cdc should be under the
> *org.apache.flink.cdc.connectors.tidb* package
> instead of *org.apache.flink.cdc.connectors*.
> !image-2024-04-26-16-19-39-297.png!
>  
>  
>  
>  





[jira] [Resolved] (FLINK-35244) Correct the package for flink-connector-tidb-cdc test

2024-05-05 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-35244.

Fix Version/s: cdc-3.2.0
   Resolution: Fixed

Resolved via cdc-master: 002b16ed4e155b01374040ff302b7536d9c41245

>  Correct the package for flink-connector-tidb-cdc test
> --
>
> Key: FLINK-35244
> URL: https://issues.apache.org/jira/browse/FLINK-35244
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xie Yi
>Assignee: Xie Yi
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.2.0
>
> Attachments: image-2024-04-26-16-19-39-297.png
>
>
> Test cases for flink-connector-tidb-cdc should be under the
> *org.apache.flink.cdc.connectors.tidb* package
> instead of *org.apache.flink.cdc.connectors*.
> !image-2024-04-26-16-19-39-297.png!
>  
>  
>  
>  





[jira] [Updated] (FLINK-35244) Correct the package for flink-connector-tidb-cdc test

2024-05-05 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-35244:
---
Summary:  Correct the package for flink-connector-tidb-cdc test  (was: Move 
package for flink-connector-tidb-cdc test)

>  Correct the package for flink-connector-tidb-cdc test
> --
>
> Key: FLINK-35244
> URL: https://issues.apache.org/jira/browse/FLINK-35244
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xie Yi
>Assignee: Xie Yi
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-04-26-16-19-39-297.png
>
>
> Test cases for flink-connector-tidb-cdc should be under the
> *org.apache.flink.cdc.connectors.tidb* package
> instead of *org.apache.flink.cdc.connectors*.
> !image-2024-04-26-16-19-39-297.png!
>  
>  
>  
>  





[jira] [Resolved] (FLINK-32843) [JUnit5 Migration] The jobmaster package of flink-runtime module

2024-05-05 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-32843.

Fix Version/s: 1.20.0
   Resolution: Fixed

Resolved via master: beb0b167bdcf95f27be87a214a69a174fd49d256

> [JUnit5 Migration] The jobmaster package of flink-runtime module
> 
>
> Key: FLINK-32843
> URL: https://issues.apache.org/jira/browse/FLINK-32843
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Rui Fan
>Assignee: RocMarshal
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






[jira] [Assigned] (FLINK-35244) Move package for flink-connector-tidb-cdc test

2024-04-26 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-35244:
--

Assignee: Xie Yi

> Move package for flink-connector-tidb-cdc test
> --
>
> Key: FLINK-35244
> URL: https://issues.apache.org/jira/browse/FLINK-35244
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Xie Yi
>Assignee: Xie Yi
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-04-26-16-19-39-297.png
>
>
> test case for flink-connector-tidb-cdc should under
> *org.apache.flink.cdc.connectors.tidb* package
> instead of *org.apache.flink.cdc.connectors*
> !image-2024-04-26-16-19-39-297.png!
>  
>  
>  
>  





[jira] [Resolved] (FLINK-35235) Fix missing dependencies in the uber jar

2024-04-26 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-35235.

  Assignee: LvYanquan
Resolution: Fixed

Resolved via

* cdc master: ec643c9dd7365261f3cee620d4d6bd5d042917e0
* cdc release-3.1: b96ea11cc7df6c3d57a155573f29c18bf9d787ae

> Fix missing dependencies in the uber jar
> 
>
> Key: FLINK-35235
> URL: https://issues.apache.org/jira/browse/FLINK-35235
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: 3.1.0
>Reporter: LvYanquan
>Assignee: LvYanquan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.0
>
> Attachments: image-2024-04-25-15-17-20-987.png, 
> image-2024-04-25-15-17-34-717.png
>
>
> Some Kafka classes were not included in the fat jar.
> !image-2024-04-25-15-17-34-717.png!





[jira] [Assigned] (FLINK-34738) "Deployment - YARN" Page for Flink CDC Chinese Documentation

2024-04-17 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-34738:
--

Assignee: Vincent Woo

> "Deployment - YARN" Page for Flink CDC Chinese Documentation
> 
>
> Key: FLINK-34738
> URL: https://issues.apache.org/jira/browse/FLINK-34738
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation, Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: LvYanquan
>Assignee: Vincent Woo
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> Translate 
> [https://github.com/apache/flink-cdc/blob/master/docs/content/docs/deployment/yarn.md]
>  into Chinese.





[jira] [Commented] (FLINK-35139) Release flink-connector-mongodb vX.X.X for Flink 1.19

2024-04-17 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838077#comment-17838077
 ] 

Jiabao Sun commented on FLINK-35139:


mongodb main: 660ffe4f33f3ce60da139159741644f48295652d

> Release flink-connector-mongodb vX.X.X for Flink 1.19
> -
>
> Key: FLINK-35139
> URL: https://issues.apache.org/jira/browse/FLINK-35139
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / MongoDB
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
> Fix For: mongodb-1.2.0
>
>
> https://github.com/apache/flink-connector-mongodb





[jira] [Resolved] (FLINK-35079) MongoConnector failed to resume token when current collection removed

2024-04-16 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-35079.

Fix Version/s: cdc-3.1.0
   Resolution: Fixed

resolved via cdc master: 0562e35da75fb2c8e512d438adb8f80a87964dc4

> MongoConnector failed to resume token when current collection removed
> -
>
> Key: FLINK-35079
> URL: https://issues.apache.org/jira/browse/FLINK-35079
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Xiqian YU
>Assignee: Xiqian YU
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> When the connector tries to create a cursor with an expired resume token during 
> the stream task fetching stage, the MongoDB connector will crash with the message: 
> "error due to Command failed with error 280 (ChangeStreamFatalError): 'cannot 
> resume stream; the resume token was not found."





[jira] [Commented] (FLINK-35127) CDC ValuesDataSourceITCase crashed due to OutOfMemoryError

2024-04-16 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837929#comment-17837929
 ] 

Jiabao Sun commented on FLINK-35127:


Hi [~kunni],
Could you help take a look?

> CDC ValuesDataSourceITCase crashed due to OutOfMemoryError
> --
>
> Key: FLINK-35127
> URL: https://issues.apache.org/jira/browse/FLINK-35127
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Jiabao Sun
>Priority: Major
>  Labels: test-stability
> Fix For: cdc-3.1.0
>
>
> {code}
> [INFO] Running 
> org.apache.flink.cdc.connectors.values.source.ValuesDataSourceITCase
> Error: Exception in thread "surefire-forkedjvm-command-thread" 
> java.lang.OutOfMemoryError: Java heap space
> Error:  
> Error:  Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "taskmanager_4-main-scheduler-thread-2"
> Error:  
> Error:  Exception: java.lang.OutOfMemoryError thrown from the 
> UncaughtExceptionHandler in thread "System Time Trigger for Source: values 
> (1/4)#0"
> {code}
> https://github.com/apache/flink-cdc/actions/runs/8698450229/job/23858750352?pr=3221#step:6:1949





[jira] [Created] (FLINK-35127) CDC ValuesDataSourceITCase crashed due to OutOfMemoryError

2024-04-16 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-35127:
--

 Summary: CDC ValuesDataSourceITCase crashed due to OutOfMemoryError
 Key: FLINK-35127
 URL: https://issues.apache.org/jira/browse/FLINK-35127
 Project: Flink
  Issue Type: Bug
  Components: Flink CDC
Reporter: Jiabao Sun
 Fix For: cdc-3.1.0


{code}
[INFO] Running 
org.apache.flink.cdc.connectors.values.source.ValuesDataSourceITCase
Error: Exception in thread "surefire-forkedjvm-command-thread" 
java.lang.OutOfMemoryError: Java heap space
Error:  
Error:  Exception: java.lang.OutOfMemoryError thrown from the 
UncaughtExceptionHandler in thread "taskmanager_4-main-scheduler-thread-2"
Error:  
Error:  Exception: java.lang.OutOfMemoryError thrown from the 
UncaughtExceptionHandler in thread "System Time Trigger for Source: values 
(1/4)#0"
{code}

https://github.com/apache/flink-cdc/actions/runs/8698450229/job/23858750352?pr=3221#step:6:1949






[jira] [Commented] (FLINK-25537) [JUnit5 Migration] Module: flink-core

2024-04-15 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837471#comment-17837471
 ] 

Jiabao Sun commented on FLINK-25537:


master: 138f1f9d17b7a58b092ee7d9fc4c20d968a7b33b

> [JUnit5 Migration] Module: flink-core
> -
>
> Key: FLINK-25537
> URL: https://issues.apache.org/jira/browse/FLINK-25537
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core
>Reporter: Qingsheng Ren
>Assignee: Aiden Gong
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-35010) Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.1 for Flink Mongodb connector

2024-04-09 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun closed FLINK-35010.
--

> Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.1 for Flink 
> Mongodb connector
> --
>
> Key: FLINK-35010
> URL: https://issues.apache.org/jira/browse/FLINK-35010
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / MongoDB
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: mongodb-1.2.0
>
>






[jira] [Resolved] (FLINK-35010) Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.1 for Flink Mongodb connector

2024-04-09 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-35010.

Fix Version/s: mongodb-1.2.0
   Resolution: Fixed

Fixed via mongodb-connector main: ee1146dadf73e91ecb7a2b28cfa879e7fe3b3f22

> Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.1 for Flink 
> Mongodb connector
> --
>
> Key: FLINK-35010
> URL: https://issues.apache.org/jira/browse/FLINK-35010
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / MongoDB
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: mongodb-1.2.0
>
>






[jira] [Updated] (FLINK-35010) Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.1 for Flink Mongodb connector

2024-04-09 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-35010:
---
Summary: Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.1 for 
Flink Mongodb connector  (was: Bump org.apache.commons:commons-compress from 
1.24.0 to 1.26.0 for Flink Mongodb connector)

> Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.1 for Flink 
> Mongodb connector
> --
>
> Key: FLINK-35010
> URL: https://issues.apache.org/jira/browse/FLINK-35010
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / MongoDB
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
>






[jira] [Commented] (FLINK-35008) Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.0 for Flink Kafka connector

2024-04-08 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17835156#comment-17835156
 ] 

Jiabao Sun commented on FLINK-35008:


I agree with Sergey's opinion. 

In version 1.26.0, the commons-codec dependency is optional, and the dependency 
error of COMPRESS-659 causes CI failures. 
To avoid this error, we have to explicitly add the commons-codec dependency. 

Although the COMPRESS-659 import error has been fixed in version 1.26.1, 
commons-codec became a non-optional transitive dependency there, so declaring it 
explicitly is no longer necessary.
With version 1.26.1, we don't need to explicitly declare the commons-codec 
dependency, which may be better than using version 1.26.0.

> Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.0 for Flink 
> Kafka connector
> 
>
> Key: FLINK-35008
> URL: https://issues.apache.org/jira/browse/FLINK-35008
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>






[jira] [Commented] (FLINK-34921) SystemProcessingTimeServiceTest fails due to missing output

2024-04-08 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834864#comment-17834864
 ] 

Jiabao Sun commented on FLINK-34921:


Maybe we shouldn't use ScheduledFuture.get() to check whether the scheduled task 
has completed.

https://stackoverflow.com/questions/28116301/scheduledfuture-get-is-still-blocked-after-executor-shutdown
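The linked discussion boils down to this: a task scheduled with scheduleAtFixedRate never completes normally, so ScheduledFuture.get() blocks until the task is cancelled. A minimal sketch of an alternative, using a CountDownLatch to observe progress with a bounded wait (class and method names here are hypothetical, not from the Flink test):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class FixedRateCompletionSketch {

    // Observe that a fixed-rate task has run, without blocking forever on
    // ScheduledFuture.get(): the latch counts down on the first execution.
    public static boolean ranAtLeastOnce(long timeoutMillis) throws InterruptedException {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch fired = new CountDownLatch(1);
        ScheduledFuture<?> future =
                executor.scheduleAtFixedRate(fired::countDown, 0, 10, TimeUnit.MILLISECONDS);
        try {
            // Bounded wait: returns true as soon as the task has run once.
            return fired.await(timeoutMillis, TimeUnit.MILLISECONDS);
        } finally {
            future.cancel(true);
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(ranAtLeastOnce(1000));
    }
}
```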

> SystemProcessingTimeServiceTest fails due to missing output
> ---
>
> Key: FLINK-34921
> URL: https://issues.apache.org/jira/browse/FLINK-34921
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.20.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> This PR CI build with {{AdaptiveScheduler}} enabled failed:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58476=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=24c3384f-1bcb-57b3-224f-51bf973bbee8=11224
> {code}
> "ForkJoinPool-61-worker-25" #863 daemon prio=5 os_prio=0 
> tid=0x7f8c19eba000 nid=0x60a5 waiting on condition [0x7f8bc2cf9000]
> Mar 21 17:19:42java.lang.Thread.State: WAITING (parking)
> Mar 21 17:19:42   at sun.misc.Unsafe.park(Native Method)
> Mar 21 17:19:42   - parking to wait for  <0xd81959b8> (a 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask)
> Mar 21 17:19:42   at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> Mar 21 17:19:42   at 
> java.util.concurrent.FutureTask.awaitDone(FutureTask.java:429)
> Mar 21 17:19:42   at 
> java.util.concurrent.FutureTask.get(FutureTask.java:191)
> Mar 21 17:19:42   at 
> org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest$$Lambda$1443/1477662666.call(Unknown
>  Source)
> Mar 21 17:19:42   at 
> org.assertj.core.api.ThrowableAssert.catchThrowable(ThrowableAssert.java:63)
> Mar 21 17:19:42   at 
> org.assertj.core.api.AssertionsForClassTypes.catchThrowable(AssertionsForClassTypes.java:892)
> Mar 21 17:19:42   at 
> org.assertj.core.api.Assertions.catchThrowable(Assertions.java:1366)
> Mar 21 17:19:42   at 
> org.assertj.core.api.Assertions.assertThatThrownBy(Assertions.java:1210)
> Mar 21 17:19:42   at 
> org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest.testQuiesceAndAwaitingCancelsScheduledAtFixRateFuture(SystemProcessingTimeServiceTest.java:92)
> {code}





[jira] [Commented] (FLINK-34955) Upgrade commons-compress to 1.26.0

2024-04-07 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834779#comment-17834779
 ] 

Jiabao Sun commented on FLINK-34955:


I have rechecked the `commons-codec` dependency in `commons-compress`, and it is 
no longer optional. Even if we upgrade to 1.26.1, `commons-codec` will still be 
a transitive dependency. 
Sorry for the disturbance.

> Upgrade commons-compress to 1.26.0
> --
>
> Key: FLINK-34955
> URL: https://issues.apache.org/jira/browse/FLINK-34955
> Project: Flink
>  Issue Type: Improvement
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.2, 1.20.0, 1.19.1
>
>
> commons-compress 1.24.0 has CVE issues, try to upgrade to 1.26.0, we can 
> refer to the maven link
> https://mvnrepository.com/artifact/org.apache.commons/commons-compress





[jira] [Commented] (FLINK-35010) Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 for Flink Mongodb connector

2024-04-07 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834778#comment-17834778
 ] 

Jiabao Sun commented on FLINK-35010:


I have rechecked the `commons-codec` dependency in `commons-compress`, and it is 
no longer optional. 
Even if we upgrade to 1.26.1, `commons-codec` will still be a transitive 
dependency. 
Please ignore the previous noise, sorry for the disturbance.

> Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 for Flink 
> Mongodb connector
> --
>
> Key: FLINK-35010
> URL: https://issues.apache.org/jira/browse/FLINK-35010
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / MongoDB
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
>






[jira] [Commented] (FLINK-35008) Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.0 for Flink Kafka connector

2024-04-07 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834777#comment-17834777
 ] 

Jiabao Sun commented on FLINK-35008:


I have rechecked the `commons-codec` dependency in `commons-compress`, and it is 
no longer optional. 
Even if we upgrade to 1.26.1, `commons-codec` will still be a transitive 
dependency. 
Please ignore the previous noise, sorry for the disturbance.

> Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.0 for Flink 
> Kafka connector
> 
>
> Key: FLINK-35008
> URL: https://issues.apache.org/jira/browse/FLINK-35008
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>






[jira] [Assigned] (FLINK-35010) Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 for Flink Mongodb connector

2024-04-06 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-35010:
--

Assignee: Zhongqiang Gong

> Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 for Flink 
> Mongodb connector
> --
>
> Key: FLINK-35010
> URL: https://issues.apache.org/jira/browse/FLINK-35010
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / MongoDB
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
>






[jira] [Commented] (FLINK-35010) Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 for Flink Mongodb connector

2024-04-06 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834607#comment-17834607
 ] 

Jiabao Sun commented on FLINK-35010:


I think we should bump commons-compress version to 1.26.1 due to 
https://issues.apache.org/jira/browse/COMPRESS-659.

> Bump org.apache.commons:commons-compress from 1.24.0 to 1.26.0 for Flink 
> Mongodb connector
> --
>
> Key: FLINK-35010
> URL: https://issues.apache.org/jira/browse/FLINK-35010
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / MongoDB
>Reporter: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
>






[jira] [Commented] (FLINK-35008) Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.0 for Flink Kafka connector

2024-04-06 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834606#comment-17834606
 ] 

Jiabao Sun commented on FLINK-35008:


Because TarArchiveOutputStream incorrectly depends on the Charsets class from 
the commons-codec package, it is necessary to include the commons-codec 
dependency to avoid a NoClassDefFoundError at runtime. 
This issue has been fixed in commons-compress version 1.26.1.

https://github.com/GOODBOY008/flink-connector-mongodb/actions/runs/8557577952/job/23450146047#step:15:11104
{code}
Caused by: java.lang.RuntimeException: Failed to build JobManager image
at 
org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configureJobManagerContainer(FlinkTestcontainersConfigurator.java:67)
at 
org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configure(FlinkTestcontainersConfigurator.java:147)
at 
org.apache.flink.connector.testframe.container.FlinkContainers$Builder.build(FlinkContainers.java:197)
at 
org.apache.flink.tests.util.mongodb.MongoE2ECase.<init>(MongoE2ECase.java:90)
... 56 more
Caused by: org.apache.flink.connector.testframe.container.ImageBuildException: 
Failed to build image "flink-configured-jobmanager"
at 
org.apache.flink.connector.testframe.container.FlinkImageBuilder.build(FlinkImageBuilder.java:234)
at 
org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configureJobManagerContainer(FlinkTestcontainersConfigurator.java:65)
... 59 more
Caused by: java.lang.RuntimeException: java.lang.NoClassDefFoundError: 
org/apache/commons/codec/Charsets
at org.rnorth.ducttape.timeouts.Timeouts.callFuture(Timeouts.java:68)
at 
org.rnorth.ducttape.timeouts.Timeouts.getWithTimeout(Timeouts.java:43)
at org.testcontainers.utility.LazyFuture.get(LazyFuture.java:45)
at 
org.apache.flink.connector.testframe.container.FlinkImageBuilder.buildBaseImage(FlinkImageBuilder.java:255)
at 
org.apache.flink.connector.testframe.container.FlinkImageBuilder.build(FlinkImageBuilder.java:206)
... 60 more
Caused by: java.lang.NoClassDefFoundError: org/apache/commons/codec/Charsets
at 
org.apache.commons.compress.archivers.tar.TarArchiveOutputStream.<init>(TarArchiveOutputStream.java:212)
at 
org.apache.commons.compress.archivers.tar.TarArchiveOutputStream.<init>(TarArchiveOutputStream.java:157)
at 
org.apache.commons.compress.archivers.tar.TarArchiveOutputStream.<init>(TarArchiveOutputStream.java:147)
at 
org.testcontainers.images.builder.ImageFromDockerfile.resolve(ImageFromDockerfile.java:129)
at 
org.testcontainers.images.builder.ImageFromDockerfile.resolve(ImageFromDockerfile.java:40)
at 
org.testcontainers.utility.LazyFuture.getResolvedValue(LazyFuture.java:17)
at org.testcontainers.utility.LazyFuture.get(LazyFuture.java:39)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.codec.Charsets
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 11 more
{code}
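
The straightforward workaround discussed above is bumping to 1.26.1, where 
COMPRESS-659 removed the accidental commons-codec usage. An illustrative pom 
fragment (the coordinates are the standard commons-compress ones; where the 
dependency is declared in a given connector build may differ):

```xml
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-compress</artifactId>
  <!-- 1.26.1 fixes COMPRESS-659: TarArchiveOutputStream no longer
       references org.apache.commons.codec.Charsets at class init -->
  <version>1.26.1</version>
</dependency>
```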


> Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.0 for Flink 
> Kafka connector
> 
>
> Key: FLINK-35008
> URL: https://issues.apache.org/jira/browse/FLINK-35008
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35008) Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.0 for Flink Kafka connector

2024-04-06 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834604#comment-17834604
 ] 

Jiabao Sun commented on FLINK-35008:


Hi [~martijnvisser], maybe we should bump the commons-compress version to 
1.26.1 instead.
The dependency on commons-codec in version 1.26.0 was unintentional and has 
been removed in 1.26.1.

see: https://issues.apache.org/jira/browse/COMPRESS-659

> Bump org.apache.commons:commons-compress from 1.25.0 to 1.26.0 for Flink 
> Kafka connector
> 
>
> Key: FLINK-35008
> URL: https://issues.apache.org/jira/browse/FLINK-35008
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34405) RightOuterJoinTaskTest#testCancelOuterJoinTaskWhileSort2 fails due to an interruption of the RightOuterJoinDriver#prepare method

2024-04-06 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834591#comment-17834591
 ] 

Jiabao Sun commented on FLINK-34405:


taskRunner thread: testDriver() -> AbstractOuterJoinDriver#prepare():101 -> 
WAITING on ExternalSorter#getIterator().

The InterruptedException is always thrown at BinaryOperatorTestBase:209.
It is dropped after the cancel() method is called; see BinaryOperatorTestBase:260.
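
The race can be sketched with a plain thread; this is an illustration only, not 
the Flink test code — Thread.sleep stands in for the wait inside 
ExternalSorter#getIterator(), and interrupt() stands in for cancel() racing the 
prepare() phase:

```java
public class CancelInterruptDemo {

    static String run() throws InterruptedException {
        StringBuilder result = new StringBuilder();
        Thread taskRunner = new Thread(() -> {
            try {
                Thread.sleep(60_000);   // stands in for WAITING in getIterator()
                result.append("prepared");
            } catch (InterruptedException e) {
                // surfaces as "The data preparation caused an error: Interrupted"
                result.append("interrupted");
            }
        });
        taskRunner.start();
        taskRunner.interrupt();         // stands in for cancel() racing prepare()
        taskRunner.join();
        return result.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

Because the interrupt flag is set even if the worker has not yet reached 
sleep(), the worker always takes the InterruptedException path here.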

> RightOuterJoinTaskTest#testCancelOuterJoinTaskWhileSort2 fails due to an 
> interruption of the RightOuterJoinDriver#prepare method
> 
>
> Key: FLINK-34405
> URL: https://issues.apache.org/jira/browse/FLINK-34405
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.17.2, 1.19.0, 1.18.1, 1.20.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: starter, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57357=logs=d89de3df-4600-5585-dadc-9bbc9a5e661c=be5a4b15-4b23-56b1-7582-795f58a645a2=9027
> {code}
> Feb 07 03:20:16 03:20:16.223 [ERROR] Failures: 
> Feb 07 03:20:16 03:20:16.223 [ERROR] 
> org.apache.flink.runtime.operators.RightOuterJoinTaskTest.testCancelOuterJoinTaskWhileSort2
> Feb 07 03:20:16 03:20:16.223 [ERROR]   Run 1: 
> RightOuterJoinTaskTest>AbstractOuterJoinTaskTest.testCancelOuterJoinTaskWhileSort2:435
>  
> Feb 07 03:20:16 expected: 
> Feb 07 03:20:16   null
> Feb 07 03:20:16  but was: 
> Feb 07 03:20:16   java.lang.Exception: The data preparation caused an error: 
> Interrupted
> Feb 07 03:20:16   at 
> org.apache.flink.runtime.operators.testutils.BinaryOperatorTestBase.testDriverInternal(BinaryOperatorTestBase.java:209)
> Feb 07 03:20:16   at 
> org.apache.flink.runtime.operators.testutils.BinaryOperatorTestBase.testDriver(BinaryOperatorTestBase.java:189)
> Feb 07 03:20:16   at 
> org.apache.flink.runtime.operators.AbstractOuterJoinTaskTest.access$100(AbstractOuterJoinTaskTest.java:48)
> Feb 07 03:20:16   ...(1 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35011) The change in visibility of MockDeserializationSchema cause compilation failure in kafka connector

2024-04-06 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-35011.

Resolution: Fixed

Fixed via master: 3590c2d86f4186771ffcd64712f756d31306eb88

> The change in visibility of MockDeserializationSchema cause compilation 
> failure in kafka connector
> --
>
> Key: FLINK-35011
> URL: https://issues.apache.org/jira/browse/FLINK-35011
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.20.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Flink Kafka connector can't compile with 1.20-SNAPSHOT, see 
> https://github.com/apache/flink-connector-kafka/actions/runs/8553981349/job/23438292087?pr=90#step:15:165
> Error message is:
> {code}
> Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
> (default-testCompile) on project flink-connector-kafka: Compilation failure
> Error:  
> /home/runner/work/flink-connector-kafka/flink-connector-kafka/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBaseTest.java:[60,39]
>  org.apache.flink.streaming.util.MockDeserializationSchema is not public in 
> org.apache.flink.streaming.util; cannot be accessed from outside package
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35011) The change in visibility of MockDeserializationSchema cause compilation failure in kafka connector

2024-04-04 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-35011:
--

 Summary: The change in visibility of MockDeserializationSchema 
cause compilation failure in kafka connector
 Key: FLINK-35011
 URL: https://issues.apache.org/jira/browse/FLINK-35011
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.20.0
Reporter: Jiabao Sun
Assignee: Jiabao Sun
 Fix For: 1.20.0


Flink Kafka connector can't compile with 1.20-SNAPSHOT, see 
https://github.com/apache/flink-connector-kafka/actions/runs/8553981349/job/23438292087?pr=90#step:15:165

Error message is:

{code}
Error:  Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
(default-testCompile) on project flink-connector-kafka: Compilation failure
Error:  
/home/runner/work/flink-connector-kafka/flink-connector-kafka/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBaseTest.java:[60,39]
 org.apache.flink.streaming.util.MockDeserializationSchema is not public in 
org.apache.flink.streaming.util; cannot be accessed from outside package
{code}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25544) [JUnit5 Migration] Module: flink-streaming-java

2024-04-04 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17833989#comment-17833989
 ] 

Jiabao Sun commented on FLINK-25544:


Thanks [~martijnvisser] for reporting this problem.
The visibility of MockDeserializationSchema should not have been modified. 
I will check for other changes and create a new ticket to fix it.

> [JUnit5 Migration] Module: flink-streaming-java
> ---
>
> Key: FLINK-25544
> URL: https://issues.apache.org/jira/browse/FLINK-25544
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Hang Ruan
>Assignee: Jiabao Sun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34948) CDC RowType can not convert to flink row type

2024-03-31 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-34948.

  Assignee: Qishang Zhong
Resolution: Fixed

Fixed via cdc master: d099603ef15d9a1ed7ec33718db7ab2438ef1ab5

> CDC RowType can not convert to flink row type
> -
>
> Key: FLINK-34948
> URL: https://issues.apache.org/jira/browse/FLINK-34948
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Qishang Zhong
>Assignee: Qishang Zhong
>Priority: Critical
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> Fix: CDC {{RowType}} cannot be converted to the Flink row type.
> I met the following exception:
>  
> {code:java}
> java.lang.ArrayStoreException
>     at java.lang.System.arraycopy(Native Method)
>     at java.util.Arrays.copyOf(Arrays.java:3213)
>     at java.util.ArrayList.toArray(ArrayList.java:413)
>     at 
> java.util.Collections$UnmodifiableCollection.toArray(Collections.java:1036) 
> {code}
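
For context, the quoted ArrayStoreException is reproducible with plain 
collections: ArrayList.toArray(T[]) copies through Arrays.copyOf and 
System.arraycopy, which reject any element whose runtime type does not match 
the target array's component type. A minimal sketch with String/Integer as 
stand-ins for the CDC and Flink types (names are illustrative, not Flink APIs):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ToArrayDemo {

    static String demo() {
        // A list whose elements are Strings at runtime (stand-ins for CDC types).
        List<Object> fields = new ArrayList<>();
        fields.add("cdc-row-type");
        List<Object> view = Collections.unmodifiableList(fields);
        try {
            // Copying into an Integer[] (stand-in for a Flink type array):
            // Arrays.copyOf allocates an Integer[] and System.arraycopy
            // rejects the String element, exactly as in the quoted trace.
            Integer[] converted = view.toArray(new Integer[0]);
            return "converted " + converted.length;
        } catch (ArrayStoreException e) {
            return "ArrayStoreException";
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```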



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34948) CDC RowType can not convert to flink row type

2024-03-31 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-34948:
---
Priority: Critical  (was: Minor)

> CDC RowType can not convert to flink row type
> -
>
> Key: FLINK-34948
> URL: https://issues.apache.org/jira/browse/FLINK-34948
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: Qishang Zhong
>Priority: Critical
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> Fix: CDC {{RowType}} cannot be converted to the Flink row type.
> I met the following exception:
>  
> {code:java}
> java.lang.ArrayStoreException
>     at java.lang.System.arraycopy(Native Method)
>     at java.util.Arrays.copyOf(Arrays.java:3213)
>     at java.util.ArrayList.toArray(ArrayList.java:413)
>     at 
> java.util.Collections$UnmodifiableCollection.toArray(Collections.java:1036) 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34958) Add support Flink 1.20-SNAPSHOT and bump flink-connector-parent to 1.1.0 for mongodb connector

2024-03-28 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-34958:
---
Affects Version/s: mongodb-1.1.0
   (was: mongodb-1.0.2)

> Add support Flink 1.20-SNAPSHOT and bump flink-connector-parent to 1.1.0 for 
> mongodb connector
> --
>
> Key: FLINK-34958
> URL: https://issues.apache.org/jira/browse/FLINK-34958
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / MongoDB
>Affects Versions: mongodb-1.1.0
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: mongodb-1.1.0
>
>
> Changes:
>  * Add support Flink 1.20-SNAPSHOT
>  * Bump flink-connector-parent to 1.1.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34958) Add support Flink 1.20-SNAPSHOT and bump flink-connector-parent to 1.1.0 for mongodb connector

2024-03-28 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-34958:
---
Fix Version/s: mongodb-1.2.0
   (was: mongodb-1.1.0)

> Add support Flink 1.20-SNAPSHOT and bump flink-connector-parent to 1.1.0 for 
> mongodb connector
> --
>
> Key: FLINK-34958
> URL: https://issues.apache.org/jira/browse/FLINK-34958
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / MongoDB
>Affects Versions: mongodb-1.1.0
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: mongodb-1.2.0
>
>
> Changes:
>  * Add support Flink 1.20-SNAPSHOT
>  * Bump flink-connector-parent to 1.1.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34958) Add support Flink 1.20-SNAPSHOT and bump flink-connector-parent to 1.1.0 for mongodb connector

2024-03-28 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-34958:
---
Affects Version/s: mongodb-1.0.2

> Add support Flink 1.20-SNAPSHOT and bump flink-connector-parent to 1.1.0 for 
> mongodb connector
> --
>
> Key: FLINK-34958
> URL: https://issues.apache.org/jira/browse/FLINK-34958
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / MongoDB
>Affects Versions: mongodb-1.0.2
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: mongodb-1.1.0
>
>
> Changes:
>  * Add support Flink 1.20-SNAPSHOT
>  * Bump flink-connector-parent to 1.1.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34958) Add support Flink 1.20-SNAPSHOT and bump flink-connector-parent to 1.1.0 for mongodb connector

2024-03-28 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-34958.

Fix Version/s: mongodb-1.1.0
   Resolution: Implemented

Implemented via (mongodb:main) 0dc2640922b3dd2d0ea8565d1bf6606b5d715b0b
a0fe686a6647cb6eba3908bbba336079569959e7

> Add support Flink 1.20-SNAPSHOT and bump flink-connector-parent to 1.1.0 for 
> mongodb connector
> --
>
> Key: FLINK-34958
> URL: https://issues.apache.org/jira/browse/FLINK-34958
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / MongoDB
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: mongodb-1.1.0
>
>
> Changes:
>  * Add support Flink 1.20-SNAPSHOT
>  * Bump flink-connector-parent to 1.1.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34753) Update outdated MongoDB CDC FAQ in doc

2024-03-28 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-34753:
--

Assignee: Xiao Huang

> Update outdated MongoDB CDC FAQ in doc
> --
>
> Key: FLINK-34753
> URL: https://issues.apache.org/jira/browse/FLINK-34753
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Xiao Huang
>Assignee: Xiao Huang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34753) Update outdated MongoDB CDC FAQ in doc

2024-03-28 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-34753.

Resolution: Fixed

Resolved by flink-cdc master: 927a0ec4743ac70c5d4edb811da7ffce09658e8b

> Update outdated MongoDB CDC FAQ in doc
> --
>
> Key: FLINK-34753
> URL: https://issues.apache.org/jira/browse/FLINK-34753
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Xiao Huang
>Assignee: Xiao Huang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34719) StreamRecordTest#testWithTimestamp fails on Azure

2024-03-18 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-34719.

Fix Version/s: 1.12.0
   Resolution: Fixed

> StreamRecordTest#testWithTimestamp fails on Azure
> -
>
> Key: FLINK-34719
> URL: https://issues.apache.org/jira/browse/FLINK-34719
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> The ClassCastException *message* expected in 
> StreamRecordTest#testWithTimestamp as well as 
> StreamRecordTest#testWithNoTimestamp fails on JDK 11, 17, and 21
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=f0ac5c25-1168-55a5-07ff-0e88223afed9=50bf7a25-bdc4-5e56-5478-c7b4511dde53=10341]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=675bf62c-8558-587e-2555-dcad13acefb5=5878eed3-cc1e-5b12-1ed0-9e7139ce0992=9828]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=d06b80b4-9e88-5d40-12a2-18072cf60528=609ecd5a-3f6e-5d0c-2239-2096b155a4d0=9833]
> {code:java}
> Expecting throwable message:
> Mar 16 01:35:07   "class 
> org.apache.flink.streaming.runtime.streamrecord.StreamRecord cannot be cast 
> to class org.apache.flink.streaming.api.watermark.Watermark 
> (org.apache.flink.streaming.runtime.streamrecord.StreamRecord and 
> org.apache.flink.streaming.api.watermark.Watermark are in unnamed module of 
> loader 'app')"
> Mar 16 01:35:07 to contain:
> Mar 16 01:35:07   "cannot be cast to 
> org.apache.flink.streaming.api.watermark.Watermark"
> Mar 16 01:35:07 but did not.
> Mar 16 01:35:07 
> Mar 16 01:35:07 Throwable that failed the check:
> Mar 16 01:35:07 
> Mar 16 01:35:07 java.lang.ClassCastException: class 
> org.apache.flink.streaming.runtime.streamrecord.StreamRecord cannot be cast 
> to class org.apache.flink.streaming.api.watermark.Watermark 
> (org.apache.flink.streaming.runtime.streamrecord.StreamRecord and 
> org.apache.flink.streaming.api.watermark.Watermark are in unnamed module of 
> loader 'app')
> Mar 16 01:35:07   at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElement.asWatermark(StreamElement.java:92)
> Mar 16 01:35:07   at 
> org.assertj.core.api.ThrowableAssert.catchThrowable(ThrowableAssert.java:63)
> Mar 16 01:35:07   at 
> org.assertj.core.api.AssertionsForClassTypes.catchThrowable(AssertionsForClassTypes.java:892)
>  {code}
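
The JDK-version dependence of that failure is easy to see in isolation: since 
JDK 9, ClassCastException messages prepend "class " and append module/loader 
info, so substring assertions written against the JDK 8 message format break. 
A minimal sketch (Object/String stand in for StreamRecord/Watermark; this is 
not the actual StreamRecordTest fix):

```java
public class CastMessageDemo {

    static String messageOf() {
        Object element = new Object();            // stand-in for a StreamRecord
        try {
            String watermark = (String) element;  // stand-in for asWatermark()
            return "no exception: " + watermark;
        } catch (ClassCastException e) {
            // JDK 8:  "java.lang.Object cannot be cast to java.lang.String"
            // JDK 9+: "class java.lang.Object cannot be cast to class
            //          java.lang.String (... in module java.base ...)"
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        // Asserting only on the target type name is stable across JDKs.
        System.out.println(messageOf().contains("java.lang.String"));
    }
}
```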



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34719) StreamRecordTest#testWithTimestamp fails on Azure

2024-03-18 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828129#comment-17828129
 ] 

Jiabao Sun commented on FLINK-34719:


Fixed via master: 8ec5e7e830b5bda30ead3638a1faa3567d80bb7b

> StreamRecordTest#testWithTimestamp fails on Azure
> -
>
> Key: FLINK-34719
> URL: https://issues.apache.org/jira/browse/FLINK-34719
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> The ClassCastException *message* expected in 
> StreamRecordTest#testWithTimestamp as well as 
> StreamRecordTest#testWithNoTimestamp fails on JDK 11, 17, and 21
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=f0ac5c25-1168-55a5-07ff-0e88223afed9=50bf7a25-bdc4-5e56-5478-c7b4511dde53=10341]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=675bf62c-8558-587e-2555-dcad13acefb5=5878eed3-cc1e-5b12-1ed0-9e7139ce0992=9828]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=d06b80b4-9e88-5d40-12a2-18072cf60528=609ecd5a-3f6e-5d0c-2239-2096b155a4d0=9833]
> {code:java}
> Expecting throwable message:
> Mar 16 01:35:07   "class 
> org.apache.flink.streaming.runtime.streamrecord.StreamRecord cannot be cast 
> to class org.apache.flink.streaming.api.watermark.Watermark 
> (org.apache.flink.streaming.runtime.streamrecord.StreamRecord and 
> org.apache.flink.streaming.api.watermark.Watermark are in unnamed module of 
> loader 'app')"
> Mar 16 01:35:07 to contain:
> Mar 16 01:35:07   "cannot be cast to 
> org.apache.flink.streaming.api.watermark.Watermark"
> Mar 16 01:35:07 but did not.
> Mar 16 01:35:07 
> Mar 16 01:35:07 Throwable that failed the check:
> Mar 16 01:35:07 
> Mar 16 01:35:07 java.lang.ClassCastException: class 
> org.apache.flink.streaming.runtime.streamrecord.StreamRecord cannot be cast 
> to class org.apache.flink.streaming.api.watermark.Watermark 
> (org.apache.flink.streaming.runtime.streamrecord.StreamRecord and 
> org.apache.flink.streaming.api.watermark.Watermark are in unnamed module of 
> loader 'app')
> Mar 16 01:35:07   at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElement.asWatermark(StreamElement.java:92)
> Mar 16 01:35:07   at 
> org.assertj.core.api.ThrowableAssert.catchThrowable(ThrowableAssert.java:63)
> Mar 16 01:35:07   at 
> org.assertj.core.api.AssertionsForClassTypes.catchThrowable(AssertionsForClassTypes.java:892)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34719) StreamRecordTest#testWithTimestamp fails on Azure

2024-03-18 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-34719:
--

Assignee: Jiabao Sun

> StreamRecordTest#testWithTimestamp fails on Azure
> -
>
> Key: FLINK-34719
> URL: https://issues.apache.org/jira/browse/FLINK-34719
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: test-stability
>
> The ClassCastException *message* expected in 
> StreamRecordTest#testWithTimestamp as well as 
> StreamRecordTest#testWithNoTimestamp fails on JDK 11, 17, and 21
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=f0ac5c25-1168-55a5-07ff-0e88223afed9=50bf7a25-bdc4-5e56-5478-c7b4511dde53=10341]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=675bf62c-8558-587e-2555-dcad13acefb5=5878eed3-cc1e-5b12-1ed0-9e7139ce0992=9828]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=d06b80b4-9e88-5d40-12a2-18072cf60528=609ecd5a-3f6e-5d0c-2239-2096b155a4d0=9833]
> {code:java}
> Expecting throwable message:
> Mar 16 01:35:07   "class 
> org.apache.flink.streaming.runtime.streamrecord.StreamRecord cannot be cast 
> to class org.apache.flink.streaming.api.watermark.Watermark 
> (org.apache.flink.streaming.runtime.streamrecord.StreamRecord and 
> org.apache.flink.streaming.api.watermark.Watermark are in unnamed module of 
> loader 'app')"
> Mar 16 01:35:07 to contain:
> Mar 16 01:35:07   "cannot be cast to 
> org.apache.flink.streaming.api.watermark.Watermark"
> Mar 16 01:35:07 but did not.
> Mar 16 01:35:07 
> Mar 16 01:35:07 Throwable that failed the check:
> Mar 16 01:35:07 
> Mar 16 01:35:07 java.lang.ClassCastException: class 
> org.apache.flink.streaming.runtime.streamrecord.StreamRecord cannot be cast 
> to class org.apache.flink.streaming.api.watermark.Watermark 
> (org.apache.flink.streaming.runtime.streamrecord.StreamRecord and 
> org.apache.flink.streaming.api.watermark.Watermark are in unnamed module of 
> loader 'app')
> Mar 16 01:35:07   at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElement.asWatermark(StreamElement.java:92)
> Mar 16 01:35:07   at 
> org.assertj.core.api.ThrowableAssert.catchThrowable(ThrowableAssert.java:63)
> Mar 16 01:35:07   at 
> org.assertj.core.api.AssertionsForClassTypes.catchThrowable(AssertionsForClassTypes.java:892)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34719) StreamRecordTest#testWithTimestamp fails on Azure

2024-03-18 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828012#comment-17828012
 ] 

Jiabao Sun commented on FLINK-34719:


Thanks [~rskraba] for reporting this.
I'm looking into this problem.

> StreamRecordTest#testWithTimestamp fails on Azure
> -
>
> Key: FLINK-34719
> URL: https://issues.apache.org/jira/browse/FLINK-34719
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Priority: Major
>  Labels: test-stability
>
> The ClassCastException *message* expected in 
> StreamRecordTest#testWithTimestamp as well as 
> StreamRecordTest#testWithNoTimestamp fails on JDK 11, 17, and 21
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=f0ac5c25-1168-55a5-07ff-0e88223afed9=50bf7a25-bdc4-5e56-5478-c7b4511dde53=10341]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=675bf62c-8558-587e-2555-dcad13acefb5=5878eed3-cc1e-5b12-1ed0-9e7139ce0992=9828]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58352=logs=d06b80b4-9e88-5d40-12a2-18072cf60528=609ecd5a-3f6e-5d0c-2239-2096b155a4d0=9833]
> {code:java}
> Expecting throwable message:
> Mar 16 01:35:07   "class 
> org.apache.flink.streaming.runtime.streamrecord.StreamRecord cannot be cast 
> to class org.apache.flink.streaming.api.watermark.Watermark 
> (org.apache.flink.streaming.runtime.streamrecord.StreamRecord and 
> org.apache.flink.streaming.api.watermark.Watermark are in unnamed module of 
> loader 'app')"
> Mar 16 01:35:07 to contain:
> Mar 16 01:35:07   "cannot be cast to 
> org.apache.flink.streaming.api.watermark.Watermark"
> Mar 16 01:35:07 but did not.
> Mar 16 01:35:07 
> Mar 16 01:35:07 Throwable that failed the check:
> Mar 16 01:35:07 
> Mar 16 01:35:07 java.lang.ClassCastException: class 
> org.apache.flink.streaming.runtime.streamrecord.StreamRecord cannot be cast 
> to class org.apache.flink.streaming.api.watermark.Watermark 
> (org.apache.flink.streaming.runtime.streamrecord.StreamRecord and 
> org.apache.flink.streaming.api.watermark.Watermark are in unnamed module of 
> loader 'app')
> Mar 16 01:35:07   at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElement.asWatermark(StreamElement.java:92)
> Mar 16 01:35:07   at 
> org.assertj.core.api.ThrowableAssert.catchThrowable(ThrowableAssert.java:63)
> Mar 16 01:35:07   at 
> org.assertj.core.api.AssertionsForClassTypes.catchThrowable(AssertionsForClassTypes.java:892)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34585) [JUnit5 Migration] Module: Flink CDC

2024-03-15 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17827462#comment-17827462
 ] 

Jiabao Sun commented on FLINK-34585:


Thanks [~kunni] for volunteering.
Assigned to you.

> [JUnit5 Migration] Module: Flink CDC
> 
>
> Key: FLINK-34585
> URL: https://issues.apache.org/jira/browse/FLINK-34585
> Project: Flink
>  Issue Type: Sub-task
>  Components: Flink CDC
>Reporter: Hang Ruan
>Assignee: LvYanquan
>Priority: Major
>
> Most tests in Flink CDC are still using JUnit 4. We need to use JUnit 5 
> instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34585) [JUnit5 Migration] Module: Flink CDC

2024-03-15 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-34585:
--

Assignee: LvYanquan

> [JUnit5 Migration] Module: Flink CDC
> 
>
> Key: FLINK-34585
> URL: https://issues.apache.org/jira/browse/FLINK-34585
> Project: Flink
>  Issue Type: Sub-task
>  Components: Flink CDC
>Reporter: Hang Ruan
>Assignee: LvYanquan
>Priority: Major
>
> Most tests in Flink CDC are still using JUnit 4. We need to use JUnit 5 
> instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25544) [JUnit5 Migration] Module: flink-streaming-java

2024-03-15 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17827460#comment-17827460
 ] 

Jiabao Sun commented on FLINK-25544:


master: 62f44e0118539c1ed0dedf47099326f97c9d0427

> [JUnit5 Migration] Module: flink-streaming-java
> ---
>
> Key: FLINK-25544
> URL: https://issues.apache.org/jira/browse/FLINK-25544
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Hang Ruan
>Assignee: Jiabao Sun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25544) [JUnit5 Migration] Module: flink-streaming-java

2024-03-15 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-25544:
---
Release Note:   (was: master: 62f44e0118539c1ed0dedf47099326f97c9d0427)

> [JUnit5 Migration] Module: flink-streaming-java
> ---
>
> Key: FLINK-25544
> URL: https://issues.apache.org/jira/browse/FLINK-25544
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Hang Ruan
>Assignee: Jiabao Sun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-25544) [JUnit5 Migration] Module: flink-streaming-java

2024-03-15 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-25544.

Fix Version/s: 1.20.0
 Release Note: master: 62f44e0118539c1ed0dedf47099326f97c9d0427
   Resolution: Fixed

> [JUnit5 Migration] Module: flink-streaming-java
> ---
>
> Key: FLINK-25544
> URL: https://issues.apache.org/jira/browse/FLINK-25544
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Hang Ruan
>Assignee: Jiabao Sun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25544) [JUnit5 Migration] Module: flink-streaming-java

2024-03-07 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824476#comment-17824476
 ] 

Jiabao Sun commented on FLINK-25544:


master:
395928901cb99c019d8885d1c39839c33e5ed587
4bba35fa1f02a6a92e0db2d0e131c9a17bf17125
d1954b580020f62e5fdaff6830bccc3e569ce78d
6433aeb955a24fe0402d12bc170b4a9a58207e7e

> [JUnit5 Migration] Module: flink-streaming-java
> ---
>
> Key: FLINK-25544
> URL: https://issues.apache.org/jira/browse/FLINK-25544
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Hang Ruan
>Assignee: Jiabao Sun
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25544) [JUnit5 Migration] Module: flink-streaming-java

2024-03-07 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824373#comment-17824373
 ] 

Jiabao Sun commented on FLINK-25544:


master: 6f7b24817a81995e90cfc2cd77efadb41be8cddc

> [JUnit5 Migration] Module: flink-streaming-java
> ---
>
> Key: FLINK-25544
> URL: https://issues.apache.org/jira/browse/FLINK-25544
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Hang Ruan
>Assignee: Jiabao Sun
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34183) Add NOTICE files for Flink CDC project

2024-03-06 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-34183.

Fix Version/s: cdc-3.1.0
   Resolution: Implemented

Implement via flink-cdc master: 86272bf1029022adbf6d34132f4b34df14f2ad89

> Add NOTICE files for Flink CDC project
> --
>
> Key: FLINK-34183
> URL: https://issues.apache.org/jira/browse/FLINK-34183
> Project: Flink
>  Issue Type: Sub-task
>  Components: Flink CDC
>Reporter: Leonard Xu
>Assignee: Hang Ruan
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34577) Add IssueNavigationLink for IDEA git log

2024-03-05 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-34577.

Fix Version/s: cdc-3.1.0
   Resolution: Implemented

Implemented via flink-cdc (master): 96888b2ce0a7981ebe5917b6c27deb4015d845d2

> Add IssueNavigationLink for IDEA git log
> 
>
> Key: FLINK-34577
> URL: https://issues.apache.org/jira/browse/FLINK-34577
> Project: Flink
>  Issue Type: Sub-task
>  Components: Flink CDC
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> Add IssueNavigationLink for IDEA git log



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34577) Add IssueNavigationLink for IDEA git log

2024-03-05 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-34577:
--

Assignee: Zhongqiang Gong

> Add IssueNavigationLink for IDEA git log
> 
>
> Key: FLINK-34577
> URL: https://issues.apache.org/jira/browse/FLINK-34577
> Project: Flink
>  Issue Type: Sub-task
>  Components: Flink CDC
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Major
>  Labels: pull-request-available
>
> Add IssueNavigationLink for IDEA git log



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25544) [JUnit5 Migration] Module: flink-streaming-java

2024-03-03 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823018#comment-17823018
 ] 

Jiabao Sun commented on FLINK-25544:


Hi [~Thesharing]. 
I assigned this ticket to myself as it hasn't been updated for a long time. You 
can also help review the PR if you have time.

> [JUnit5 Migration] Module: flink-streaming-java
> ---
>
> Key: FLINK-25544
> URL: https://issues.apache.org/jira/browse/FLINK-25544
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Hang Ruan
>Assignee: Zhilong Hong
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-25544) [JUnit5 Migration] Module: flink-streaming-java

2024-03-03 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-25544:
--

Assignee: Jiabao Sun  (was: Zhilong Hong)

> [JUnit5 Migration] Module: flink-streaming-java
> ---
>
> Key: FLINK-25544
> URL: https://issues.apache.org/jira/browse/FLINK-25544
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Hang Ruan
>Assignee: Jiabao Sun
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34492) fix scala style comment link when migrate scala to java

2024-03-01 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-34492.

Resolution: Fixed

master: 46cbf22147d783fb68f77fad95161dc5ef036c96

> fix scala style comment link when migrate scala to java
> ---
>
> Key: FLINK-34492
> URL: https://issues.apache.org/jira/browse/FLINK-34492
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
>  
> scala [[org.apache.calcite.rel.rules.CalcMergeRule]]
> java  {@link org.apache.calcite.rel.rules.CalcMergeRule}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34492) fix scala style comment link when migrate scala to java

2024-03-01 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-34492:
--

Assignee: Jacky Lau

> fix scala style comment link when migrate scala to java
> ---
>
> Key: FLINK-34492
> URL: https://issues.apache.org/jira/browse/FLINK-34492
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
>  
> scala [[org.apache.calcite.rel.rules.CalcMergeRule]]
> java  {@link org.apache.calcite.rel.rules.CalcMergeRule}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (FLINK-25537) [JUnit5 Migration] Module: flink-core

2024-02-26 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reopened FLINK-25537:


I noticed that there are remaining tests in other packages.
Hi [~Aiden Gong], will you continue to finish it?

> [JUnit5 Migration] Module: flink-core
> -
>
> Key: FLINK-25537
> URL: https://issues.apache.org/jira/browse/FLINK-25537
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core
>Reporter: Qingsheng Ren
>Assignee: Aiden Gong
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-25537) [JUnit5 Migration] Module: flink-core

2024-02-26 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-25537.

Fix Version/s: 1.20.0
   Resolution: Fixed

> [JUnit5 Migration] Module: flink-core
> -
>
> Key: FLINK-25537
> URL: https://issues.apache.org/jira/browse/FLINK-25537
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core
>Reporter: Qingsheng Ren
>Assignee: Aiden Gong
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25537) [JUnit5 Migration] Module: flink-core

2024-02-26 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820667#comment-17820667
 ] 

Jiabao Sun commented on FLINK-25537:


master: 922cc2ad52203e4c474f3837fcc9a219dd293fa5

> [JUnit5 Migration] Module: flink-core
> -
>
> Key: FLINK-25537
> URL: https://issues.apache.org/jira/browse/FLINK-25537
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core
>Reporter: Qingsheng Ren
>Assignee: Aiden Gong
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34461) MongoDB weekly builds fail with time out on Flink 1.18.1 for JDK17

2024-02-19 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun resolved FLINK-34461.

Fix Version/s: mongodb-1.1.0
   Resolution: Fixed

> MongoDB weekly builds fail with time out on Flink 1.18.1 for JDK17
> --
>
> Key: FLINK-34461
> URL: https://issues.apache.org/jira/browse/FLINK-34461
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / MongoDB
>Affects Versions: mongodb-1.1.0
>Reporter: Martijn Visser
>Assignee: Jiabao Sun
>Priority: Critical
>  Labels: test-stability
> Fix For: mongodb-1.1.0
>
>
> The weekly tests for MongoDB consistently time out for the v1.0 branch while 
> testing Flink 1.18.1 for JDK17:
> https://github.com/apache/flink-connector-mongodb/actions/runs/7770329490/job/21190387348
> https://github.com/apache/flink-connector-mongodb/actions/runs/7858349600/job/21443232301
> https://github.com/apache/flink-connector-mongodb/actions/runs/7945225005/job/21691624903



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34461) MongoDB weekly builds fail with time out on Flink 1.18.1 for JDK17

2024-02-19 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818527#comment-17818527
 ] 

Jiabao Sun commented on FLINK-34461:


The reason for this issue is that the v1.0 branch is missing the backport of 
FLINK-33899. 
It has been fixed in PR-28 via 
v1.0 (5a8b0979d79e1da009115cde7375bf28c45c22ad, 
a56c003b8c5aca646e47d4950189b81c9e7e75c3).

Since the main branch has updated its nightly builds against the latest released 
v1.1 branch, which already includes these two commits, the nightly CI will not 
fail.
main (aaf3867b2a72a61a0511f250c36580842623b6bc)

> MongoDB weekly builds fail with time out on Flink 1.18.1 for JDK17
> --
>
> Key: FLINK-34461
> URL: https://issues.apache.org/jira/browse/FLINK-34461
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / MongoDB
>Affects Versions: mongodb-1.1.0
>Reporter: Martijn Visser
>Assignee: Jiabao Sun
>Priority: Critical
>  Labels: test-stability
>
> The weekly tests for MongoDB consistently time out for the v1.0 branch while 
> testing Flink 1.18.1 for JDK17:
> https://github.com/apache/flink-connector-mongodb/actions/runs/7770329490/job/21190387348
> https://github.com/apache/flink-connector-mongodb/actions/runs/7858349600/job/21443232301
> https://github.com/apache/flink-connector-mongodb/actions/runs/7945225005/job/21691624903



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34461) MongoDB weekly builds fail with time out on Flink 1.18.1 for JDK17

2024-02-19 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun reassigned FLINK-34461:
--

Assignee: Jiabao Sun

> MongoDB weekly builds fail with time out on Flink 1.18.1 for JDK17
> --
>
> Key: FLINK-34461
> URL: https://issues.apache.org/jira/browse/FLINK-34461
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / MongoDB
>Affects Versions: mongodb-1.1.0
>Reporter: Martijn Visser
>Assignee: Jiabao Sun
>Priority: Critical
>  Labels: test-stability
>
> The weekly tests for MongoDB consistently time out for the v1.0 branch while 
> testing Flink 1.18.1 for JDK17:
> https://github.com/apache/flink-connector-mongodb/actions/runs/7770329490/job/21190387348
> https://github.com/apache/flink-connector-mongodb/actions/runs/7858349600/job/21443232301
> https://github.com/apache/flink-connector-mongodb/actions/runs/7945225005/job/21691624903



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34214) FLIP-377: Support fine-grained configuration to control filter push down for Table/SQL Sources

2024-02-11 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816377#comment-17816377
 ] 

Jiabao Sun commented on FLINK-34214:


Hi [~stayrascal]. 
When querying the database without pushing filters down, we can iterate through 
the data in batches using primary key indexing or natural order and filter the 
data with external computing resources. This greatly reduces the computational 
overhead on the database for filters that do not hit any indexes.
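The batching strategy above can be sketched as follows. This is a hypothetical illustration, not connector code: the {{Row}}, {{fetchBatch}}, and {{scanAndFilter}} names are made up, and an in-memory {{NavigableMap}} stands in for a primary-key-ordered table that would really be read via range queries.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;
import java.util.function.Predicate;

public class BatchScanSketch {
    // Simulated table row keyed by an increasing primary key.
    record Row(long id, int value) {}

    // Fetch the next batch of rows with id > afterId, up to batchSize.
    // Stand-in for a primary-key range query (e.g. WHERE id > ? ORDER BY id LIMIT ?).
    static List<Row> fetchBatch(NavigableMap<Long, Row> table, long afterId, int batchSize) {
        List<Row> batch = new ArrayList<>();
        for (Row r : table.tailMap(afterId, false).values()) {
            batch.add(r);
            if (batch.size() == batchSize) {
                break;
            }
        }
        return batch;
    }

    // Scan the whole table in key order, applying the filter outside the database.
    static List<Row> scanAndFilter(NavigableMap<Long, Row> table, int batchSize, Predicate<Row> filter) {
        List<Row> result = new ArrayList<>();
        long cursor = Long.MIN_VALUE;
        List<Row> batch;
        while (!(batch = fetchBatch(table, cursor, batchSize)).isEmpty()) {
            for (Row r : batch) {
                if (filter.test(r)) { // filtering happens in the compute layer, not the DB
                    result.add(r);
                }
            }
            cursor = batch.get(batch.size() - 1).id(); // advance past the last fetched key
        }
        return result;
    }

    public static void main(String[] args) {
        NavigableMap<Long, Row> table = new TreeMap<>();
        for (long i = 1; i <= 10; i++) {
            table.put(i, new Row(i, (int) (i % 3)));
        }
        // Rows whose value == 0 are ids 3, 6, 9.
        List<Row> matched = scanAndFilter(table, 4, r -> r.value() == 0);
        System.out.println(matched.size()); // prints 3
    }
}
```

The database only ever serves cheap index-ordered range scans; the predicate itself never reaches it, which is the trade-off the filter.handling.policy option is meant to expose.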

> FLIP-377: Support fine-grained configuration to control filter push down for 
> Table/SQL Sources
> --
>
> Key: FLINK-34214
> URL: https://issues.apache.org/jira/browse/FLINK-34214
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC, Connectors / MongoDB
>Affects Versions: mongodb-1.0.2, jdbc-3.1.2
>Reporter: jiabao.sun
>Assignee: jiabao.sun
>Priority: Major
> Fix For: jdbc-3.1.3, mongodb-1.2.0
>
>
> This improvement implements [FLIP-377 Support fine-grained configuration to 
> control filter push down for Table/SQL 
> Sources|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=276105768]
> This FLIP has 2 goals:
>  * Introduces a new configuration filter.handling.policy to the JDBC and 
> MongoDB connector.
>  * Suggests a convention option name if other connectors are going to add an 
> option for the same purpose.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34337) Sink.InitContextWrapper should implement metadataConsumer method

2024-02-01 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-34337:
--

 Summary: Sink.InitContextWrapper should implement metadataConsumer 
method
 Key: FLINK-34337
 URL: https://issues.apache.org/jira/browse/FLINK-34337
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Affects Versions: 1.19.0
Reporter: Jiabao Sun
 Fix For: 1.19.0


Sink.InitContextWrapper should implement metadataConsumer method.

If the metadataConsumer method is not implemented, the behavior of the wrapped 
WriterInitContext's metadataConsumer will be lost.
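The general pitfall behind this bug can be sketched with a simplified stand-in. The {{InitContext}} interface and the constant names below are hypothetical, not Flink's actual API: a wrapper that forgets to override a default method silently falls back to the interface default instead of delegating to the wrapped instance.

```java
import java.util.Optional;
import java.util.function.Consumer;

public class WrapperPitfall {
    // Simplified stand-in for an init-context interface with a default method.
    interface InitContext {
        default Optional<Consumer<String>> metadataConsumer() {
            return Optional.empty(); // default: no metadata consumer
        }
    }

    // The wrapped context actually provides a consumer.
    static final InitContext WRAPPED = new InitContext() {
        @Override
        public Optional<Consumer<String>> metadataConsumer() {
            return Optional.of(s -> System.out.println("meta: " + s));
        }
    };

    // Buggy wrapper: does not override the default method, so the empty default wins.
    static final InitContext BUGGY_WRAPPER = new InitContext() {};

    // Fixed wrapper: explicitly delegates the default method to the wrapped context.
    static final InitContext FIXED_WRAPPER = new InitContext() {
        @Override
        public Optional<Consumer<String>> metadataConsumer() {
            return WRAPPED.metadataConsumer();
        }
    };

    public static void main(String[] args) {
        System.out.println(BUGGY_WRAPPER.metadataConsumer().isPresent()); // false: behavior lost
        System.out.println(FIXED_WRAPPER.metadataConsumer().isPresent()); // true: behavior preserved
    }
}
```

The fix described in this issue is the second pattern: every method of the wrapped context, including default methods, must be forwarded explicitly.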



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34259) flink-connector-jdbc fails to compile with NPE on hasGenericTypesDisabled

2024-01-30 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812312#comment-17812312
 ] 

Jiabao Sun commented on FLINK-34259:


The PR was reopened, PTAL.

> flink-connector-jdbc fails to compile with NPE on hasGenericTypesDisabled
> -
>
> Key: FLINK-34259
> URL: https://issues.apache.org/jira/browse/FLINK-34259
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Reporter: Martijn Visser
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> https://github.com/apache/flink-connector-jdbc/actions/runs/7682035724/job/20935884874#step:14:150
> {code:java}
> Error:  Tests run: 10, Failures: 5, Errors: 4, Skipped: 0, Time elapsed: 
> 7.909 s <<< FAILURE! - in 
> org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest
> Error:  
> org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest.testInvalidConnectionInJdbcOutputFormat
>   Time elapsed: 3.254 s  <<< ERROR!
> java.lang.NullPointerException: Cannot invoke 
> "org.apache.flink.api.common.serialization.SerializerConfig.hasGenericTypesDisabled()"
>  because "config" is null
>   at 
> org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:85)
>   at 
> org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:99)
>   at 
> org.apache.flink.connector.jdbc.JdbcTestBase.getSerializer(JdbcTestBase.java:70)
>   at 
> org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest.testInvalidConnectionInJdbcOutputFormat(JdbcRowOutputFormatTest.java:336)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
> {code}
> Seems to be caused by FLINK-34122 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34259) flink-connector-jdbc fails to compile with NPE on hasGenericTypesDisabled

2024-01-30 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812299#comment-17812299
 ] 

Jiabao Sun commented on FLINK-34259:


[~martijnvisser] But I still have a question. The previous changes in 
[FLINK-34090] did not change the compatibility of the public interface. 
Normally, when an ExecutionConfig object is created through the ExecutionConfig 
constructor, a SerializerConfig object is also created, so 
hasGenericTypesDisabled should not throw an NPE. The NPE in the JDBC connector 
test is thrown mainly because the ExecutionConfig is mocked with Mockito, so 
serializerConfig.hasGenericTypesDisabled() throws an NPE. I'm not sure whether 
this qualifies as breaking the public interface.
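The failure mode can be sketched without Mockito. The classes below are simplified stand-ins, not Flink's real ExecutionConfig or SerializerConfig: an unstubbed mock behaves like an override that returns null (Mockito's default answer for object-returning methods), so any code that dereferences the returned config throws an NPE.

```java
public class MockNpeSketch {
    // Simplified stand-ins for the real Flink classes.
    static class SerializerConfig {
        boolean hasGenericTypesDisabled() {
            return false;
        }
    }

    static class ExecutionConfig {
        // The real constructor path always creates a SerializerConfig, so it is never null.
        private final SerializerConfig serializerConfig = new SerializerConfig();

        SerializerConfig getSerializerConfig() {
            return serializerConfig;
        }
    }

    // Stand-in for a Mockito mock: unstubbed object-returning methods yield null.
    static class MockedExecutionConfig extends ExecutionConfig {
        @Override
        SerializerConfig getSerializerConfig() {
            return null;
        }
    }

    // Mirrors the shape of GenericTypeInfo.createSerializer: dereferences the config.
    static boolean checkGenericTypesDisabled(ExecutionConfig config) {
        return config.getSerializerConfig().hasGenericTypesDisabled(); // NPE if config is null
    }

    public static void main(String[] args) {
        System.out.println(checkGenericTypesDisabled(new ExecutionConfig())); // real path: false
        try {
            checkGenericTypesDisabled(new MockedExecutionConfig());
        } catch (NullPointerException e) {
            System.out.println("NPE: unstubbed mock returned null"); // test path: fails
        }
    }
}
```

This is why the fix belongs in the connector's tests (use a real ExecutionConfig or stub the accessor) rather than in the public interface itself.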

> flink-connector-jdbc fails to compile with NPE on hasGenericTypesDisabled
> -
>
> Key: FLINK-34259
> URL: https://issues.apache.org/jira/browse/FLINK-34259
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Reporter: Martijn Visser
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> https://github.com/apache/flink-connector-jdbc/actions/runs/7682035724/job/20935884874#step:14:150
> {code:java}
> Error:  Tests run: 10, Failures: 5, Errors: 4, Skipped: 0, Time elapsed: 
> 7.909 s <<< FAILURE! - in 
> org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest
> Error:  
> org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest.testInvalidConnectionInJdbcOutputFormat
>   Time elapsed: 3.254 s  <<< ERROR!
> java.lang.NullPointerException: Cannot invoke 
> "org.apache.flink.api.common.serialization.SerializerConfig.hasGenericTypesDisabled()"
>  because "config" is null
>   at 
> org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:85)
>   at 
> org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:99)
>   at 
> org.apache.flink.connector.jdbc.JdbcTestBase.getSerializer(JdbcTestBase.java:70)
>   at 
> org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest.testInvalidConnectionInJdbcOutputFormat(JdbcRowOutputFormatTest.java:336)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
> {code}
> Seems to be caused by FLINK-34122 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34259) flink-connector-jdbc fails to compile with NPE on hasGenericTypesDisabled

2024-01-30 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812293#comment-17812293
 ] 

Jiabao Sun commented on FLINK-34259:


Thanks [~martijnvisser] , I will close the PR for the JDBC connector.

> flink-connector-jdbc fails to compile with NPE on hasGenericTypesDisabled
> -
>
> Key: FLINK-34259
> URL: https://issues.apache.org/jira/browse/FLINK-34259
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Reporter: Martijn Visser
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> https://github.com/apache/flink-connector-jdbc/actions/runs/7682035724/job/20935884874#step:14:150
> {code:java}
> Error:  Tests run: 10, Failures: 5, Errors: 4, Skipped: 0, Time elapsed: 
> 7.909 s <<< FAILURE! - in 
> org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest
> Error:  
> org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest.testInvalidConnectionInJdbcOutputFormat
>   Time elapsed: 3.254 s  <<< ERROR!
> java.lang.NullPointerException: Cannot invoke 
> "org.apache.flink.api.common.serialization.SerializerConfig.hasGenericTypesDisabled()"
>  because "config" is null
>   at 
> org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:85)
>   at 
> org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:99)
>   at 
> org.apache.flink.connector.jdbc.JdbcTestBase.getSerializer(JdbcTestBase.java:70)
>   at 
> org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest.testInvalidConnectionInJdbcOutputFormat(JdbcRowOutputFormatTest.java:336)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
> {code}
> Seems to be caused by FLINK-34122 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34259) flink-connector-jdbc fails to compile with NPE on hasGenericTypesDisabled

2024-01-30 Thread Jiabao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812291#comment-17812291
 ] 

Jiabao Sun edited comment on FLINK-34259 at 1/30/24 12:45 PM:
--

[~martijnvisser]
It seems introduced by [FLINK-34090]
It doesn't seem to break public interfaces, and I think we only need to make 
some adjustments in the testing of the JDBC connector.


was (Author: jiabao.sun):
[~martijnvisser]
It seems introduced by 
[FLINK-34090](https://issues.apache.org/jira/browse/FLINK-34122) 
It doesn't seem to break public interfaces, and I think we only need to make 
some adjustments in the testing of the JDBC connector.

> flink-connector-jdbc fails to compile with NPE on hasGenericTypesDisabled
> -
>
> Key: FLINK-34259
> URL: https://issues.apache.org/jira/browse/FLINK-34259
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Reporter: Martijn Visser
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> https://github.com/apache/flink-connector-jdbc/actions/runs/7682035724/job/20935884874#step:14:150
> {code:java}
> Error:  Tests run: 10, Failures: 5, Errors: 4, Skipped: 0, Time elapsed: 
> 7.909 s <<< FAILURE! - in 
> org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest
> Error:  
> org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest.testInvalidConnectionInJdbcOutputFormat
>   Time elapsed: 3.254 s  <<< ERROR!
> java.lang.NullPointerException: Cannot invoke 
> "org.apache.flink.api.common.serialization.SerializerConfig.hasGenericTypesDisabled()"
>  because "config" is null
>   at 
> org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:85)
>   at 
> org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:99)
>   at 
> org.apache.flink.connector.jdbc.JdbcTestBase.getSerializer(JdbcTestBase.java:70)
>   at 
> org.apache.flink.connector.jdbc.JdbcRowOutputFormatTest.testInvalidConnectionInJdbcOutputFormat(JdbcRowOutputFormatTest.java:336)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
>   at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
> {code}
> Seems to be caused by FLINK-34122 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

