[jira] [Commented] (FLINK-25858) Remove ArchUnit rules for JUnit 4 in ITCaseRules after the JUnit 4->5 migration is closed

2022-10-05 Thread Duncan Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17613308#comment-17613308
 ] 

Duncan Chan commented on FLINK-25858:
-

Which task is blocking this one? Please assign it to me.

> Remove ArchUnit rules for JUnit 4 in ITCaseRules after the JUnit 4->5 
> migration is closed
> -
>
> Key: FLINK-25858
> URL: https://issues.apache.org/jira/browse/FLINK-25858
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Jing Ge
>Priority: Minor
>
> Some ArchUnit rules have been created for JUnit 4 tests during the JUnit 4->5 
> migration. 
> Remove them after the migration is closed. To make the work easier, comments 
> containing the text "JUnit 4" have been added.
>  
> org.apache.flink.architecture.rules.ITCaseRules



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28049) Introduce FLIP-208 functionality to stop Source based on consumed records

2022-10-05 Thread Sergey Troshkov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17613297#comment-17613297
 ] 

Sergey Troshkov commented on FLINK-28049:
-

Hi [~afedulov]! I am interested in this issue and would like to implement it.

> Introduce FLIP-208 functionality to stop Source based on consumed records
> -
>
> Key: FLINK-28049
> URL: https://issues.apache.org/jira/browse/FLINK-28049
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Reporter: Alexander Fedulov
>Priority: Major
>
> https://cwiki.apache.org/confluence/x/fZbkCw
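FLIP-208 (linked above) proposes stopping a Source once a bounded number of records has been consumed. As a hedged, self-contained sketch of the core idea, here is a count-limited wrapper around any iterator; the class name and shape are hypothetical and are not the Flink Source API:

```java
import java.util.Iterator;

// Illustrative sketch: stop delivering records once 'limit' have been consumed.
final class RecordLimitedIterator<T> implements Iterator<T> {
    private final Iterator<T> delegate;
    private final long limit;
    private long consumed;

    RecordLimitedIterator(Iterator<T> delegate, long limit) {
        this.delegate = delegate;
        this.limit = limit;
    }

    @Override
    public boolean hasNext() {
        // Report exhaustion as soon as the record budget is spent.
        return consumed < limit && delegate.hasNext();
    }

    @Override
    public T next() {
        consumed++;
        return delegate.next();
    }
}
```

In a real Source the same counter would live in the reader and flip it into a bounded/finished state instead of ending an iterator.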





[jira] [Updated] (FLINK-29513) Update Kafka version to 3.2.3

2022-10-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29513:
---
Labels: pull-request-available  (was: )

> Update Kafka version to 3.2.3
> -
>
> Key: FLINK-29513
> URL: https://issues.apache.org/jira/browse/FLINK-29513
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> Kafka 3.2.3 contains certain security fixes (see 
> https://downloads.apache.org/kafka/3.2.3/RELEASE_NOTES.html). We should 
> upgrade the dependency in Flink.





[GitHub] [flink] MartijnVisser commented on pull request #20973: [FLINK-29514][Deployment/YARN] Bump Minikdc to v3.2.4

2022-10-05 Thread GitBox


MartijnVisser commented on PR #20973:
URL: https://github.com/apache/flink/pull/20973#issuecomment-1269245937

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] MartijnVisser commented on pull request #20972: [FLINK-29513][Connector/Kafka] Update Kafka to version 3.2.3

2022-10-05 Thread GitBox


MartijnVisser commented on PR #20972:
URL: https://github.com/apache/flink/pull/20972#issuecomment-1269245496

   @flinkbot run azure





[GitHub] [flink] MartijnVisser commented on pull request #20170: [FLINK-28405][Connector/Kafka] Update Confluent Platform images used for testing to v7.2.2

2022-10-05 Thread GitBox


MartijnVisser commented on PR #20170:
URL: https://github.com/apache/flink/pull/20170#issuecomment-1269244849

   @zentol Seems I've finally fixed this, so it would be great if you could have a 
final look.





[GitHub] [flink] flinkbot commented on pull request #20975: [FLINK-29526][state/changelog] fix java doc mistake in SequenceNumber…

2022-10-05 Thread GitBox


flinkbot commented on PR #20975:
URL: https://github.com/apache/flink/pull/20975#issuecomment-1269242781

   
   ## CI report:
   
   * c66a24b69aeef3692996da9066099f0c6d610a5e UNKNOWN
   
   
   Bot commands
   The @flinkbot bot supports the following commands:

   - `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-29526) Java doc mistake in SequenceNumberRange#contains()

2022-10-05 Thread Feifan Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feifan Wang updated FLINK-29526:

Description: 
!image-2022-10-06-10-50-16-927.png|width=554,height=106!

Hi [~masteryhx], it seems to be a typo; I have submitted a PR for it.

  was:!image-2022-10-06-10-50-16-927.png|width=554,height=106!


> Java doc mistake in SequenceNumberRange#contains()
> --
>
> Key: FLINK-29526
> URL: https://issues.apache.org/jira/browse/FLINK-29526
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Feifan Wang
>Priority: Not a Priority
>  Labels: pull-request-available
> Attachments: image-2022-10-06-10-50-16-927.png
>
>
> !image-2022-10-06-10-50-16-927.png|width=554,height=106!
> Hi [~masteryhx], it seems to be a typo; I have submitted a PR for it.





[jira] [Updated] (FLINK-29526) Java doc mistake in SequenceNumberRange#contains()

2022-10-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29526:
---
Labels: pull-request-available  (was: )

> Java doc mistake in SequenceNumberRange#contains()
> --
>
> Key: FLINK-29526
> URL: https://issues.apache.org/jira/browse/FLINK-29526
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Feifan Wang
>Priority: Not a Priority
>  Labels: pull-request-available
> Attachments: image-2022-10-06-10-50-16-927.png
>
>
> !image-2022-10-06-10-50-16-927.png|width=554,height=106!





[GitHub] [flink] zoltar9264 opened a new pull request, #20975: [FLINK-29526][state/changelog] fix java doc mistake in SequenceNumber…

2022-10-05 Thread GitBox


zoltar9264 opened a new pull request, #20975:
URL: https://github.com/apache/flink/pull/20975

   …Range#contains()
   
   ## What is the purpose of the change
   
   Fix a Javadoc mistake in SequenceNumberRange#contains(), described in 
[FLINK-29526](https://issues.apache.org/jira/browse/FLINK-29526).
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
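The ticket behind this PR concerns the bounds documented on SequenceNumberRange#contains(). As a hedged, self-contained illustration (not Flink's actual class, whose exact Javadoc wording the ticket corrects), a half-open range whose Javadoc states its inclusive/exclusive contract explicitly might look like:

```java
/**
 * Minimal sketch of a half-open sequence-number range, illustrating the kind
 * of inclusive/exclusive contract such Javadoc documents. Hypothetical
 * stand-in, not Flink's SequenceNumberRange.
 */
final class SequenceRangeSketch {
    private final long from; // inclusive
    private final long to;   // exclusive

    SequenceRangeSketch(long from, long to) {
        if (from > to) {
            throw new IllegalArgumentException("from must not exceed to");
        }
        this.from = from;
        this.to = to;
    }

    /** Returns true iff {@code from <= n < to}; note that {@code to} itself is NOT contained. */
    boolean contains(long n) {
        return n >= from && n < to;
    }
}
```

Stating "inclusive" and "exclusive" on both the fields and the method is exactly the kind of detail a Javadoc-only fix like this one tightens up.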
   





[jira] [Assigned] (FLINK-29523) Support STR_TO_MAP、SUBSTR built-in function in Table API

2022-10-05 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-29523:
---

Assignee: zhangjingcun

> Support STR_TO_MAP、SUBSTR built-in function in Table API
> 
>
> Key: FLINK-29523
> URL: https://issues.apache.org/jira/browse/FLINK-29523
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Table SQL / API
>Affects Versions: 1.16.0
>Reporter: zhangjingcun
>Assignee: zhangjingcun
>Priority: Major
>






[jira] [Assigned] (FLINK-29522) Support SPLIT_INDEX built-in function in Table API

2022-10-05 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-29522:
---

Assignee: zhangjingcun

> Support SPLIT_INDEX built-in function in Table API
> --
>
> Key: FLINK-29522
> URL: https://issues.apache.org/jira/browse/FLINK-29522
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Table SQL / API
>Affects Versions: 1.16.0
>Reporter: zhangjingcun
>Assignee: zhangjingcun
>Priority: Major
>






[jira] [Assigned] (FLINK-29525) Support INSTR、LEFT、RIGHT built-in function in Table API

2022-10-05 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-29525:
---

Assignee: zhangjingcun

> Support INSTR、LEFT、RIGHT built-in function in Table API
> ---
>
> Key: FLINK-29525
> URL: https://issues.apache.org/jira/browse/FLINK-29525
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Table SQL / API
>Affects Versions: 1.16.0
>Reporter: zhangjingcun
>Assignee: zhangjingcun
>Priority: Major
>






[jira] [Assigned] (FLINK-29524) Support DECODE、ENCODE built-in function in Table API

2022-10-05 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-29524:
---

Assignee: zhangjingcun

> Support DECODE、ENCODE built-in function in Table API
> 
>
> Key: FLINK-29524
> URL: https://issues.apache.org/jira/browse/FLINK-29524
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Table SQL / API
>Affects Versions: 1.16.0
>Reporter: zhangjingcun
>Assignee: zhangjingcun
>Priority: Major
>






[jira] [Assigned] (FLINK-29518) Support YEAR、QUARTER、MONTH、WEEK、HOUR、MINUTE、SECOND built-in function in Table API

2022-10-05 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-29518:
---

Assignee: zhangjingcun

> Support YEAR、QUARTER、MONTH、WEEK、HOUR、MINUTE、SECOND built-in function in Table 
> API
> -
>
> Key: FLINK-29518
> URL: https://issues.apache.org/jira/browse/FLINK-29518
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Table SQL / API
>Affects Versions: 1.16.0
>Reporter: zhangjingcun
>Assignee: zhangjingcun
>Priority: Major
>






[jira] [Assigned] (FLINK-29519) Support DAYOFYEAR、DAYOFMONTH built-in function in Table API

2022-10-05 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-29519:
---

Assignee: zhangjingcun

> Support DAYOFYEAR、DAYOFMONTH built-in function in Table API
> ---
>
> Key: FLINK-29519
> URL: https://issues.apache.org/jira/browse/FLINK-29519
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Table SQL / API
>Affects Versions: 1.16.0
>Reporter: zhangjingcun
>Assignee: zhangjingcun
>Priority: Major
>






[jira] [Assigned] (FLINK-29521) Support REVERSE built-in function in Table API

2022-10-05 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-29521:
---

Assignee: zhangjingcun

> Support REVERSE built-in function in Table API
> --
>
> Key: FLINK-29521
> URL: https://issues.apache.org/jira/browse/FLINK-29521
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Table SQL / API
>Affects Versions: 1.16.0
>Reporter: zhangjingcun
>Assignee: zhangjingcun
>Priority: Major
>






[jira] [Assigned] (FLINK-29520) Support PARSE_URL built-in function in Table API

2022-10-05 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-29520:
---

Assignee: zhangjingcun

> Support PARSE_URL built-in function in Table API
> 
>
> Key: FLINK-29520
> URL: https://issues.apache.org/jira/browse/FLINK-29520
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Table SQL / API
>Affects Versions: 1.16.0
>Reporter: zhangjingcun
>Assignee: zhangjingcun
>Priority: Major
>






[jira] [Updated] (FLINK-29526) Java doc mistake in SequenceNumberRange#contains()

2022-10-05 Thread Feifan Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feifan Wang updated FLINK-29526:

Priority: Not a Priority  (was: Major)

> Java doc mistake in SequenceNumberRange#contains()
> --
>
> Key: FLINK-29526
> URL: https://issues.apache.org/jira/browse/FLINK-29526
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Feifan Wang
>Priority: Not a Priority
> Attachments: image-2022-10-06-10-50-16-927.png
>
>
> !image-2022-10-06-10-50-16-927.png|width=554,height=106!





[jira] [Created] (FLINK-29526) Java doc mistake in SequenceNumberRange#contains()

2022-10-05 Thread Feifan Wang (Jira)
Feifan Wang created FLINK-29526:
---

 Summary: Java doc mistake in SequenceNumberRange#contains()
 Key: FLINK-29526
 URL: https://issues.apache.org/jira/browse/FLINK-29526
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Reporter: Feifan Wang
 Attachments: image-2022-10-06-10-50-16-927.png

!image-2022-10-06-10-50-16-927.png|width=554,height=106!





[jira] [Commented] (FLINK-29244) Add metric lastMaterializationDuration to ChangelogMaterializationMetricGroup

2022-10-05 Thread Feifan Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17613284#comment-17613284
 ] 

Feifan Wang commented on FLINK-29244:
-

Thanks [~masteryhx], I have submitted a 
[PR|https://github.com/apache/flink/pull/20965] for this ticket, but it seems it 
was not linked to this issue automatically. Could you help me review it?

> Add metric lastMaterializationDuration to  ChangelogMaterializationMetricGroup
> --
>
> Key: FLINK-29244
> URL: https://issues.apache.org/jira/browse/FLINK-29244
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Feifan Wang
>Assignee: Feifan Wang
>Priority: Major
>
> Materialization duration can help us evaluate the efficiency of 
> materialization and its impact on the job.
>  
> What do you think? [~roman] 





[jira] [Created] (FLINK-29525) Support INSTR、LEFT、RIGHT built-in function in Table API

2022-10-05 Thread zhangjingcun (Jira)
zhangjingcun created FLINK-29525:


 Summary: Support INSTR、LEFT、RIGHT built-in function in Table API
 Key: FLINK-29525
 URL: https://issues.apache.org/jira/browse/FLINK-29525
 Project: Flink
  Issue Type: Improvement
  Components: API / Python, Table SQL / API
Affects Versions: 1.16.0
Reporter: zhangjingcun








[jira] [Created] (FLINK-29524) Support DECODE、ENCODE built-in function in Table API

2022-10-05 Thread zhangjingcun (Jira)
zhangjingcun created FLINK-29524:


 Summary: Support DECODE、ENCODE built-in function in Table API
 Key: FLINK-29524
 URL: https://issues.apache.org/jira/browse/FLINK-29524
 Project: Flink
  Issue Type: Improvement
  Components: API / Python, Table SQL / API
Affects Versions: 1.16.0
Reporter: zhangjingcun








[jira] [Created] (FLINK-29523) Support STR_TO_MAP、SUBSTR built-in function in Table API

2022-10-05 Thread zhangjingcun (Jira)
zhangjingcun created FLINK-29523:


 Summary: Support STR_TO_MAP、SUBSTR built-in function in Table API
 Key: FLINK-29523
 URL: https://issues.apache.org/jira/browse/FLINK-29523
 Project: Flink
  Issue Type: Improvement
  Components: API / Python, Table SQL / API
Affects Versions: 1.16.0
Reporter: zhangjingcun
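For context, STR_TO_MAP is the SQL function this ticket would expose through the Table API. A hedged sketch of its usual semantics, assuming the common defaults of ',' as the pair delimiter and '=' as the key-value delimiter; the helper below is illustrative, not Flink code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative STR_TO_MAP semantics: "k1=v1,k2=v2" -> {k1=v1, k2=v2}.
final class StrToMapSketch {
    static Map<String, String> strToMap(String s) {
        Map<String, String> result = new LinkedHashMap<>();
        if (s == null || s.isEmpty()) {
            return result;
        }
        for (String pair : s.split(",")) {
            String[] kv = pair.split("=", 2);
            // A token without '=' maps the whole token to a null value
            // (an assumption here; engines differ on this edge case).
            result.put(kv[0], kv.length == 2 ? kv[1] : null);
        }
        return result;
    }
}
```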








[jira] [Created] (FLINK-29522) Support SPLIT_INDEX built-in function in Table API

2022-10-05 Thread zhangjingcun (Jira)
zhangjingcun created FLINK-29522:


 Summary: Support SPLIT_INDEX built-in function in Table API
 Key: FLINK-29522
 URL: https://issues.apache.org/jira/browse/FLINK-29522
 Project: Flink
  Issue Type: Improvement
  Components: API / Python, Table SQL / API
Affects Versions: 1.16.0
Reporter: zhangjingcun
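As background, SPLIT_INDEX(str, sep, index) splits 'str' by 'sep' and returns the element at the 0-based 'index', or null when the index is out of range. A hedged, self-contained sketch of those semantics (illustrative names, not the Flink API):

```java
import java.util.regex.Pattern;

// Illustrative SPLIT_INDEX semantics sketch.
final class SplitIndexSketch {
    static String splitIndex(String str, String sep, int index) {
        if (str == null || sep == null || index < 0) {
            return null;
        }
        // Quote the separator so it is treated literally, not as a regex.
        String[] parts = str.split(Pattern.quote(sep), -1);
        return index < parts.length ? parts[index] : null;
    }
}
```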








[jira] [Created] (FLINK-29521) Support REVERSE built-in function in Table API

2022-10-05 Thread zhangjingcun (Jira)
zhangjingcun created FLINK-29521:


 Summary: Support REVERSE built-in function in Table API
 Key: FLINK-29521
 URL: https://issues.apache.org/jira/browse/FLINK-29521
 Project: Flink
  Issue Type: Improvement
  Components: API / Python, Table SQL / API
Affects Versions: 1.16.0
Reporter: zhangjingcun








[jira] [Created] (FLINK-29520) Support PARSE_URL built-in function in Table API

2022-10-05 Thread zhangjingcun (Jira)
zhangjingcun created FLINK-29520:


 Summary: Support PARSE_URL built-in function in Table API
 Key: FLINK-29520
 URL: https://issues.apache.org/jira/browse/FLINK-29520
 Project: Flink
  Issue Type: Improvement
  Components: API / Python, Table SQL / API
Affects Versions: 1.16.0
Reporter: zhangjingcun








[jira] [Created] (FLINK-29519) Support DAYOFYEAR、DAYOFMONTH built-in function in Table API

2022-10-05 Thread zhangjingcun (Jira)
zhangjingcun created FLINK-29519:


 Summary: Support DAYOFYEAR、DAYOFMONTH built-in function in Table 
API
 Key: FLINK-29519
 URL: https://issues.apache.org/jira/browse/FLINK-29519
 Project: Flink
  Issue Type: Improvement
  Components: API / Python, Table SQL / API
Affects Versions: 1.16.0
Reporter: zhangjingcun
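The DAYOFYEAR and DAYOFMONTH functions this ticket covers have straightforward semantics that java.time already models. A hedged, self-contained sketch (illustrative helper names, not the Flink API):

```java
import java.time.LocalDate;

// Illustrative DAYOFYEAR / DAYOFMONTH semantics for ISO date strings.
final class DayOfSketch {
    static int dayOfYear(String isoDate) {
        return LocalDate.parse(isoDate).getDayOfYear();
    }

    static int dayOfMonth(String isoDate) {
        return LocalDate.parse(isoDate).getDayOfMonth();
    }
}
```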








[jira] [Created] (FLINK-29518) Support YEAR、QUARTER、MONTH、WEEK、HOUR、MINUTE、SECOND built-in function in Table API

2022-10-05 Thread zhangjingcun (Jira)
zhangjingcun created FLINK-29518:


 Summary: Support YEAR、QUARTER、MONTH、WEEK、HOUR、MINUTE、SECOND 
built-in function in Table API
 Key: FLINK-29518
 URL: https://issues.apache.org/jira/browse/FLINK-29518
 Project: Flink
  Issue Type: Improvement
  Components: API / Python, Table SQL / API
Affects Versions: 1.16.0
Reporter: zhangjingcun








[jira] [Assigned] (FLINK-29517) Support DATE_FORMAT built-in function in Table API

2022-10-05 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-29517:
---

Assignee: zhangjingcun

> Support DATE_FORMAT built-in function in Table API
> --
>
> Key: FLINK-29517
> URL: https://issues.apache.org/jira/browse/FLINK-29517
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Table SQL / API
>Affects Versions: 1.16.0
>Reporter: zhangjingcun
>Assignee: zhangjingcun
>Priority: Major
>






[jira] [Assigned] (FLINK-29516) Support TIMESTAMPADD built-in function in Table API

2022-10-05 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-29516:
---

Assignee: zhangjingcun

> Support TIMESTAMPADD built-in function in Table API
> ---
>
> Key: FLINK-29516
> URL: https://issues.apache.org/jira/browse/FLINK-29516
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Table SQL / API
>Affects Versions: 1.16.0
>Reporter: zhangjingcun
>Assignee: zhangjingcun
>Priority: Major
>






[jira] [Created] (FLINK-29517) Support DATE_FORMAT built-in function in Table API

2022-10-05 Thread zhangjingcun (Jira)
zhangjingcun created FLINK-29517:


 Summary: Support DATE_FORMAT built-in function in Table API
 Key: FLINK-29517
 URL: https://issues.apache.org/jira/browse/FLINK-29517
 Project: Flink
  Issue Type: Improvement
  Components: API / Python, Table SQL / API
Affects Versions: 1.16.0
Reporter: zhangjingcun








[jira] [Created] (FLINK-29516) Support TIMESTAMPADD built-in function in Table API

2022-10-05 Thread zhangjingcun (Jira)
zhangjingcun created FLINK-29516:


 Summary: Support TIMESTAMPADD built-in function in Table API
 Key: FLINK-29516
 URL: https://issues.apache.org/jira/browse/FLINK-29516
 Project: Flink
  Issue Type: Improvement
  Components: API / Python, Table SQL / API
Affects Versions: 1.16.0
Reporter: zhangjingcun








[GitHub] [flink] flinkbot commented on pull request #20974: [FLINK-29352]Support CONVERT_TZ built-in function in Table API

2022-10-05 Thread GitBox


flinkbot commented on PR #20974:
URL: https://github.com/apache/flink/pull/20974#issuecomment-1269188146

   
   ## CI report:
   
   * e9749eb63f7486bb37f5b0ba7286123c21aae9a5 UNKNOWN
   
   
   Bot commands
   The @flinkbot bot supports the following commands:

   - `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] cun8cun8 opened a new pull request, #20974: [FLINK-29352]Support CONVERT_TZ built-in function in Table API

2022-10-05 Thread GitBox


cun8cun8 opened a new pull request, #20974:
URL: https://github.com/apache/flink/pull/20974

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
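The PR above targets CONVERT_TZ(dateStr, fromTz, toTz): interpret a timestamp string in one time zone and render the same instant in another. A hedged sketch of those semantics using only java.time, assuming the 'yyyy-MM-dd HH:mm:ss' format; this is an illustrative helper, not Flink's implementation:

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

// Illustrative CONVERT_TZ semantics sketch.
final class ConvertTzSketch {
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    static String convertTz(String dateStr, String fromTz, String toTz) {
        return LocalDateTime.parse(dateStr, FMT)
                .atZone(ZoneId.of(fromTz))           // attach the source zone
                .withZoneSameInstant(ZoneId.of(toTz)) // same instant, new zone
                .toLocalDateTime()
                .format(FMT);
    }
}
```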
   





[jira] [Updated] (FLINK-29352) Support CONVERT_TZ built-in function in Table API

2022-10-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29352:
---
Labels: pull-request-available  (was: )

> Support CONVERT_TZ built-in function in Table API
> -
>
> Key: FLINK-29352
> URL: https://issues.apache.org/jira/browse/FLINK-29352
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Table SQL / API
>Affects Versions: 1.16.0
>Reporter: Luning Wang
>Assignee: zhangjingcun
>Priority: Major
>  Labels: pull-request-available
>






[GitHub] [flink] cun8cun8 closed pull request #20944: [FLINK-29352]Support CONVERT_TZ built-in function in Table API

2022-10-05 Thread GitBox


cun8cun8 closed pull request #20944: [FLINK-29352]Support CONVERT_TZ built-in 
function in Table API
URL: https://github.com/apache/flink/pull/20944





[jira] [Created] (FLINK-29515) Document KafkaSource behavior with deleted topics

2022-10-05 Thread Mason Chen (Jira)
Mason Chen created FLINK-29515:
--

 Summary: Document KafkaSource behavior with deleted topics
 Key: FLINK-29515
 URL: https://issues.apache.org/jira/browse/FLINK-29515
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka, Documentation
Affects Versions: 1.17.0
Reporter: Mason Chen








[GitHub] [flink] leletan commented on a diff in pull request #20852: [FLINK-27101][checkpointing][rest] Add restful API to trigger checkpoints

2022-10-05 Thread GitBox


leletan commented on code in PR #20852:
URL: https://github.com/apache/flink/pull/20852#discussion_r988446731


##
flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/checkpoints/CheckpointTriggerInfo.java:
##
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.messages.checkpoints;
+
+import org.apache.flink.runtime.rest.messages.ResponseBody;
+import org.apache.flink.runtime.rest.messages.json.SerializedThrowableDeserializer;
+import org.apache.flink.runtime.rest.messages.json.SerializedThrowableSerializer;
+import org.apache.flink.util.SerializedThrowable;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonCreator;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonInclude;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.annotation.JsonDeserialize;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.annotation.JsonSerialize;
+
+import javax.annotation.Nullable;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/** Represents information about a finished triggered checkpoint. */
+@JsonInclude(JsonInclude.Include.NON_NULL)
+public class CheckpointTriggerInfo implements ResponseBody {
+
+    private static final String FIELD_NAME_CHECKPOINT_ID = "checkpointId";
+
+    private static final String FIELD_NAME_FAILURE_CAUSE = "failure-cause";
+
+    @JsonProperty(FIELD_NAME_CHECKPOINT_ID)
+    @Nullable
+    private final Long checkpointId;
+
+    @JsonProperty(FIELD_NAME_FAILURE_CAUSE)
+    @JsonSerialize(using = SerializedThrowableSerializer.class)
+    @JsonDeserialize(using = SerializedThrowableDeserializer.class)
+    @Nullable
+    private final SerializedThrowable failureCause;
+
+    @JsonCreator
+    public CheckpointTriggerInfo(
+            @JsonProperty(FIELD_NAME_CHECKPOINT_ID) @Nullable final Long checkpointId,
+            @JsonProperty(FIELD_NAME_FAILURE_CAUSE)
+                    @JsonDeserialize(using = SerializedThrowableDeserializer.class)
+                    @Nullable
+                    final SerializedThrowable failureCause) {
+        checkArgument(
+                checkpointId != null ^ failureCause != null,
+                "Either checkpointId or failureCause must be set");
+
+        this.checkpointId = checkpointId;
+        this.failureCause = failureCause;
+    }
+
+    @Nullable
+    public Long getCheckpointId() {
+        return checkpointId;
+    }
+
+    @Nullable
+    public SerializedThrowable getFailureCause() {
+        return failureCause;
+    }

Review Comment:
   Sure. Do we want to fix 
[SavepointInfo](https://github.com/apache/flink/blob/99c2a415e9eeefafacf70762b6f54070f7911ceb/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/savepoints/SavepointInfo.java#L74)
 as well?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
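The `checkArgument(checkpointId != null ^ failureCause != null, ...)` line quoted in the diff above enforces that exactly one of two nullable fields is set. A minimal, self-contained sketch of the same idea (the local `checkArgument` helper here is a stand-in for `org.apache.flink.util.Preconditions.checkArgument`, which the real code imports; class and field names are illustrative, not Flink's):

```java
// Sketch of the "exactly one of two nullable fields" precondition used by
// CheckpointTriggerInfo: either a checkpoint id (success) or a failure
// cause (error) must be present, never both and never neither.
public class EitherOrCheck {

    // Local stand-in for org.apache.flink.util.Preconditions.checkArgument.
    static void checkArgument(boolean condition, String message) {
        if (!condition) {
            throw new IllegalArgumentException(message);
        }
    }

    final Long checkpointId;      // set on success
    final Throwable failureCause; // set on failure

    EitherOrCheck(Long checkpointId, Throwable failureCause) {
        // XOR of the null checks: true only if exactly one argument is non-null.
        checkArgument(
                checkpointId != null ^ failureCause != null,
                "Either checkpointId or failureCause must be set");
        this.checkpointId = checkpointId;
        this.failureCause = failureCause;
    }

    public static void main(String[] args) {
        new EitherOrCheck(42L, null);                    // ok: success case
        new EitherOrCheck(null, new RuntimeException()); // ok: failure case
        try {
            new EitherOrCheck(null, null);               // neither set: rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The both-set case is rejected by the same XOR, since `true ^ true` is `false`.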



[GitHub] [flink] leletan commented on a diff in pull request #20852: [FLINK-27101][checkpointing][rest] Add restful API to trigger checkpoints

2022-10-05 Thread GitBox


leletan commented on code in PR #20852:
URL: https://github.com/apache/flink/pull/20852#discussion_r988441596


##
flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/checkpoints/CheckpointTriggerInfo.java:
##
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.messages.checkpoints;
+
+import org.apache.flink.runtime.rest.messages.ResponseBody;
+import 
org.apache.flink.runtime.rest.messages.json.SerializedThrowableDeserializer;
+import 
org.apache.flink.runtime.rest.messages.json.SerializedThrowableSerializer;
+import org.apache.flink.util.SerializedThrowable;
+
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonCreator;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonInclude;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonProperty;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.annotation.JsonDeserialize;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.annotation.JsonSerialize;
+
+import javax.annotation.Nullable;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/** Represents information about a finished triggered checkpoint. */
+@JsonInclude(JsonInclude.Include.NON_NULL)
+public class CheckpointTriggerInfo implements ResponseBody {
+
+private static final String FIELD_NAME_CHECKPOINT_ID = "checkpointId";
+
+private static final String FIELD_NAME_FAILURE_CAUSE = "failure-cause";

Review Comment:
   I was thinking the same, but hesitated when seeing this in 
[SavepointInfo](https://github.com/apache/flink/blob/99c2a415e9eeefafacf70762b6f54070f7911ceb/flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/savepoints/SavepointInfo.java#L42).
 Should we fix that as well?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #20973: [FLINK-29514][Deployment/YARN] Bump Minikdc to v3.2.4

2022-10-05 Thread GitBox


flinkbot commented on PR #20973:
URL: https://github.com/apache/flink/pull/20973#issuecomment-1269046874

   
   ## CI report:
   
   * 133d24a675cc24dc6aab61d50165f96f469c4daa UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-29514) Bump Minikdc to v3.2.4

2022-10-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29514:
---
Labels: pull-request-available  (was: )

> Bump Minikdc to v3.2.4
> --
>
> Key: FLINK-29514
> URL: https://issues.apache.org/jira/browse/FLINK-29514
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Deployment / YARN
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> Bump Minikdc to v3.2.4 to remove false positive scans on CVEs like 
> CVE-2021-29425 and CVE-2020-15250



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] MartijnVisser opened a new pull request, #20973: [FLINK-29514][Deployment/YARN] Bump Minikdc to v3.2.4

2022-10-05 Thread GitBox


MartijnVisser opened a new pull request, #20973:
URL: https://github.com/apache/flink/pull/20973

   ## What is the purpose of the change
   
   * Bump Minikdc to v3.2.4 to avoid getting falsely flagged as vulnerable for 
CVEs which don't impact Flink
   
   ## Brief change log
   
   * Updated dependency in POM
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): yes
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-29514) Bump Minikdc to v3.2.4

2022-10-05 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-29514:
--

 Summary: Bump Minikdc to v3.2.4
 Key: FLINK-29514
 URL: https://issues.apache.org/jira/browse/FLINK-29514
 Project: Flink
  Issue Type: Technical Debt
  Components: Deployment / YARN
Reporter: Martijn Visser
Assignee: Martijn Visser


Bump Minikdc to v3.2.4 to remove false positive scans on CVEs like 
CVE-2021-29425 and CVE-2020-15250



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28405) Update Confluent Platform images to v7.2.2

2022-10-05 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-28405:
---
Description: 
We have updated the Kafka clients in use to v3.1.1 via FLINK-28060 and then to 
v3.2.1 via FLINK-28060, but we are using Confluent Platform 6.2.2, which 
supports up to Kafka 2.8.0.

We should update to Confluent Platform v7.2.2 (latest version of 7.2), which 
includes support for Kafka 3.2.1. 

  was:
We have updated the used Kafka Clients to v3.1.1 via FLINK-28060, but we are 
using Confluent Platform 6.2.2 which supports up to Kafka 2.8.0.

We should update to Confluent Platform v7.1.3, which includes support for Kafka 
3.1.0. 


> Update Confluent Platform images to v7.2.2
> --
>
> Key: FLINK-28405
> URL: https://issues.apache.org/jira/browse/FLINK-28405
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.16.0
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> We have updated the used Kafka Clients to v3.1.1 via FLINK-28060 and then to 
> v3.2.1 via FLINK-28060, but we are using Confluent Platform 6.2.2 which 
> supports up to Kafka 2.8.0.
> We should update to Confluent Platform v7.2.2 (latest version of 7.2), which 
> includes support for Kafka 3.2.1. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28405) Update Confluent Platform images to v7.2.2

2022-10-05 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-28405:
---
Summary: Update Confluent Platform images to v7.2.2  (was: Update Confluent 
Platform images to v7.1.3)

> Update Confluent Platform images to v7.2.2
> --
>
> Key: FLINK-28405
> URL: https://issues.apache.org/jira/browse/FLINK-28405
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.16.0
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> We have updated the used Kafka Clients to v3.1.1 via FLINK-28060, but we are 
> using Confluent Platform 6.2.2 which supports up to Kafka 2.8.0.
> We should update to Confluent Platform v7.1.3, which includes support for 
> Kafka 3.1.0. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] MartijnVisser commented on pull request #20343: [FLINK-25916][connector-kafka] Using upsert-kafka with a flush buffer…

2022-10-05 Thread GitBox


MartijnVisser commented on PR #20343:
URL: https://github.com/apache/flink/pull/20343#issuecomment-1265807076

   @PatrickRen WDYT?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] leletan commented on pull request #20852: [FLINK-27101][checkpointing][rest] Add restful API to trigger checkpoints

2022-10-05 Thread GitBox


leletan commented on PR #20852:
URL: https://github.com/apache/flink/pull/20852#issuecomment-1265689101

   Thanks, @pnowojski and @stevenzwu! 
   Combining the feedback, here is my plan:
   1. fix the build
   2. rebase all the changes so far into one commit
   3. do a follow-up commit for the class renaming in this PR
   Let me know if you have any concerns about the above. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] fsk119 commented on a diff in pull request #20931: [FLINK-29486][sql-client] Implement ClientParser for implementing remote SQL client later

2022-10-05 Thread GitBox


fsk119 commented on code in PR #20931:
URL: https://github.com/apache/flink/pull/20931#discussion_r985739642


##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/ClientParser.java:
##
@@ -0,0 +1,137 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli.parser;
+
+import org.apache.flink.sql.parser.impl.FlinkSqlParserImplConstants;
+import org.apache.flink.sql.parser.impl.FlinkSqlParserImplTokenManager;
+import org.apache.flink.sql.parser.impl.SimpleCharStream;
+import org.apache.flink.sql.parser.impl.Token;
+import org.apache.flink.table.api.SqlParserEOFException;
+import org.apache.flink.table.client.gateway.SqlExecutionException;
+import org.apache.flink.table.operations.Operation;
+
+import javax.annotation.Nonnull;
+
+import java.io.StringReader;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+
+/**
+ * ClientParser uses {@link FlinkSqlParserImplTokenManager} to do lexical 
analysis. It cannot
+ * recognize special Hive keywords yet.
+ */
+public class ClientParser implements SqlCommandParser, 
FlinkSqlParserImplConstants {
+
+/** A dumb implementation. TODO: remove this after unifying the 
SqlMultiLineParser. */
+@Override
+public Optional<Operation> parseCommand(String command) {
+return Optional.empty();

Review Comment:
   I think it should be
   
   ```
   parseStatement(statement);
   return Optional.empty();
   ```



##
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/parser/ClientParserTest.java:
##
@@ -0,0 +1,112 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli.parser;
+
+import org.apache.flink.table.api.SqlParserEOFException;
+import org.apache.flink.table.client.gateway.SqlExecutionException;
+
+import org.junit.jupiter.params.ParameterizedTest;
+import org.junit.jupiter.params.provider.MethodSource;
+import org.junit.jupiter.params.provider.ValueSource;
+
+import javax.annotation.Nullable;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.Optional;
+
+import static org.apache.flink.core.testutils.FlinkAssertions.anyCauseMatches;
+import static 
org.apache.flink.table.client.cli.parser.StatementType.BEGIN_STATEMENT_SET;
+import static org.apache.flink.table.client.cli.parser.StatementType.CLEAR;
+import static org.apache.flink.table.client.cli.parser.StatementType.END;
+import static org.apache.flink.table.client.cli.parser.StatementType.EXPLAIN;
+import static org.apache.flink.table.client.cli.parser.StatementType.HELP;
+import static org.apache.flink.table.client.cli.parser.StatementType.OTHER;
+import static org.apache.flink.table.client.cli.parser.StatementType.QUIT;
+import static 
org.apache.flink.table.client.cli.parser.StatementType.SHOW_CREATE;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+/**
+ * Testing whether {@link ClientParser} can parse statement to get {@link 
StatementType} correctly.
+ */
+@SuppressWarnings("OptionalUsedAsFieldOrParameterType")
+public class ClientParserTest {
+
+private final ClientParser clientParser = new ClientParser();
+
+@ParameterizedTest
+@MethodSource("generateTestData")
+public void testParseStatement(TestSpec testData) {
+Optional type = 
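The ClientParser discussed in this thread classifies a statement by inspecting only its leading tokens rather than fully parsing it. A much-simplified sketch of that idea, using plain string splitting in place of `FlinkSqlParserImplTokenManager` (the enum below mirrors only a few of the real `StatementType` values, and the keyword set is illustrative):

```java
import java.util.Locale;

// Toy version of ClientParser#getStatementType: decide the statement type
// from the leading keyword(s) of a trimmed statement. The real code walks
// actual lexer tokens instead of whitespace-split words.
public class FirstTokenClassifier {

    enum StatementType { QUIT, CLEAR, HELP, EXPLAIN, SHOW_CREATE, OTHER }

    static StatementType classify(String statement) {
        String[] tokens = statement.trim().toUpperCase(Locale.ROOT).split("\\s+");
        if (tokens.length == 0 || tokens[0].isEmpty()) {
            // Empty input; the real parser returns Optional.empty() here.
            return StatementType.OTHER;
        }
        switch (tokens[0]) {
            case "QUIT":
            case "EXIT":
                return StatementType.QUIT;
            case "CLEAR":
                return StatementType.CLEAR;
            case "HELP":
                return StatementType.HELP;
            case "EXPLAIN":
                return StatementType.EXPLAIN;
            case "SHOW":
                // SHOW CREATE gets its own type, as in the real parser.
                return tokens.length > 1 && tokens[1].equals("CREATE")
                        ? StatementType.SHOW_CREATE
                        : StatementType.OTHER;
            default:
                return StatementType.OTHER;
        }
    }

    public static void main(String[] args) {
        System.out.println(classify("EXPLAIN SELECT 1"));    // EXPLAIN
        System.out.println(classify("show create table t")); // SHOW_CREATE
        System.out.println(classify("SELECT * FROM t"));     // OTHER
    }
}
```

The real implementation additionally throws `SqlExecutionException` for incomplete statements, which this sketch does not attempt to detect.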

[GitHub] [flink] leletan commented on pull request #20852: [FLINK-27101][checkpointing][rest] Add restful API to trigger checkpoints

2022-10-05 Thread GitBox


leletan commented on PR #20852:
URL: https://github.com/apache/flink/pull/20852#issuecomment-1265618274

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] m8719-github commented on pull request #20936: [FLINK-13703][flink-formats/flink-avro] AvroTypeInfo requires objects to be strict POJOs (mutable, with setters)

2022-10-05 Thread GitBox


m8719-github commented on PR #20936:
URL: https://github.com/apache/flink/pull/20936#issuecomment-1265589920

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] gyfora merged pull request #388: [FLINK-29413] Make it possible to associate triggered and completed savepoints

2022-10-05 Thread GitBox


gyfora merged PR #388:
URL: https://github.com/apache/flink-kubernetes-operator/pull/388


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-22243) Reactive Mode parallelism changes are not shown in the job graph visualization in the UI

2022-10-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-22243:
---
Labels: pull-request-available  (was: )

> Reactive Mode parallelism changes are not shown in the job graph 
> visualization in the UI
> 
>
> Key: FLINK-22243
> URL: https://issues.apache.org/jira/browse/FLINK-22243
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Robert Metzger
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Attachments: screenshot-1.png
>
>
> As reported here FLINK-22134, the parallelism in the visual job graph on top 
> of the detail page is not in sync with the parallelism listed in the task 
> list below, when reactive mode causes a parallelism change.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] morhidi commented on a diff in pull request #388: [FLINK-29413] Make it possible to associate triggered and completed savepoints

2022-10-05 Thread GitBox


morhidi commented on code in PR #388:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/388#discussion_r985735711


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/crd/status/Savepoint.java:
##
@@ -38,7 +36,42 @@ public class Savepoint {
 /** Savepoint trigger mechanism. */
 private SavepointTriggerType triggerType = SavepointTriggerType.UNKNOWN;
 
-public static Savepoint of(String location, SavepointTriggerType type) {
-return new Savepoint(System.currentTimeMillis(), location, type);
+private SavepointFormatType formatType = SavepointFormatType.UNKNOWN;
+
+/**
+ * Nonce value used when the savepoint was triggered manually {@link
+ * SavepointTriggerType#MANUAL}, defaults to 0.
+ */
+private Long triggerNonce = 0L;
+
+public Savepoint(
+long timeStamp,
+String location,
+SavepointTriggerType triggerType,
+SavepointFormatType formatType,
+Long triggerNonce) {
+this.timeStamp = timeStamp;
+this.location = location;
+this.triggerType = triggerType;
+this.formatType = formatType;
+setTriggerNonce(triggerNonce);
+}
+
+public static Savepoint of(String location, SavepointTriggerType 
triggerType) {
+return new Savepoint(
+System.currentTimeMillis(), location, triggerType, 
SavepointFormatType.UNKNOWN, 0L);
+}
+
+public static Savepoint of(
+String location, SavepointTriggerType triggerType, 
SavepointFormatType formatType) {
+return new Savepoint(System.currentTimeMillis(), location, 
triggerType, formatType, 0L);
+}

Review Comment:
   I could've used nulls in the static helpers, but I added the null handling 
for a different purpose: mainly to handle deserialization scenarios for older 
savepoints with no nonce, which, I believe, go through the default constructor + setters.



##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/crd/status/Savepoint.java:
##
@@ -38,7 +36,42 @@ public class Savepoint {
 /** Savepoint trigger mechanism. */
 private SavepointTriggerType triggerType = SavepointTriggerType.UNKNOWN;
 
-public static Savepoint of(String location, SavepointTriggerType type) {
-return new Savepoint(System.currentTimeMillis(), location, type);
+private SavepointFormatType formatType = SavepointFormatType.UNKNOWN;
+
+/**
+ * Nonce value used when the savepoint was triggered manually {@link
+ * SavepointTriggerType#MANUAL}, defaults to 0.
+ */
+private Long triggerNonce = 0L;
+
+public Savepoint(
+long timeStamp,
+String location,
+SavepointTriggerType triggerType,
+SavepointFormatType formatType,
+Long triggerNonce) {
+this.timeStamp = timeStamp;
+this.location = location;
+this.triggerType = triggerType;
+this.formatType = formatType;
+setTriggerNonce(triggerNonce);
+}
+
+public static Savepoint of(String location, SavepointTriggerType 
triggerType) {
+return new Savepoint(
+System.currentTimeMillis(), location, triggerType, 
SavepointFormatType.UNKNOWN, 0L);
+}
+
+public static Savepoint of(
+String location, SavepointTriggerType triggerType, 
SavepointFormatType formatType) {
+return new Savepoint(System.currentTimeMillis(), location, 
triggerType, formatType, 0L);
+}

Review Comment:
   The static helpers were there for convenience.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
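The null handling morhidi describes above — older serialized Savepoints have no `triggerNonce`, so deserialization through the default constructor + setters must tolerate null — can be sketched as follows (the class body is reduced to just the relevant field; this is not the full Savepoint class from the PR):

```java
// Reduced sketch of the Savepoint nonce handling discussed above: a null
// nonce (e.g. deserialized from an older status that predates the field)
// is normalized to the default 0L instead of being stored as null.
public class SavepointSketch {

    private Long triggerNonce = 0L;

    public void setTriggerNonce(Long triggerNonce) {
        // Tolerate older payloads where the field is absent (deserialized as null).
        this.triggerNonce = triggerNonce == null ? 0L : triggerNonce;
    }

    public Long getTriggerNonce() {
        return triggerNonce;
    }

    public static void main(String[] args) {
        SavepointSketch sp = new SavepointSketch();
        sp.setTriggerNonce(null);                 // older payload without a nonce
        System.out.println(sp.getTriggerNonce()); // 0
        sp.setTriggerNonce(123L);                 // manually triggered savepoint
        System.out.println(sp.getTriggerNonce()); // 123
    }
}
```

This keeps callers (and the static `of(...)` helpers debated in the review) free to pass either an explicit nonce or nothing meaningful.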



[GitHub] [flink] flinkbot commented on pull request #20941: [FLINK-22243] Sync parallelism in JobGraph with VertexParallelismStore

2022-10-05 Thread GitBox


flinkbot commented on PR #20941:
URL: https://github.com/apache/flink/pull/20941#issuecomment-1265412264

   
   ## CI report:
   
   * 41b6416c2d22ccd89ec8fafce56fc97a1d4c289a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] dawidwys opened a new pull request, #20941: [FLINK-22243] Sync parallelism in JobGraph with VertexParallelismStore

2022-10-05 Thread GitBox


dawidwys opened a new pull request, #20941:
URL: https://github.com/apache/flink/pull/20941

   
   ## What is the purpose of the change
   
   In adaptive scheduler the actual parallelism is encoded in the 
VertexParallelismStore. This has two consequences:
   1. The Web UI does not show correct parallelism of Operators, because it 
uses the JsonPlan generated for the JobGraph
   2. Calling InputOutputFormatVertex#initializeOnMaster uses incorrect 
parallelism for InitializeOnMaster#initializeGlobal
   
   ## Verifying this change
   Added a test in `ReactiveModeITCase`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (**yes** / no / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
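The fix described in this PR — the JobGraph's JSON plan showing a stale parallelism while the VertexParallelismStore holds the value the adaptive scheduler actually decided — amounts to overwriting each vertex's parallelism from the store before the plan is rendered. A plain-Java sketch of that synchronization (Vertex and the Map-based store are simplified stand-ins for Flink's classes, not its actual API):

```java
import java.util.List;
import java.util.Map;

// Simplified model of syncing JobGraph vertex parallelism with the values
// the adaptive scheduler actually decided on (kept in a separate store).
public class ParallelismSync {

    static class Vertex {
        final String id;
        int parallelism;
        Vertex(String id, int parallelism) {
            this.id = id;
            this.parallelism = parallelism;
        }
    }

    // Overwrite each vertex's parallelism with the store's value, if present;
    // vertices the store does not know about keep their configured value.
    static void sync(List<Vertex> vertices, Map<String, Integer> parallelismStore) {
        for (Vertex v : vertices) {
            v.parallelism = parallelismStore.getOrDefault(v.id, v.parallelism);
        }
    }

    public static void main(String[] args) {
        List<Vertex> graph = List.of(new Vertex("source", 2), new Vertex("map", 2));
        // Reactive mode scaled the job up to 4 after more slots appeared.
        sync(graph, Map.of("source", 4, "map", 4));
        for (Vertex v : graph) {
            System.out.println(v.id + " -> " + v.parallelism); // both print 4
        }
    }
}
```

With the actual change, the Web UI and `InitializeOnMaster#initializeGlobal` both see the synced values instead of the originally configured ones.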



[GitHub] [flink-connector-opensearch] reta commented on pull request #1: [FLINK-25756] [connectors/opensearch] Dedicated Opensearch connectors

2022-10-05 Thread GitBox


reta commented on PR #1:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/1#issuecomment-1265396809

   @MartijnVisser could you please take a look? thank you :-)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #388: [FLINK-29413] Make it possible to associate triggered and completed savepoints

2022-10-05 Thread GitBox


gyfora commented on code in PR #388:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/388#discussion_r985699753


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/crd/status/Savepoint.java:
##
@@ -38,7 +36,42 @@ public class Savepoint {
 /** Savepoint trigger mechanism. */
 private SavepointTriggerType triggerType = SavepointTriggerType.UNKNOWN;
 
-public static Savepoint of(String location, SavepointTriggerType type) {
-return new Savepoint(System.currentTimeMillis(), location, type);
+private SavepointFormatType formatType = SavepointFormatType.UNKNOWN;
+
+/**
+ * Nonce value used when the savepoint was triggered manually {@link
+ * SavepointTriggerType#MANUAL}, defaults to 0.
+ */
+private Long triggerNonce = 0L;
+
+public Savepoint(
+long timeStamp,
+String location,
+SavepointTriggerType triggerType,
+SavepointFormatType formatType,
+Long triggerNonce) {
+this.timeStamp = timeStamp;
+this.location = location;
+this.triggerType = triggerType;
+this.formatType = formatType;
+setTriggerNonce(triggerNonce);
+}
+
+public static Savepoint of(String location, SavepointTriggerType 
triggerType) {
+return new Savepoint(
+System.currentTimeMillis(), location, triggerType, 
SavepointFormatType.UNKNOWN, 0L);
+}
+
+public static Savepoint of(
+String location, SavepointTriggerType triggerType, 
SavepointFormatType formatType) {
+return new Savepoint(System.currentTimeMillis(), location, 
triggerType, formatType, 0L);
+}

Review Comment:
   What is the purpose of these methods if you don't pass the 
savepointTriggerNonce? You added logic to handle the nulls, but you still 
explicitly pass `0L` here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #377: [FLINK-28979] Add owner reference to flink deployment object

2022-10-05 Thread GitBox


gyfora commented on code in PR #377:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/377#discussion_r985697814


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/AbstractFlinkResourceReconciler.java:
##
@@ -97,6 +100,8 @@ public final void reconcile(CR cr, Context ctx) throws 
Exception {
 return;
 }
 
+setOwnerReference(cr, deployConfig);
+

Review Comment:
   I think we should call this method from inside the `deploy` method, otherwise 
the owner reference might not be set correctly during rollbacks, restarts, etc.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] yuzelin commented on a diff in pull request #20931: [FLINK-29486][sql-client] Implement ClientParser for implementing remote SQL client later

2022-10-05 Thread GitBox


yuzelin commented on code in PR #20931:
URL: https://github.com/apache/flink/pull/20931#discussion_r985673122


##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/ClientParser.java:
##
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli.parser;
+
+import org.apache.flink.sql.parser.impl.FlinkSqlParserImplTokenManager;
+import org.apache.flink.sql.parser.impl.SimpleCharStream;
+import org.apache.flink.sql.parser.impl.Token;
+import org.apache.flink.table.client.gateway.SqlExecutionException;
+import org.apache.flink.table.operations.Operation;
+
+import javax.annotation.Nonnull;
+
+import java.io.StringReader;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+
+/** ClientParser uses {@link FlinkSqlParserImplTokenManager} to do lexical 
analysis. */
+public class ClientParser implements SqlCommandParser {
+
+/** A dumb implementation. TODO: remove this after unifying the 
SqlMultiLineParser. */
+@Override
+public Optional<Operation> parseCommand(String command) {
+return Optional.empty();
+}
+
+public Optional<StatementType> parseStatement(@Nonnull String statement)
+throws SqlExecutionException {
+String trimmedStatement = statement.trim();
+FlinkSqlParserImplTokenManager tokenManager =
+new FlinkSqlParserImplTokenManager(
+new SimpleCharStream(new 
StringReader(trimmedStatement)));
+List tokenList = new ArrayList<>();
+Token token;
+do {
+token = tokenManager.getNextToken();
+tokenList.add(token);
+} while (token.endColumn != trimmedStatement.length());
+return getStatementType(tokenList);
+}
+
+// 
-
+private Optional getStatementType(List tokenList) {
+Token firstToken = tokenList.get(0);
+
+if (firstToken.kind == EOF || firstToken.kind == EMPTY || 
firstToken.kind == SEMICOLON) {
+return Optional.empty();
+}
+
+if (firstToken.kind == IDENTIFIER) {
+// unrecognized token
+return getPotentialCommandType(firstToken.image);
+} else if (firstToken.kind == EXPLAIN) {
+return Optional.of(StatementType.EXPLAIN);
+} else if (firstToken.kind == SHOW) {
+return getPotentialShowCreateType(tokenList);
+} else {
+return Optional.of(StatementType.OTHER);
+}

Review Comment:
   After checking the implementation of `SqlMultiLineParser`: when the statement is incomplete, an `SqlExecutionException` should be thrown. I have added the code.
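As an aside, the classification above can be illustrated without the Calcite-generated token manager. The sketch below is a deliberately simplified first-keyword classifier, not the PR's implementation — it uses a hypothetical `SketchStatementType` enum and, unlike the lexer-based approach, cannot handle leading `--` comments, which is precisely why the real code tokenizes first:

```java
import java.util.Locale;
import java.util.Optional;

// Hypothetical stand-in for the StatementType enum discussed in the thread.
enum SketchStatementType { QUIT, CLEAR, HELP, EXPLAIN, SHOW_CREATE, OTHER }

class FirstKeywordClassifier {

    /** Classifies a statement by its leading keyword(s); empty for blank input. */
    static Optional<SketchStatementType> classify(String statement) {
        String trimmed = statement.trim();
        // Treat "quit;" and "quit" the same by dropping one trailing semicolon.
        if (trimmed.endsWith(";")) {
            trimmed = trimmed.substring(0, trimmed.length() - 1).trim();
        }
        if (trimmed.isEmpty()) {
            return Optional.empty();
        }
        String upper = trimmed.toUpperCase(Locale.ROOT);
        String first = upper.split("\\s+", 2)[0];
        switch (first) {
            case "QUIT":
                return Optional.of(SketchStatementType.QUIT);
            case "CLEAR":
                return Optional.of(SketchStatementType.CLEAR);
            case "HELP":
                return Optional.of(SketchStatementType.HELP);
            case "EXPLAIN":
                return Optional.of(SketchStatementType.EXPLAIN);
            case "SHOW":
                // "SHOW CREATE ..." is special-cased; any other SHOW falls through.
                return upper.startsWith("SHOW CREATE")
                        ? Optional.of(SketchStatementType.SHOW_CREATE)
                        : Optional.of(SketchStatementType.OTHER);
            default:
                return Optional.of(SketchStatementType.OTHER);
        }
    }

    public static void main(String[] args) {
        System.out.println(classify("QuIt;"));                // Optional[QUIT]
        System.out.println(classify("SHOW CREATE TABLE t;")); // Optional[SHOW_CREATE]
        System.out.println(classify(" ; "));                  // Optional.empty
    }
}
```

This mirrors the mixed-case test data quoted later in the thread (`QuIt`, `Quit`, etc.), but misclassifies comment-prefixed input such as `--SHOW CREATE TABLE ignore_comment`.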



##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/ClientParser.java:
##
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli.parser;
+
+import org.apache.flink.sql.parser.impl.FlinkSqlParserImplTokenManager;
+import org.apache.flink.sql.parser.impl.SimpleCharStream;
+import org.apache.flink.sql.parser.impl.Token;
+import org.apache.flink.table.client.gateway.SqlExecutionException;
+import org.apache.flink.table.operations.Operation;
+
+import javax.annotation.Nonnull;
+
+import java.io.StringReader;
+import java.util.ArrayList;
+import 

[GitHub] [flink-ml] jiangxin369 commented on pull request #158: [FLINK-29409] Add Transformer and Estimator for VarianceThresholdSelector

2022-10-05 Thread GitBox


jiangxin369 commented on PR #158:
URL: https://github.com/apache/flink-ml/pull/158#issuecomment-1265316166

   > Thanks for the update. Looks good overall. Left just some minor comments.
   
   Thanks for your review; the comments above have been addressed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] grzegorz8 commented on a diff in pull request #20140: [Flink 16024][Connector][JDBC] Support FilterPushdown

2022-10-05 Thread GitBox


grzegorz8 commented on code in PR #20140:
URL: https://github.com/apache/flink/pull/20140#discussion_r985666582


##
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/table/JdbcFilterPushdownPreparedStatementVisitor.java:
##
@@ -0,0 +1,178 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.jdbc.table;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.table.expressions.CallExpression;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.ExpressionDefaultVisitor;
+import org.apache.flink.table.expressions.FieldReferenceExpression;
+import org.apache.flink.table.expressions.ResolvedExpression;
+import org.apache.flink.table.expressions.ValueLiteralExpression;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.types.logical.BigIntType;
+import org.apache.flink.table.types.logical.BooleanType;
+import org.apache.flink.table.types.logical.DateType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.DoubleType;
+import org.apache.flink.table.types.logical.FloatType;
+import org.apache.flink.table.types.logical.IntType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.SmallIntType;
+import org.apache.flink.table.types.logical.TimestampType;
+import org.apache.flink.table.types.logical.VarCharType;
+
+import java.io.Serializable;
+import java.math.BigDecimal;
+import java.sql.Date;
+import java.sql.Timestamp;
+import java.time.LocalDate;
+import java.time.LocalDateTime;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+import java.util.function.Function;
+import java.util.function.Supplier;
+
+/**
+ * Visitor that converts an Expression to a ParameterizedPredicate. Returns Optional.empty() if we
+ * cannot push down the filter.
+ */
+@PublicEvolving
+public class JdbcFilterPushdownPreparedStatementVisitor
+        extends ExpressionDefaultVisitor<Optional<ParameterizedPredicate>> {
+
+    Function<String, String> quoteIdentifierFunction;
+
+    public static Set<Class<? extends LogicalType>> supportedDataTypes;
+
+    static {
+        supportedDataTypes = new HashSet<>();
+        supportedDataTypes.add(IntType.class);
+        supportedDataTypes.add(BigIntType.class);
+        supportedDataTypes.add(BooleanType.class);
+        supportedDataTypes.add(DecimalType.class);
+        supportedDataTypes.add(DoubleType.class);
+        supportedDataTypes.add(FloatType.class);
+        supportedDataTypes.add(SmallIntType.class);
+        supportedDataTypes.add(VarCharType.class);
+        supportedDataTypes.add(TimestampType.class);
+    }
+
+    public JdbcFilterPushdownPreparedStatementVisitor(
+            Function<String, String> quoteIdentifierFunction) {
+        this.quoteIdentifierFunction = quoteIdentifierFunction;
+    }
+
+    @Override
+    public Optional<ParameterizedPredicate> visit(CallExpression call) {
+        if (BuiltInFunctionDefinitions.EQUALS.equals(call.getFunctionDefinition())) {
+            return renderBinaryOperator("=", call.getResolvedChildren());
+        }
+        if (BuiltInFunctionDefinitions.LESS_THAN.equals(call.getFunctionDefinition())) {
+            return renderBinaryOperator("<", call.getResolvedChildren());
+        }
+        if (BuiltInFunctionDefinitions.LESS_THAN_OR_EQUAL.equals(call.getFunctionDefinition())) {
+            return renderBinaryOperator("<=", call.getResolvedChildren());
+        }
+        if (BuiltInFunctionDefinitions.GREATER_THAN.equals(call.getFunctionDefinition())) {
+            return renderBinaryOperator(">", call.getResolvedChildren());
+        }
+        if (BuiltInFunctionDefinitions.GREATER_THAN_OR_EQUAL.equals(call.getFunctionDefinition())) {
+            return renderBinaryOperator(">=", call.getResolvedChildren());
+        }
+        if (BuiltInFunctionDefinitions.OR.equals(call.getFunctionDefinition())) {
+            return renderBinaryOperator("OR", call.getResolvedChildren());
+        }
+        if (BuiltInFunctionDefinitions.AND.equals(call.getFunctionDefinition())) {
+return renderBinaryOperator("AND", 

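The visitor above maps each supported `BuiltInFunctionDefinitions` call to a SQL operator string and delegates to `renderBinaryOperator`. A minimal, dependency-free sketch of that rendering step follows — the `SimplePredicate` shape here is an assumption for illustration, not Flink's actual `ParameterizedPredicate` class:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Assumed simplified predicate holder: a SQL fragment plus its bind values.
final class SimplePredicate {
    final String sql;
    final List<Object> params;

    SimplePredicate(String sql, List<Object> params) {
        this.sql = sql;
        this.params = params;
    }
}

final class PredicateRenderer {

    /** A column reference, quoted the way a dialect's identifier-quoting function would. */
    static SimplePredicate column(String name) {
        return new SimplePredicate("\"" + name + "\"", Collections.emptyList());
    }

    /** A literal becomes a '?' placeholder whose value is later bound on the PreparedStatement. */
    static SimplePredicate literal(Object value) {
        return new SimplePredicate("?", Collections.singletonList(value));
    }

    /** Combines two operands with a binary operator, parenthesizing the result. */
    static SimplePredicate renderBinary(String op, SimplePredicate left, SimplePredicate right) {
        List<Object> params = new ArrayList<>(left.params);
        params.addAll(right.params);
        return new SimplePredicate("(" + left.sql + " " + op + " " + right.sql + ")", params);
    }

    public static void main(String[] args) {
        SimplePredicate p =
                renderBinary(
                        "AND",
                        renderBinary(">", column("id"), literal(10)),
                        renderBinary("=", column("name"), literal("flink")));
        System.out.println(p.sql);    // (("id" > ?) AND ("name" = ?))
        System.out.println(p.params); // [10, flink]
    }
}
```

Using placeholders rather than inlining literal values keeps the pushed-down filter safe for a JDBC `PreparedStatement`.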
[GitHub] [flink] liuzhuang2017 commented on pull request #20939: [hotfix][docs]Fixed some typos of docs.

2022-10-05 Thread GitBox


liuzhuang2017 commented on PR #20939:
URL: https://github.com/apache/flink/pull/20939#issuecomment-1265152795

   @flinkbot run azure
   





[GitHub] [flink] dianfu closed pull request #20742: [FLINK-29156]Support LISTAGG in the Table API

2022-10-05 Thread GitBox


dianfu closed pull request #20742: [FLINK-29156]Support LISTAGG in the Table API
URL: https://github.com/apache/flink/pull/20742





[GitHub] [flink] yuzelin commented on a diff in pull request #20931: [FLINK-29486][sql-client] Implement ClientParser for implementing remote SQL client later

2022-10-05 Thread GitBox


yuzelin commented on code in PR #20931:
URL: https://github.com/apache/flink/pull/20931#discussion_r985409681


##
flink-table/flink-sql-client/pom.xml:
##
@@ -511,6 +511,12 @@ under the License.
 			<scope>provided</scope>
 		</dependency>
 
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-sql-parser</artifactId>
+			<version>${project.version}</version>
+		</dependency>

Review Comment:
   > Added new configuration to shade plugin. I pushed a new commit to see if 
it works.
   
   It works.



##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/StatementType.java:
##
@@ -0,0 +1,30 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli.parser;
+
+/** Enumerates the possible types of input statements. */
+public enum StatementType {

Review Comment:
   Added these two types and corresponding tests.
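The exchange above settles on an enum of statement types, with `BEGIN STATEMENT SET`/`END` added on review. A sketch of what such an enum could look like — the names and ordering here are assumptions for illustration, not necessarily what was merged:

```java
/** Possible input statement types, per the discussion in this thread (names are assumptions). */
enum SketchType {
    QUIT,                // quit the client
    CLEAR,               // clear the terminal
    HELP,                // print the help message
    EXPLAIN,             // EXPLAIN PLAN FOR ...
    SHOW_CREATE,         // SHOW CREATE TABLE/VIEW ...
    BEGIN_STATEMENT_SET, // suggested addition: BEGIN STATEMENT SET;
    END,                 // suggested addition: END; (closes a statement set)
    OTHER                // any other statement, forwarded as-is
}
```

Keeping the statement-set markers as distinct types lets the client track whether it is currently buffering statements inside a set.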






[GitHub] [flink] liuzhuang2017 commented on pull request #20940: [hotfix][docs] Fix inaccessible ES links.

2022-10-05 Thread GitBox


liuzhuang2017 commented on PR #20940:
URL: https://github.com/apache/flink/pull/20940#issuecomment-1264868739

   @MartijnVisser, sorry to bother you again. Can you help me review this PR? Thank you.





[GitHub] [flink] flinkbot commented on pull request #20940: [hotfix][docs] Fix inaccessible ES links.

2022-10-05 Thread GitBox


flinkbot commented on PR #20940:
URL: https://github.com/apache/flink/pull/20940#issuecomment-1264870593

   
   ## CI report:
   
   * 87dd4d3845e5f1ce4c3f1da7badf4a6036137f2d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] liuzhuang2017 opened a new pull request, #20940: [hotfix][docs] Fix inaccessible ES links.

2022-10-05 Thread GitBox


liuzhuang2017 opened a new pull request, #20940:
URL: https://github.com/apache/flink/pull/20940

   
   ## What is the purpose of the change
   
   - Fix inaccessible ES links.
   
   
   ## Brief change log
   
   - Fix inaccessible ES links.
   
   
   ## Verifying this change
   
   - This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





[GitHub] [flink] flinkbot commented on pull request #20939: [hotfix][docs]Fixed some typos of docs.

2022-10-05 Thread GitBox


flinkbot commented on PR #20939:
URL: https://github.com/apache/flink/pull/20939#issuecomment-1264814652

   
   ## CI report:
   
   * b5f403ceb76893a5e477159d87a11bbf56f6a04b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] liuzhuang2017 opened a new pull request, #20939: [hotfix][docs]Fixed some typos of docs.

2022-10-05 Thread GitBox


liuzhuang2017 opened a new pull request, #20939:
URL: https://github.com/apache/flink/pull/20939

   
   ## What is the purpose of the change
   
   
   - Fixed some typos of docs.
   
   ## Brief change log
   
   
   - Fixed some typos of docs.
   
   ## Verifying this change
   
   - This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





[GitHub] [flink] liuzhuang2017 commented on pull request #20939: [hotfix][docs]Fixed some typos of docs.

2022-10-05 Thread GitBox


liuzhuang2017 commented on PR #20939:
URL: https://github.com/apache/flink/pull/20939#issuecomment-1264811207

   @MartijnVisser, sorry to bother you again. Can you help me review this PR? Thank you.





[GitHub] [flink] liuzhuang2017 commented on pull request #20938: [hotfix][table] Add the `dayofweek` function description.

2022-10-05 Thread GitBox


liuzhuang2017 commented on PR #20938:
URL: https://github.com/apache/flink/pull/20938#issuecomment-1264770996

   @wuchong, can you help me review this PR? Thank you very much.





[GitHub] [flink] yuzelin commented on a diff in pull request #20931: [FLINK-29486][sql-client] Implement ClientParser for implementing remote SQL client later

2022-10-05 Thread GitBox


yuzelin commented on code in PR #20931:
URL: https://github.com/apache/flink/pull/20931#discussion_r985271871


##
flink-table/flink-sql-client/pom.xml:
##
@@ -511,6 +511,12 @@ under the License.
 			<scope>provided</scope>
 		</dependency>
 
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-sql-parser</artifactId>
+			<version>${project.version}</version>
+		</dependency>

Review Comment:
   Added configuration to the shade plugin. I pushed this commit to see if it works.



##
flink-table/flink-sql-client/pom.xml:
##
@@ -511,6 +511,12 @@ under the License.
 			<scope>provided</scope>
 		</dependency>
 
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-sql-parser</artifactId>
+			<version>${project.version}</version>
+		</dependency>

Review Comment:
   Added new configuration to shade plugin. I pushed a new commit to see if it 
works.



##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/ClientParser.java:
##
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli.parser;
+
+import org.apache.flink.sql.parser.impl.FlinkSqlParserImplTokenManager;
+import org.apache.flink.sql.parser.impl.SimpleCharStream;
+import org.apache.flink.sql.parser.impl.Token;
+import org.apache.flink.table.client.gateway.SqlExecutionException;
+import org.apache.flink.table.operations.Operation;
+
+import javax.annotation.Nonnull;
+
+import java.io.StringReader;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+
+/** ClientParser uses {@link FlinkSqlParserImplTokenManager} to do lexical analysis. */

Review Comment:
   Added.



##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/ClientParser.java:
##
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli.parser;
+
+import org.apache.flink.sql.parser.impl.FlinkSqlParserImplTokenManager;
+import org.apache.flink.sql.parser.impl.SimpleCharStream;
+import org.apache.flink.sql.parser.impl.Token;
+import org.apache.flink.table.client.gateway.SqlExecutionException;
+import org.apache.flink.table.operations.Operation;
+
+import javax.annotation.Nonnull;
+
+import java.io.StringReader;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+
+/** ClientParser uses {@link FlinkSqlParserImplTokenManager} to do lexical analysis. */
+public class ClientParser implements SqlCommandParser {
+
+    /** A dumb implementation. TODO: remove this after unifying the SqlMultiLineParser. */
+    @Override
+    public Optional<Operation> parseCommand(String command) {
+        return Optional.empty();
+    }
+
+    public Optional<StatementType> parseStatement(@Nonnull String statement)
+            throws SqlExecutionException {
+        String trimmedStatement = statement.trim();
+        FlinkSqlParserImplTokenManager tokenManager =
+                new FlinkSqlParserImplTokenManager(
+                        new SimpleCharStream(new StringReader(trimmedStatement)));

Review Comment:
   Did.



##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/ClientParser.java:
##
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * 

[GitHub] [flink] fsk119 commented on a diff in pull request #20931: [FLINK-29486][sql-client] Implement ClientParser for implementing remote SQL client later

2022-10-05 Thread GitBox


fsk119 commented on code in PR #20931:
URL: https://github.com/apache/flink/pull/20931#discussion_r985252646


##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/StatementType.java:
##
@@ -0,0 +1,30 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli.parser;
+
+/** Enumerates the possible types of input statements. */
+public enum StatementType {

Review Comment:
   How about BEGIN STATEMENT SET/END?



##
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/parser/ClientParserTest.java:
##
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.cli.parser;
+
+import org.apache.flink.api.java.tuple.Tuple2;
+
+import org.junit.Ignore;
+import org.junit.jupiter.params.ParameterizedTest;
+import org.junit.jupiter.params.provider.MethodSource;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.Optional;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Testing whether {@link ClientParser} can parse a statement to get {@link StatementType} correctly.
+ */
+@SuppressWarnings("OptionalUsedAsFieldOrParameterType")
+public class ClientParserTest {
+
+private final ClientParser clientParser = new ClientParser();
+
+    private static final Optional<StatementType> QUIT = Optional.of(StatementType.QUIT);
+    private static final Optional<StatementType> CLEAR = Optional.of(StatementType.CLEAR);
+    private static final Optional<StatementType> HELP = Optional.of(StatementType.HELP);
+    private static final Optional<StatementType> EXPLAIN = Optional.of(StatementType.EXPLAIN);
+    private static final Optional<StatementType> SHOW_CREATE =
+            Optional.of(StatementType.SHOW_CREATE);
+    private static final Optional<StatementType> OTHER = Optional.of(StatementType.OTHER);
+    private static final Optional<StatementType> EMPTY = Optional.empty();
+
+@Ignore
+@ParameterizedTest
+@MethodSource("generateTestData")
+    public void testParseStatement(Tuple2<String, Optional<StatementType>> testData) {
+        Optional<StatementType> type = clientParser.parseStatement(testData.f0);
+        assertThat(type).isEqualTo(testData.f1);
+    }
+
+    private static List<Tuple2<String, Optional<StatementType>>> generateTestData() {
+return Arrays.asList(
+Tuple2.of("quit;", QUIT),
+Tuple2.of("quit", QUIT),
+Tuple2.of("QUIT", QUIT),
+Tuple2.of("Quit", QUIT),
+Tuple2.of("QuIt", QUIT),
+Tuple2.of("clear;", CLEAR),
+Tuple2.of("help;", HELP),
+Tuple2.of("EXPLAIN PLAN FOR what_ever", EXPLAIN),
+Tuple2.of("SHOW CREATE TABLE(what_ever);", SHOW_CREATE),
+Tuple2.of("SHOW CREATE VIEW (what_ever)", SHOW_CREATE),
+Tuple2.of("SHOW CREATE syntax_error;", OTHER),
+Tuple2.of("--SHOW CREATE TABLE ignore_comment", EMPTY),

Review Comment:
   Test `SHOW TABLES -- comment ;` and multi-line cases:
   
   ```
   SHOW\n
   create\t TABLE `tbl`;
   ```
   Take a look at the Presto test cases in `TestStatementSplitter`.
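The multi-line case suggested in this comment is exactly why a whitespace-tolerant check is needed. A minimal regex-based illustration of the idea — not the token-based approach the PR uses, and the class name is hypothetical:

```java
import java.util.regex.Pattern;

class ShowCreateMatcher {
    // Matches SHOW followed by CREATE across any whitespace (newlines, tabs), case-insensitively.
    private static final Pattern SHOW_CREATE =
            Pattern.compile("^\\s*SHOW\\s+CREATE\\b.*", Pattern.CASE_INSENSITIVE | Pattern.DOTALL);

    static boolean isShowCreate(String statement) {
        return SHOW_CREATE.matcher(statement).matches();
    }

    public static void main(String[] args) {
        System.out.println(isShowCreate("SHOW\ncreate\t TABLE `tbl`;")); // true
        System.out.println(isShowCreate("SHOW TABLES;"));                // false
    }
}
```

A lexer sees `SHOW` and `create` as adjacent keyword tokens regardless of the whitespace between them, which is what the regex approximates here; it does not, however, skip SQL comments the way a real lexer does.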



##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/parser/ClientParser.java:
##
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright 

[GitHub] [flink] flinkbot commented on pull request #20938: [hotfix][table] Add the `dayofweek` function description.

2022-10-05 Thread GitBox


flinkbot commented on PR #20938:
URL: https://github.com/apache/flink/pull/20938#issuecomment-1264670040

   
   ## CI report:
   
   * 1415776206501369c0bf1ca29fc2d430aabb916f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] liuzhuang2017 opened a new pull request, #20938: [hotfix][table] Add the `dayofweek` function description.

2022-10-05 Thread GitBox


liuzhuang2017 opened a new pull request, #20938:
URL: https://github.com/apache/flink/pull/20938

   
   ## What is the purpose of the change
   
   
   - Add the `dayofweek` function description.
   
   ## Brief change log
   
   - Add the `dayofweek` function description.
   
   
   ## Verifying this change
   
   - This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (`no`)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (`no`)
 - The serializers: (`no`)
 - The runtime per-record code paths (performance sensitive): (`no`)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (`no`)
 - The S3 file system connector: (`no`)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (`no`)
 - If yes, how is the feature documented? (`not applicable`)
   





[jira] [Created] (FLINK-29513) Update Kafka version to 3.2.3

2022-10-05 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-29513:
--

 Summary: Update Kafka version to 3.2.3
 Key: FLINK-29513
 URL: https://issues.apache.org/jira/browse/FLINK-29513
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / Kafka
Reporter: Martijn Visser
Assignee: Martijn Visser


Kafka 3.2.3 contains security fixes (see https://downloads.apache.org/kafka/3.2.3/RELEASE_NOTES.html). We should upgrade the dependency in Flink.
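The bump itself is typically a one-line change in the connector's build file. A hedged sketch of what that could look like — the `kafka.version` property name is an assumption about the `pom.xml`, not taken from this issue:

```xml
<properties>
    <!-- Pick up the security fixes listed in the Kafka 3.2.3 release notes. -->
    <kafka.version>3.2.3</kafka.version>
</properties>
```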



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25916) Using upsert-kafka with a flush buffer results in Null Pointer Exception

2022-10-05 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17613117#comment-17613117
 ] 

Martijn Visser commented on FLINK-25916:


[~jjimenezMM] [~mhv] That's because no maintainer has yet had the opportunity to review it.

> Using upsert-kafka with a flush buffer results in Null Pointer Exception
> 
>
> Key: FLINK-25916
> URL: https://issues.apache.org/jira/browse/FLINK-25916
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Runtime
>Affects Versions: 1.14.3, 1.15.0
> Environment: CentOS 7.9 x64
> Intel Xeon Gold 6140 CPU
>Reporter: Corey Shaw
>Priority: Critical
>  Labels: pull-request-available
>
> Flink Version: 1.14.3
> upsert-kafka version: 1.14.3
>  
> I have been trying to buffer output from the upsert-kafka connector using the 
> documented parameters {{sink.buffer-flush.max-rows}} and 
> {{sink.buffer-flush.interval}}
> Whenever I attempt to run an INSERT query with buffering, I receive the 
> following error (shortened for brevity):
> {code:java}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.flink.streaming.connectors.kafka.table.ReducingUpsertWriter.flush(ReducingUpsertWriter.java:145)
>  
> at 
> org.apache.flink.streaming.connectors.kafka.table.ReducingUpsertWriter.lambda$registerFlush$3(ReducingUpsertWriter.java:124)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invokeProcessingTimeCallback(StreamTask.java:1693)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$null$22(StreamTask.java:1684)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) 
> at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsWhenDefaultActionUnavailable(MailboxProcessor.java:338)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:324)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:201)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)
>  
> at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
>  
> at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937) 
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766) 
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575) 
> at java.lang.Thread.run(Thread.java:829) [?:?] {code}
>  
> If I remove the parameters related to flush buffering, then everything works 
> as expected with no problems at all.  For reference, here is the full setup 
> with source, destination, and queries.  Yes, I realize the INSERT could use 
> an overhaul, but that's not the issue at hand :).
> {code:java}
> CREATE TABLE `source_topic` (
>     `timeGMT` INT,
>     `eventtime` AS TO_TIMESTAMP(FROM_UNIXTIME(`timeGMT`)),
>     `visIdHigh` BIGINT,
>     `visIdLow` BIGINT,
>     `visIdStr` AS CONCAT(IF(`visIdHigh` IS NULL, '', CAST(`visIdHigh` AS 
> STRING)), IF(`visIdLow` IS NULL, '', CAST(`visIdLow` AS STRING))),
>     WATERMARK FOR eventtime AS eventtime - INTERVAL '25' SECONDS
> ) WITH (
>     'connector' = 'kafka',
>     'properties.group.id' = 'flink_metrics',
>     'properties.bootstrap.servers' = 'brokers.example.com:9093',
>     'topic' = 'source_topic',
>     'scan.startup.mode' = 'earliest-offset',
>     'value.format' = 'avro-confluent',
>     'value.avro-confluent.url' = 'http://schema.example.com',
>     'value.fields-include' = 'EXCEPT_KEY'
> );
>  CREATE TABLE dest_topic (
> `messageType` VARCHAR,
> `observationID` BIGINT,
> `obsYear` BIGINT,
> `obsMonth` BIGINT,
> `obsDay` BIGINT,
> `obsHour` BIGINT,
> `obsMinute` BIGINT,
> `obsTz` VARCHAR(5),
> `value` BIGINT,
> PRIMARY KEY (observationID, messageType) NOT ENFORCED
> ) WITH (
> 'connector' = 'upsert-kafka',
> 'key.format' = 'json',
> 'properties.bootstrap.servers' = 'brokers.example.com:9092',
> 'sink.buffer-flush.max-rows' = '5',
> 'sink.buffer-flush.interval' = '1000',
> 'topic' = 'dest_topic ',
> 'value.format' = 'json'
> );
> INSERT INTO adobenow_metrics
>     SELECT `messageType`, `observationID`, obsYear, obsMonth, obsDay, 
> obsHour, obsMinute, obsTz, SUM(`value`) AS `value` FROM (
>         SELECT `messageType`, 

[jira] [Commented] (FLINK-29501) Allow overriding JobVertex parallelisms during job submission

2022-10-05 Thread Maximilian Michels (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17613080#comment-17613080
 ] 

Maximilian Michels commented on FLINK-29501:


{quote}I don't really follow. Will you suspend the job, and restart it from 
another JM with a different configuration?
Or is this something meant to be specific to the YARN per-job mode (which loads 
the jobgraph from a file)?
{quote}
Yes, redeploy means deploying a new cluster and re-running the job submissions 
with a different base configuration which includes the parallelism changes in 
the JobGraph.

> Allow overriding JobVertex parallelisms during job submission
> -
>
> Key: FLINK-29501
> URL: https://issues.apache.org/jira/browse/FLINK-29501
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Configuration, Runtime / REST
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
>
> It is a common scenario that users want to make changes to the parallelisms 
> in the JobGraph. For example, because they discover that the job needs more 
> or less resources. There is the option to do this globally via the job 
> parallelism. However, for fine-tuned jobs with potentially many 
> branches, tuning on the job vertex level is required.
> This is to propose a way such that users can apply a mapping \{{jobVertexId 
> => parallelism}} before the job is submitted without having to modify the 
> JobGraph manually.
> One way to achieve this would be to add an optional map field to the REST 
> API jobs endpoint. However, in deployment modes like the application mode, 
> this might not make sense because users do not have control over the rest endpoint.
> Similarly to how other job parameters are passed in the application mode, we 
> propose to add the overrides as a configuration parameter.
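The proposed \{{jobVertexId => parallelism}} mapping could look roughly like the
following. This is a minimal illustrative sketch only; the class and method names
("ParallelismOverrides", "applyOverrides") are hypothetical and stand in for
whatever mechanism would actually rewrite the JobGraph before submission.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: apply per-vertex parallelism overrides before job
// submission. "planned" stands in for the parallelisms compiled into the
// JobGraph; the real Flink classes (JobGraph/JobVertex) are not used here.
public class ParallelismOverrides {

    static Map<String, Integer> applyOverrides(
            Map<String, Integer> planned, Map<String, Integer> overrides) {
        Map<String, Integer> result = new HashMap<>(planned);
        // Only vertices listed in the override map are touched; unknown ids
        // and non-positive values are ignored, everything else keeps the
        // parallelism the plan compiler assigned.
        overrides.forEach((vertexId, p) -> {
            if (result.containsKey(vertexId) && p > 0) {
                result.put(vertexId, p);
            }
        });
        return result;
    }

    public static void main(String[] args) {
        Map<String, Integer> planned = new HashMap<>();
        planned.put("source", 2);
        planned.put("window", 2);
        Map<String, Integer> overrides = new HashMap<>();
        overrides.put("window", 8);
        System.out.println(applyOverrides(planned, overrides));
    }
}
```

Passing such a map as a configuration parameter (rather than via the REST endpoint)
would make it usable in application mode as well, as the comment above suggests.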



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-9171) Flink HCatolog integration

2022-10-05 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17613068#comment-17613068
 ] 

Martijn Visser commented on FLINK-9171:
---

I've opened a discussion thread to remove the HCatalog connector 
https://lists.apache.org/thread/j8jc5zrhnqlv8y3lkmc3wdo9ysgmsr84

> Flink HCatolog integration 
> ---
>
> Key: FLINK-9171
> URL: https://issues.apache.org/jira/browse/FLINK-9171
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Hive
>Reporter: Shuyi Chen
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> This is a parent task for all HCatalog related integration in Flink.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-26622) Azure failed due to Connection timed out

2022-10-05 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-26622.
--
Resolution: Cannot Reproduce

> Azure failed due to Connection timed out 
> -
>
> Key: FLINK-26622
> URL: https://issues.apache.org/jira/browse/FLINK-26622
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.14.4
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-major, build-stability
>
> {code:java}
> [ERROR] Failed to execute goal on project flink-connector-hive_2.12: Could 
> not resolve dependencies for project 
> org.apache.flink:flink-connector-hive_2.12:jar:1.14-SNAPSHOT: Failed to 
> collect dependencies at org.apache.hive.hcatalog:hive-hcatalog-core:jar:1.2.1 
> -> org.apache.hive:hive-cli:jar:1.2.1: Failed to read artifact descriptor for 
> org.apache.hive:hive-cli:jar:1.2.1: Could not transfer artifact 
> org.apache.hive:hive-cli:pom:1.2.1 from/to google-maven-central 
> (https://maven-central-eu.storage-download.googleapis.com/maven2/): Connect 
> to maven-central-eu.storage-download.googleapis.com:443 
> [maven-central-eu.storage-download.googleapis.com/74.125.193.128] failed: 
> Connection timed out (Connection timed out) -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR] 
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :flink-connector-hive_2.12
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32959=logs=08866332-78f7-59e4-4f7e-49a56faa3179=7f606211-1454-543c-70ab-c7a028a1ce8c=10112



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29511) Sort properties/schemas in OpenAPI spec

2022-10-05 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-29511:
-
Description: 
The properties/schema order is currently based on whatever order they were 
looked up, which varies as the spec is being extended.
Sort them by name to prevent this.

  was:
The properties order is currently based on whatever order properties were 
looked up, which varies as the spec is being extended.
Sort the properties by name to prevent this.


> Sort properties/schemas in OpenAPI spec
> ---
>
> Key: FLINK-29511
> URL: https://issues.apache.org/jira/browse/FLINK-29511
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Documentation, Runtime / REST
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.17.0
>
>
> The properties/schema order is currently based on whatever order they were 
> looked up, which varies as the spec is being extended.
> Sort them by name to prevent this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29511) Sort properties/schemas in OpenAPI spec

2022-10-05 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-29511:
-
Summary: Sort properties/schemas in OpenAPI spec  (was: Sort properties in 
OpenAPI spec)

> Sort properties/schemas in OpenAPI spec
> ---
>
> Key: FLINK-29511
> URL: https://issues.apache.org/jira/browse/FLINK-29511
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Documentation, Runtime / REST
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.17.0
>
>
> The properties order is currently based on whatever order properties were 
> looked up, which varies as the spec is being extended.
> Sort the properties by name to prevent this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29512) Align SubtaskCommittableManager checkpointId with CheckpointCommittableManagerImpl checkpointId during recovery

2022-10-05 Thread Fabian Paul (Jira)
Fabian Paul created FLINK-29512:
---

 Summary: Align SubtaskCommittableManager checkpointId with 
CheckpointCommittableManagerImpl checkpointId during recovery
 Key: FLINK-29512
 URL: https://issues.apache.org/jira/browse/FLINK-29512
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Common
Affects Versions: 1.15.2, 1.17.0, 1.16.1
Reporter: Fabian Paul


Similar to the issue described in 
https://issues.apache.org/jira/browse/FLINK-29509, during the recovery of 
committables the subtaskCommittables checkpointId is always set to 1 
[https://github.com/apache/flink/blob/9152af41c2d401e5eacddee1bb10d1b6bea6c61a/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/committables/CommittableCollectorSerializer.java#L193]
 while the holding CheckpointCommittableManager is initialized with the 
checkpointId that is written into state 
[https://github.com/apache/flink/blob/9152af41c2d401e5eacddee1bb10d1b6bea6c61a/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/committables/CommittableCollectorSerializer.java#L155].

As a result, during recovery the post-commit topology will receive a 
committable summary with the recovered checkpoint id and multiple 
`CommittableWithLineage`s with the reset checkpointId, causing orphaned 
`CommittableWithLineage`s without a matching `CommittableSummary` and failing 
the job.
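The shape of the fix can be sketched as follows. This is a hypothetical
illustration only; the class names ("CheckpointManager", "SubtaskManager",
"recoverSubtask") are illustrative and are not Flink's actual serializer or
committable classes — the point is simply to thread the checkpoint id read from
state down into each recovered subtask manager instead of hard-coding 1.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: align the per-subtask checkpoint id with the id of
// the holding checkpoint-level manager during recovery.
public class CheckpointIdAlignment {

    static class SubtaskManager {
        final long checkpointId;
        SubtaskManager(long checkpointId) { this.checkpointId = checkpointId; }
    }

    static class CheckpointManager {
        final long checkpointId; // id read from the serialized state
        final List<SubtaskManager> subtasks = new ArrayList<>();
        CheckpointManager(long checkpointId) { this.checkpointId = checkpointId; }

        // Before the fix the recovered subtask manager would receive a
        // constant 1 here, diverging from the holding manager's id.
        SubtaskManager recoverSubtask() {
            SubtaskManager s = new SubtaskManager(checkpointId);
            subtasks.add(s);
            return s;
        }
    }

    public static void main(String[] args) {
        CheckpointManager cm = new CheckpointManager(42L);
        System.out.println(cm.recoverSubtask().checkpointId); // prints 42
    }
}
```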



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29511) Sort properties in OpenAPI spec

2022-10-05 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-29511:
-
Description: 
The properties order is currently based on whatever order properties were 
looked up, which varies as the spec is being extended.
Sort the properties by name to prevent this.

  was:
The properties order is currently based on whatever order a HashMap provides, 
which varies as the spec is being extended.
Sort the properties by name to prevent this.


> Sort properties in OpenAPI spec
> ---
>
> Key: FLINK-29511
> URL: https://issues.apache.org/jira/browse/FLINK-29511
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Documentation, Runtime / REST
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.17.0
>
>
> The properties order is currently based on whatever order properties were 
> looked up, which varies as the spec is being extended.
> Sort the properties by name to prevent this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29511) Sort properties in OpenAPI spec

2022-10-05 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-29511:


 Summary: Sort properties in OpenAPI spec
 Key: FLINK-29511
 URL: https://issues.apache.org/jira/browse/FLINK-29511
 Project: Flink
  Issue Type: Technical Debt
  Components: Documentation, Runtime / REST
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.17.0


The properties order is currently based on whatever order a HashMap provides, 
which varies as the spec is being extended.
Sort the properties by name to prevent this.
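The sorting idea can be sketched with a plain TreeMap. This is a minimal sketch
and not the actual spec-generator code; "sortProperties" and the sample property
names are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

// Minimal sketch: emit schema properties in name order so the serialized
// OpenAPI spec stays stable regardless of the order properties were looked up.
public class SortedSpecProperties {

    static Map<String, String> sortProperties(Map<String, String> discovered) {
        // TreeMap orders keys lexicographically, removing the dependence on
        // lookup order as the spec is extended.
        return new TreeMap<>(discovered);
    }

    public static void main(String[] args) {
        Map<String, String> discovered = new LinkedHashMap<>();
        discovered.put("status", "string");
        discovered.put("jid", "string");
        discovered.put("name", "string");
        System.out.println(sortProperties(discovered).keySet()); // prints [jid, name, status]
    }
}
```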



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29456) Add methods that accept OperatorIdentifier

2022-10-05 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-29456.

Resolution: Fixed

master: 8d72490377551a35851a3319c0f49b408d31a566

> Add methods that accept OperatorIdentifier
> --
>
> Key: FLINK-29456
> URL: https://issues.apache.org/jira/browse/FLINK-29456
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / State Processor
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.17.0
>
>
> Add new variants of all methods in the SavepointReader/-Writer that accept an 
> OperatorIdentifier.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25916) Using upsert-kafka with a flush buffer results in Null Pointer Exception

2022-10-05 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-25916:
---
Priority: Critical  (was: Major)

> Using upsert-kafka with a flush buffer results in Null Pointer Exception
> 
>
> Key: FLINK-25916
> URL: https://issues.apache.org/jira/browse/FLINK-25916
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Runtime
>Affects Versions: 1.14.3, 1.15.0
> Environment: CentOS 7.9 x64
> Intel Xeon Gold 6140 CPU
>Reporter: Corey Shaw
>Priority: Critical
>  Labels: pull-request-available
>
> Flink Version: 1.14.3
> upsert-kafka version: 1.14.3
>  
> I have been trying to buffer output from the upsert-kafka connector using the 
> documented parameters {{sink.buffer-flush.max-rows}} and 
> {{sink.buffer-flush.interval}}.
> Whenever I attempt to run an INSERT query with buffering, I receive the 
> following error (shortened for brevity):
> {code:java}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.flink.streaming.connectors.kafka.table.ReducingUpsertWriter.flush(ReducingUpsertWriter.java:145)
>  
> at 
> org.apache.flink.streaming.connectors.kafka.table.ReducingUpsertWriter.lambda$registerFlush$3(ReducingUpsertWriter.java:124)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invokeProcessingTimeCallback(StreamTask.java:1693)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$null$22(StreamTask.java:1684)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) 
> at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsWhenDefaultActionUnavailable(MailboxProcessor.java:338)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:324)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:201)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)
>  
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)
>  
> at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
>  
> at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937) 
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766) 
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575) 
> at java.lang.Thread.run(Thread.java:829) [?:?] {code}
>  
> If I remove the parameters related to flush buffering, then everything works 
> as expected with no problems at all.  For reference, here is the full setup 
> with source, destination, and queries.  Yes, I realize the INSERT could use 
> an overhaul, but that's not the issue at hand :).
> {code:java}
> CREATE TABLE `source_topic` (
>     `timeGMT` INT,
>     `eventtime` AS TO_TIMESTAMP(FROM_UNIXTIME(`timeGMT`)),
>     `visIdHigh` BIGINT,
>     `visIdLow` BIGINT,
>     `visIdStr` AS CONCAT(IF(`visIdHigh` IS NULL, '', CAST(`visIdHigh` AS 
> STRING)), IF(`visIdLow` IS NULL, '', CAST(`visIdLow` AS STRING))),
>     WATERMARK FOR eventtime AS eventtime - INTERVAL '25' SECONDS
> ) WITH (
>     'connector' = 'kafka',
>     'properties.group.id' = 'flink_metrics',
>     'properties.bootstrap.servers' = 'brokers.example.com:9093',
>     'topic' = 'source_topic',
>     'scan.startup.mode' = 'earliest-offset',
>     'value.format' = 'avro-confluent',
>     'value.avro-confluent.url' = 'http://schema.example.com',
>     'value.fields-include' = 'EXCEPT_KEY'
> );
>  CREATE TABLE dest_topic (
> `messageType` VARCHAR,
> `observationID` BIGINT,
> `obsYear` BIGINT,
> `obsMonth` BIGINT,
> `obsDay` BIGINT,
> `obsHour` BIGINT,
> `obsMinute` BIGINT,
> `obsTz` VARCHAR(5),
> `value` BIGINT,
> PRIMARY KEY (observationID, messageType) NOT ENFORCED
> ) WITH (
> 'connector' = 'upsert-kafka',
> 'key.format' = 'json',
> 'properties.bootstrap.servers' = 'brokers.example.com:9092',
> 'sink.buffer-flush.max-rows' = '5',
> 'sink.buffer-flush.interval' = '1000',
> 'topic' = 'dest_topic ',
> 'value.format' = 'json'
> );
> INSERT INTO adobenow_metrics
>     SELECT `messageType`, `observationID`, obsYear, obsMonth, obsDay, 
> obsHour, obsMinute, obsTz, SUM(`value`) AS `value` FROM (
>         SELECT `messageType`, `observationID`, obsYear, obsMonth, obsDay, 
> obsHour, obsMinute, '-' AS obsTz, 1 AS `value`, `visIdStr` 

[jira] [Closed] (FLINK-29499) DispatcherOperationCaches should implement AutoCloseableAsync

2022-10-05 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-29499.

Resolution: Fixed

master: 3730e24cc4283f877ac35b189dff355579d1de68

> DispatcherOperationCaches should implement AutoCloseableAsync
> -
>
> Key: FLINK-29499
> URL: https://issues.apache.org/jira/browse/FLINK-29499
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Runtime / REST
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.17.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-26469) Adaptive job shows error in WebUI when not enough resource are available

2022-10-05 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-26469.

Fix Version/s: 1.17.0
   1.15.3
   1.16.1
   Resolution: Fixed

Fixed in:
* master
** 9600a1858bf608a40a0b4c108b70c230e890ccc3
* 1.16
** 264afe134e70fb4d93032ba818b0fe05e964a03b
* 1.15
** 507913052b3a02d64d6f816e3f87cb059ef52990

> Adaptive job shows error in WebUI when not enough resource are available
> 
>
> Key: FLINK-26469
> URL: https://issues.apache.org/jira/browse/FLINK-26469
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Niklas Semmler
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.17.0, 1.15.3, 1.16.1
>
>
> When there are no resources and the job is in the CREATED state, the job page 
> shows the error: "Job failed during initialization of JobManager". 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29510) Add NoticeFileChecker tests

2022-10-05 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-29510:


 Summary: Add NoticeFileChecker tests
 Key: FLINK-29510
 URL: https://issues.apache.org/jira/browse/FLINK-29510
 Project: Flink
  Issue Type: Technical Debt
  Components: Build System, Tests
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.17.0


The NoticeFileChecker is too important to not be covered by tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29459) Sink v2 has bugs in supporting legacy v1 implementations with global committer

2022-10-05 Thread Fabian Paul (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17612982#comment-17612982
 ] 

Fabian Paul commented on FLINK-29459:
-

It also looks like the first and second points are the same problem, aren't they?

> Sink v2 has bugs in supporting legacy v1 implementations with global committer
> --
>
> Key: FLINK-29459
> URL: https://issues.apache.org/jira/browse/FLINK-29459
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.16.0, 1.17.0, 1.15.2
>Reporter: Yun Gao
>Assignee: Yun Gao
>Priority: Major
> Fix For: 1.17.0, 1.15.3, 1.16.1
>
>
> Currently when supporting Sink implementation using version 1 interface, 
> there are issues after restoring from a checkpoint after failover:
>  # In the global committer operator, when restoring the 
> SubtaskCommittableManager, the subtask id is replaced with the one of the 
> current operator. The id is originally the id of the sender task (0 ~ N - 1), 
> but after restoring it becomes 0. This can cause a duplicate key exception 
> during restoring.
>  # For the Committer operator, the subtaskId of 
> CheckpointCommittableManagerImpl is always restored to 0 after failover for 
> all subtasks. This causes the summary sent to the Global Committer to carry 
> the wrong subtask id.
>  # For the Committer operator, the checkpoint id of SubtaskCommittableManager 
> is always restored to 1 after failover, which causes the following 
> committables sent to the global committer to carry the wrong checkpoint id. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29459) Sink v2 has bugs in supporting legacy v1 implementations with global committer

2022-10-05 Thread Fabian Paul (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17612980#comment-17612980
 ] 

Fabian Paul commented on FLINK-29459:
-

[~gaoyunhaii] thanks for your analysis. I am currently looking into the issues, 
and I think it is a good idea to split the different problems into different 
tickets. 

I already created https://issues.apache.org/jira/browse/FLINK-29509 to fix the 
subtask id problem during recovery.

Let me know if you have already started with that.

> Sink v2 has bugs in supporting legacy v1 implementations with global committer
> --
>
> Key: FLINK-29459
> URL: https://issues.apache.org/jira/browse/FLINK-29459
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.16.0, 1.17.0, 1.15.2
>Reporter: Yun Gao
>Assignee: Yun Gao
>Priority: Major
> Fix For: 1.17.0, 1.15.3, 1.16.1
>
>
> Currently when supporting Sink implementation using version 1 interface, 
> there are issues after restoring from a checkpoint after failover:
>  # In the global committer operator, when restoring the 
> SubtaskCommittableManager, the subtask id is replaced with the one of the 
> current operator. The id is originally the id of the sender task (0 ~ N - 1), 
> but after restoring it becomes 0. This can cause a duplicate key exception 
> during restoring.
>  # For the Committer operator, the subtaskId of 
> CheckpointCommittableManagerImpl is always restored to 0 after failover for 
> all subtasks. This causes the summary sent to the Global Committer to carry 
> the wrong subtask id.
>  # For the Committer operator, the checkpoint id of SubtaskCommittableManager 
> is always restored to 1 after failover, which causes the following 
> committables sent to the global committer to carry the wrong checkpoint id. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-27938) [JUnit5 Migration] Module: flink-connector-hbase-base

2022-10-05 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-27938.
---
Fix Version/s: 1.17.0
   Resolution: Fixed

master: adc476a0df2355226df19c37c0834ee0d9c19023

> [JUnit5 Migration] Module: flink-connector-hbase-base
> -
>
> Key: FLINK-27938
> URL: https://issues.apache.org/jira/browse/FLINK-27938
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / HBase
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Assignee: Sergey Nuyanzin
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-27938) [JUnit5 Migration] Module: flink-connector-hbase-base

2022-10-05 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-27938:
-

Assignee: Sergey Nuyanzin

> [JUnit5 Migration] Module: flink-connector-hbase-base
> -
>
> Key: FLINK-27938
> URL: https://issues.apache.org/jira/browse/FLINK-27938
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / HBase
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Assignee: Sergey Nuyanzin
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29509) Set correct subtaskId during recovery of committables

2022-10-05 Thread Fabian Paul (Jira)
Fabian Paul created FLINK-29509:
---

 Summary: Set correct subtaskId during recovery of committables
 Key: FLINK-29509
 URL: https://issues.apache.org/jira/browse/FLINK-29509
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Common
Affects Versions: 1.15.2, 1.17.0, 1.16.1
Reporter: Fabian Paul


When we recover the `CheckpointCommittableManager` we ignore the subtaskId it 
is recovered on. 
[https://github.com/apache/flink/blob/d191bda7e63a2c12416cba56090e5cd75426079b/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/sink/committables/CheckpointCommittableManagerImpl.java#L58]

This becomes a problem when a sink uses a post-commit topology because multiple 
committer operators might forward committable summaries coming from the same 
subtaskId.

 

It should be possible to use the subtaskId already present in the 
`CommittableCollector` when creating the `CheckpointCommittableManager`s.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-27942) [JUnit5 Migration] Module: flink-connector-rabbitmq

2022-10-05 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-27942:
--
Fix Version/s: 1.17.0

> [JUnit5 Migration] Module: flink-connector-rabbitmq
> ---
>
> Key: FLINK-27942
> URL: https://issues.apache.org/jira/browse/FLINK-27942
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / RabbitMQ
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Assignee: Sergey Nuyanzin
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-27942) [JUnit5 Migration] Module: flink-connector-rabbitmq

2022-10-05 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-27942.
---
Resolution: Fixed

master: d191bda7e63a2c12416cba56090e5cd75426079b

> [JUnit5 Migration] Module: flink-connector-rabbitmq
> ---
>
> Key: FLINK-27942
> URL: https://issues.apache.org/jira/browse/FLINK-27942
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / RabbitMQ
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Assignee: Sergey Nuyanzin
>Priority: Minor
>  Labels: pull-request-available, stale-assigned
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29419) HybridShuffle.testHybridFullExchangesRestart hangs

2022-10-05 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17612972#comment-17612972
 ] 

Matthias Pohl commented on FLINK-29419:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41564=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba=9625

> HybridShuffle.testHybridFullExchangesRestart hangs
> --
>
> Key: FLINK-29419
> URL: https://issues.apache.org/jira/browse/FLINK-29419
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: Weijie Guo
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> 2022-09-26T10:56:44.0766792Z Sep 26 10:56:44 "ForkJoinPool-1-worker-25" #27 
> daemon prio=5 os_prio=0 tid=0x7f41a4efa000 nid=0x6d76 waiting on 
> condition [0x7f40ac135000]
> 2022-09-26T10:56:44.0767432Z Sep 26 10:56:44java.lang.Thread.State: 
> WAITING (parking)
> 2022-09-26T10:56:44.0767892Z Sep 26 10:56:44  at sun.misc.Unsafe.park(Native 
> Method)
> 2022-09-26T10:56:44.0768644Z Sep 26 10:56:44  - parking to wait for  
> <0xa0704e18> (a java.util.concurrent.CompletableFuture$Signaller)
> 2022-09-26T10:56:44.0769287Z Sep 26 10:56:44  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2022-09-26T10:56:44.0769949Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> 2022-09-26T10:56:44.0770623Z Sep 26 10:56:44  at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3313)
> 2022-09-26T10:56:44.0771349Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> 2022-09-26T10:56:44.0772092Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2022-09-26T10:56:44.0772777Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:57)
> 2022-09-26T10:56:44.0773534Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.BatchShuffleITCaseBase.executeJob(BatchShuffleITCaseBase.java:115)
> 2022-09-26T10:56:44.0774333Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.HybridShuffleITCase.testHybridFullExchangesRestart(HybridShuffleITCase.java:59)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41343=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29508) Some NOTICE files are not checked for correctness

2022-10-05 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-29508:


 Summary: Some NOTICE files are not checked for correctness
 Key: FLINK-29508
 URL: https://issues.apache.org/jira/browse/FLINK-29508
 Project: Flink
  Issue Type: Technical Debt
  Components: Build System
Affects Versions: 1.16.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.16.0


We have 3 modules that are not being deployed (and thus auto-excluded since 
FLINK-29301) which are still relevant for production though.

We should amend the checker to take into account whether the non-deployed 
module is bundled by another deployed module.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

