[jira] [Updated] (FLINK-30005) Translate "Schema Migration Limitations for State Schema Evolution" into Chinese

2022-11-13 Thread hao wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hao wang updated FLINK-30005:
-
Description: 
Translate the paragraph "Schema Migration Limitations" on the page 
"https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/" 
into Chinese.

This doc is located at 
"flink/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md".

> Translate "Schema Migration Limitations for State Schema Evolution" into 
> Chinese
> 
>
> Key: FLINK-30005
> URL: https://issues.apache.org/jira/browse/FLINK-30005
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Affects Versions: 1.16.0
>Reporter: hao wang
>Priority: Minor
>
> Translate the paragraph "Schema Migration Limitations" on the page 
> "https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/" 
> into Chinese.
> This doc is located at 
> "flink/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md"



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30005) Translate "Schema Migration Limitations for State Schema Evolution" into Chinese

2022-11-13 Thread hao wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hao wang updated FLINK-30005:
-
Description: 
Translate the paragraph "Schema Migration Limitations" on the page 
"https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/" 
into Chinese.

This doc is located at 
"flink/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md".

  was:
Translate paragraph "Schema Migration Limitations" in 
"https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/"
 page into Chinese.

This doc located in 
"flink/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md"


> Translate "Schema Migration Limitations for State Schema Evolution" into 
> Chinese
> 
>
> Key: FLINK-30005
> URL: https://issues.apache.org/jira/browse/FLINK-30005
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Affects Versions: 1.16.0
>Reporter: hao wang
>Priority: Minor
>
> Translate the paragraph "Schema Migration Limitations" on the page 
> "https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/" 
> into Chinese.
> This doc is located at 
> "flink/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md"





[jira] [Updated] (FLINK-30005) Translate "Schema Migration Limitations for State Schema Evolution" into Chinese

2022-11-13 Thread hao wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hao wang updated FLINK-30005:
-
Description: 
Translate the paragraph "Schema Migration Limitations" on the page 
"https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/" 
into Chinese.

This doc is located at 
"flink/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md".

  was:
Translate paragraph "Schema Migration Limitations" in 
"[https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/]"
 page into Chinese.

This doc located in 
"flink/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md"


> Translate "Schema Migration Limitations for State Schema Evolution" into 
> Chinese
> 
>
> Key: FLINK-30005
> URL: https://issues.apache.org/jira/browse/FLINK-30005
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Affects Versions: 1.16.0
>Reporter: hao wang
>Priority: Minor
>
> Translate the paragraph "Schema Migration Limitations" on the page 
> "https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/" 
> into Chinese.
> This doc is located at 
> "flink/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md"





[jira] [Updated] (FLINK-30005) Translate "Schema Migration Limitations for State Schema Evolution" into Chinese

2022-11-13 Thread hao wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hao wang updated FLINK-30005:
-
Description: 
Translate the paragraph "Schema Migration Limitations" on the page 
https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/ 
into Chinese.

This doc is located at 
"flink/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md".

  was:
Translate paragraph "Schema Migration Limitations" in 
"[https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/|https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/]"
 page into Chinese.

This doc located in 
"flink/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md"


> Translate "Schema Migration Limitations for State Schema Evolution" into 
> Chinese
> 
>
> Key: FLINK-30005
> URL: https://issues.apache.org/jira/browse/FLINK-30005
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Affects Versions: 1.16.0
>Reporter: hao wang
>Priority: Minor
>
> Translate the paragraph "Schema Migration Limitations" on the page 
> https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/ 
> into Chinese.
> This doc is located at 
> "flink/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md"





[jira] [Updated] (FLINK-30004) Cannot resume deployment after suspend with savepoint due to leftover configmaps

2022-11-13 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora updated FLINK-30004:
---
Affects Version/s: kubernetes-operator-1.2.0
   (was: 1.2)

> Cannot resume deployment after suspend with savepoint due to leftover 
> configmaps
> ---
>
> Key: FLINK-30004
> URL: https://issues.apache.org/jira/browse/FLINK-30004
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.2.0
>Reporter: Thomas Weise
>Assignee: Thomas Weise
>Priority: Major
>
> Due to the possibility of incomplete cleanup of HA data in Flink 1.14, the 
> deployment can get into a limbo state that requires manual intervention after 
> suspend with savepoint. If the config maps are not cleaned up, the resumed 
> job will be considered finished and the operator will recognize the JM 
> deployment as missing. Because of the check for HA data, which has by then 
> been cleaned up, the job fails to start and a manual redeployment with an 
> initial savepoint is necessary.
> This can be avoided by removing any leftover HA config maps after the job 
> has successfully stopped with savepoint (upgrade mode savepoint).
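As a toy illustration of the cleanup step described above, selecting the leftover HA config maps might look like the following Python sketch. The label keys (`app`, `configmap-type`) mirror the ones Flink's native Kubernetes HA services use, but they and the helper name are assumptions here, not the operator's actual code:

```python
def leftover_ha_configmaps(configmaps, cluster_id):
    # Select HA config maps belonging to this deployment. The label
    # scheme below is an assumption for illustration only.
    return [
        name for name, labels in configmaps.items()
        if labels.get("app") == cluster_id
        and labels.get("configmap-type") == "high-availability"
    ]

# Hypothetical cluster state after "stopped with savepoint":
cms = {
    "myjob-cluster-config-map": {"app": "myjob", "configmap-type": "high-availability"},
    "myjob-00000000-config-map": {"app": "myjob", "configmap-type": "high-availability"},
    "other-cm": {"app": "other"},
}
# These are the leftovers the operator would need to delete:
print(sorted(leftover_ha_configmaps(cms, "myjob")))
```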





[jira] [Created] (FLINK-30007) Document how users can request a Jira account / file a bug

2022-11-13 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-30007:
--

 Summary: Document how users can request a Jira account / file a 
bug 
 Key: FLINK-30007
 URL: https://issues.apache.org/jira/browse/FLINK-30007
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Project Website
Reporter: Martijn Visser
Assignee: Martijn Visser


Follow-up of https://lists.apache.org/thread/y8vx7qr32xny31qq00f1jzpnz4kw8hpg





[jira] [Updated] (FLINK-29907) Externalize AWS connectors from Flink core

2022-11-13 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-29907:
--
Fix Version/s: aws-connector-3.0.0
   (was: aws-connector-2.0.0)

> Externalize AWS connectors from Flink core
> --
>
> Key: FLINK-29907
> URL: https://issues.apache.org/jira/browse/FLINK-29907
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / AWS
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
> Fix For: aws-connector-3.0.0
>
>
> Externalize the following modules from Flink core to the connectors repo:
> - {{flink-connector-aws-base}}
> - {{flink-connector-kinesis}}
> - {{flink-connector-sql-kinesis}}
> - {{flink-connector-aws-kinesis-streams}}
> - {{flink-connector-sql-aws-kinesis-streams}}
> - {{flink-connector-aws-kinesis-firehose}}
> - {{flink-connector-sql-aws-kinesis-firehose}}





[jira] [Assigned] (FLINK-29444) Setup release scripts

2022-11-13 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-29444:
-

Assignee: Danny Cranmer

> Setup release scripts
> -
>
> Key: FLINK-29444
> URL: https://issues.apache.org/jira/browse/FLINK-29444
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / DynamoDB
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
> Fix For: aws-connector-2.0.0
>
>
> See https://issues.apache.org/jira/browse/FLINK-29320





[GitHub] [flink-connector-aws] dannycranmer opened a new pull request, #20: [FLINK-29444][Connectors/AWS] Syncing parent pom to elasticsearch in prep for release

2022-11-13 Thread GitBox


dannycranmer opened a new pull request, #20:
URL: https://github.com/apache/flink-connector-aws/pull/20

   ## What is the purpose of the change
   
   Prep for connector release
   
   ## Brief change log
   
   - Syncing parent pom to elasticsearch in prep for release
   - Adding git submodule for connector release scripts
   
   ## Verifying this change
   
   Build locally
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? n/a
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-29444) Setup release scripts

2022-11-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29444:
---
Labels: pull-request-available  (was: )

> Setup release scripts
> -
>
> Key: FLINK-29444
> URL: https://issues.apache.org/jira/browse/FLINK-29444
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / DynamoDB
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-2.0.0
>
>
> See https://issues.apache.org/jira/browse/FLINK-29320





[jira] [Created] (FLINK-30008) Add Flink 1.16.0 Support

2022-11-13 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-30008:
-

 Summary: Add Flink 1.16.0 Support
 Key: FLINK-30008
 URL: https://issues.apache.org/jira/browse/FLINK-30008
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / AWS
Reporter: Danny Cranmer
 Fix For: aws-connector-2.0.0








[GitHub] [flink-connector-aws] dannycranmer opened a new pull request, #21: [FLINK-30008][Connectors/AWS] Upgrade to Flink 1.16.0 and add CI build

2022-11-13 Thread GitBox


dannycranmer opened a new pull request, #21:
URL: https://github.com/apache/flink-connector-aws/pull/21

   ## What is the purpose of the change
   
   Add support for Flink 1.16
   
   ## Brief change log
   
   * Set default Flink version to 1.16.0
   * Remove snapshot repo
   * Add CI build profile for 1.16.0
   
   ## Verifying this change
   
   Run locally
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): yes
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? n/a
   





[jira] [Updated] (FLINK-30008) Add Flink 1.16.0 Support

2022-11-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-30008:
---
Labels: pull-request-available  (was: )

> Add Flink 1.16.0 Support
> 
>
> Key: FLINK-30008
> URL: https://issues.apache.org/jira/browse/FLINK-30008
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / AWS
>Reporter: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-2.0.0
>
>






[GitHub] [flink] TQeeeee opened a new pull request, #21307: [FLINK-30005][docs]Translate "State Schema Evolution" into Chinese

2022-11-13 Thread GitBox


TQeeeee opened a new pull request, #21307:
URL: https://github.com/apache/flink/pull/21307

   
   
   ## What is the purpose of the change
   
   Translate paragraph "Schema Migration Limitations" in "State Schema 
Evolution" into Chinese
   
   
   ## Brief change log
   
   Translate paragraph "Schema Migration Limitations" in "State Schema 
Evolution" into Chinese
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (docs)
   





[jira] [Updated] (FLINK-30005) Translate "Schema Migration Limitations for State Schema Evolution" into Chinese

2022-11-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-30005:
---
Labels: pull-request-available  (was: )

> Translate "Schema Migration Limitations for State Schema Evolution" into 
> Chinese
> 
>
> Key: FLINK-30005
> URL: https://issues.apache.org/jira/browse/FLINK-30005
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Affects Versions: 1.16.0
>Reporter: hao wang
>Priority: Minor
>  Labels: pull-request-available
>
> Translate paragraph "Schema Migration Limitations" in 
> [https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/|https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/]
>  page into Chinese.
> This doc located in 
> "flink/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md"





[GitHub] [flink] flinkbot commented on pull request #21307: [FLINK-30005][docs]Translate "State Schema Evolution" into Chinese

2022-11-13 Thread GitBox


flinkbot commented on PR #21307:
URL: https://github.com/apache/flink/pull/21307#issuecomment-1312720686

   
   ## CI report:
   
   * 8cc5502674904b6fbc84b61f615e022eaf46a11f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] flinkbot commented on pull request #21308: [hotfix][docs][table] Fix versioned table example

2022-11-13 Thread GitBox


flinkbot commented on PR #21308:
URL: https://github.com/apache/flink/pull/21308#issuecomment-1312724552

   
   ## CI report:
   
   * b2abc7df7f7b466f2fa3938be20f630094a08e33 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-30006) Cannot remove columns that are incorrectly considered constants from an Aggregate In Streaming

2022-11-13 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-30006:

Description: 
In Streaming, columns generated by dynamic functions are incorrectly considered 
constants and removed from an Aggregate via the optimization rule 
`CoreRules.AGGREGATE_PROJECT_PULL_UP_CONSTANTS` (RelMdPredicates only accounts 
for non-deterministic functions, which is not sufficient for streaming).

an example query:
{code}
SELECT 
 cat, gmt_date, SUM(cnt), count(*)
FROM t1
WHERE gmt_date = current_date
GROUP BY cat, gmt_date
{code}

the wrong plan:
{code}
Calc(select=[cat, CAST(CURRENT_DATE() AS DATE) AS gmt_date, EXPR$2, EXPR$3])
+- GroupAggregate(groupBy=[cat], select=[cat, SUM(cnt) AS EXPR$2, COUNT(*) AS 
EXPR$3])
   +- Exchange(distribution=[hash[cat]])
  +- Calc(select=[cat, cnt], where=[=(gmt_date, CURRENT_DATE())])
 +- TableSourceScan(table=[[default_catalog, default_database, t1, 
filter=[], project=[cat, cnt, gmt_date], metadata=[]]], fields=[cat, cnt, 
gmt_date])
{code}

In addition to this issue, we need to check all optimization rules in streaming 
completely to avoid similar problems.

  was:
In Streaming, columns generated by dynamic functions are incorrectly considered 
constants and removed from an Aggregate via optimization rule 
`CoreRules.AGGREGATE_PROJECT_PULL_UP_CONSTANTS`

an example query:
{code}
SELECT 
 cat, gmt_date, SUM(cnt), count(*)
FROM t1
WHERE gmt_date = current_date
GROUP BY cat, gmt_date
{code}

the wrong plan:
{code}
Calc(select=[cat, CAST(CURRENT_DATE() AS DATE) AS gmt_date, EXPR$2, EXPR$3])
+- GroupAggregate(groupBy=[cat], select=[cat, SUM(cnt) AS EXPR$2, COUNT(*) AS 
EXPR$3])
   +- Exchange(distribution=[hash[cat]])
  +- Calc(select=[cat, cnt], where=[=(gmt_date, CURRENT_DATE())])
 +- TableSourceScan(table=[[default_catalog, default_database, t1, 
filter=[], project=[cat, cnt, gmt_date], metadata=[]]], fields=[cat, cnt, 
gmt_date])
{code}

In addition to this issue, we need to check all optimization rules in streaming 
completely to avoid similar problems.


> Cannot remove columns that are incorrectly considered constants from an 
> Aggregate In Streaming
> --
>
> Key: FLINK-30006
> URL: https://issues.apache.org/jira/browse/FLINK-30006
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: lincoln lee
>Priority: Major
> Fix For: 1.17.0
>
>
> In Streaming, columns generated by dynamic functions are incorrectly 
> considered constants and removed from an Aggregate via the optimization rule 
> `CoreRules.AGGREGATE_PROJECT_PULL_UP_CONSTANTS` (RelMdPredicates only 
> accounts for non-deterministic functions, which is not sufficient for 
> streaming).
> an example query:
> {code}
> SELECT 
>  cat, gmt_date, SUM(cnt), count(*)
> FROM t1
> WHERE gmt_date = current_date
> GROUP BY cat, gmt_date
> {code}
> the wrong plan:
> {code}
> Calc(select=[cat, CAST(CURRENT_DATE() AS DATE) AS gmt_date, EXPR$2, EXPR$3])
> +- GroupAggregate(groupBy=[cat], select=[cat, SUM(cnt) AS EXPR$2, COUNT(*) AS 
> EXPR$3])
>+- Exchange(distribution=[hash[cat]])
>   +- Calc(select=[cat, cnt], where=[=(gmt_date, CURRENT_DATE())])
>  +- TableSourceScan(table=[[default_catalog, default_database, t1, 
> filter=[], project=[cat, cnt, gmt_date], metadata=[]]], fields=[cat, cnt, 
> gmt_date])
> {code}
> In addition to this issue, we need to check all optimization rules in 
> streaming completely to avoid similar problems.
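A toy Python sketch (not Flink or Calcite code) of why pulling a dynamic-function column like `gmt_date` out of the group key is unsafe in streaming: the per-record value can change while the job runs, so re-attaching a single "constant" value afterwards conflates groups. The record values and helper names below are made up for illustration:

```python
from collections import defaultdict

# Simulated records: (cat, cnt, gmt_date) -- gmt_date is produced by a
# dynamic function whose value can change while the stream runs.
records = [
    ("a", 1, "2022-11-13"),
    ("a", 2, "2022-11-13"),
    ("a", 4, "2022-11-14"),  # the date rolled over mid-stream
]

def agg_correct(records):
    # Keep gmt_date in the group key, as the query asks for.
    acc = defaultdict(int)
    for cat, cnt, d in records:
        acc[(cat, d)] += cnt
    return dict(acc)

def agg_constant_pulled_up(records, plan_time_date="2022-11-13"):
    # Mimics the pull-up: gmt_date is dropped from the key and
    # re-attached afterwards as one "constant" value.
    acc = defaultdict(int)
    for cat, cnt, _ in records:
        acc[cat] += cnt
    return {(cat, plan_time_date): v for cat, v in acc.items()}

print(agg_correct(records))             # two groups, split by date
print(agg_constant_pulled_up(records))  # one group: dates conflated
```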





[GitHub] [flink-table-store] SteNicholas opened a new pull request, #375: [FLINK-28552] GenerateUtils#generateCompare supports MULTISET and MAP

2022-11-13 Thread GitBox


SteNicholas opened a new pull request, #375:
URL: https://github.com/apache/flink-table-store/pull/375

   Currently, changelog mode cannot support map and multiset as the field type. 
More specifically,
   
   - `MULTISET` is not supported at all, including append-only mode:
   ```
   Caused by: java.lang.UnsupportedOperationException: Unsupported type: 
MULTISET
   
   at 
org.apache.flink.table.store.shaded.org.apache.flink.orc.OrcSplitReaderUtil.logicalTypeToOrcType(OrcSplitReaderUtil.java:214)
   at 
org.apache.flink.table.store.shaded.org.apache.flink.orc.OrcSplitReaderUtil.logicalTypeToOrcType(OrcSplitReaderUtil.java:210)
   at 
org.apache.flink.table.store.format.orc.OrcFileFormat.createWriterFactory(OrcFileFormat.java:94)
   at 
org.apache.flink.table.store.file.data.AppendOnlyWriter$RowRollingWriter.lambda$createRollingRowWriter$0(AppendOnlyWriter.java:229)
   at 
org.apache.flink.table.store.file.writer.RollingFileWriter.openCurrentWriter(RollingFileWriter.java:73)
   at 
org.apache.flink.table.store.file.writer.RollingFileWriter.write(RollingFileWriter.java:61)
   at 
org.apache.flink.table.store.file.data.AppendOnlyWriter.write(AppendOnlyWriter.java:108)
   at 
org.apache.flink.table.store.file.data.AppendOnlyWriter.write(AppendOnlyWriter.java:56)
   at 
org.apache.flink.table.store.table.AppendOnlyFileStoreTable$3.writeSinkRecord(AppendOnlyFileStoreTable.java:119)
   at 
org.apache.flink.table.store.table.sink.AbstractTableWrite.write(AbstractTableWrite.java:76)
   at 
org.apache.flink.table.store.connector.sink.StoreWriteOperator.processElement(StoreWriteOperator.java:124)
   ... 13 more
   ``` 
   
   Stacktrace
   ```
   java.lang.UnsupportedOperationException
   at 
org.apache.flink.table.store.codegen.GenerateUtils$.generateCompare(GenerateUtils.scala:139)
   at 
org.apache.flink.table.store.codegen.GenerateUtils$.$anonfun$generateRowCompare$1(GenerateUtils.scala:289)
   at 
scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:32)
   at 
scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:29)
   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:194)
   at 
org.apache.flink.table.store.codegen.GenerateUtils$.generateRowCompare(GenerateUtils.scala:263)
   at 
org.apache.flink.table.store.codegen.ComparatorCodeGenerator$.gen(ComparatorCodeGenerator.scala:45)
   at 
org.apache.flink.table.store.codegen.ComparatorCodeGenerator.gen(ComparatorCodeGenerator.scala)
   at 
org.apache.flink.table.store.codegen.CodeGeneratorImpl.generateRecordComparator(CodeGeneratorImpl.java:53)
   at 
org.apache.flink.table.store.codegen.CodeGenUtils.generateRecordComparator(CodeGenUtils.java:66)
   at 
org.apache.flink.table.store.file.utils.KeyComparatorSupplier.(KeyComparatorSupplier.java:40)
   at 
org.apache.flink.table.store.file.KeyValueFileStore.(KeyValueFileStore.java:59)
   at 
org.apache.flink.table.store.table.ChangelogValueCountFileStoreTable.(ChangelogValueCountFileStoreTable.java:73)
   at 
org.apache.flink.table.store.table.FileStoreTableFactory.create(FileStoreTableFactory.java:70)
   at 
org.apache.flink.table.store.table.FileStoreTableFactory.create(FileStoreTableFactory.java:50)
   at 
org.apache.flink.table.store.spark.SimpleTableTestHelper.(SimpleTableTestHelper.java:58)
   at 
org.apache.flink.table.store.spark.SparkReadITCase.startMetastoreAndSpark(SparkReadITCase.java:93)
   ```
   `GenerateUtils#generateCompare` should support `MULTISET` and `MAP`.
   
   **The brief change log**
   
   - `GenerateUtils#generateCompare` supports `MULTISET` and `MAP`.
   - `OrcFileFormat` supports refining `MULTISET` to `MAP`.
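One way a code-generated comparator can order `MAP` values (and `MULTISET` values, modeled as element-to-count maps) is to normalize the entries by key and compare them lexicographically. The snippet below is a Python sketch of that idea, assuming mutually comparable keys; it is not the actual generated code in `GenerateUtils`:

```python
def compare_maps(a: dict, b: dict) -> int:
    # Normalize each map to its entries sorted by key, then compare the
    # entry lists lexicographically; returns -1/0/1 like a comparator.
    ka, kb = sorted(a.items()), sorted(b.items())
    return (ka > kb) - (ka < kb)

# A MULTISET can be modeled as a MAP from element to count, so the
# same comparison covers both types.
assert compare_maps({"x": 1}, {"x": 1}) == 0
assert compare_maps({"x": 1}, {"x": 2}) == -1
assert compare_maps({}, {"x": 1}) == -1
```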





[jira] [Updated] (FLINK-28552) GenerateUtils#generateCompare supports MULTISET and MAP

2022-11-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-28552:
---
Labels: pull-request-available  (was: )

> GenerateUtils#generateCompare supports MULTISET and MAP
> ---
>
> Key: FLINK-28552
> URL: https://issues.apache.org/jira/browse/FLINK-28552
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> Currently, changelog mode cannot support map and multiset as the field type.
> More specifically,
>  * MULTISET is not supported at all, including append-only mode. (
> Caused by: java.lang.UnsupportedOperationException: Unsupported type: 
> MULTISET
> at 
> org.apache.flink.table.store.shaded.org.apache.flink.orc.OrcSplitReaderUtil.logicalTypeToOrcType(OrcSplitReaderUtil.java:214)
> at 
> org.apache.flink.table.store.shaded.org.apache.flink.orc.OrcSplitReaderUtil.logicalTypeToOrcType(OrcSplitReaderUtil.java:210)
> at 
> org.apache.flink.table.store.format.orc.OrcFileFormat.createWriterFactory(OrcFileFormat.java:94)
> at 
> org.apache.flink.table.store.file.data.AppendOnlyWriter$RowRollingWriter.lambda$createRollingRowWriter$0(AppendOnlyWriter.java:229)
> at 
> org.apache.flink.table.store.file.writer.RollingFileWriter.openCurrentWriter(RollingFileWriter.java:73)
> at 
> org.apache.flink.table.store.file.writer.RollingFileWriter.write(RollingFileWriter.java:61)
> at 
> org.apache.flink.table.store.file.data.AppendOnlyWriter.write(AppendOnlyWriter.java:108)
> at 
> org.apache.flink.table.store.file.data.AppendOnlyWriter.write(AppendOnlyWriter.java:56)
> at 
> org.apache.flink.table.store.table.AppendOnlyFileStoreTable$3.writeSinkRecord(AppendOnlyFileStoreTable.java:119)
> at 
> org.apache.flink.table.store.table.sink.AbstractTableWrite.write(AbstractTableWrite.java:76)
> at 
> org.apache.flink.table.store.connector.sink.StoreWriteOperator.processElement(StoreWriteOperator.java:124)
> ... 13 more)
>  * MAP cannot be pk for key-value mode, and cannot be the fields for 
> value-count mode.
>  
> Stacktrace
> java.lang.UnsupportedOperationException
>     at org.apache.flink.table.store.codegen.GenerateUtils$.generateCompare(GenerateUtils.scala:139)
>     at org.apache.flink.table.store.codegen.GenerateUtils$.$anonfun$generateRowCompare$1(GenerateUtils.scala:289)
>     at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:32)
>     at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:29)
>     at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:194)
>     at org.apache.flink.table.store.codegen.GenerateUtils$.generateRowCompare(GenerateUtils.scala:263)
>     at org.apache.flink.table.store.codegen.ComparatorCodeGenerator$.gen(ComparatorCodeGenerator.scala:45)
>     at org.apache.flink.table.store.codegen.ComparatorCodeGenerator.gen(ComparatorCodeGenerator.scala)
>     at org.apache.flink.table.store.codegen.CodeGeneratorImpl.generateRecordComparator(CodeGeneratorImpl.java:53)
>     at org.apache.flink.table.store.codegen.CodeGenUtils.generateRecordComparator(CodeGenUtils.java:66)
>     at org.apache.flink.table.store.file.utils.KeyComparatorSupplier.<init>(KeyComparatorSupplier.java:40)
>     at org.apache.flink.table.store.file.KeyValueFileStore.<init>(KeyValueFileStore.java:59)
>     at org.apache.flink.table.store.table.ChangelogValueCountFileStoreTable.<init>(ChangelogValueCountFileStoreTable.java:73)
>     at org.apache.flink.table.store.table.FileStoreTableFactory.create(FileStoreTableFactory.java:70)
>     at org.apache.flink.table.store.table.FileStoreTableFactory.create(FileStoreTableFactory.java:50)
>     at org.apache.flink.table.store.spark.SimpleTableTestHelper.<init>(SimpleTableTestHelper.java:58)
>     at org.apache.flink.table.store.spark.SparkReadITCase.startMetastoreAndSpark(SparkReadITCase.java:93)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-web] lindong28 commented on a diff in pull request #577: [FLINK-29668] Remove Gelly

2022-11-13 Thread GitBox


lindong28 commented on code in PR #577:
URL: https://github.com/apache/flink-web/pull/577#discussion_r1020930301


##
usecases.md:
##
@@ -67,7 +67,7 @@ Another aspect is a simpler application architecture. A batch 
analytics pipeline
 
 ### How does Flink support data analytics applications?
 
-Flink provides very good support for continuous streaming as well as batch 
analytics. Specifically, it features an ANSI-compliant SQL interface with 
unified semantics for batch and streaming queries. SQL queries compute the same 
result regardless whether they are run on a static data set of recorded events 
or on a real-time event stream. Rich support for user-defined functions ensures 
that custom code can be executed in SQL queries. If even more custom logic is 
required, Flink's DataStream API or DataSet API provide more low-level control. 
Moreover, Flink's Gelly library provides algorithms and building blocks for 
large-scale and high-performance graph analytics on batch data sets.
+Flink provides very good support for continuous streaming as well as batch 
analytics. Specifically, it features an ANSI-compliant SQL interface with 
unified semantics for batch and streaming queries. SQL queries compute the same 
result regardless whether they are run on a static data set of recorded events 
or on a real-time event stream. Rich support for user-defined functions ensures 
that custom code can be executed in SQL queries. If even more custom logic is 
required, Flink's DataStream API or DataSet API provide more low-level control. 

Review Comment:
   I think the 
[iteration](https://nightlies.apache.org/flink/flink-ml-docs-stable/docs/development/iteration/)
 capability supported in Flink ML cannot be used to support typical graph 
processing capabilities (e.g. vertex-centric or gather-sum-apply), and there is 
currently no plan to support these graph processing capabilities in Flink ML.
   
   So it seems better not to add the paragraph suggested above. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] jd-hatzenbuhler commented on pull request #20808: [FLINK-28520][runtime] RestClient doesn't use SNI TLS extension

2022-11-13 Thread GitBox


jd-hatzenbuhler commented on PR #20808:
URL: https://github.com/apache/flink/pull/20808#issuecomment-1312769743

   
https://github.com/apache/flink/commit/ff64ef0e5f286ba62338832cd68aff60196b1f42 
Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=43040)





[jira] [Assigned] (FLINK-29939) Add metrics for Kubernetes Client Response 5xx count and rate

2022-11-13 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-29939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Márton Balassi reassigned FLINK-29939:
--

Assignee: Zhou Jiang

> Add metrics for Kubernetes Client Response 5xx count and rate
> -
>
> Key: FLINK-29939
> URL: https://issues.apache.org/jira/browse/FLINK-29939
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.3.0
>Reporter: Zhou Jiang
>Assignee: Zhou Jiang
>Priority: Minor
>
> The operator now publishes the k8s client response count by response code. In 
> addition to the cumulative count, adding a rate for k8s client error 
> responses could help set up alerts that detect underlying cluster API server 
> issues proactively. This enhances the metrics when the Flink Operator is 
> deployed to shared / multi-tenant k8s clusters. 
>  
> Why is a rate needed for certain response codes?
> To detect issues proactively by setting up alerts. In some cases it is not 
> the total count but the rate that indicates the start / end of an 
> unavailability issue.
>  
> Why do some 4xx matter in prod?
> For example, a noisy-neighbor issue may happen at a random time in shared 
> clusters, and the operator may start to see an increased number of 429 
> responses if the cluster does not enforce fairness in rate limiting. Another 
> example is churn: when the cluster has namespace quotas defined and a 
> namespace is under pod churn, there can be an increasing number of 409 
> responses. In these cases, metrics and alerting on the count / rate of 
> certain 4xx codes are critical to understand the start / end of a prod 
> outage.
>  
> Why is 5xx needed?
> To identify infrastructure issues faster. With a 5xx response count + rate, 
> it is more straightforward than enumerating possible 5xx codes when setting 
> up prod alerts.
>  
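As a hedged illustration of why a windowed rate is more actionable for alerting than a cumulative count (a minimal sliding-window meter; `ResponseCodeMeter` is hypothetical and not the operator's actual metric implementation):

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Illustrative per-response-code meter: cumulative count plus a windowed rate. */
public class ResponseCodeMeter {
    private final Deque<Long> timestampsMillis = new ArrayDeque<>();
    private final long windowMillis;
    private long totalCount = 0;

    ResponseCodeMeter(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    /** Records one response (e.g. a 429 or a 5xx) observed at the given time. */
    void mark(long nowMillis) {
        totalCount++;
        timestampsMillis.addLast(nowMillis);
    }

    /** Cumulative count: only ever grows, so it cannot signal that an outage ended. */
    long count() {
        return totalCount;
    }

    /** Windowed rate in events/second: drops back to 0 once errors stop. */
    double ratePerSecond(long nowMillis) {
        // Evict events that fell out of the sliding window.
        while (!timestampsMillis.isEmpty()
                && timestampsMillis.peekFirst() < nowMillis - windowMillis) {
            timestampsMillis.removeFirst();
        }
        return timestampsMillis.size() * 1000.0 / windowMillis;
    }
}
```

After an error burst stops, `count()` stays flat at its high-water mark while `ratePerSecond()` returns to 0, which is what lets an alert mark both the start and the end of an incident.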





[GitHub] [flink] mbalassi commented on a diff in pull request #21147: [FLINK-28330][runtime][security] Remove old delegation token framework code

2022-11-13 Thread GitBox


mbalassi commented on code in PR #21147:
URL: https://github.com/apache/flink/pull/21147#discussion_r1020951099


##
flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java:
##
@@ -1150,26 +1150,14 @@ private ApplicationReport startAppMaster(
 final ContainerLaunchContext amContainer =
 setupApplicationMasterContainer(yarnClusterEntrypoint, hasKrb5, processSpec);
 
-// New delegation token framework
 if (configuration.getBoolean(SecurityOptions.KERBEROS_FETCH_DELEGATION_TOKEN)) {
-setTokensFor(amContainer);
-}
-// Old delegation token framework
-if (UserGroupInformation.isSecurityEnabled()) {
-LOG.info("Adding delegation token to the AM container.");
-final List<Path> pathsToObtainToken = new ArrayList<>();
-boolean fetchToken =
-configuration.getBoolean(SecurityOptions.KERBEROS_FETCH_DELEGATION_TOKEN);
-if (fetchToken) {
-List<Path> yarnAccessList =
-ConfigUtils.decodeListFromConfig(
-configuration,
-SecurityOptions.KERBEROS_HADOOP_FILESYSTEMS_TO_ACCESS,
-Path::new);
-pathsToObtainToken.addAll(yarnAccessList);
-pathsToObtainToken.addAll(fileUploader.getRemotePaths());
+KerberosLoginProvider kerberosLoginProvider = new KerberosLoginProvider(configuration);
+if (kerberosLoginProvider.isLoginPossible()) {
+setTokensFor(amContainer);
+} else {
+LOG.info(
+"Cannot use kerberos delegation token manager no valid kerberos credentials provided.");

Review Comment:
   nit: token manager`,` no valid






[GitHub] [flink-table-store] zjureel opened a new pull request, #376: [FLINK-27843] Schema evolution for data file meta

2022-11-13 Thread GitBox


zjureel opened a new pull request, #376:
URL: https://github.com/apache/flink-table-store/pull/376

   Currently, the table store uses the latest schema id to read the data file 
meta. When the schema evolves, this causes errors. For example:
   1. The schema of the underlying data is [1->a, 2->b, 3->c, 4->d] and the 
schema id is 0, where 1/2/3/4 are field ids and a/b/c/d are field names.
   2. After schema evolution, the schema id is 1 and the new schema is [1->a, 
3->c, 5->f, 6->b, 7->g].
   When the table store reads the field stats from the data file meta, it should 
map schema 1 to schema 0 according to their field ids.
   
   This PR reads and parses the data according to the schema id stored in the meta 
file when reading the data file meta, and creates an index mapping between the 
table schema and the meta schema, so that the table store can read the correct 
file metadata through its latest schema.
   
   The main codes are as follows:
   1. Added `SchemaFieldTypeExtractor` to extract key fields for 
`ChangelogValueCountFileStoreTable` and `ChangelogWithKeyFileStoreTable`
   2. Added `SchemaEvolutionUtil` to create index mapping from table schema to 
meta file schema
   3. Updated `FieldStatsArraySerializer` to read field stats with given index 
mapping
   
   The main tests include:
   1. Added `SchemaEvolutionUtilTest` to test creating an index mapping between 
two schemas.
   2. Added `FieldStatsArraySerializerTest` to test reading meta with the table schema.
   3. Added `AppendOnlyTableFileMetaFilterTest`, 
`ChangelogValueCountFileMetaFilterTest` and 
`ChangelogWithKeyFileMetaFilterTest` to filter old field, new field, partition 
field and primary key in data file meta in table scan.
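The index-mapping idea in the example above can be sketched as follows (field ids from the example; `createIndexMapping` here is an illustrative helper, not the actual `SchemaEvolutionUtil` signature):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

/** Illustrative index mapping between a table schema and a data-file schema by field id. */
public class IndexMappingSketch {

    /**
     * For each field in the table schema, returns the index of the field with the
     * same id in the data-file schema, or -1 if the file predates that field.
     */
    static int[] createIndexMapping(int[] tableFieldIds, int[] dataFieldIds) {
        Map<Integer, Integer> idToDataIndex = new HashMap<>();
        for (int i = 0; i < dataFieldIds.length; i++) {
            idToDataIndex.put(dataFieldIds[i], i);
        }
        int[] mapping = new int[tableFieldIds.length];
        for (int i = 0; i < tableFieldIds.length; i++) {
            mapping[i] = idToDataIndex.getOrDefault(tableFieldIds[i], -1);
        }
        return mapping;
    }

    public static void main(String[] args) {
        // Data file written with schema 0: [1->a, 2->b, 3->c, 4->d]
        int[] dataFieldIds = {1, 2, 3, 4};
        // Table now has schema 1: [1->a, 3->c, 5->f, 6->b, 7->g]
        int[] tableFieldIds = {1, 3, 5, 6, 7};
        // a and c map to file columns 0 and 2; ids 5, 6, 7 are absent from the file.
        System.out.println(Arrays.toString(createIndexMapping(tableFieldIds, dataFieldIds)));
        // prints [0, 2, -1, -1, -1]
    }
}
```

Note that matching is by field id, not field name: the old field `b` (id 2) and the new field `b` (id 6) are different columns, so the new `b` correctly maps to -1.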





[jira] [Updated] (FLINK-27843) Schema evolution for data file meta

2022-11-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27843:
---
Labels: pull-request-available  (was: )

> Schema evolution for data file meta
> ---
>
> Key: FLINK-27843
> URL: https://issues.apache.org/jira/browse/FLINK-27843
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Jingsong Lee
>Priority: Minor
>  Labels: pull-request-available
> Fix For: table-store-0.3.0
>
>
> There are quite a few metadata operations on DataFileMeta, such as getting 
> the statistics of each column and the partition of the file.
> We need to evolve to the latest schema based on the schemaId when we get this 
> information.





[GitHub] [flink] FlechazoW closed pull request #21295: [Typo] Fix the typo 'retriable', which means 'retryable'.

2022-11-13 Thread GitBox


FlechazoW closed pull request #21295: [Typo] Fix the typo 'retriable', which 
means 'retryable'.
URL: https://github.com/apache/flink/pull/21295





[jira] [Assigned] (FLINK-29992) Join execution plan parsing error

2022-11-13 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-29992:
--

Assignee: luoyuxia

> Join execution plan parsing error
> -
>
> Key: FLINK-29992
> URL: https://issues.apache.org/jira/browse/FLINK-29992
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0, 1.17.0
>Reporter: HunterXHunter
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> //
> tableEnv.executeSql(" CREATE CATALOG hive WITH (\n"
> + "  'type' = 'hive',\n"
> + " 'default-database' = 'flinkdebug',\n"
> + " 'hive-conf-dir' = '/programe/hadoop/hive-3.1.2/conf'\n"
> + " )");
> tableEnv.executeSql("create table datagen_tbl (\n"
> + "id STRING\n"
> + ",name STRING\n"
> + ",age bigint\n"
> + ",ts bigint\n"
> + ",`par` STRING\n"
> + ",pro_time as PROCTIME()\n"
> + ") with (\n"
> + "  'connector'='datagen'\n"
> + ",'rows-per-second'='10'\n"
> + " \n"
> + ")");
> String dml1 = "select * "
> + " from datagen_tbl as p "
> + " join hive.flinkdebug.default_hive_src_tbl "
> + " FOR SYSTEM_TIME AS OF p.pro_time AS c"
> + " ON p.id = c.id";
> // Execution succeeded
>   System.out.println(tableEnv.explainSql(dml1));
> String dml2 = "select p.id "
> + " from datagen_tbl as p "
> + " join hive.flinkdebug.default_hive_src_tbl "
> + " FOR SYSTEM_TIME AS OF p.pro_time AS c"
> + " ON p.id = c.id";
> // Throw an exception
>  System.out.println(tableEnv.explainSql(dml2)); {code}
> {code:java}
> org.apache.flink.table.api.TableException: Cannot generate a valid execution 
> plan for the given query: FlinkLogicalCalc(select=[id]) +- 
> FlinkLogicalJoin(condition=[=($0, $1)], joinType=[inner])    :- 
> FlinkLogicalCalc(select=[id])    :  +- 
> FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, 
> datagen_tbl]], fields=[id, name, age, ts, par])    +- 
> FlinkLogicalSnapshot(period=[$cor1.pro_time])       +- 
> FlinkLogicalTableSourceScan(table=[[hive, flinkdebug, default_hive_src_tbl, 
> project=[id]]], fields=[id])This exception indicates that the query uses an 
> unsupported SQL feature. Please check the documentation for the set of 
> currently supported SQL features.    at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:70)
>      at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59)
>  
> {code}





[jira] [Updated] (FLINK-29992) Join execution plan parsing error

2022-11-13 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-29992:
---
Affects Version/s: 1.15.3

> Join execution plan parsing error
> -
>
> Key: FLINK-29992
> URL: https://issues.apache.org/jira/browse/FLINK-29992
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0, 1.17.0, 1.15.3
>Reporter: HunterXHunter
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> //
> tableEnv.executeSql(" CREATE CATALOG hive WITH (\n"
> + "  'type' = 'hive',\n"
> + " 'default-database' = 'flinkdebug',\n"
> + " 'hive-conf-dir' = '/programe/hadoop/hive-3.1.2/conf'\n"
> + " )");
> tableEnv.executeSql("create table datagen_tbl (\n"
> + "id STRING\n"
> + ",name STRING\n"
> + ",age bigint\n"
> + ",ts bigint\n"
> + ",`par` STRING\n"
> + ",pro_time as PROCTIME()\n"
> + ") with (\n"
> + "  'connector'='datagen'\n"
> + ",'rows-per-second'='10'\n"
> + " \n"
> + ")");
> String dml1 = "select * "
> + " from datagen_tbl as p "
> + " join hive.flinkdebug.default_hive_src_tbl "
> + " FOR SYSTEM_TIME AS OF p.pro_time AS c"
> + " ON p.id = c.id";
> // Execution succeeded
>   System.out.println(tableEnv.explainSql(dml1));
> String dml2 = "select p.id "
> + " from datagen_tbl as p "
> + " join hive.flinkdebug.default_hive_src_tbl "
> + " FOR SYSTEM_TIME AS OF p.pro_time AS c"
> + " ON p.id = c.id";
> // Throw an exception
>  System.out.println(tableEnv.explainSql(dml2)); {code}
> {code:java}
> org.apache.flink.table.api.TableException: Cannot generate a valid execution 
> plan for the given query: FlinkLogicalCalc(select=[id]) +- 
> FlinkLogicalJoin(condition=[=($0, $1)], joinType=[inner])    :- 
> FlinkLogicalCalc(select=[id])    :  +- 
> FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, 
> datagen_tbl]], fields=[id, name, age, ts, par])    +- 
> FlinkLogicalSnapshot(period=[$cor1.pro_time])       +- 
> FlinkLogicalTableSourceScan(table=[[hive, flinkdebug, default_hive_src_tbl, 
> project=[id]]], fields=[id])This exception indicates that the query uses an 
> unsupported SQL feature. Please check the documentation for the set of 
> currently supported SQL features.    at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:70)
>      at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59)
>  
> {code}





[GitHub] [flink] leonardBang merged pull request #21302: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source

2022-11-13 Thread GitBox


leonardBang merged PR #21302:
URL: https://github.com/apache/flink/pull/21302





[GitHub] [flink] leonardBang commented on pull request #21302: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source

2022-11-13 Thread GitBox


leonardBang commented on PR #21302:
URL: https://github.com/apache/flink/pull/21302#issuecomment-1312940296

   @luoyuxia Could you also open PRs for release-1.14, release-1.15 and 
release-1.16?





[jira] [Commented] (FLINK-29992) Join execution plan parsing error

2022-11-13 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633438#comment-17633438
 ] 

Leonard Xu commented on FLINK-29992:


master:a4f9bfd1483ef64b0ed167bd29c98596e3bd5f49
release-1.16: TODO
release-1.15: TODO
release-1.14: TODO


> Join execution plan parsing error
> -
>
> Key: FLINK-29992
> URL: https://issues.apache.org/jira/browse/FLINK-29992
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0, 1.17.0, 1.15.3
>Reporter: HunterXHunter
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> //
> tableEnv.executeSql(" CREATE CATALOG hive WITH (\n"
> + "  'type' = 'hive',\n"
> + " 'default-database' = 'flinkdebug',\n"
> + " 'hive-conf-dir' = '/programe/hadoop/hive-3.1.2/conf'\n"
> + " )");
> tableEnv.executeSql("create table datagen_tbl (\n"
> + "id STRING\n"
> + ",name STRING\n"
> + ",age bigint\n"
> + ",ts bigint\n"
> + ",`par` STRING\n"
> + ",pro_time as PROCTIME()\n"
> + ") with (\n"
> + "  'connector'='datagen'\n"
> + ",'rows-per-second'='10'\n"
> + " \n"
> + ")");
> String dml1 = "select * "
> + " from datagen_tbl as p "
> + " join hive.flinkdebug.default_hive_src_tbl "
> + " FOR SYSTEM_TIME AS OF p.pro_time AS c"
> + " ON p.id = c.id";
> // Execution succeeded
>   System.out.println(tableEnv.explainSql(dml1));
> String dml2 = "select p.id "
> + " from datagen_tbl as p "
> + " join hive.flinkdebug.default_hive_src_tbl "
> + " FOR SYSTEM_TIME AS OF p.pro_time AS c"
> + " ON p.id = c.id";
> // Throw an exception
>  System.out.println(tableEnv.explainSql(dml2)); {code}
> {code:java}
> org.apache.flink.table.api.TableException: Cannot generate a valid execution 
> plan for the given query: FlinkLogicalCalc(select=[id]) +- 
> FlinkLogicalJoin(condition=[=($0, $1)], joinType=[inner])    :- 
> FlinkLogicalCalc(select=[id])    :  +- 
> FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, 
> datagen_tbl]], fields=[id, name, age, ts, par])    +- 
> FlinkLogicalSnapshot(period=[$cor1.pro_time])       +- 
> FlinkLogicalTableSourceScan(table=[[hive, flinkdebug, default_hive_src_tbl, 
> project=[id]]], fields=[id])This exception indicates that the query uses an 
> unsupported SQL feature. Please check the documentation for the set of 
> currently supported SQL features.    at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:70)
>      at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:59)
>  
> {code}





[GitHub] [flink] luoyuxia opened a new pull request, #21309: [FLINK-29992][hive] Fix Hive lookup join fail when column pushdown to…

2022-11-13 Thread GitBox


luoyuxia opened a new pull request, #21309:
URL: https://github.com/apache/flink/pull/21309

   … Hive lookup table source
   
   This closes #21302.
   
   Backport for #21302.
   





[GitHub] [flink] luoyuxia commented on pull request #21309: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source

2022-11-13 Thread GitBox


luoyuxia commented on PR #21309:
URL: https://github.com/apache/flink/pull/21309#issuecomment-1312952537

   Let's wait for the CI to pass.





[GitHub] [flink] flinkbot commented on pull request #21309: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source

2022-11-13 Thread GitBox


flinkbot commented on PR #21309:
URL: https://github.com/apache/flink/pull/21309#issuecomment-1312953496

   
   ## CI report:
   
   * dc5be9947ecc61b72ff44ce997c6519dd5944286 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] luoyuxia opened a new pull request, #21310: [FLINK-29992][hive] Fix lookup join fail with Hive table as lookup table source

2022-11-13 Thread GitBox


luoyuxia opened a new pull request, #21310:
URL: https://github.com/apache/flink/pull/21310

   
   
   This closes #21302.
   Backport for #21302





[GitHub] [flink] flinkbot commented on pull request #21310: [FLINK-29992][hive] Fix lookup join fail with Hive table as lookup table source

2022-11-13 Thread GitBox


flinkbot commented on PR #21310:
URL: https://github.com/apache/flink/pull/21310#issuecomment-1312958494

   
   ## CI report:
   
   * d66ef307290cc55167b7b4d4e1c615f9b5658783 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] luoyuxia opened a new pull request, #21311: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source

2022-11-13 Thread GitBox


luoyuxia opened a new pull request, #21311:
URL: https://github.com/apache/flink/pull/21311

   

[GitHub] [flink] luoyuxia commented on pull request #21311: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source

2022-11-13 Thread GitBox


luoyuxia commented on PR #21311:
URL: https://github.com/apache/flink/pull/21311#issuecomment-1312959501

   Let's wait for the CI to pass.





[GitHub] [flink] luoyuxia commented on pull request #21302: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source

2022-11-13 Thread GitBox


luoyuxia commented on PR #21302:
URL: https://github.com/apache/flink/pull/21302#issuecomment-1312960002

   @leonardBang 
   1.16: https://github.com/apache/flink/pull/21310
   1.15: https://github.com/apache/flink/pull/21309
   1.14: https://github.com/apache/flink/pull/21311





[GitHub] [flink] lincoln-lil commented on pull request #20745: [FLINK-28988] Don't push above filters down into the right table for temporal join

2022-11-13 Thread GitBox


lincoln-lil commented on PR #20745:
URL: https://github.com/apache/flink/pull/20745#issuecomment-1312961086

   @shuiqiangchen I found this PR while combing through the list of SQL-related 
legacy issues. Recently we fixed another similar user case on event-time 
temporal join in FLINK-29849, which includes two problems: 1. ChangelogNormalize 
incorrectly added for an upsert source; 2. incorrect filter pushdown.
   For the second one, I think your solution, which only prevents pushing down 
filters related to the right side of the input for the event-time temporal 
join, is better.
   Would you like to continue this work and fix the failed tests first? After 
that is done, the PR for FLINK-29849 can remove the filter part and build on 
your fix.





[GitHub] [flink] flinkbot commented on pull request #21311: [FLINK-29992][hive] fix lookup join fail with Hive table as lookup table source

2022-11-13 Thread GitBox


flinkbot commented on PR #21311:
URL: https://github.com/apache/flink/pull/21311#issuecomment-1312961368

   
   ## CI report:
   
   * 1126862ce143600a501013f86c8855a96525a42e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink-ml] yunfengzhou-hub commented on a diff in pull request #172: [FLINK-29592] Add Estimator and Transformer for RobustScaler

2022-11-13 Thread GitBox


yunfengzhou-hub commented on code in PR #172:
URL: https://github.com/apache/flink-ml/pull/172#discussion_r1021035116


##
flink-ml-lib/src/main/java/org/apache/flink/ml/feature/robustscaler/RobustScaler.java:
##
@@ -0,0 +1,183 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.feature.robustscaler;
+
+import org.apache.flink.api.common.functions.AggregateFunction;
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.ml.api.Estimator;
+import org.apache.flink.ml.common.datastream.DataStreamUtils;
+import org.apache.flink.ml.common.util.QuantileSummary;
+import org.apache.flink.ml.linalg.DenseVector;
+import org.apache.flink.ml.linalg.Vector;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Preconditions;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+/**
+ * Scale features using statistics that are robust to outliers.
+ *
+ * This Scaler removes the median and scales the data according to the 
quantile range (defaults
+ * to IQR: Interquartile Range). The IQR is the range between the 1st quartile 
(25th quantile) and
+ * the 3rd quartile (75th quantile) but can be configured.
+ *
+ * Centering and scaling happen independently on each feature by computing 
the relevant
+ * statistics on the samples in the training set. Median and quantile range 
are then stored to be
+ * used on later data using the transform method.
+ *
+ * Standardization of a dataset is a common requirement for many machine 
learning estimators.
+ * Typically this is done by removing the mean and scaling to unit variance. 
However, outliers can
+ * often influence the sample mean / variance in a negative way. In such 
cases, the median and the
+ * interquartile range often give better results.

Review Comment:
   Sorry that I mistook the meaning as "the median range and the interquartile 
range". I agree that there is no grammar error now.
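The centering and scaling described in the Javadoc under review can be sketched numerically (a minimal illustration using a nearest-rank quantile; `RobustScaleSketch` is hypothetical and not the Flink ML API):

```java
import java.util.Arrays;

/** Illustrative robust scaling: (x - median) / IQR. Not the Flink ML API. */
public class RobustScaleSketch {

    /** Returns the p-quantile (0 <= p <= 1) of a sorted array, nearest-rank style. */
    static double quantile(double[] sorted, double p) {
        int idx = (int) Math.ceil(p * sorted.length) - 1;
        return sorted[Math.max(0, idx)];
    }

    /** Centers each value by the median and scales by the interquartile range. */
    static double[] robustScale(double[] values) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        double median = quantile(sorted, 0.5);
        double iqr = quantile(sorted, 0.75) - quantile(sorted, 0.25);
        double[] out = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            // Guard against a zero range (constant feature).
            out[i] = iqr == 0 ? 0.0 : (values[i] - median) / iqr;
        }
        return out;
    }

    public static void main(String[] args) {
        // The outlier (100.0) barely shifts the median/IQR, unlike the mean/variance.
        double[] scaled = robustScale(new double[] {1, 2, 3, 4, 100});
        System.out.println(Arrays.toString(scaled));
        // prints [-1.0, -0.5, 0.0, 0.5, 48.5]
    }
}
```

With median 3 and IQR 2, the four inliers land in [-1, 0.5] regardless of the outlier, which is the robustness property the Javadoc contrasts with mean/variance standardization.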






[jira] [Updated] (FLINK-25512) Materialization Files are not cleaned up if no checkpoint is using it

2022-11-13 Thread Yanfei Lei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanfei Lei updated FLINK-25512:
---
Fix Version/s: 1.17.0

> Materialization Files are not cleaned up if no checkpoint is using it
> -
>
> Key: FLINK-25512
> URL: https://issues.apache.org/jira/browse/FLINK-25512
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Yuan Mei
>Assignee: Nicholas Jiang
>Priority: Minor
>  Labels: stale-assigned
> Fix For: 1.17.0
>
>
> This can happen if no checkpoint succeeds within the materialization interval.





[GitHub] [flink] link3280 commented on pull request #21292: [FLINK-28617][SQL Gateway] Support stop job statement in SqlGatewayService

2022-11-13 Thread GitBox


link3280 commented on PR #21292:
URL: https://github.com/apache/flink/pull/21292#issuecomment-1313003298

   @flinkbot run azure





[jira] [Updated] (FLINK-24402) Add a metric for back-pressure from the ChangelogStateBackend

2022-11-13 Thread Yanfei Lei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanfei Lei updated FLINK-24402:
---
Fix Version/s: 1.17.0

> Add a metric for back-pressure from the ChangelogStateBackend
> -
>
> Key: FLINK-24402
> URL: https://issues.apache.org/jira/browse/FLINK-24402
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing, Runtime / Metrics, Runtime / 
> State Backends
>Reporter: Roman Khachatryan
>Priority: Major
> Fix For: 1.17.0
>
>
> FLINK-23381 adds back-pressure, this task is to add monitoring for that.
> See design doc: 
> https://docs.google.com/document/d/1k5WkWIYzs3n3GYQC76H9BLGxvN3wuq7qUHJuBPR9YX0/edit#heading=h.ayt6cka7z0qf
> Can be reported as back-pressured by backend per second, similar to how 
> "regular" back-pressure is currently reported 
> ([prototype|https://github.com/rkhachatryan/flink/tree/clsb-bp-test]).
> Metric name: stateBackendBlockedTimeMsPerSecond
>  Take into account:
>  * there is blocking and non-blocking waiting for changelog availability (see 
> [https://github.com/apache/flink/pull/17229#discussion_r740111285)]
>  * UI needs to be adjusted in several places: Task label; Task details
>  * Back-pressure status label should probably be adjusted
>  * If changelog is disabled then the metric shouldn't be shown
> Consider whether to include changelog back-pressure into overall 
> back-pressure 
> (https://github.com/apache/flink/pull/17229#discussion_r738322138 ).
>  
>  Uploading metrics should be added in FLINK-23486.





[jira] [Updated] (FLINK-25255) Expose Changelog checkpoints via State Processor API

2022-11-13 Thread Yanfei Lei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanfei Lei updated FLINK-25255:
---
Fix Version/s: 1.17.0

> Expose Changelog checkpoints via State Processor API
> 
>
> Key: FLINK-25255
> URL: https://issues.apache.org/jira/browse/FLINK-25255
> Project: Flink
>  Issue Type: New Feature
>  Components: API / State Processor, Runtime / State Backends
>Reporter: Piotr Nowojski
>Priority: Minor
> Fix For: 1.17.0
>
>






[GitHub] [flink] shuiqiangchen commented on pull request #20745: [FLINK-28988] Don't push above filters down into the right table for temporal join

2022-11-13 Thread GitBox


shuiqiangchen commented on PR #20745:
URL: https://github.com/apache/flink/pull/20745#issuecomment-1313023060

   @lincoln-lil Thank you for having a look at the pr. I would like to finish 
this work.





[GitHub] [flink] jgrier commented on pull request #21278: [FLINK-29962] Exclude jamon 2.3.1 from dependencies

2022-11-13 Thread GitBox


jgrier commented on PR #21278:
URL: https://github.com/apache/flink/pull/21278#issuecomment-1313034406

   @flinkbot run azure
   





[GitHub] [flink] link3280 commented on pull request #21292: [FLINK-28617][SQL Gateway] Support stop job statement in SqlGatewayService

2022-11-13 Thread GitBox


link3280 commented on PR #21292:
URL: https://github.com/apache/flink/pull/21292#issuecomment-1313035136

   The CI failed due to an unrelated Kafka connector test. We may take a look 
at the codes first. cc @fsk119 





[jira] [Commented] (FLINK-29859) TPC-DS end-to-end test with adaptive batch scheduler failed due to non-empty .out files.

2022-11-13 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633475#comment-17633475
 ] 

Leonard Xu commented on FLINK-29859:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43077&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a

> TPC-DS end-to-end test with adaptive batch scheduler failed due to
> non-empty .out files.
> ---
>
> Key: FLINK-29859
> URL: https://issues.apache.org/jira/browse/FLINK-29859
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Leonard Xu
>Priority: Major
>
> Nov 03 02:02:12 [FAIL] 'TPC-DS end-to-end test with adaptive batch scheduler' 
> failed after 21 minutes and 44 seconds! Test exited with exit code 0 but the 
> logs contained errors, exceptions or non-empty .out files 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42766&view=logs&s=ae4f8708-9994-57d3-c2d7-b892156e7812&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a





[GitHub] [flink] jgrier merged pull request #21278: [FLINK-29962] Exclude jamon 2.3.1 from dependencies

2022-11-13 Thread GitBox


jgrier merged PR #21278:
URL: https://github.com/apache/flink/pull/21278





[jira] [Commented] (FLINK-28394) Python py36-cython: InvocationError for command install_command.sh fails with exit code 1

2022-11-13 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633477#comment-17633477
 ] 

Leonard Xu commented on FLINK-28394:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43087&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=c67e71ed-6451-5d26-8920-5a8cf9651901

> Python py36-cython: InvocationError for command install_command.sh fails with 
> exit code 1
> -
>
> Key: FLINK-28394
> URL: https://issues.apache.org/jira/browse/FLINK-28394
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.0, 1.15.3
>Reporter: Martijn Visser
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: stale-assigned, test-stability
>
> {code:java}
> Jul 05 03:47:22 Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
> Jul 05 03:47:32 Using Python version 3.8.13 (default, Mar 28 2022 11:38:47)
> Jul 05 03:47:32 pip_test_code.py success!
> Jul 05 03:47:32 py38-cython finish: run-test  after 1658.14 seconds
> Jul 05 03:47:32 py38-cython start: run-test-post 
> Jul 05 03:47:32 py38-cython finish: run-test-post  after 0.00 seconds
> Jul 05 03:47:32 ___ summary 
> 
> Jul 05 03:47:32 ERROR:   py36-cython: InvocationError for command 
> /__w/3/s/flink-python/dev/install_command.sh --exists-action w 
> .tox/.tmp/package/1/apache-flink-1.15.dev0.zip (exited with code 1)
> Jul 05 03:47:32   py37-cython: commands succeeded
> Jul 05 03:47:32   py38-cython: commands succeeded
> Jul 05 03:47:32 cleanup 
> /__w/3/s/flink-python/.tox/.tmp/package/1/apache-flink-1.15.dev0.zip
> Jul 05 03:47:33 tox checks... [FAILED]
> Jul 05 03:47:33 Process exited with EXIT CODE: 1.
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37604&view=logs&j=bf5e383b-9fd3-5f02-ca1c-8f788e2e76d3&t=85189c57-d8a0-5c9c-b61d-fc05cfac62cf&l=27789





[jira] [Commented] (FLINK-29830) PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar failed

2022-11-13 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633479#comment-17633479
 ] 

Leonard Xu commented on FLINK-29830:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43089&view=logs&j=8eee98ee-a482-5f7c-2c51-b3456453e704&t=da58e781-88fe-508b-b74c-018210e533cc

> PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar failed
> --
>
> Key: FLINK-29830
> URL: https://issues.apache.org/jira/browse/FLINK-29830
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.16.0, 1.17.0, 1.15.3
>Reporter: Martijn Visser
>Assignee: Yufan Sheng
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> Nov 01 01:28:03 [ERROR] Failures: 
> Nov 01 01:28:03 [ERROR]   
> PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar:140 
> Nov 01 01:28:03 Actual and expected should have same size but actual size is:
> Nov 01 01:28:03   0
> Nov 01 01:28:03 while expected size is:
> Nov 01 01:28:03   115
> Nov 01 01:28:03 Actual was:
> Nov 01 01:28:03   []
> Nov 01 01:28:03 Expected was:
> Nov 01 01:28:03   ["AT_LEAST_ONCE-isxrFGAL-0-kO65unDUKX",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-1-4tBNu1UmeR",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-2-9PTnEahlNU",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-3-GjWqEp21yz",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-4-jnbJr9C0w8",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-5-e8Wacz5yDO",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-6-9cW53j3Zcf",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-7-jk8z3m2Aa5",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-8-VU56KmMeiz",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-9-uvMdFxxDAj",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-10-FQyWfwJFbH",
> ...
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42680&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203&l=37544





[jira] [Commented] (FLINK-29427) LookupJoinITCase failed with classloader problem

2022-11-13 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633480#comment-17633480
 ] 

Leonard Xu commented on FLINK-29427:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43089&view=logs&j=de826397-1924-5900-0034-51895f69d4b7&t=f311e913-93a2-5a37-acab-4a63e1328f94

> LookupJoinITCase failed with classloader problem
> 
>
> Key: FLINK-29427
> URL: https://issues.apache.org/jira/browse/FLINK-29427
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: Alexander Smirnov
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-09-27T02:49:20.9501313Z Sep 27 02:49:20 Caused by: 
> org.codehaus.janino.InternalCompilerException: Compiling 
> "KeyProjection$108341": Trying to access closed classloader. Please check if 
> you store classloaders directly or indirectly in static fields. If the 
> stacktrace suggests that the leak occurs in a third party library and cannot 
> be fixed immediately, you can disable this check with the configuration 
> 'classloader.check-leaked-classloader'.
> 2022-09-27T02:49:20.9502654Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.compileUnit(UnitCompiler.java:382)
> 2022-09-27T02:49:20.9503366Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:237)
> 2022-09-27T02:49:20.9504044Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.compileToClassLoader(SimpleCompiler.java:465)
> 2022-09-27T02:49:20.9504704Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:216)
> 2022-09-27T02:49:20.9505341Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:207)
> 2022-09-27T02:49:20.9505965Z Sep 27 02:49:20  at 
> org.codehaus.commons.compiler.Cookable.cook(Cookable.java:80)
> 2022-09-27T02:49:20.9506584Z Sep 27 02:49:20  at 
> org.codehaus.commons.compiler.Cookable.cook(Cookable.java:75)
> 2022-09-27T02:49:20.9507261Z Sep 27 02:49:20  at 
> org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:104)
> 2022-09-27T02:49:20.9507883Z Sep 27 02:49:20  ... 30 more
> 2022-09-27T02:49:20.9509266Z Sep 27 02:49:20 Caused by: 
> java.lang.IllegalStateException: Trying to access closed classloader. Please 
> check if you store classloaders directly or indirectly in static fields. If 
> the stacktrace suggests that the leak occurs in a third party library and 
> cannot be fixed immediately, you can disable this check with the 
> configuration 'classloader.check-leaked-classloader'.
> 2022-09-27T02:49:20.9510835Z Sep 27 02:49:20  at 
> org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:184)
> 2022-09-27T02:49:20.9511760Z Sep 27 02:49:20  at 
> org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.loadClass(FlinkUserCodeClassLoaders.java:192)
> 2022-09-27T02:49:20.9512456Z Sep 27 02:49:20  at 
> java.lang.Class.forName0(Native Method)
> 2022-09-27T02:49:20.9513014Z Sep 27 02:49:20  at 
> java.lang.Class.forName(Class.java:348)
> 2022-09-27T02:49:20.9513649Z Sep 27 02:49:20  at 
> org.codehaus.janino.ClassLoaderIClassLoader.findIClass(ClassLoaderIClassLoader.java:89)
> 2022-09-27T02:49:20.9514339Z Sep 27 02:49:20  at 
> org.codehaus.janino.IClassLoader.loadIClass(IClassLoader.java:312)
> 2022-09-27T02:49:20.9514990Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.findTypeByName(UnitCompiler.java:8556)
> 2022-09-27T02:49:20.9515659Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6749)
> 2022-09-27T02:49:20.9516337Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6594)
> 2022-09-27T02:49:20.9516989Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6573)
> 2022-09-27T02:49:20.9517632Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.access$13900(UnitCompiler.java:215)
> 2022-09-27T02:49:20.9518319Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22$1.visitReferenceType(UnitCompiler.java:6481)
> 2022-09-27T02:49:20.9519018Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22$1.visitReferenceType(UnitCompiler.java:6476)
> 2022-09-27T02:49:20.9519680Z Sep 27 02:49:20  at 
> org.codehaus.janino.Java$ReferenceType.accept(Java.java:3928)
> 2022-09-27T02:49:20.9520386Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22.visitType(UnitCompiler.java:6476)
> 2022-09-27T02:49:20.9521042Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22.visitType(UnitCompiler.java:6469)
> 2022-09-27T02:49:20.9521677Z Sep 27 02:49:20  at 
> org.codehaus.ja

[jira] [Created] (FLINK-30009) OperatorCoordinator.start()'s JavaDoc mismatches its behavior

2022-11-13 Thread Yunfeng Zhou (Jira)
Yunfeng Zhou created FLINK-30009:


 Summary: OperatorCoordinator.start()'s JavaDoc mismatches its 
behavior
 Key: FLINK-30009
 URL: https://issues.apache.org/jira/browse/FLINK-30009
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.16.0
Reporter: Yunfeng Zhou


The following description lies in the JavaDoc of 
{{OperatorCoordinator.start()}}.

{{This method is called once at the beginning, before any other methods.}}

This description is incorrect because the method {{resetToCheckpoint()}} can 
happen before {{start()}} is invoked. For example, 
{{RecreateOnResetOperatorCoordinator.DeferrableCoordinator.resetAndStart()}} 
uses these methods in this way. Thus the JavaDoc of {{OperatorCoordinator}}'s 
methods should be modified to match this behavior.
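The call ordering at issue can be illustrated with a reduced sketch. The `Coordinator` interface and `RestorableCoordinator` class below are hypothetical simplifications, not Flink's actual `OperatorCoordinator` API; they exist only to show that `resetToCheckpoint()` may legitimately precede `start()`:

```java
// A reduced stand-in for the coordinator contract under discussion.
interface Coordinator {
    void start() throws Exception;
    void resetToCheckpoint(long checkpointId, byte[] checkpointData);
}

public class RestorableCoordinator implements Coordinator {
    private byte[] restoredState;

    @Override
    public void start() {
        // start() must tolerate state installed by an earlier
        // resetToCheckpoint() call, since it is not guaranteed to run first.
        if (restoredState != null) {
            System.out.println("starting from " + restoredState.length + " restored bytes");
        }
    }

    @Override
    public void resetToCheckpoint(long checkpointId, byte[] checkpointData) {
        // May be invoked before start(): only record the snapshot here and
        // defer any work that needs started services until start() runs.
        restoredState = checkpointData;
    }

    byte[] restoredState() {
        return restoredState;
    }

    public static void main(String[] args) throws Exception {
        RestorableCoordinator coordinator = new RestorableCoordinator();
        // The ordering that contradicts the current JavaDoc:
        coordinator.resetToCheckpoint(1L, new byte[] {42});
        coordinator.start();
    }
}
```

An implementation written against the current JavaDoc ("called once at the beginning, before any other methods") could wrongly assume in `resetToCheckpoint()` that initialization from `start()` has already happened.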





[jira] [Updated] (FLINK-30009) OperatorCoordinator.start()'s JavaDoc mismatches its behavior

2022-11-13 Thread Yunfeng Zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yunfeng Zhou updated FLINK-30009:
-
Description: 
The following description lies in the JavaDoc of 
{{OperatorCoordinator.start()}}.

{{This method is called once at the beginning, before any other methods.}}

This description is incorrect because the method {{resetToCheckpoint()}} can be 
invoked before {{start()}}. For example, 
{{RecreateOnResetOperatorCoordinator.DeferrableCoordinator.resetAndStart()}} 
uses these methods in this way. Thus the JavaDoc of {{OperatorCoordinator}}'s 
methods should be modified to match this behavior.

  was:
The following description lies in the JavaDoc of 
{{OperatorCoordinator.start()}}.

{{This method is called once at the beginning, before any other methods.}}

This description is incorrect because the method {{resetToCheckpoint()}} can 
happen before {{start()}} is invoked. For example, 
{{RecreateOnResetOperatorCoordinator.DeferrableCoordinator.resetAndStart()}} 
uses these methods in this way. Thus the JavaDoc of {{OperatorCoordinator}}'s 
methods should be modified to match this behavior.


> OperatorCoordinator.start()'s JavaDoc mismatches its behavior
> -
>
> Key: FLINK-30009
> URL: https://issues.apache.org/jira/browse/FLINK-30009
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.16.0
>Reporter: Yunfeng Zhou
>Priority: Major
>
> The following description lies in the JavaDoc of 
> {{OperatorCoordinator.start()}}.
> {{This method is called once at the beginning, before any other methods.}}
> This description is incorrect because the method {{resetToCheckpoint()}} can 
> be invoked before {{start()}}. For example, 
> {{RecreateOnResetOperatorCoordinator.DeferrableCoordinator.resetAndStart()}} 
> uses these methods in this way. Thus the JavaDoc of {{OperatorCoordinator}}'s 
> methods should be modified to match this behavior.





[jira] [Commented] (FLINK-29427) LookupJoinITCase failed with classloader problem

2022-11-13 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633481#comment-17633481
 ] 

Leonard Xu commented on FLINK-29427:


[~fsk119] Could you take a look at this issue?

> LookupJoinITCase failed with classloader problem
> 
>
> Key: FLINK-29427
> URL: https://issues.apache.org/jira/browse/FLINK-29427
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: Alexander Smirnov
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-09-27T02:49:20.9501313Z Sep 27 02:49:20 Caused by: 
> org.codehaus.janino.InternalCompilerException: Compiling 
> "KeyProjection$108341": Trying to access closed classloader. Please check if 
> you store classloaders directly or indirectly in static fields. If the 
> stacktrace suggests that the leak occurs in a third party library and cannot 
> be fixed immediately, you can disable this check with the configuration 
> 'classloader.check-leaked-classloader'.
> 2022-09-27T02:49:20.9502654Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.compileUnit(UnitCompiler.java:382)
> 2022-09-27T02:49:20.9503366Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:237)
> 2022-09-27T02:49:20.9504044Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.compileToClassLoader(SimpleCompiler.java:465)
> 2022-09-27T02:49:20.9504704Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:216)
> 2022-09-27T02:49:20.9505341Z Sep 27 02:49:20  at 
> org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:207)
> 2022-09-27T02:49:20.9505965Z Sep 27 02:49:20  at 
> org.codehaus.commons.compiler.Cookable.cook(Cookable.java:80)
> 2022-09-27T02:49:20.9506584Z Sep 27 02:49:20  at 
> org.codehaus.commons.compiler.Cookable.cook(Cookable.java:75)
> 2022-09-27T02:49:20.9507261Z Sep 27 02:49:20  at 
> org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:104)
> 2022-09-27T02:49:20.9507883Z Sep 27 02:49:20  ... 30 more
> 2022-09-27T02:49:20.9509266Z Sep 27 02:49:20 Caused by: 
> java.lang.IllegalStateException: Trying to access closed classloader. Please 
> check if you store classloaders directly or indirectly in static fields. If 
> the stacktrace suggests that the leak occurs in a third party library and 
> cannot be fixed immediately, you can disable this check with the 
> configuration 'classloader.check-leaked-classloader'.
> 2022-09-27T02:49:20.9510835Z Sep 27 02:49:20  at 
> org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:184)
> 2022-09-27T02:49:20.9511760Z Sep 27 02:49:20  at 
> org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.loadClass(FlinkUserCodeClassLoaders.java:192)
> 2022-09-27T02:49:20.9512456Z Sep 27 02:49:20  at 
> java.lang.Class.forName0(Native Method)
> 2022-09-27T02:49:20.9513014Z Sep 27 02:49:20  at 
> java.lang.Class.forName(Class.java:348)
> 2022-09-27T02:49:20.9513649Z Sep 27 02:49:20  at 
> org.codehaus.janino.ClassLoaderIClassLoader.findIClass(ClassLoaderIClassLoader.java:89)
> 2022-09-27T02:49:20.9514339Z Sep 27 02:49:20  at 
> org.codehaus.janino.IClassLoader.loadIClass(IClassLoader.java:312)
> 2022-09-27T02:49:20.9514990Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.findTypeByName(UnitCompiler.java:8556)
> 2022-09-27T02:49:20.9515659Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6749)
> 2022-09-27T02:49:20.9516337Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6594)
> 2022-09-27T02:49:20.9516989Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6573)
> 2022-09-27T02:49:20.9517632Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler.access$13900(UnitCompiler.java:215)
> 2022-09-27T02:49:20.9518319Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22$1.visitReferenceType(UnitCompiler.java:6481)
> 2022-09-27T02:49:20.9519018Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22$1.visitReferenceType(UnitCompiler.java:6476)
> 2022-09-27T02:49:20.9519680Z Sep 27 02:49:20  at 
> org.codehaus.janino.Java$ReferenceType.accept(Java.java:3928)
> 2022-09-27T02:49:20.9520386Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22.visitType(UnitCompiler.java:6476)
> 2022-09-27T02:49:20.9521042Z Sep 27 02:49:20  at 
> org.codehaus.janino.UnitCompiler$22.visitType(UnitCompiler.java:6469)
> 2022-09-27T02:49:20.9521677Z Sep 27 02:49:20  at 
> org.codehaus.janino.Java$ReferenceType.accept(Java.java:3927)
> 2022-09-27T02:49:20.9522299Z Sep 27 02:49:20  at 
> org.codehaus.janino.

[jira] [Commented] (FLINK-29830) PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar failed

2022-11-13 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633482#comment-17633482
 ] 

Leonard Xu commented on FLINK-29830:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43089&view=logs&j=8eee98ee-a482-5f7c-2c51-b3456453e704&t=da58e781-88fe-508b-b74c-018210e533cc

> PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar failed
> --
>
> Key: FLINK-29830
> URL: https://issues.apache.org/jira/browse/FLINK-29830
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.16.0, 1.17.0, 1.15.3
>Reporter: Martijn Visser
>Assignee: Yufan Sheng
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> Nov 01 01:28:03 [ERROR] Failures: 
> Nov 01 01:28:03 [ERROR]   
> PulsarSinkITCase$DeliveryGuaranteeTest.writeRecordsToPulsar:140 
> Nov 01 01:28:03 Actual and expected should have same size but actual size is:
> Nov 01 01:28:03   0
> Nov 01 01:28:03 while expected size is:
> Nov 01 01:28:03   115
> Nov 01 01:28:03 Actual was:
> Nov 01 01:28:03   []
> Nov 01 01:28:03 Expected was:
> Nov 01 01:28:03   ["AT_LEAST_ONCE-isxrFGAL-0-kO65unDUKX",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-1-4tBNu1UmeR",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-2-9PTnEahlNU",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-3-GjWqEp21yz",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-4-jnbJr9C0w8",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-5-e8Wacz5yDO",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-6-9cW53j3Zcf",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-7-jk8z3m2Aa5",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-8-VU56KmMeiz",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-9-uvMdFxxDAj",
> Nov 01 01:28:03 "AT_LEAST_ONCE-isxrFGAL-10-FQyWfwJFbH",
> ...
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42680&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203&l=37544





[jira] [Commented] (FLINK-29755) PulsarSourceUnorderedE2ECase.testSavepoint failed because of missing TaskManagers

2022-11-13 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633483#comment-17633483
 ] 

Leonard Xu commented on FLINK-29755:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43102&view=logs&j=f8e16326-dc75-5ba0-3e95-6178dd55bf6c&t=15c1d318-5ca8-529f-77a2-d113a700ec34

> PulsarSourceUnorderedE2ECase.testSavepoint failed because of missing 
> TaskManagers
> -
>
> Key: FLINK-29755
> URL: https://issues.apache.org/jira/browse/FLINK-29755
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
> Attachments: PulsarSourceUnorderedE2ECase.testSavepoint.log
>
>
> [This 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42325&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=13932]
>  failed (not exclusively) due to a problem with 
> {{PulsarSourceUnorderedE2ECase.testSavepoint}}. It seems like there were no 
> TaskManagers spun up which resulted in the test job failing with a 
> {{NoResourceAvailableException}}.
> {code}
> org.apache.flink.runtime.jobmaster.slotpool.DeclarativeSlotPoolBridge [] - 
> Could not acquire the minimum required resources, failing slot requests. 
> Acquired: []. Current slot pool status: Registered TMs: 0, registered slots: 
> 0 free slots: 0
> {code}
> I didn't raise this one to critical because it looks like a missing 
> TaskManager test environment issue. I attached the e2e test-specific logs to 
> the Jira issue.





[jira] [Commented] (FLINK-18356) flink-table-planner Exit code 137 returned from process

2022-11-13 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633485#comment-17633485
 ] 

Leonard Xu commented on FLINK-18356:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43102&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be

> flink-table-planner Exit code 137 returned from process
> ---
>
> Key: FLINK-18356
> URL: https://issues.apache.org/jira/browse/FLINK-18356
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.12.0, 1.13.0, 1.14.0, 1.15.0
>Reporter: Piotr Nowojski
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Attachments: 1234.jpg, app-profiling_4.gif
>
>
> {noformat}
> = test session starts 
> ==
> platform linux -- Python 3.7.3, pytest-5.4.3, py-1.8.2, pluggy-0.13.1
> cachedir: .tox/py37-cython/.pytest_cache
> rootdir: /__w/3/s/flink-python
> collected 568 items
> pyflink/common/tests/test_configuration.py ..[  
> 1%]
> pyflink/common/tests/test_execution_config.py ...[  
> 5%]
> pyflink/dataset/tests/test_execution_environment.py .
> ##[error]Exit code 137 returned from process: file name '/bin/docker', 
> arguments 'exec -i -u 1002 
> 97fc4e22522d2ced1f4d23096b8929045d083dd0a99a4233a8b20d0489e9bddb 
> /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'.
> Finishing: Test - python
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3729&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3





[jira] [Created] (FLINK-30010) flink-quickstart-test failed due to could not resolve dependencies

2022-11-13 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-30010:
--

 Summary: flink-quickstart-test failed due to could not resolve 
dependencies 
 Key: FLINK-30010
 URL: https://issues.apache.org/jira/browse/FLINK-30010
 Project: Flink
  Issue Type: Bug
  Components: Examples, Tests
Affects Versions: 1.17.0
Reporter: Leonard Xu



{noformat}
Nov 13 02:10:37 [ERROR] Failed to execute goal on project 
flink-quickstart-test: Could not resolve dependencies for project 
org.apache.flink:flink-quickstart-test:jar:1.17-SNAPSHOT: Could not find 
artifact org.apache.flink:flink-quickstart-scala:jar:1.17-SNAPSHOT in 
apache.snapshots (https://repository.apache.org/snapshots) -> [Help 1]
Nov 13 02:10:37 [ERROR] 
Nov 13 02:10:37 [ERROR] To see the full stack trace of the errors, re-run Maven 
with the -e switch.
Nov 13 02:10:37 [ERROR] Re-run Maven using the -X switch to enable full debug 
logging.
Nov 13 02:10:37 [ERROR] 
Nov 13 02:10:37 [ERROR] For more information about the errors and possible 
solutions, please read the following articles:
Nov 13 02:10:37 [ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
Nov 13 02:10:37 [ERROR] 
Nov 13 02:10:37 [ERROR] After correcting the problems, you can resume the build 
with the command
Nov 13 02:10:37 [ERROR]   mvn  -rf :flink-quickstart-test
Nov 13 02:10:38 Process exited with EXIT CODE: 1.
Nov 13 02:10:38 Trying to KILL watchdog (293).
/__w/1/s/tools/ci/watchdog.sh: line 100:   293 Terminated  watchdog
Nov 13 02:10:38 
==
Nov 13 02:10:38 Compilation failure detected, skipping test execution.
Nov 13 02:10:38 
==
{noformat}



https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43102&view=logs&j=298e20ef-7951-5965-0e79-ea664ddc435e&t=d4c90338-c843-57b0-3232-10ae74f00347&l=18363



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29755) PulsarSourceUnorderedE2ECase.testSavepoint failed because of missing TaskManagers

2022-11-13 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633487#comment-17633487
 ] 

Leonard Xu commented on FLINK-29755:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43104&view=logs&j=f8e16326-dc75-5ba0-3e95-6178dd55bf6c&t=15c1d318-5ca8-529f-77a2-d113a700ec34

> PulsarSourceUnorderedE2ECase.testSavepoint failed because of missing 
> TaskManagers
> -
>
> Key: FLINK-29755
> URL: https://issues.apache.org/jira/browse/FLINK-29755
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
> Attachments: PulsarSourceUnorderedE2ECase.testSavepoint.log
>
>
> [This 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=42325&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=160c9ae5-96fd-516e-1c91-deb81f59292a&l=13932]
>  failed (not exclusively) due to a problem with 
> {{PulsarSourceUnorderedE2ECase.testSavepoint}}. It seems like there were no 
> TaskManagers spun up which resulted in the test job failing with a 
> {{NoResourceAvailableException}}.
> {code}
> org.apache.flink.runtime.jobmaster.slotpool.DeclarativeSlotPoolBridge [] - 
> Could not acquire the minimum required resources, failing slot requests. 
> Acquired: []. Current slot pool status: Registered TMs: 0, registered slots: 
> 0 free slots: 0
> {code}
> I didn't raise this one to critical because it looks like a missing 
> TaskManager test environment issue. I attached the e2e test-specific logs to 
> the Jira issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-28729) flink hive catalog don't support jdk11

2022-11-13 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-28729.

Resolution: Not A Bug

> flink hive catalog don't support jdk11
> --
>
> Key: FLINK-28729
> URL: https://issues.apache.org/jira/browse/FLINK-28729
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.15.1
>Reporter: jeff-zou
>Priority: Major
>
> when I upgraded the JDK to 11, I got the following error:
> {code:java}
> 
>  org.apache.flink
>  flink-sql-connector-hive-3.1.2_2.12
>  1.15.1
>   {code}
> {code:java}
> // error
> Caused by: java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient
>     at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1654)
>     at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:80)
>     at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:130)
>     at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:115)
>     ... 84 more
> Caused by: java.lang.reflect.InvocationTargetException
>     at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
>     at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
>     at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1652)
>     ... 87 more
> Caused by: MetaException(message:Got exception: java.lang.ClassCastException 
> class [Ljava.lang.Object; cannot be cast to class [Ljava.net.URI; 
> ([Ljava.lang.Object; and [Ljava.net.URI; are in module java.base of loader 
> 'bootstrap'))
>     at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1342)
>     at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:278)
>     at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:210)
>     ... 92 more
> Process finished with exit code -1
>  {code}
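The root-cause cast in the stack trace can be reproduced in isolation. The following standalone sketch (not Flink/Hive code; the URI value is made up) just demonstrates that an {{Object[]}} is never castable to {{URI[]}} at runtime, which is the {{ClassCastException}} the {{MetaException}} above wraps:

```java
import java.net.URI;

public class ArrayCastDemo {
    public static void main(String[] args) {
        // An Object[] holding URIs is still an Object[] at runtime...
        Object[] raw = new Object[] {URI.create("thrift://metastore:9083")};
        try {
            // ...so this downcast fails, mirroring the exception in the report.
            URI[] uris = (URI[]) raw;
            System.out.println(uris.length);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the report");
        }
    }
}
```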



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-26827) FlinkSQL和hive整合报错 (FlinkSQL/Hive integration error)

2022-11-13 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu closed FLINK-26827.
--
Resolution: Invalid

Please update the title to English; feel free to reopen once updated.

> FlinkSQL和hive整合报错 (FlinkSQL/Hive integration error)
> -
>
> Key: FLINK-26827
> URL: https://issues.apache.org/jira/browse/FLINK-26827
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.13.3
> Environment: CDH 6.2.1, Linux, JDK 1.8
>Reporter: zhushifeng
>Priority: Major
> Attachments: image-2022-03-24-09-33-31-786.png
>
>
> Topic : FlinkSQL combine with Hive
>  
> *step1:*
> environment:
> HIVE2.1  
> Flink1.13.3
> FlinkCDC2.1 
> CDH6.2.1
>  
> *step2:*
> when I do the following thing I come across some problems. For example,
> copy the following jar to /flink-1.13.3/lib/
> // Flink's Hive connector
>        flink-connector-hive_2.11-1.13.3.jar
>        // Hive dependencies
>        hive-exec-2.1.0.jar. ==    hive-exec-2.1.1-cdh6.2.1.jar
>        // add antlr-runtime if you need to use hive dialect
>        antlr-runtime-3.5.2.jar
> !image-2022-03-24-09-33-31-786.png!
>  
> *step3:* restart the Flink Cluster
>  # ./start-cluster.sh 
>  # Starting cluster.
>  # Starting standalonesession daemon on host xuehai-cm.
>  # Starting taskexecutor daemon on host xuehai-cm.
>  # Starting taskexecutor daemon on host xuehai-nn.
>  # Starting taskexecutor daemon on host xuehai-dn.
>  
> *step4:*
> CREATE CATALOG myhive WITH (
>     'type' = 'hive',
>     'default-database' = 'default',
>     'hive-conf-dir' = '/etc/hive/conf'
> );
> -- set the HiveCatalog as the current catalog of the session
> USE CATALOG myhive;
>  
> *step5:* use the hive
> Flink SQL> select * from  rptdata.basic_xhsys_user ;
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>         at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
>         at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
> Caused by: java.lang.ExceptionInInitializerError
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:348)
>         at 
> org.apache.flink.connectors.hive.HiveSourceFileEnumerator.createMRSplits(HiveSourceFileEnumerator.java:94)
>         at 
> org.apache.flink.connectors.hive.HiveSourceFileEnumerator.createInputSplits(HiveSourceFileEnumerator.java:71)
>         at 
> org.apache.flink.connectors.hive.HiveTableSource.lambda$getDataStream$1(HiveTableSource.java:212)
>         at 
> org.apache.flink.connectors.hive.HiveParallelismInference.logRunningTime(HiveParallelismInference.java:107)
>         at 
> org.apache.flink.connectors.hive.HiveParallelismInference.infer(HiveParallelismInference.java:95)
>         at 
> org.apache.flink.connectors.hive.HiveTableSource.getDataStream(HiveTableSource.java:207)
>         at 
> org.apache.flink.connectors.hive.HiveTableSource$1.produceDataStream(HiveTableSource.java:123)
>         at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecTableSourceScan.translateToPlanInternal(CommonExecTableSourceScan.java:96)
>         at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)
>         at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:247)
>         at 
> org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:114)
>         at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)
>         at 
> org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:70)
>         at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>         at scala.collection.Iterator.foreach(Iterator.scala:937)
>         at scala.collection.Iterator.foreach$(Iterator.scala:937)
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>         at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>         at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>         at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>         at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>         at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>         at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>         at 
> org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:69)
>         at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
>         at 
> org.apache.flink.table.api.internal.TableEnvir

[jira] [Commented] (FLINK-28394) Python py36-cython: InvocationError for command install_command.sh fails with exit code 1

2022-11-13 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633491#comment-17633491
 ] 

Leonard Xu commented on FLINK-28394:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43104&view=logs&j=e92ecf6d-e207-5a42-7ff7-528ff0c5b259&t=40fc352e-9b4c-5fd8-363f-628f24b01ec2

> Python py36-cython: InvocationError for command install_command.sh fails with 
> exit code 1
> -
>
> Key: FLINK-28394
> URL: https://issues.apache.org/jira/browse/FLINK-28394
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.0, 1.15.3
>Reporter: Martijn Visser
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: stale-assigned, test-stability
>
> {code:java}
> Jul 05 03:47:22 Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
> Jul 05 03:47:32 Using Python version 3.8.13 (default, Mar 28 2022 11:38:47)
> Jul 05 03:47:32 pip_test_code.py success!
> Jul 05 03:47:32 py38-cython finish: run-test  after 1658.14 seconds
> Jul 05 03:47:32 py38-cython start: run-test-post 
> Jul 05 03:47:32 py38-cython finish: run-test-post  after 0.00 seconds
> Jul 05 03:47:32 ___ summary 
> 
> Jul 05 03:47:32 ERROR:   py36-cython: InvocationError for command 
> /__w/3/s/flink-python/dev/install_command.sh --exists-action w 
> .tox/.tmp/package/1/apache-flink-1.15.dev0.zip (exited with code 1)
> Jul 05 03:47:32   py37-cython: commands succeeded
> Jul 05 03:47:32   py38-cython: commands succeeded
> Jul 05 03:47:32 cleanup 
> /__w/3/s/flink-python/.tox/.tmp/package/1/apache-flink-1.15.dev0.zip
> Jul 05 03:47:33 tox checks... [FAILED]
> Jul 05 03:47:33 Process exited with EXIT CODE: 1.
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37604&view=logs&j=bf5e383b-9fd3-5f02-ca1c-8f788e2e76d3&t=85189c57-d8a0-5c9c-b61d-fc05cfac62cf&l=27789



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30011) HiveCatalogGenericMetadataTest azure CI failed due to catalog does not exist

2022-11-13 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-30011:
--

 Summary: HiveCatalogGenericMetadataTest azure CI failed due to 
catalog does not exist
 Key: FLINK-30011
 URL: https://issues.apache.org/jira/browse/FLINK-30011
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.16.1
Reporter: Leonard Xu



{noformat}

Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testGetPartitionStats:1212 » Catalog 
F...
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_PartitionNotExist:1160 
» Catalog
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_PartitionSpecInvalid_invalidPartitionSpec:1124
 » Catalog
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_PartitionSpecInvalid_sizeNotEqual:1139
 » Catalog
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_TableNotPartitioned:1110
 » Catalog
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testGetTableStats_TableNotExistException:1201
 » Catalog
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testGetTable_TableNotExistException:323 
» Catalog
Nov 13 01:55:18 [ERROR]   HiveCatalogHiveMetadataTest.testHiveStatistics:251 » 
Catalog Failed to create ...
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testListFunctions:749 » Catalog 
Failed...
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testListPartitionPartialSpec:1188 » 
Catalog
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testListTables:498 » Catalog Failed 
to...
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testListView:620 » Catalog Failed to 
c...
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testPartitionExists:1174 » Catalog 
Fai...
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_TableAlreadyExistException:483
 » Catalog
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_TableNotExistException:465
 » Catalog
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_TableNotExistException_ignored:477
 » Catalog
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_nonPartitionedTable:451 
» Catalog
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testRenameView:637 » Catalog Failed 
to...
Nov 13 01:55:18 [ERROR]   
HiveCatalogHiveMetadataTest>CatalogTest.testTableExists:510 » Catalog Failed 
t...
Nov 13 01:55:18 [ERROR]   HiveCatalogHiveMetadataTest.testViewCompatibility:115 
» Catalog Failed to crea...
Nov 13 01:55:18 [INFO] 
Nov 13 01:55:18 [ERROR] Tests run: 361, Failures: 0, Errors: 132, Skipped: 0
{noformat}


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43104&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=d04c9862-880c-52f5-574b-a7a79fef8e0f



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30011) HiveCatalogGenericMetadataTest azure CI failed due to catalog does not exist

2022-11-13 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633492#comment-17633492
 ] 

Leonard Xu commented on FLINK-30011:


[~luoyuxia] Could you take a look at this ticket?

> HiveCatalogGenericMetadataTest azure CI failed due to catalog does not exist
> 
>
> Key: FLINK-30011
> URL: https://issues.apache.org/jira/browse/FLINK-30011
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.1
>Reporter: Leonard Xu
>Priority: Major
>
> {noformat}
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testGetPartitionStats:1212 » Catalog 
> F...
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_PartitionNotExist:1160
>  » Catalog
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_PartitionSpecInvalid_invalidPartitionSpec:1124
>  » Catalog
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_PartitionSpecInvalid_sizeNotEqual:1139
>  » Catalog
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testGetPartition_TableNotPartitioned:1110
>  » Catalog
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testGetTableStats_TableNotExistException:1201
>  » Catalog
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testGetTable_TableNotExistException:323
>  » Catalog
> Nov 13 01:55:18 [ERROR]   HiveCatalogHiveMetadataTest.testHiveStatistics:251 
> » Catalog Failed to create ...
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testListFunctions:749 » Catalog 
> Failed...
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testListPartitionPartialSpec:1188 » 
> Catalog
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testListTables:498 » Catalog Failed 
> to...
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testListView:620 » Catalog Failed to 
> c...
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testPartitionExists:1174 » Catalog 
> Fai...
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_TableAlreadyExistException:483
>  » Catalog
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_TableNotExistException:465
>  » Catalog
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_TableNotExistException_ignored:477
>  » Catalog
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testRenameTable_nonPartitionedTable:451
>  » Catalog
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testRenameView:637 » Catalog Failed 
> to...
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest>CatalogTest.testTableExists:510 » Catalog Failed 
> t...
> Nov 13 01:55:18 [ERROR]   
> HiveCatalogHiveMetadataTest.testViewCompatibility:115 » Catalog Failed to 
> crea...
> Nov 13 01:55:18 [INFO] 
> Nov 13 01:55:18 [ERROR] Tests run: 361, Failures: 0, Errors: 132, Skipped: 0
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43104&view=logs&j=245e1f2e-ba5b-5570-d689-25ae21e5302f&t=d04c9862-880c-52f5-574b-a7a79fef8e0f



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-aws] dannycranmer commented on pull request #20: [FLINK-29444][Connectors/AWS] Syncing parent pom to elasticsearch in prep for release

2022-11-13 Thread GitBox


dannycranmer commented on PR #20:
URL: 
https://github.com/apache/flink-connector-aws/pull/20#issuecomment-1313066278

   Superseded by https://github.com/apache/flink-connector-aws/pull/21


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-aws] dannycranmer closed pull request #20: [FLINK-29444][Connectors/AWS] Syncing parent pom to elasticsearch in prep for release

2022-11-13 Thread GitBox


dannycranmer closed pull request #20: [FLINK-29444][Connectors/AWS] Syncing 
parent pom to elasticsearch in prep for release
URL: https://github.com/apache/flink-connector-aws/pull/20


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-29962) Exclude Jamon 2.3.1

2022-11-13 Thread Jamie Grier (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633495#comment-17633495
 ] 

Jamie Grier commented on FLINK-29962:
-

Merged in Flink master: 9572cf6b287d71ee9c307546d8cd8f8898137bdd

> Exclude Jamon 2.3.1
> ---
>
> Key: FLINK-29962
> URL: https://issues.apache.org/jira/browse/FLINK-29962
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive, Table SQL / Gateway
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Minor
>  Labels: pull-request-available
>
> Hi all,
> My Maven mirror is complaining that the pom for jamon-runtime:2.3.1 is 
> malformed. It looks like it's fixed in jamon-runtime:2.4.1. According to 
> dependency:tree, Flink already has transitive dependencies on both versions, 
> so I'm proposing to just exclude the transitive dependency from the 
> problematic direct dependencies and pin the dependency to 2.4.1.
> I'll send a PR shortly.
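A sketch of the kind of pom change being proposed (the direct dependency shown is a placeholder; the actual Flink modules that pull in jamon-runtime transitively may differ, and the {{org.jamon}} coordinates should be verified against dependency:tree):

```xml
<!-- Exclude the transitive jamon-runtime from the problematic direct dependency... -->
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-exec</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.jamon</groupId>
      <artifactId>jamon-runtime</artifactId>
    </exclusion>
  </exclusions>
</dependency>

<!-- ...and pin it to the fixed version. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.jamon</groupId>
      <artifactId>jamon-runtime</artifactId>
      <version>2.4.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```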



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29962) Exclude Jamon 2.3.1

2022-11-13 Thread Jamie Grier (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jamie Grier updated FLINK-29962:

Fix Version/s: 1.17.0

> Exclude Jamon 2.3.1
> ---
>
> Key: FLINK-29962
> URL: https://issues.apache.org/jira/browse/FLINK-29962
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive, Table SQL / Gateway
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> Hi all,
> My Maven mirror is complaining that the pom for jamon-runtime:2.3.1 is 
> malformed. It looks like it's fixed in jamon-runtime:2.4.1. According to 
> dependency:tree, Flink already has transitive dependencies on both versions, 
> so I'm proposing to just exclude the transitive dependency from the 
> problematic direct dependencies and pin the dependency to 2.4.1.
> I'll send a PR shortly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-29962) Exclude Jamon 2.3.1

2022-11-13 Thread Jamie Grier (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jamie Grier resolved FLINK-29962.
-
Resolution: Fixed

> Exclude Jamon 2.3.1
> ---
>
> Key: FLINK-29962
> URL: https://issues.apache.org/jira/browse/FLINK-29962
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive, Table SQL / Gateway
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> Hi all,
> My Maven mirror is complaining that the pom for jamon-runtime:2.3.1 is 
> malformed. It looks like it's fixed in jamon-runtime:2.4.1. According to 
> dependency:tree, Flink already has transitive dependencies on both versions, 
> so I'm proposing to just exclude the transitive dependency from the 
> problematic direct dependencies and pin the dependency to 2.4.1.
> I'll send a PR shortly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] leonardBang commented on a diff in pull request #21308: [hotfix][docs][table] Fix versioned table example

2022-11-13 Thread GitBox


leonardBang commented on code in PR #21308:
URL: https://github.com/apache/flink/pull/21308#discussion_r1021064692


##
docs/content/docs/dev/table/concepts/versioned_tables.md:
##
@@ -152,9 +153,9 @@ WHERE rownum = 1;
 +(INSERT)09:00:00  Yen102
 +(INSERT)09:00:00  Euro   114
 +(INSERT)09:00:00  USD1
-+(UPDATE_AFTER)  10:45:00  Euro   116
 +(UPDATE_AFTER)  11:15:00  Euro   119
-+(INSERT)11:49:00  Pounds 108
++(INSERT)11:45:00  Pounds 107
++(UPDATE_AFTER)  11:49:00  Pounds 108

Review Comment:
   Why do we need this change here?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-aws] dannycranmer opened a new pull request, #22: [FLINK-29688] Add test to detect changes in DynamoDB model

2022-11-13 Thread GitBox


dannycranmer opened a new pull request, #22:
URL: https://github.com/apache/flink-connector-aws/pull/22

   ## What is the purpose of the change
   
   Add test to detect changes in DynamoDB model
   
   ## Brief change log
   
   * Add test to detect changes in DynamoDB model
   
   ## Verifying this change
   
   Tests pass
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? n/a
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-29688) Build time compatibility check for DynamoDB SDK

2022-11-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29688:
---
Labels: pull-request-available  (was: )

> Build time compatibility check for DynamoDB SDK
> ---
>
> Key: FLINK-29688
> URL: https://issues.apache.org/jira/browse/FLINK-29688
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / DynamoDB
>Reporter: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-2.0.0
>
>
> The DynamoDB connector exposes SDK classes to the end user code, and also is 
> responsible for de/serialization of these classes. Add a build time check to 
> ensure the client model is binary equivalent of a known good version. This 
> will prevent us updating the SDK and unexpectedly breaking the 
> de/serialization.
> We use {{japicmp-maven-plugin}} to do something similar for Flink, we can 
> potentially reuse this.
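As a rough sketch of how japicmp could be wired in for this (the plugin coordinates and goal are real; the SDK artifact and reference version below are placeholders, not the connector's actual configuration):

```xml
<plugin>
  <groupId>com.github.siom79.japicmp</groupId>
  <artifactId>japicmp-maven-plugin</artifactId>
  <configuration>
    <oldVersion>
      <dependency>
        <!-- known-good reference version of the client model (placeholder) -->
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>dynamodb</artifactId>
        <version>2.17.0</version>
      </dependency>
    </oldVersion>
    <parameter>
      <breakBuildOnBinaryIncompatibleModifications>true</breakBuildOnBinaryIncompatibleModifications>
    </parameter>
  </configuration>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals>
        <goal>cmp</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```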



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30001) sql-client.sh start failed

2022-11-13 Thread xiaohang.li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633497#comment-17633497
 ] 

xiaohang.li commented on FLINK-30001:
-

After investigation:
By default, the org.apache.flink.table.planner.loader.PlannerModule module in Flink uses the /tmp 
directory as its temporary working path, and therefore calls Java's java.nio.file.Files class to create 
that directory. However, if /tmp is a symbolic link pointing to 
/mnt/tmp, java.nio.file.Files 
cannot handle it, which leads to the reported error. A temporary-path setting needs to be added in sql-client.sh:
  export JVM_ARGS="-Djava.io.tmpdir=/mnt/tmp"
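A minimal sketch of that workaround, assuming /tmp is a symlink to /mnt/tmp as in the analysis above (the target path is environment-specific):

```shell
# Point the JVM temp dir at the real directory behind the /tmp symlink
# before launching the SQL client, so PlannerModule can create its
# working directory there.
export JVM_ARGS="-Djava.io.tmpdir=/mnt/tmp"
# Then start the client as usual:
# ./bin/sql-client.sh
```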

> sql-client.sh start failed
> --
>
> Key: FLINK-30001
> URL: https://issues.apache.org/jira/browse/FLINK-30001
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.16.0, 1.15.2
>Reporter: xiaohang.li
>Priority: Major
>
> [hadoop@master flink-1.15.0]$ ./bin/sql-client.sh 
> Setting HADOOP_CONF_DIR=/etc/hadoop/conf because no HADOOP_CONF_DIR or 
> HADOOP_CLASSPATH was set.
> Setting HBASE_CONF_DIR=/etc/hbase/conf because no HBASE_CONF_DIR was set.
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>         at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
>         at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
> Caused by: org.apache.flink.table.api.TableException: Could not instantiate 
> the executor. Make sure a planner module is on the classpath
>         at 
> org.apache.flink.table.client.gateway.context.ExecutionContext.lookupExecutor(ExecutionContext.java:163)
>         at 
> org.apache.flink.table.client.gateway.context.ExecutionContext.createTableEnvironment(ExecutionContext.java:111)
>         at 
> org.apache.flink.table.client.gateway.context.ExecutionContext.(ExecutionContext.java:66)
>         at 
> org.apache.flink.table.client.gateway.context.SessionContext.create(SessionContext.java:247)
>         at 
> org.apache.flink.table.client.gateway.local.LocalContextUtils.buildSessionContext(LocalContextUtils.java:87)
>         at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:87)
>         at org.apache.flink.table.client.SqlClient.start(SqlClient.java:88)
>         at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187)
>         ... 1 more
> Caused by: org.apache.flink.table.api.TableException: Unexpected error when 
> trying to load service provider for factories.
>         at 
> org.apache.flink.table.factories.FactoryUtil.lambda$discoverFactories$19(FactoryUtil.java:813)
>         at java.util.ArrayList.forEach(ArrayList.java:1259)
>         at org.apache.flink.table.factories.FactoryUtil.discoverFactories(FactoryUtil.java:799)
>         at org.apache.flink.table.factories.FactoryUtil.discoverFactory(FactoryUtil.java:517)
>         at org.apache.flink.table.client.gateway.context.ExecutionContext.lookupExecutor(ExecutionContext.java:154)
>         ... 8 more
> Caused by: java.util.ServiceConfigurationError: org.apache.flink.table.factories.Factory: Provider org.apache.flink.table.planner.loader.DelegateExecutorFactory could not be instantiated
>         at java.util.ServiceLoader.fail(ServiceLoader.java:232)
>         at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
>         at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
>         at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
>         at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
>         at org.apache.flink.table.factories.ServiceLoaderUtil.load(ServiceLoaderUtil.java:42)
>         at org.apache.flink.table.factories.FactoryUtil.discoverFactories(FactoryUtil.java:798)
>         ... 10 more
> Caused by: java.lang.ExceptionInInitializerError
>         at org.apache.flink.table.planner.loader.PlannerModule.getInstance(PlannerModule.java:135)
>         at org.apache.flink.table.planner.loader.DelegateExecutorFactory.<init>(DelegateExecutorFactory.java:34)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>         at java.lang.Class.newInstance(Class.java:442)
>         at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
>         ... 14 more
> Caused by: org.apache.flink.table.api.TableException: Could not initialize the table planner components loader.
>         at org.apache.flink.table.planner.loader.PlannerModule.<init>(PlannerModule.java:123)
>         at org.apache.flink.tab

[GitHub] [flink-kubernetes-operator] rgsriram commented on a diff in pull request #437: [FLINK-29609] Shut down JM for terminated applications after configured duration

2022-11-13 Thread GitBox


rgsriram commented on code in PR #437:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/437#discussion_r1020282292


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/ApplicationReconciler.java:
##
@@ -318,6 +319,31 @@ private boolean shouldRestartJobBecauseUnhealthy(
         return restartNeeded;
     }
 
+    private boolean cleanupTerminalJmAfterTtl(
+            FlinkDeployment deployment, Configuration observeConfig) {
+        var status = deployment.getStatus();
+        boolean terminal = ReconciliationUtils.isJobInTerminalState(status);
+        boolean jmStillRunning =
+                status.getJobManagerDeploymentStatus()
+                        != JobManagerDeploymentStatus.MISSING;
+
+        if (terminal && jmStillRunning) {
+            var ttl =
+                    observeConfig.get(
+                            KubernetesOperatorConfigOptions.OPERATOR_JM_SHUTDOWN_TTL);
+            boolean ttlPassed =
+                    Instant.now()
+                            .isAfter(
+                                    Instant.ofEpochMilli(
+                                                    Long.parseLong(
+                                                            status.getJobStatus()
+                                                                    .getUpdateTime()))
+                                            .plus(ttl));
+            if (ttlPassed) {
+                LOG.info("Removing JobManager deployment for terminal application.");
+                flinkService.deleteClusterDeployment(
+                        deployment.getMetadata(), status, false);

Review Comment:
   Shouldn't we also delete the HA metadata?
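
   The TTL check in the diff above can be sketched in isolation. This is a hypothetical standalone version (class and method names are mine, not operator code) that makes the `Instant`/`Duration` arithmetic explicit:

   ```java
   import java.time.Duration;
   import java.time.Instant;

   public class TtlCheck {
       // Standalone equivalent of the check in the snippet: returns true once
       // `ttl` has elapsed since the job status update time (epoch millis).
       public static boolean ttlPassed(long updateTimeMillis, Duration ttl, Instant now) {
           return now.isAfter(Instant.ofEpochMilli(updateTimeMillis).plus(ttl));
       }

       public static void main(String[] args) {
           Instant update = Instant.parse("2022-11-13T10:00:00Z");
           // 30 minutes after the update, a 1-hour TTL has not yet passed:
           System.out.println(ttlPassed(update.toEpochMilli(), Duration.ofHours(1),
                   update.plus(Duration.ofMinutes(30)))); // false
           // 2 hours after the update, it has:
           System.out.println(ttlPassed(update.toEpochMilli(), Duration.ofHours(1),
                   update.plus(Duration.ofHours(2)))); // true
       }
   }
   ```

   One subtlety the sketch surfaces: `isAfter` is strict, so the check only triggers once the TTL has strictly elapsed.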



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] link3280 commented on pull request #21292: [FLINK-28617][SQL Gateway] Support stop job statement in SqlGatewayService

2022-11-13 Thread GitBox


link3280 commented on PR #21292:
URL: https://github.com/apache/flink/pull/21292#issuecomment-1313103656

   @flinkbot run azure





[GitHub] [flink] houhang1005 commented on pull request #21268: [FLINK-29952][table-api]Append the detail of the exception when drop tamporary table.

2022-11-13 Thread GitBox


houhang1005 commented on PR #21268:
URL: https://github.com/apache/flink/pull/21268#issuecomment-1313110569

   @flinkbot run azure





[GitHub] [flink] lincoln-lil commented on a diff in pull request #21308: [hotfix][docs][table] Fix versioned table example

2022-11-13 Thread GitBox


lincoln-lil commented on code in PR #21308:
URL: https://github.com/apache/flink/pull/21308#discussion_r1021084345


##
docs/content/docs/dev/table/concepts/versioned_tables.md:
##
@@ -152,9 +153,9 @@ WHERE rownum = 1;
 +(INSERT)09:00:00  Yen102
 +(INSERT)09:00:00  Euro   114
 +(INSERT)09:00:00  USD1
-+(UPDATE_AFTER)  10:45:00  Euro   116
 +(UPDATE_AFTER)  11:15:00  Euro   119
-+(INSERT)11:49:00  Pounds 108
++(INSERT)11:45:00  Pounds 107
++(UPDATE_AFTER)  11:49:00  Pounds 108

Review Comment:
   It's better to add more 'UPDATE_AFTER' lines (not just one) for better understanding.
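
   The deduplication being discussed (keep only the latest row per currency; the first row per key is an INSERT, later ones UPDATE_AFTER) can be replayed outside Flink. A minimal sketch, with hypothetical names and hard-coded events taken from the doc example:

   ```java
   import java.util.LinkedHashMap;
   import java.util.Map;

   public class LatestRate {
       // Replays a change stream and keeps only the latest rate per currency,
       // mirroring the "WHERE rownum = 1" deduplication in the doc example.
       public static Map<String, Integer> dedup(String[][] events) {
           Map<String, Integer> latest = new LinkedHashMap<>();
           for (String[] e : events) {
               // First sighting of a key is an INSERT, later ones UPDATE_AFTER.
               String op = latest.containsKey(e[0]) ? "+(UPDATE_AFTER)" : "+(INSERT)";
               latest.put(e[0], Integer.parseInt(e[1]));
               System.out.println(op + "  " + e[0] + "  " + e[1]);
           }
           return latest;
       }

       public static void main(String[] args) {
           String[][] events = {
               {"Yen", "102"}, {"Euro", "114"}, {"USD", "1"},
               {"Euro", "116"}, {"Euro", "119"},
               {"Pounds", "107"}, {"Pounds", "108"},
           };
           // Only the most recent version per currency survives:
           System.out.println(dedup(events)); // {Yen=102, Euro=119, USD=1, Pounds=108}
       }
   }
   ```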






[GitHub] [flink] lincoln-lil commented on pull request #21308: [hotfix][docs][table] Fix versioned table example

2022-11-13 Thread GitBox


lincoln-lil commented on PR #21308:
URL: https://github.com/apache/flink/pull/21308#issuecomment-1313114452

   @leonardBang thanks for reviewing this! I've updated the description for the 
change.





[GitHub] [flink] lincoln-lil commented on pull request #20745: [FLINK-28988] Don't push above filters down into the right table for temporal join

2022-11-13 Thread GitBox


lincoln-lil commented on PR #20745:
URL: https://github.com/apache/flink/pull/20745#issuecomment-1313117021

   @shuiqiangchen great! Please ping me here once you've fixed the tests and I can help review this PR before merging.





[GitHub] [flink-table-store] wxplovecc commented on a diff in pull request #357: [FLINK-29922] Support create external table for hive catalog

2022-11-13 Thread GitBox


wxplovecc commented on code in PR #357:
URL: https://github.com/apache/flink-table-store/pull/357#discussion_r1021091587


##
flink-table-store-hive/flink-table-store-hive-catalog/src/main/java/org/apache/flink/table/store/hive/HiveCatalog.java:
##
@@ -226,6 +227,13 @@ public void createTable(ObjectPath tablePath, UpdateSchema updateSchema, boolean
 e);
 }
 Table table = newHmsTable(tablePath);
+
+if (hiveConf.getEnum(TABLE_TYPE.key(), TableType.MANAGED_TABLE)

Review Comment:
   done



##
flink-table-store-core/src/main/java/org/apache/flink/table/store/table/TableType.java:
##
@@ -0,0 +1,25 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.table;
+
+/** Enum of catalog table type. */
+public enum TableType {
+MANAGED_TABLE,

Review Comment:
   updated






[jira] [Commented] (FLINK-29913) Shared state would be discarded by mistake when maxConcurrentCheckpoint>1

2022-11-13 Thread Congxian Qiu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17633506#comment-17633506
 ] 

Congxian Qiu commented on FLINK-29913:
--

Sorry for the late reply.

[~Yanfei Lei] for the priority, IMHO, if the user sets {{maxConcurrentCheckpoint > 1 && MAX_RETAINED_CHECKPOINTS > 1}}, then the checkpoints may be broken and cannot be restored from because of the {{FileNotFoundException}}, so I think it deserves the escalated priority.

[~roman] your proposal seems valid from my perspective; maybe changing the logic for generating the registry key (perhaps using the file name in the remote filesystem) is enough to solve the problem here?

Please let me know what you think about this, thanks.
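
A minimal sketch of the collision (plain maps with hypothetical paths, not the actual SharedStateRegistry API) showing why keying shared state by file name alone breaks with concurrent checkpoints, and why keying by the full remote path would not:

```java
import java.util.HashMap;
import java.util.Map;

public class RegistryKeySketch {
    public static void main(String[] args) {
        // Two concurrent checkpoints each upload an SST file with the same
        // name "000042.sst" into their own checkpoint directories.
        String cp1Path = "chk-1/000042.sst";
        String cp2Path = "chk-2/000042.sst";

        // Keying the registry by file name alone: the second registration
        // overwrites the first, so discarding one checkpoint can delete a
        // file the other still references (the FileNotFoundException
        // scenario from this issue).
        Map<String, String> byName = new HashMap<>();
        byName.put("000042.sst", cp1Path);
        byName.put("000042.sst", cp2Path); // cp1's entry is silently replaced
        System.out.println(byName.size()); // 1

        // Keying by the full remote path keeps the two entries distinct:
        Map<String, String> byPath = new HashMap<>();
        byPath.put(cp1Path, cp1Path);
        byPath.put(cp2Path, cp2Path);
        System.out.println(byPath.size()); // 2
    }
}
```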

> Shared state would be discarded by mistake when maxConcurrentCheckpoint>1
> -
>
> Key: FLINK-29913
> URL: https://issues.apache.org/jira/browse/FLINK-29913
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Yanfei Lei
>Priority: Minor
>
> When maxConcurrentCheckpoint>1, the shared state of Incremental rocksdb state 
> backend would be discarded by registering the same name handle. See 
> [https://github.com/apache/flink/pull/21050#discussion_r1011061072]
> cc [~roman] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] leonardBang merged pull request #21308: [hotfix][docs][table] Fix versioned table example

2022-11-13 Thread GitBox


leonardBang merged PR #21308:
URL: https://github.com/apache/flink/pull/21308





[GitHub] [flink] leonardBang commented on pull request #21308: [hotfix][docs][table] Fix versioned table example

2022-11-13 Thread GitBox


leonardBang commented on PR #21308:
URL: https://github.com/apache/flink/pull/21308#issuecomment-1313149921

   Thanks @lincoln-lil for the contribution. Could you backport the fix to the `release-1.15` and `release-1.16` branches?





[GitHub] [flink] 1996fanrui commented on pull request #21304: [FLINK-30003][rpc] Wait the scheduler future is done before check

2022-11-13 Thread GitBox


1996fanrui commented on PR #21304:
URL: https://github.com/apache/flink/pull/21304#issuecomment-1313151567

   Hi @zentol , it's caused by FLINK-29249, please help take a look in your 
free time, thanks~





[GitHub] [flink] 1996fanrui commented on pull request #21193: [hotfix] Add the final and fix typo

2022-11-13 Thread GitBox


1996fanrui commented on PR #21193:
URL: https://github.com/apache/flink/pull/21193#issuecomment-1313152013

   Hi @zentol , please help take a look in your free time, thanks~





[GitHub] [flink] 1996fanrui commented on pull request #21303: [FLINK-30002][checkpoint] Change the alignmentTimeout to alignedCheckpointTimeout

2022-11-13 Thread GitBox


1996fanrui commented on PR #21303:
URL: https://github.com/apache/flink/pull/21303#issuecomment-1313153287

   Hi @pnowojski , please help take a look in your free time, thanks~





[GitHub] [flink] SmirAlex commented on pull request #20919: [FLINK-29405] Fix unstable test InputFormatCacheLoaderTest

2022-11-13 Thread GitBox


SmirAlex commented on PR #20919:
URL: https://github.com/apache/flink/pull/20919#issuecomment-1313157507

   > Waiting forever in production code is super sketchy and should virtually never be done.
   > 
   > The PR is also lacking a sort of problem analysis and explanation for how this fixes the issue.
   
   Hi @zentol, I added a timeout on the wait after the interrupt and updated the PR description to explain the problem and the proposed solution more precisely. Can you check the latest commit, please?





[GitHub] [flink-table-store] SteNicholas commented on a diff in pull request #357: [FLINK-29922] Support create external table for hive catalog

2022-11-13 Thread GitBox


SteNicholas commented on code in PR #357:
URL: https://github.com/apache/flink-table-store/pull/357#discussion_r1021115008


##
flink-table-store-core/src/main/java/org/apache/flink/table/store/table/TableType.java:
##
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.table;
+
+import org.apache.flink.configuration.DescribedEnum;
+import org.apache.flink.configuration.description.InlineElement;
+
+import static org.apache.flink.configuration.description.TextElement.text;
+
+/** Enum of catalog table type. */
+public enum TableType implements DescribedEnum {
+MANAGED("MANAGED_TABLE", "Hive manage the lifecycle of the table."),

Review Comment:
   ```suggestion
    MANAGED("MANAGED_TABLE", "Table Store owns the table where the entire lifecycle of the table data is managed."),
   ```






[GitHub] [flink-table-store] SteNicholas commented on a diff in pull request #357: [FLINK-29922] Support create external table for hive catalog

2022-11-13 Thread GitBox


SteNicholas commented on code in PR #357:
URL: https://github.com/apache/flink-table-store/pull/357#discussion_r1021115008


##
flink-table-store-core/src/main/java/org/apache/flink/table/store/table/TableType.java:
##
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.table;
+
+import org.apache.flink.configuration.DescribedEnum;
+import org.apache.flink.configuration.description.InlineElement;
+
+import static org.apache.flink.configuration.description.TextElement.text;
+
+/** Enum of catalog table type. */
+public enum TableType implements DescribedEnum {
+MANAGED("MANAGED_TABLE", "Hive manage the lifecycle of the table."),

Review Comment:
   ```suggestion
    MANAGED("MANAGED_TABLE", "Table Store owned table where the entire lifecycle of the table data is managed."),
   ```






[GitHub] [flink-table-store] SteNicholas commented on a diff in pull request #357: [FLINK-29922] Support create external table for hive catalog

2022-11-13 Thread GitBox


SteNicholas commented on code in PR #357:
URL: https://github.com/apache/flink-table-store/pull/357#discussion_r1021116470


##
flink-table-store-core/src/main/java/org/apache/flink/table/store/table/TableType.java:
##
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.table;
+
+import org.apache.flink.configuration.DescribedEnum;
+import org.apache.flink.configuration.description.InlineElement;
+
+import static org.apache.flink.configuration.description.TextElement.text;
+
+/** Enum of catalog table type. */
+public enum TableType implements DescribedEnum {
+MANAGED("MANAGED_TABLE", "Hive manage the lifecycle of the table."),
+EXTERNAL("EXTERNAL_TABLE", "Files are already present or in remote locations.");

Review Comment:
   ```suggestion
    EXTERNAL("EXTERNAL_TABLE", "The table where Table Store has loose coupling with the data stored in external locations.");
   ```






[GitHub] [flink] link3280 commented on pull request #21292: [FLINK-28617][SQL Gateway] Support stop job statement in SqlGatewayService

2022-11-13 Thread GitBox


link3280 commented on PR #21292:
URL: https://github.com/apache/flink/pull/21292#issuecomment-1313164112

   @flinkbot run azure





[jira] [Updated] (FLINK-29549) Add Aws Glue Catalog support in Flink

2022-11-13 Thread Samrat Deb (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samrat Deb updated FLINK-29549:
---
Summary: Add Aws Glue Catalog support in Flink  (was: Flink sql to add 
support of using AWS glue as metastore)

> Add Aws Glue Catalog support in Flink
> -
>
> Key: FLINK-29549
> URL: https://issues.apache.org/jira/browse/FLINK-29549
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common, Connectors / Hive
>Reporter: Samrat Deb
>Priority: Major
>
> Currently, the Flink SQL Hive connector supports the Hive metastore only as a hardcoded metastore-uri.
> It would be good if Flink provided a configurable metastore (e.g. AWS Glue).
> This would help many users of Flink who use AWS Glue ([https://docs.aws.amazon.com/glue/latest/dg/start-data-catalog.html]) as their common (unified) catalog to process data.
> cc [~prabhujoseph] 





[GitHub] [flink-ml] yunfengzhou-hub opened a new pull request, #173: [FLINK-29595] Add Estimator and Transformer for ChiSqSelector

2022-11-13 Thread GitBox


yunfengzhou-hub opened a new pull request, #173:
URL: https://github.com/apache/flink-ml/pull/173

   ## What is the purpose of the change
   
   This PR adds the Estimator and Transformer for the Chi-square selector 
algorithm.
   
   ## Brief change log
   
 - Adds Transformer and Estimator implementation of Chi-square selector in 
Java and Python
 - Adds examples and documentation of Chi-square selector
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (docs / JavaDocs)
   





[jira] [Updated] (FLINK-29595) Add Estimator and Transformer for ChiSqSelector

2022-11-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29595:
---
Labels: pull-request-available  (was: )

> Add Estimator and Transformer for ChiSqSelector
> ---
>
> Key: FLINK-29595
> URL: https://issues.apache.org/jira/browse/FLINK-29595
> Project: Flink
>  Issue Type: New Feature
>  Components: Library / Machine Learning
>Affects Versions: ml-2.2.0
>Reporter: Yunfeng Zhou
>Priority: Major
>  Labels: pull-request-available
> Fix For: ml-2.2.0
>
>
> Add the Estimator and Transformer for ChiSqSelector.
> Its function would be at least equivalent to Spark's 
> org.apache.spark.ml.feature.ChiSqSelector. The relevant PR should contain the 
> following components:
>  * Java implementation/test (Must include)
>  * Python implementation/test (Optional)
>  * Markdown document (Optional)





[GitHub] [flink-table-store] zjureel commented on pull request #376: [FLINK-27843] Schema evolution for data file meta

2022-11-13 Thread GitBox


zjureel commented on PR #376:
URL: 
https://github.com/apache/flink-table-store/pull/376#issuecomment-1313180090

   Hi @JingsongLi @tsreaper Can you help to review this PR when you're free? THX





[jira] [Created] (FLINK-30012) A typo in official Table Store document.

2022-11-13 Thread Hang HOU (Jira)
Hang HOU created FLINK-30012:


 Summary: A typo in official Table Store document.
 Key: FLINK-30012
 URL: https://issues.apache.org/jira/browse/FLINK-30012
 Project: Flink
  Issue Type: Improvement
  Components: Table Store
Affects Versions: 1.16.0
 Environment: Flink 1.16.0
Reporter: Hang HOU


Found a typo, "exiting", in the Rescale Bucket document.
[Rescale 
Bucket|https://nightlies.apache.org/flink/flink-table-store-docs-release-0.2/docs/development/rescale-bucket/#rescale-bucket]




