[jira] [Updated] (FLINK-27888) SQL Error in Flink1.15

2022-06-02 Thread songwenbin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

songwenbin updated FLINK-27888:
---
Docs Text: 
In Flink 1.15:
1. Put the attached jar into the directory ./flink-release-1.15/build-target/lib/,
e.g. ./flink-release-1.15/build-target/lib/flink-demo-1.0-SNAPSHOT.jar
2. Start a local Flink cluster:
./flink-release-1.15/build-target/start-cluster.sh
3. Start the SQL client: ./flink-release-1.15/build-target/sql-client.sh
4. Execute the SQL script below, statement by statement, in the SQL client:

CREATE  FUNCTION ddd AS 'org.example.SumHopWindowUpsertVal';


CREATE TABLE monthly_source_5 (
    item         STRING,
    val          DOUBLE,
    ts           BIGINT,
    isEnergy     BIGINT,
    shiftId      INTEGER,
    eventTime    BIGINT,
    event_time   AS TO_TIMESTAMP_LTZ(eventTime, 3),
    process_time AS PROCTIME(),
    projectId    AS 1,
    WATERMARK FOR event_time AS event_time)
WITH (
    'connector' = 'kafka',
    'topic' = 'daily-consumption',
    'properties.bootstrap.servers' = 'KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS',
    'properties.group.id' = 'poem-consumergroup-flink-v2',
    'value.format' = 'json',
    'scan.startup.mode' = 'latest-offset');

CREATE VIEW daily_last_value_with_monthly_frame5 AS
SELECT
item AS tag,
val,
ts,
process_time,
isEnergy,
eventTime,
event_time,
1  AS frame
FROM monthly_source_5
WHERE shiftId = 0;

CREATE VIEW energy_monthly_consumptions5 AS
SELECT
tag,
ts   AS ts,
1  AS monthFrameTs,
val  AS val,
eventTime,
event_time,
isEnergy
FROM daily_last_value_with_monthly_frame5;

CREATE VIEW monthly_consumption_result_10 AS
SELECT
    tag             AS tag,
    monthFrameTs    AS ts,
    ddd(ts, val, monthFrameTs) AS val,
    MAX(eventTime)  AS eventTime,
    1               AS isEnergy
FROM TABLE(
    HOP(TABLE energy_monthly_consumptions5, DESCRIPTOR(event_time),
        INTERVAL '1' DAYS, INTERVAL '31' DAYS))
WHERE isEnergy = 1
GROUP BY tag, monthFrameTs, window_start, window_end;

select * from monthly_consumption_result_10;
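For context on the windowing in the script above: HOP with a 1-day slide and a 31-day size assigns each row to every window of that size whose range covers the row's event time, so a single event lands in up to 31 windows. A minimal sketch of that assignment arithmetic in plain Python (illustrative only, not Flink code; epoch-millisecond timestamps and window-start alignment to the slide are assumptions):

```python
DAY_MS = 24 * 60 * 60 * 1000

def hop_windows(event_ts_ms, slide_ms=1 * DAY_MS, size_ms=31 * DAY_MS):
    """Return the (window_start, window_end) pairs that contain event_ts_ms,
    mirroring HOP(..., INTERVAL '1' DAYS, INTERVAL '31' DAYS) semantics."""
    # The latest window start at or before the event, aligned to the slide.
    start = (event_ts_ms // slide_ms) * slide_ms
    windows = []
    # Walk backwards while the window [start, start + size) still covers the event.
    while start + size_ms > event_ts_ms:
        windows.append((start, start + size_ms))
        start -= slide_ms
    return windows

# An event on day 40 falls into 31 overlapping windows (starts on days 10..40).
wins = hop_windows(40 * DAY_MS + 123)
```

This is also why the GROUP BY above must include window_start and window_end: the same (tag, monthFrameTs) pair aggregates separately in each overlapping window.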

  was:
In Flink 1.15:
1. Put the attached jar into the directory ./flink-release-1.15/build-target/lib/,
e.g. ./flink-release-1.15/build-target/lib/flink-demo-1.0-SNAPSHOT.jar
2. Start a local Flink cluster:
./flink-release-1.15/build-target/start-cluster.sh
3. Start the SQL client: ./flink-release-1.15/build-target/sql-client.sh
4. Execute the SQL script below, statement by statement, in the SQL client:

SET 'execution.runtime-mode' = 'streaming';

CREATE  FUNCTION ddd AS 'org.example.SumHopWindowUpsertVal';


CREATE TABLE monthly_source_5 (
    item         STRING,
    val          DOUBLE,
    ts           BIGINT,
    isEnergy     BIGINT,
    shiftId      INTEGER,
    eventTime    BIGINT,
    event_time   AS TO_TIMESTAMP_LTZ(eventTime, 3),
    process_time AS PROCTIME(),
    projectId    AS 1,
    WATERMARK FOR event_time AS event_time)
WITH (
    'connector' = 'kafka',
    'topic' = 'daily-consumption',
    'properties.bootstrap.servers' = 'KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS',
    'properties.group.id' = 'poem-consumergroup-flink-v2',
    'value.format' = 'json',
    'scan.startup.mode' = 'latest-offset');

CREATE VIEW daily_last_value_with_monthly_frame5 AS
SELECT
item AS tag,
val,
ts,
process_time,
isEnergy,
eventTime,
event_time,
1  AS frame
FROM monthly_source_5
WHERE shiftId = 0;

CREATE VIEW energy_monthly_consumptions5 AS
SELECT
tag,
ts   AS ts,
1  AS monthFrameTs,
val  AS val,
eventTime,
event_time,
isEnergy
FROM daily_last_value_with_monthly_frame5;

CREATE VIEW monthly_consumption_result_10 AS
SELECT
    tag             AS tag,
    monthFrameTs    AS ts,
    ddd(ts, val, monthFrameTs) AS val,
    MAX(eventTime)  AS eventTime,
    1               AS isEnergy
FROM TABLE(
    HOP(TABLE energy_monthly_consumptions5, DESCRIPTOR(event_time),
        INTERVAL '1' DAYS, INTERVAL '31' DAYS))
WHERE isEnergy = 1
GROUP BY tag, monthFrameTs, window_start, window_end;

select * from monthly_consumption_result_10;


> SQL Error in Flink1.15
> --
>
> Key: FLINK-27888
> URL: https://issues.apache.org/jira/browse/FLINK-27888
> Project: Flink
>  

[jira] [Created] (FLINK-27889) Error when the LastReconciledSpec is null

2022-06-02 Thread Hector Miuler Malpica Gallegos (Jira)
Hector Miuler Malpica Gallegos created FLINK-27889:
--

 Summary: Error when the LastReconciledSpec is null
 Key: FLINK-27889
 URL: https://issues.apache.org/jira/browse/FLINK-27889
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Reporter: Hector Miuler Malpica Gallegos


My FlinkDeployment was left in an error state when it could not start correctly, producing the following message:

 

{{2022-06-01 04:36:10,070 o.a.f.k.o.r.ReconciliationUtils [WARN 
][flink-wape-02/migration-cosmosdb-wape] Attempt count: 5, last attempt: true}}
{{2022-06-01 04:36:10,072 i.j.o.p.e.ReconciliationDispatcher 
[ERROR][flink-wape-02/migration-cosmosdb-wape] Error during event processing 
ExecutionScope\{ resource id: CustomResourceID{name='migration-cosmosdb-wape', 
namespace='flink-wape-02'}, version: null} failed.}}
{{org.apache.flink.kubernetes.operator.exception.ReconciliationException: 
java.lang.IllegalArgumentException: Only "local" is supported as schema for 
application mode. This assumes that the jar is located in the image, not the 
Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:130)}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:59)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:101)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:76)}}
{{        at 
io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:34)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:75)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:143)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:109)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:74)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:50)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.EventProcessor$ControllerExecution.run(EventProcessor.java:349)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{Caused by: java.lang.IllegalArgumentException: Only "local" is supported as 
schema for application mode. This assumes that the jar is located in the image, 
not the Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar}}
{{        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.lambda$checkJarFileForApplicationMode$2(KubernetesUtils.java:407)}}
{{        at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:73)}}
{{        at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown 
Source)}}
{{        at java.base/java.util.Collections$2.tryAdvance(Unknown Source)}}
{{        at java.base/java.util.Collections$2.forEachRemaining(Unknown 
Source)}}
{{        at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown 
Source)}}
{{        at 
java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)}}
{{        at 
java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown 
Source)}}
{{        at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown 
Source)}}
{{        at java.base/java.util.stream.ReferencePipeline.collect(Unknown 
Source)}}
{{        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.checkJarFileForApplicationMode(KubernetesUtils.java:412)}}
{{        at 
org.apache.flink.kubernetes.KubernetesClusterDescriptor.deployApplicationCluster(KubernetesClusterDescriptor.java:206)}}
{{        at 
org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)}}
{{        at 
org.apache.flink.kubernetes.operator.service.FlinkService.submitApplicationCluster(FlinkService.java:163)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.deployFlinkJob(ApplicationReconciler.java:283)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(ApplicationReconciler.java:83)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(ApplicationReconciler.java:58)}}
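For anyone hitting the same IllegalArgumentException: in application mode the operator requires the job jar to be referenced with the local:// scheme, i.e. the jar must already be baked into the container image rather than shipped from the Flink client. A hypothetical FlinkDeployment fragment illustrating a conforming spec (the image name and jar path are placeholders, not taken from this report):

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: example-deployment        # placeholder name
spec:
  image: my-registry/my-flink-job:latest  # image that already contains the jar
  flinkVersion: v1_15
  job:
    # Must use the local:// scheme: the path is resolved inside the image,
    # not on the machine submitting the deployment.
    jarURI: local:///opt/flink/usrlib/my-job.jar
```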

[jira] [Updated] (FLINK-27889) Error when the LastReconciledSpec is null

2022-06-02 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27889:
---
Description: 
My FlinkDeployment was left in an error state when it could not start correctly, producing the following message:

{{2022-06-01 04:36:10,070 o.a.f.k.o.r.ReconciliationUtils [WARN ][flink-02/cosmosdb] Attempt count: 5, last attempt: true}}
{{2022-06-01 04:36:10,072 i.j.o.p.e.ReconciliationDispatcher [ERROR][flink-02/cosmosdb] Error during event processing ExecutionScope{ resource id: CustomResourceID{name='cosmosdb', namespace='flink-02'}, version: null} failed.}}
{{org.apache.flink.kubernetes.operator.exception.ReconciliationException: 
java.lang.IllegalArgumentException: Only "local" is supported as schema for 
application mode. This assumes that the jar is located in the image, not the 
Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:130)}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:59)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:101)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:76)}}
{{        at 
io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:34)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:75)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:143)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:109)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:74)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:50)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.EventProcessor$ControllerExecution.run(EventProcessor.java:349)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{Caused by: java.lang.IllegalArgumentException: Only "local" is supported as 
schema for application mode. This assumes that the jar is located in the image, 
not the Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar}}
{{        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.lambda$checkJarFileForApplicationMode$2(KubernetesUtils.java:407)}}
{{        at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:73)}}
{{        at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown 
Source)}}
{{        at java.base/java.util.Collections$2.tryAdvance(Unknown Source)}}
{{        at java.base/java.util.Collections$2.forEachRemaining(Unknown 
Source)}}
{{        at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown 
Source)}}
{{        at 
java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)}}
{{        at 
java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown 
Source)}}
{{        at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown 
Source)}}
{{        at java.base/java.util.stream.ReferencePipeline.collect(Unknown 
Source)}}
{{        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.checkJarFileForApplicationMode(KubernetesUtils.java:412)}}
{{        at 
org.apache.flink.kubernetes.KubernetesClusterDescriptor.deployApplicationCluster(KubernetesClusterDescriptor.java:206)}}
{{        at 
org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)}}
{{        at 
org.apache.flink.kubernetes.operator.service.FlinkService.submitApplicationCluster(FlinkService.java:163)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.deployFlinkJob(ApplicationReconciler.java:283)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(ApplicationReconciler.java:83)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(ApplicationReconciler.java:58)}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:126)}}
{{        ... 13 more}}
{{2022-06-01 04:36:10,073 i.j.o.p.e.EventProcessor       
[ERROR][flink-02/co

[jira] [Updated] (FLINK-27889) Error when the LastReconciledSpec is null

2022-06-02 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27889:
---
Attachment: scratch_7.json

> Error when the LastReconciledSpec is null
> -
>
> Key: FLINK-27889
> URL: https://issues.apache.org/jira/browse/FLINK-27889
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Hector Miuler Malpica Gallegos
>Priority: Major
> Attachments: scratch_7.json
>
>


[GitHub] [flink-kubernetes-operator] Miuler opened a new pull request, #252: [FLINK-27889] fix: Catch the error when last reconciled spec is null

2022-06-02 Thread GitBox


Miuler opened a new pull request, #252:
URL: https://github.com/apache/flink-kubernetes-operator/pull/252

   
[FLINK-27889] fix: Catch the error when last reconciled spec is null
(commit: https://github.com/apache/flink-kubernetes-operator/commit/1f47f33485f0af8d1367b55b98dccbe12db452c2)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27889) Error when the LastReconciledSpec is null

2022-06-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27889:
---
Labels: pull-request-available  (was: )

> Error when the LastReconciledSpec is null
> -
>
> Key: FLINK-27889
> URL: https://issues.apache.org/jira/browse/FLINK-27889
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Hector Miuler Malpica Gallegos
>Priority: Major
>  Labels: pull-request-available
> Attachments: scratch_7.json
>
>
> {{        at 
> java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown 
> Source)}}
> {{        at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown 
> Source)}}
> {{        at java.base/java.util.stream.ReferencePipeline.collect(Unknown 
> Source)}}
> {{        at 
> org.apache.flink.kubernetes.utils.KubernetesUtils.checkJarFileForApplicationMode(KubernetesUtils.java:412)}}
> {{        at 
> org.apache.flink.kubernetes.KubernetesClusterDescriptor.deployApplicationCluster(KubernetesClusterDescriptor.java:206)}}
> {{        at 
> org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)}}
> {{        at 
> org.apache.flink.kubernetes.operator.service.FlinkService.submitApplicationCluster(FlinkService.java:163)}}
> {{        at 
> org.apache.flink.kubernetes.opera
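The root cause in the stack trace is that application mode requires the job jar to be referenced with the `local://` scheme, i.e. a path inside the container image rather than on the Flink client. A minimal FlinkDeployment sketch that satisfies this check (image, resource values, and names are placeholders; field names follow the flink-kubernetes-operator `v1beta1` CRD):

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: cosmosdb
  namespace: flink-02
spec:
  image: flink:1.15
  flinkVersion: v1_15
  serviceAccount: flink
  jobManager:
    resource:
      memory: "1024m"
      cpu: 1
  taskManager:
    resource:
      memory: "1024m"
      cpu: 1
  job:
    # Must use the local:// scheme -- the jar is resolved inside the image,
    # not on the Flink client.
    jarURI: local:///opt/flink/examples/streaming/WindowJoin.jar
    parallelism: 1
    upgradeMode: stateless
```

The reported `IllegalArgumentException` is thrown when `spec.job.jarURI` uses any other scheme (e.g. `file://` or `http://`).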

[GitHub] [flink-web] MartijnVisser merged pull request #543: [FLINK-27722] Add Slack community information and invite link

2022-06-02 Thread GitBox


MartijnVisser merged PR #543:
URL: https://github.com/apache/flink-web/pull/543


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27722) Slack: add invitation link and information to Flink project website

2022-06-02 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-27722:
---
Summary: Slack: add invitation link and information to Flink project 
website  (was: Slack: set up auto-updated invitation link)

> Slack: add invitation link and information to Flink project website
> ---
>
> Key: FLINK-27722
> URL: https://issues.apache.org/jira/browse/FLINK-27722
> Project: Flink
>  Issue Type: Sub-task
>  Components: Project Website
>Reporter: Xintong Song
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Closed] (FLINK-27722) Slack: add invitation link and information to Flink project website

2022-06-02 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-27722.
--
Resolution: Fixed

Fixed in asf-site: f51d295d3ce670bdd06476a08b0bd5f1372edf60

> Slack: add invitation link and information to Flink project website
> ---
>
> Key: FLINK-27722
> URL: https://issues.apache.org/jira/browse/FLINK-27722
> Project: Flink
>  Issue Type: Sub-task
>  Components: Project Website
>Reporter: Xintong Song
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Closed] (FLINK-27720) Slack: Update the project website regarding communication channels

2022-06-02 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-27720.

Resolution: Done

Subsumed by FLINK-27722

> Slack: Update the project website regarding communication channels
> --
>
> Key: FLINK-27720
> URL: https://issues.apache.org/jira/browse/FLINK-27720
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Xintong Song
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-table-store] JingsongLi commented on a diff in pull request #145: [FLINK-27875] Introduce TableScan and TableRead as an abstraction layer above FileStore for reading RowData

2022-06-02 Thread GitBox


JingsongLi commented on code in PR #145:
URL: https://github.com/apache/flink-table-store/pull/145#discussion_r887643249


##
flink-table-store-core/src/main/java/org/apache/flink/table/store/table/source/TableScan.java:
##
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.table.source;
+
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.table.data.binary.BinaryRowData;
+import org.apache.flink.table.store.file.data.DataFileMeta;
+import org.apache.flink.table.store.file.operation.FileStoreScan;
+import org.apache.flink.table.store.file.predicate.And;
+import org.apache.flink.table.store.file.predicate.CompoundPredicate;
+import org.apache.flink.table.store.file.predicate.LeafPredicate;
+import org.apache.flink.table.store.file.predicate.Predicate;
+import org.apache.flink.table.store.file.predicate.PredicateBuilder;
+import org.apache.flink.table.store.file.schema.Schema;
+import org.apache.flink.table.store.file.utils.FileStorePathFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+
+/** An abstraction layer above {@link FileStoreScan} to provide input split 
generation. */
+public abstract class TableScan {
+
+protected final FileStoreScan scan;
+private final int[] fieldIdxToPartitionIdx;

Review Comment:
   `fieldIdxToPartitionIdx` can be a local variable. Keeping it as a class 
member looks a bit strange; we don't need to reuse it.
   We can just store the schema.



##
flink-table-store-core/src/main/java/org/apache/flink/table/store/table/source/TableScan.java:
##
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.table.source;
+
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.table.data.binary.BinaryRowData;
+import org.apache.flink.table.store.file.data.DataFileMeta;
+import org.apache.flink.table.store.file.operation.FileStoreScan;
+import org.apache.flink.table.store.file.predicate.And;
+import org.apache.flink.table.store.file.predicate.CompoundPredicate;
+import org.apache.flink.table.store.file.predicate.LeafPredicate;
+import org.apache.flink.table.store.file.predicate.Predicate;
+import org.apache.flink.table.store.file.predicate.PredicateBuilder;
+import org.apache.flink.table.store.file.schema.Schema;
+import org.apache.flink.table.store.file.utils.FileStorePathFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+
+/** An abstraction layer above {@link FileStoreScan} to provide input split 
generation. */
+public abstract class TableScan {
+
+protected final FileStoreScan scan;
+private final int[] fieldIdxToPartitionIdx;
+private final FileStorePathFactory pathFactory;
+
+protected TableScan(FileStoreScan scan, Schema schema, 
FileStorePathFactory pathFactory) {
+this.scan = scan;
+List<String> partitionKeys = schema.partitionKeys();
+this.fieldIdxToPartitionIdx =
+schema.fields().stream().mapToInt(f -> 
partitionKeys.indexOf(f.name())).toArray();
+this.pathFactory = pathFactory;
+}
+
+public TableScan withSnapshot(long snapshotId) {
+scan.withSnapshot(snapshotId);
+return this;
+}
+
+public TableScan withFilter(Predicate predicate) {
+List<Predicate> partitionFilters = new ArrayList<>();
+
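The suggestion in the review comment above — computing the field-to-partition mapping locally instead of storing it as a member — can be illustrated with a self-contained sketch (plain `List<String>` stand-ins are used here instead of the Flink `Schema` types, which is an assumption for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class FieldToPartitionIndex {

    // For each field name, returns its position in the partition key list,
    // or -1 if the field is not a partition field. This mirrors the
    // fieldIdxToPartitionIdx computation in the constructor above, but as a
    // local helper rather than a stored class member.
    static int[] mapFieldsToPartitions(List<String> fieldNames, List<String> partitionKeys) {
        return fieldNames.stream().mapToInt(partitionKeys::indexOf).toArray();
    }

    public static void main(String[] args) {
        int[] idx = mapFieldsToPartitions(
                Arrays.asList("dt", "item", "val"), Arrays.asList("dt"));
        System.out.println(Arrays.toString(idx)); // [0, -1, -1]
    }
}
```

Because the array is cheap to derive from the schema, storing only the schema and recomputing the mapping where needed keeps the class surface smaller.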

[GitHub] [flink-table-store] tsreaper commented on a diff in pull request #145: [FLINK-27875] Introduce TableScan and TableRead as an abstraction layer above FileStore for reading RowData

2022-06-02 Thread GitBox


tsreaper commented on code in PR #145:
URL: https://github.com/apache/flink-table-store/pull/145#discussion_r887672733


##
flink-table-store-core/src/main/java/org/apache/flink/table/store/table/AppendOnlyFileStoreTable.java:
##
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.table;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.store.file.FileStore;
+import org.apache.flink.table.store.file.FileStoreImpl;
+import org.apache.flink.table.store.file.FileStoreOptions;
+import org.apache.flink.table.store.file.KeyValue;
+import org.apache.flink.table.store.file.WriteMode;
+import org.apache.flink.table.store.file.operation.FileStoreRead;
+import org.apache.flink.table.store.file.operation.FileStoreScan;
+import org.apache.flink.table.store.file.predicate.Predicate;
+import org.apache.flink.table.store.file.schema.Schema;
+import org.apache.flink.table.store.table.source.TableRead;
+import org.apache.flink.table.store.table.source.TableScan;
+import org.apache.flink.table.store.table.source.ValueContentRowDataIterator;
+import org.apache.flink.table.types.logical.RowType;
+
+import java.util.Iterator;
+
+/** {@link FileStoreTable} for {@link WriteMode#APPEND_ONLY} write mode. */
+public class AppendOnlyFileStoreTable implements FileStoreTable {
+
+private final Schema schema;
+private final FileStoreImpl store;
+
+AppendOnlyFileStoreTable(Schema schema, Configuration conf, String user) {
+this.schema = schema;
+this.store =
+new FileStoreImpl(
+schema.id(),
+new FileStoreOptions(conf),
+WriteMode.APPEND_ONLY,
+user,
+schema.logicalPartitionType(),
+RowType.of(),
+schema.logicalRowType(),
+null);
+}
+
+@Override
+public TableScan newScan(boolean isStreaming) {
+FileStoreScan scan = store.newScan();
+if (isStreaming) {
+scan.withIncremental(true);
+}
+
+return new TableScan(scan, schema, store.pathFactory()) {
+@Override
+protected void withNonPartitionFilter(Predicate predicate) {
+scan.withValueFilter(predicate);
+}
+};
+}
+
+@Override
+public TableRead newRead(boolean isStreaming) {
+FileStoreRead read = store.newRead();
+if (isStreaming) {
+read.withDropDelete(false);

Review Comment:
   True. But append-only tables only accept `INSERT` messages. No `DELETE` 
messages will ever occur.






[jira] [Created] (FLINK-27878) Add Retry Support For Async I/O In DataStream API

2022-06-02 Thread lincoln lee (Jira)
lincoln lee created FLINK-27878:
---

 Summary: Add Retry Support For Async I/O In DataStream API
 Key: FLINK-27878
 URL: https://issues.apache.org/jira/browse/FLINK-27878
 Project: Flink
  Issue Type: New Feature
  Components: API / DataStream
Reporter: lincoln lee
 Fix For: 1.16.0


FLIP-232: Add Retry Support For Async I/O In DataStream API
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=211883963
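For readers unfamiliar with the feature request, the general idea — retrying an asynchronous call a bounded number of times with a backoff between attempts — can be sketched in plain Java. This is a concept illustration only; the actual FLIP-232 API (its retry-strategy and result-predicate interfaces) is defined in the linked design document:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class RetryDemo {

    // Retries an async call up to maxAttempts times, waiting backoffMillis
    // between attempts. Completes the returned future with the first
    // successful result, or with the last error once attempts are exhausted.
    static <T> CompletableFuture<T> withRetry(
            Supplier<CompletableFuture<T>> call, int maxAttempts,
            long backoffMillis, ScheduledExecutorService scheduler) {
        CompletableFuture<T> result = new CompletableFuture<>();
        attempt(call, maxAttempts, backoffMillis, scheduler, result);
        return result;
    }

    private static <T> void attempt(
            Supplier<CompletableFuture<T>> call, int attemptsLeft, long backoff,
            ScheduledExecutorService scheduler, CompletableFuture<T> result) {
        call.get().whenComplete((value, error) -> {
            if (error == null) {
                result.complete(value);
            } else if (attemptsLeft <= 1) {
                result.completeExceptionally(error);
            } else {
                // Schedule the next attempt after the backoff delay.
                scheduler.schedule(
                        () -> attempt(call, attemptsLeft - 1, backoff, scheduler, result),
                        backoff, TimeUnit.MILLISECONDS);
            }
        });
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        int[] calls = {0};
        // A flaky call that fails twice, then succeeds on the third attempt.
        Supplier<CompletableFuture<Integer>> flaky = () -> {
            calls[0]++;
            return calls[0] < 3
                    ? CompletableFuture.<Integer>failedFuture(new RuntimeException("transient"))
                    : CompletableFuture.completedFuture(42);
        };
        int v = withRetry(flaky, 5, 10, scheduler).get(5, TimeUnit.SECONDS);
        System.out.println(v + " after " + calls[0] + " attempts"); // 42 after 3 attempts
        scheduler.shutdown();
    }
}
```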





[GitHub] [flink] luoyuxia opened a new pull request, #19867: Flink 12398

2022-06-02 Thread GitBox


luoyuxia opened a new pull request, #19867:
URL: https://github.com/apache/flink/pull/19867

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   





[GitHub] [flink] flinkbot commented on pull request #19867: Flink 12398

2022-06-02 Thread GitBox


flinkbot commented on PR #19867:
URL: https://github.com/apache/flink/pull/19867#issuecomment-1144597561

   
   ## CI report:
   
   * 47f4d4188ea333adb40576529a5426354f98cbc9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Created] (FLINK-27879) Translate the "Documentation Style Guide" page into Chinese

2022-06-02 Thread zhenyu xing (Jira)
zhenyu xing created FLINK-27879:
---

 Summary: Translate the "Documentation Style Guide" page into 
Chinese
 Key: FLINK-27879
 URL: https://issues.apache.org/jira/browse/FLINK-27879
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: zhenyu xing


The "Documentation Style Guide" need to be translated in Chinese.

Related files:

        modified:   _data/i18n.yml (  add nar-bar translate)
        modified:   contributing/contribute-documentation.zh.md( main page)
        modified:   contributing/docs-style.zh.md ( add missing paragraph)

I will be pleasure to take this PR.

 

Further more:

Currently,the website and document are in two implementations(Jekyll for 
website and Hugo for documents)

, which may need to be clarified in the future.





[jira] [Created] (FLINK-27880) CI runs keep getting stuck

2022-06-02 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27880:


 Summary: CI runs keep getting stuck
 Key: FLINK-27880
 URL: https://issues.apache.org/jira/browse/FLINK-27880
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.16.0
Reporter: Alexander Preuss


I am observing multiple failures where the CI seems to get stuck and then fails 
by running into the timeout. 

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36259&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba&l=8217]

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36209&view=logs&j=a57e0635-3fad-5b08-57c7-a4142d7d6fa9&t=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7&l=8153]

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36189&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=9486]

 





[jira] [Assigned] (FLINK-27878) Add Retry Support For Async I/O In DataStream API

2022-06-02 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-27878:
--

Assignee: lincoln lee

> Add Retry Support For Async I/O In DataStream API
> -
>
> Key: FLINK-27878
> URL: https://issues.apache.org/jira/browse/FLINK-27878
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
> Fix For: 1.16.0
>
>
> FLIP-232: Add Retry Support For Async I/O In DataStream API
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=211883963





[jira] [Updated] (FLINK-27879) Translate the "Documentation Style Guide" page into Chinese

2022-06-02 Thread zhenyu xing (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhenyu xing updated FLINK-27879:

Description: 
The "Documentation Style Guide" need to be translated in Chinese.

Related files:

        modified:   _data/i18n.yml (  add nav-bar translate)
        modified:   contributing/contribute-documentation.zh.md( main page)
        modified:   contributing/docs-style.zh.md ( add missing paragraph)

I will be pleasure to take this PR.

 

Further more:

Currently,the website and document are in two implementations(Jekyll for 
website and Hugo for documents)

, which may need to be clarified in the future.

  was:
The "Documentation Style Guide" need to be translated in Chinese.

Related files:

        modified:   _data/i18n.yml (  add nar-bar translate)
        modified:   contributing/contribute-documentation.zh.md( main page)
        modified:   contributing/docs-style.zh.md ( add missing paragraph)

I will be pleasure to take this PR.

 

Further more:

Currently,the website and document are in two implementations(Jekyll for 
website and Hugo for documents)

, which may need to be clarified in the future.


> Translate the "Documentation Style Guide" page into Chinese
> ---
>
> Key: FLINK-27879
> URL: https://issues.apache.org/jira/browse/FLINK-27879
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: zhenyu xing
>Priority: Minor
>
> The "Documentation Style Guide" need to be translated in Chinese.
> Related files:
>         modified:   _data/i18n.yml (  add nav-bar translate)
>         modified:   contributing/contribute-documentation.zh.md( main page)
>         modified:   contributing/docs-style.zh.md ( add missing paragraph)
> I will be pleasure to take this PR.
>  
> Further more:
> Currently,the website and document are in two implementations(Jekyll for 
> website and Hugo for documents)
> , which may need to be clarified in the future.





[jira] [Assigned] (FLINK-27877) Improve performance of several feature engineering algorithms

2022-06-02 Thread Zhipeng Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhipeng Zhang reassigned FLINK-27877:
-

Assignee: Zhipeng Zhang

> Improve performance of several feature engineering algorithms
> -
>
> Key: FLINK-27877
> URL: https://issues.apache.org/jira/browse/FLINK-27877
> Project: Flink
>  Issue Type: New Feature
>  Components: Library / Machine Learning
>Reporter: Zhipeng Zhang
>Assignee: Zhipeng Zhang
>Priority: Major
>






[GitHub] [flink-table-store] JingsongLi commented on a diff in pull request #145: [FLINK-27875] Introduce TableScan and TableRead as an abstraction layer above FileStore for reading RowData

2022-06-02 Thread GitBox


JingsongLi commented on code in PR #145:
URL: https://github.com/apache/flink-table-store/pull/145#discussion_r887754365


##
flink-table-store-core/src/main/java/org/apache/flink/table/store/table/AppendOnlyFileStoreTable.java:
##
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.store.table;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.store.file.FileStore;
+import org.apache.flink.table.store.file.FileStoreImpl;
+import org.apache.flink.table.store.file.FileStoreOptions;
+import org.apache.flink.table.store.file.KeyValue;
+import org.apache.flink.table.store.file.WriteMode;
+import org.apache.flink.table.store.file.operation.FileStoreRead;
+import org.apache.flink.table.store.file.operation.FileStoreScan;
+import org.apache.flink.table.store.file.predicate.Predicate;
+import org.apache.flink.table.store.file.schema.Schema;
+import org.apache.flink.table.store.table.source.TableRead;
+import org.apache.flink.table.store.table.source.TableScan;
+import org.apache.flink.table.store.table.source.ValueContentRowDataIterator;
+import org.apache.flink.table.types.logical.RowType;
+
+import java.util.Iterator;
+
+/** {@link FileStoreTable} for {@link WriteMode#APPEND_ONLY} write mode. */
+public class AppendOnlyFileStoreTable implements FileStoreTable {
+
+private final Schema schema;
+private final FileStoreImpl store;
+
+AppendOnlyFileStoreTable(Schema schema, Configuration conf, String user) {
+this.schema = schema;
+this.store =
+new FileStoreImpl(
+schema.id(),
+new FileStoreOptions(conf),
+WriteMode.APPEND_ONLY,
+user,
+schema.logicalPartitionType(),
+RowType.of(),
+schema.logicalRowType(),
+null);
+}
+
+@Override
+public TableScan newScan(boolean isStreaming) {
+FileStoreScan scan = store.newScan();
+if (isStreaming) {
+scan.withIncremental(true);
+}
+
+return new TableScan(scan, schema, store.pathFactory()) {
+@Override
+protected void withNonPartitionFilter(Predicate predicate) {
+scan.withValueFilter(predicate);
+}
+};
+}
+
+@Override
+public TableRead newRead(boolean isStreaming) {
+FileStoreRead read = store.newRead();
+if (isStreaming) {
+read.withDropDelete(false);

Review Comment:
   Yes, but there is a check in `FileStoreReadImpl`.






[jira] [Updated] (FLINK-27872) Allow KafkaBuilder to set arbitrary subscribers

2022-06-02 Thread Salva (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Salva updated FLINK-27872:
--
Description: 
Currently, 
[KafkaBuilder|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java]
 has two setters:
 * 
[setTopics|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L157]
 * 
[setTopicPattern|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L168]

which under the hood instantiate the corresponding (concrete) subscribers. This 
covers the most common needs, I agree, but it might fall short in some cases. 
Why not add a more generic setter:
 * {{setSubscriber (???)}}

Otherwise, how can users read from Kafka in combination with custom subscribing 
logic? Without looking much into it, it seems that they basically cannot, at 
least not without replicating some parts of the connector, which seems 
rather inconvenient.

  was:
Currently, 
[KafkaBuilder|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java]
 has two setters:
 * 
[setTopics|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L157]
 * 
[setTopicPattern|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L168]

which under the hood instantiate the corresponding (concrete) subscribers. This 
covers the most common needs, I agree, but it might fall short in some cases. 
Why not add a more generic setter:
 * {{setKafkaSubscriber (???)}}

Otherwise, how can users read from kafka in combination with custom subscribing 
logic? Without looking much into it, it seems that they basically cannot, at 
least without having to replicate some parts of the connector, which seems 
rather inconvenient.


> Allow KafkaBuilder to set arbitrary subscribers
> ---
>
> Key: FLINK-27872
> URL: https://issues.apache.org/jira/browse/FLINK-27872
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Reporter: Salva
>Priority: Major
>
> Currently, 
> [KafkaBuilder|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java]
>  has two setters:
>  * 
> [setTopics|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L157]
>  * 
> [setTopicPattern|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L168]
> which under the hood instantiate the corresponding (concrete) subscribers. 
> This covers the most common needs, I agree, but it might fall short in some 
> cases. Why not add a more generic setter:
>  * {{setSubscriber (???)}}
> Otherwise, how can users read from Kafka in combination with custom 
> subscribing logic? Without looking much into it, it seems that they basically 
> cannot, at least not without replicating some parts of the connector, 
> which seems rather inconvenient.
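The shape of the proposed generic setter can be sketched with a self-contained stand-in. The `Subscriber` interface and `SourceBuilder` class below are hypothetical simplifications for illustration, not the connector's real `KafkaSubscriber`/`KafkaSourceBuilder` classes:

```java
import java.util.Arrays;
import java.util.List;

public class SubscriberDemo {

    // Stand-in for the connector's subscriber abstraction: something that
    // resolves the concrete topics to read. Hypothetical interface.
    interface Subscriber {
        List<String> topics();
    }

    static class SourceBuilder {
        private Subscriber subscriber;

        // Existing-style convenience setter: a fixed topic list, which
        // instantiates a concrete subscriber under the hood.
        SourceBuilder setTopics(String... topics) {
            this.subscriber = () -> Arrays.asList(topics);
            return this;
        }

        // Proposed generic setter: accept any user-supplied subscriber.
        SourceBuilder setSubscriber(Subscriber subscriber) {
            this.subscriber = subscriber;
            return this;
        }

        List<String> build() {
            return subscriber.topics();
        }
    }

    public static void main(String[] args) {
        // Custom subscribing logic, e.g. topics fetched from an external registry.
        Subscriber custom = () -> Arrays.asList("orders-v2", "orders-v3");
        System.out.println(new SourceBuilder().setSubscriber(custom).build());
        // [orders-v2, orders-v3]
    }
}
```

With such a setter, the convenience methods become thin wrappers over the generic one, and users with custom subscription logic no longer need to fork connector internals.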





[jira] [Updated] (FLINK-23721) Flink SQL state TTL has no effect when using non-incremental RocksDBStateBackend

2022-06-02 Thread liangxiaokun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liangxiaokun updated FLINK-23721:
-
Attachment: image-2022-06-02-17-59-38-542.png

> Flink SQL state TTL has no effect when using non-incremental 
> RocksDBStateBackend
> 
>
> Key: FLINK-23721
> URL: https://issues.apache.org/jira/browse/FLINK-23721
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends, Table SQL / Runtime
>Affects Versions: 1.13.0
>Reporter: Q Kang
>Priority: Major
> Attachments: image-2022-06-02-17-59-38-542.png
>
>
> Take the following deduplication SQL program as an example:
> {code:java}
> SET table.exec.state.ttl=30s;
> INSERT INTO tmp.blackhole_order_done_log
> SELECT t.* FROM (
>   SELECT *,ROW_NUMBER() OVER(PARTITION BY suborderid ORDER BY procTime ASC) 
> AS rn
>   FROM rtdw_ods.kafka_order_done_log
> ) AS t WHERE rn = 1;
> {code}
> When using RocksDBStateBackend with incremental checkpoint enabled, the size 
> of deduplication state seems OK.
> FlinkCompactionFilter is also working, as shown by the logs below:
> {code:java}
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Call 
> FlinkCompactionFilter::FilterV2 - Key: , Data: 017B3481026D01, Value 
> type: 0, State type: 1, TTL: 3 ms, timestamp_offset: 0
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Last access timestamp: 
> 1628673475181 ms, ttlWithoutOverflow: 3 ms, Current timestamp: 
> 1628673701905 ms
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Decision: 1
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Call 
> FlinkCompactionFilter::FilterV2 - Key: , Data: 017B3484064901, Value 
> type: 0, State type: 1, TTL: 3 ms, timestamp_offset: 0
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Last access timestamp: 
> 1628673672777 ms, ttlWithoutOverflow: 3 ms, Current timestamp: 
> 1628673701905 ms
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Decision: 0
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Call 
> FlinkCompactionFilter::FilterV2 - Key: , Data: 017B3483341D01, Value 
> type: 0, State type: 1, TTL: 3 ms, timestamp_offset: 0
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Last access timestamp: 
> 1628673618973 ms, ttlWithoutOverflow: 3 ms, Current timestamp: 
> 1628673701905 ms
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Decision: 1
> {code}
> However, after turning off incremental checkpoint, the state TTL seems not 
> effective at all: FlinkCompactionFilter logs are not printed, and the size of 
> deduplication state grows steadily up to several GBs (Kafka traffic is 
> somewhat heavy, at about 1K records per sec).
> In contrast, FsStateBackend always works well.
>  
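The expiry check reflected in the FlinkCompactionFilter log lines above can be paraphrased as: an entry may be dropped during compaction once its last-access timestamp plus the TTL is no longer after the current time. A minimal stand-alone sketch of that decision rule (not Flink's actual native filter code), using the timestamps from the log excerpt:

```java
public class TtlDecisionSketch {
    // 1 = entry expired and may be dropped during compaction, 0 = keep.
    // This mirrors the "Decision" lines in the FlinkCompactionFilter debug log.
    static int decide(long lastAccessMillis, long ttlMillis, long nowMillis) {
        return lastAccessMillis + ttlMillis <= nowMillis ? 1 : 0;
    }

    public static void main(String[] args) {
        long ttl = 30_000; // assumes table.exec.state.ttl=30s from the example job
        // Timestamps taken from the log excerpt above.
        System.out.println(decide(1628673475181L, ttl, 1628673701905L)); // 1
        System.out.println(decide(1628673672777L, ttl, 1628673701905L)); // 0
    }
}
```

With a 30 s TTL, the two sample entries reproduce the logged decisions 1 and 0. The reported bug is that this filter is apparently never invoked when incremental checkpoints are off, so nothing ever expires.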



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27872) Allow KafkaBuilder to set arbitrary subscribers

2022-06-02 Thread Salva (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545390#comment-17545390
 ] 

Salva commented on FLINK-27872:
---

Based on a quick experiment, it seems that using a custom subscriber requires 
creating your own fork of these two classes:
- `KafkaSourceBuilder`
- `KafkaSource`

The rest of the connector seems reusable, but that is still far from ideal 
(although better than what I originally thought).

> Allow KafkaBuilder to set arbitrary subscribers
> ---
>
> Key: FLINK-27872
> URL: https://issues.apache.org/jira/browse/FLINK-27872
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Reporter: Salva
>Priority: Major
>
> Currently, 
> [KafkaBuilder|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java]
>  has two setters:
>  * 
> [setTopics|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L157]
>  * 
> [setTopicPattern|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L168]
> which under the hood instantiate the corresponding (concrete) subscribers. 
> This covers the most common needs, I agree, but it might fall short in some 
> cases. Why not add a more generic setter:
>  * {{setSubscriber (???)}}
> Otherwise, how can users read from kafka in combination with custom 
> subscribing logic? Without looking much into it, it seems that they basically 
> cannot, at least without having to replicate some parts of the connector, 
> which seems rather inconvenient.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] echauchot commented on a diff in pull request #19680: [FLINK-27457] Implement flush() logic in Cassandra output formats

2022-06-02 Thread GitBox


echauchot commented on code in PR #19680:
URL: https://github.com/apache/flink/pull/19680#discussion_r887784305


##
flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/batch/connectors/cassandra/CassandraOutputFormatBase.java:
##
@@ -17,130 +17,160 @@
 
 package org.apache.flink.batch.connectors.cassandra;
 
+import org.apache.flink.annotation.VisibleForTesting;
 import org.apache.flink.api.common.io.RichOutputFormat;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connectors.cassandra.utils.SinkUtils;
 import org.apache.flink.streaming.connectors.cassandra.ClusterBuilder;
 import org.apache.flink.util.Preconditions;
 
 import com.datastax.driver.core.Cluster;
-import com.datastax.driver.core.PreparedStatement;
-import com.datastax.driver.core.ResultSet;
-import com.datastax.driver.core.ResultSetFuture;
 import com.datastax.driver.core.Session;
-import com.google.common.base.Strings;
 import com.google.common.util.concurrent.FutureCallback;
 import com.google.common.util.concurrent.Futures;
+import com.google.common.util.concurrent.ListenableFuture;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
+import java.time.Duration;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.AtomicReference;
 
 /**
- * CassandraOutputFormatBase is the common abstract class for writing into 
Apache Cassandra.
+ * CassandraOutputFormatBase is the common abstract class for writing into 
Apache Cassandra using
+ * output formats.
  *
 * @param <OUT> Type of the elements to write.
  */
-public abstract class CassandraOutputFormatBase<OUT> extends 
RichOutputFormat<OUT> {
+public abstract class CassandraOutputFormatBase<OUT, V> extends 
RichOutputFormat<OUT> {
 private static final Logger LOG = 
LoggerFactory.getLogger(CassandraOutputFormatBase.class);
 
-private final String insertQuery;
 private final ClusterBuilder builder;
+private Semaphore semaphore;
+private Duration maxConcurrentRequestsTimeout = 
Duration.ofMillis(Long.MAX_VALUE);
+private int maxConcurrentRequests = Integer.MAX_VALUE;
 
 private transient Cluster cluster;
-private transient Session session;
-private transient PreparedStatement prepared;
-private transient FutureCallback<ResultSet> callback;
-private transient Throwable exception = null;
-
-public CassandraOutputFormatBase(String insertQuery, ClusterBuilder 
builder) {
-Preconditions.checkArgument(
-!Strings.isNullOrEmpty(insertQuery), "Query cannot be null or 
empty");
+protected transient Session session;
+private transient FutureCallback<V> callback;
+private AtomicReference<Throwable> throwable;
+
+public CassandraOutputFormatBase(
+ClusterBuilder builder,
+int maxConcurrentRequests,
+Duration maxConcurrentRequestsTimeout) {
 Preconditions.checkNotNull(builder, "Builder cannot be null");
-
-this.insertQuery = insertQuery;
 this.builder = builder;
+Preconditions.checkArgument(
+maxConcurrentRequests > 0, "Max concurrent requests is 
expected to be positive");
+this.maxConcurrentRequests = maxConcurrentRequests;
+Preconditions.checkNotNull(
+maxConcurrentRequestsTimeout, "Max concurrent requests timeout 
cannot be null");
+Preconditions.checkArgument(
+!maxConcurrentRequestsTimeout.isNegative(),
+"Max concurrent requests timeout is expected to be positive");
+this.maxConcurrentRequestsTimeout = maxConcurrentRequestsTimeout;
 }
 
+/** Configure the connection to Cassandra. */
 @Override
 public void configure(Configuration parameters) {
 this.cluster = builder.getCluster();
 }
 
-/**
- * Opens a Session to Cassandra and initializes the prepared statement.
- *
- * @param taskNumber The number of the parallel instance.
- * @throws IOException Thrown, if the output could not be opened due to an 
I/O problem.
- */
+/** Opens a Session to Cassandra . */
 @Override
 public void open(int taskNumber, int numTasks) throws IOException {
+throwable = new AtomicReference<>();
+this.semaphore = new Semaphore(maxConcurrentRequests);
 this.session = cluster.connect();

Review Comment:
   I agree it is a pity to use a mock only because of the Cassandra 
session/cluster, and only for one test. I'll try to refactor to avoid mocking 
and code duplication. I'll propose something in an isolated commit for ease of 
review.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-27872) Allow KafkaBuilder to set arbitrary subscribers

2022-06-02 Thread Salva (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545390#comment-17545390
 ] 

Salva edited comment on FLINK-27872 at 6/2/22 10:02 AM:


Based on a quick experiment, it seems that using a custom subscriber requires 
creating your own fork of these two classes:
 - `KafkaSourceBuilder`
 - `KafkaSource`

The rest of the connector seems reusable, but that is still far from ideal.


was (Author: JIRAUSER287051):
Based on a quick experiment, it seems that using a custom subscriber requires 
to create your own fork for these two:
- `KafkaSourceBuilder`

-`KafkaSource`

The rest of the connector seems reusable, but that is still far from ideal 
(although better than what I originally thought).

> Allow KafkaBuilder to set arbitrary subscribers
> ---
>
> Key: FLINK-27872
> URL: https://issues.apache.org/jira/browse/FLINK-27872
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Reporter: Salva
>Priority: Major
>
> Currently, 
> [KafkaBuilder|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java]
>  has two setters:
>  * 
> [setTopics|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L157]
>  * 
> [setTopicPattern|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L168]
> which under the hood instantiate the corresponding (concrete) subscribers. 
> This covers the most common needs, I agree, but it might fall short in some 
> cases. Why not add a more generic setter:
>  * {{setSubscriber (???)}}
> Otherwise, how can users read from kafka in combination with custom 
> subscribing logic? Without looking much into it, it seems that they basically 
> cannot, at least without having to replicate some parts of the connector, 
> which seems rather inconvenient.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] luoyuxia opened a new pull request, #19868: [FLINK-12398][table] Support partitioned view in catalog API

2022-06-02 Thread GitBox


luoyuxia opened a new pull request, #19868:
URL: https://github.com/apache/flink/pull/19868

   
   
   ## What is the purpose of the change
   To support partitioned view in catalog API.
   
   
   ## Brief change log
   - Move the methods `isPartitioned` and `getPartitionKeys` from `CatalogTable` 
to `CatalogBaseTable`.
   
   
   ## Verifying this change
   This change gives `CatalogView` the `isPartitioned` and `getPartitionKeys` 
methods; it comes without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: yes
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector:  no
   
   ## Documentation
   
 - Does this pull request introduce a new feature?  no
 - If yes, how is the feature documented? N/A
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-12398) Support partitioned view in catalog API

2022-06-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-12398:
---
Labels: auto-deprioritized-major auto-deprioritized-minor 
pull-request-available  (was: auto-deprioritized-major auto-deprioritized-minor)

> Support partitioned view in catalog API
> ---
>
> Key: FLINK-12398
> URL: https://issues.apache.org/jira/browse/FLINK-12398
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: BL
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
>
> Partitioned view is not a rare thing in common databases:
> SQL Server: 
> https://docs.microsoft.com/en-us/sql/t-sql/statements/create-view-transact-sql?view=sql-server-2017#partitioned-views
> Oracle: 
> https://docs.oracle.com/cd/A57673_01/DOC/server/doc/A48506/partview.htm
> Hive: https://cwiki.apache.org/confluence/display/Hive/PartitionedViews
> The work may include moving {{isPartitioned()}} and {{getPartitionKeys}} from 
> {{CatalogTable}} to {{CatalogBaseTable}}, among other changes.
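The proposed change is essentially a method pull-up. The sketch below uses simplified stand-ins for the catalog interfaces (the real Flink types carry many more methods); once partition metadata lives on the common supertype, a view can expose it too:

```java
import java.util.Collections;
import java.util.List;

public class PartitionedViewSketch {
    // Simplified stand-in for CatalogBaseTable after the proposed move:
    // partition metadata is declared on the common supertype.
    interface CatalogBaseTable {
        default boolean isPartitioned() {
            return !getPartitionKeys().isEmpty();
        }

        default List<String> getPartitionKeys() {
            return Collections.emptyList();
        }
    }

    interface CatalogTable extends CatalogBaseTable {}

    interface CatalogView extends CatalogBaseTable {}

    // A hypothetical partitioned view, e.g. one backed by a Hive
    // partitioned view with dt/region partition columns.
    static final class PartitionedView implements CatalogView {
        @Override
        public List<String> getPartitionKeys() {
            return List.of("dt", "region");
        }
    }

    public static void main(String[] args) {
        CatalogView view = new PartitionedView();
        System.out.println(view.isPartitioned());    // prints true
        System.out.println(view.getPartitionKeys()); // prints [dt, region]
    }
}
```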



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-23721) Flink SQL state TTL has no effect when using non-incremental RocksDBStateBackend

2022-06-02 Thread liangxiaokun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545393#comment-17545393
 ] 

liangxiaokun commented on FLINK-23721:
--

Hello! I have the same issue as [~lmagics].
In my main class I use two StateTtlConfig instances. One is in the DataStream 
API, where I use MapState. This is the code:

 
{code:java}
MapStateDescriptor<String, String> firstItemState = new 
MapStateDescriptor<>("firstState", String.class, String.class);
StateTtlConfig ttlConfig = StateTtlConfig
.newBuilder(Time.minutes(10))
.setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
.setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
.build();
firstItemState.enableTimeToLive(ttlConfig);
mapState = getRuntimeContext().getMapState(firstItemState); {code}
 

The other one is in the SQL API:

 
{code:java}
StreamTableEnvironment streamTableEnvironment = 
StreamTableEnvironment.create(env, EnvironmentSettings.fromConfiguration(conf));

//ttl
   TableConfig config = streamTableEnvironment.getConfig();
  config.setIdleStateRetention(Duration.ofSeconds(10));{code}
 

 

I used RocksDBStateBackend 

 
{code:java}
env.enableCheckpointing(4 * 6, CheckpointingMode.EXACTLY_ONCE);
env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
env.getCheckpointConfig().setCheckpointTimeout(20 * 6);
env.disableOperatorChaining();
env.setStateBackend(new 
FsStateBackend("hdfs:///user/flink_state/order_detail_realtime"));
{code}
But I found that it didn't seem to work, because in the Flink web UI the 
managed memory is fully used:

 

!image-2022-06-02-17-59-38-542.png!

 

> Flink SQL state TTL has no effect when using non-incremental 
> RocksDBStateBackend
> 
>
> Key: FLINK-23721
> URL: https://issues.apache.org/jira/browse/FLINK-23721
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends, Table SQL / Runtime
>Affects Versions: 1.13.0
>Reporter: Q Kang
>Priority: Major
> Attachments: image-2022-06-02-17-59-38-542.png
>
>
> Take the following deduplication SQL program as an example:
> {code:java}
> SET table.exec.state.ttl=30s;
> INSERT INTO tmp.blackhole_order_done_log
> SELECT t.* FROM (
>   SELECT *,ROW_NUMBER() OVER(PARTITION BY suborderid ORDER BY procTime ASC) 
> AS rn
>   FROM rtdw_ods.kafka_order_done_log
> ) AS t WHERE rn = 1;
> {code}
> When using RocksDBStateBackend with incremental checkpoint enabled, the size 
> of deduplication state seems OK.
> FlinkCompactionFilter is also working, as shown by the logs below:
> {code:java}
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Call 
> FlinkCompactionFilter::FilterV2 - Key: , Data: 017B3481026D01, Value 
> type: 0, State type: 1, TTL: 3 ms, timestamp_offset: 0
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Last access timestamp: 
> 1628673475181 ms, ttlWithoutOverflow: 3 ms, Current timestamp: 
> 1628673701905 ms
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Decision: 1
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Call 
> FlinkCompactionFilter::FilterV2 - Key: , Data: 017B3484064901, Value 
> type: 0, State type: 1, TTL: 3 ms, timestamp_offset: 0
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Last access timestamp: 
> 1628673672777 ms, ttlWithoutOverflow: 3 ms, Current timestamp: 
> 1628673701905 ms
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Decision: 0
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Call 
> FlinkCompactionFilter::FilterV2 - Key: , Data: 017B3483341D01, Value 
> type: 0, State type: 1, TTL: 3 ms, timestamp_offset: 0
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Last access timestamp: 
> 1628673618973 ms, ttlWithoutOverflow: 3 ms, Current timestamp: 
> 1628673701905 ms
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Decision: 1
> {code}
> However, after turning off incremental checkpoint, the state TTL seems not 
> effective at all: FlinkCompactionFilter logs are not printed, and the size of 
> deduplication state grows steadily up to several GBs

[jira] [Comment Edited] (FLINK-23721) Flink SQL state TTL has no effect when using non-incremental RocksDBStateBackend

2022-06-02 Thread liangxiaokun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545393#comment-17545393
 ] 

liangxiaokun edited comment on FLINK-23721 at 6/2/22 10:08 AM:
---

Hello! I have the same issue as [~lmagics].
In my main class I use two StateTtlConfig instances. One is in the DataStream 
API, where I use MapState. This is the code:

 
{code:java}
MapStateDescriptor<String, String> firstItemState = new 
MapStateDescriptor<>("firstState", String.class, String.class);
StateTtlConfig ttlConfig = StateTtlConfig
.newBuilder(Time.minutes(10))
.setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
.setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
.build();
firstItemState.enableTimeToLive(ttlConfig);
mapState = getRuntimeContext().getMapState(firstItemState); {code}
 

The other one is in the SQL API:

 
{code:java}
StreamTableEnvironment streamTableEnvironment = 
StreamTableEnvironment.create(env, EnvironmentSettings.fromConfiguration(conf));

//ttl
   TableConfig config = streamTableEnvironment.getConfig();
  config.setIdleStateRetention(Duration.ofSeconds(10));{code}
 

 

I used RocksDBStateBackend 

 
{code:java}
env.enableCheckpointing(4 * 6, CheckpointingMode.EXACTLY_ONCE);
env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
env.getCheckpointConfig().setCheckpointTimeout(20 * 6);
env.disableOperatorChaining();      
env.setStateBackend(new 
RocksDBStateBackend("hdfs:///user/ucloud_bi/lxk/flink_state/order_detail_realtime"))
{code}
But I found that it didn't seem to work, because in the Flink web UI the 
managed memory is fully used:

 

!image-2022-06-02-17-59-38-542.png!

 


was (Author: JIRAUSER289974):
Hello! I have the same question with [~lmagics]
In my main class,I used two StateTtlConfig,one is  in datastream api which used 
mapstate.This is the code 

 
{code:java}
MapStateDescriptor<String, String> firstItemState = new 
MapStateDescriptor<>("firstState", String.class, String.class);
StateTtlConfig ttlConfig = StateTtlConfig
.newBuilder(Time.minutes(10))
.setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
.setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
.build();
firstItemState.enableTimeToLive(ttlConfig);
mapState = getRuntimeContext().getMapState(firstItemState); {code}
 

another one is  in sql  api ,

 
{code:java}
StreamTableEnvironment streamTableEnvironment = 
StreamTableEnvironment.create(env, EnvironmentSettings.fromConfiguration(conf));

//ttl
   TableConfig config = streamTableEnvironment.getConfig();
  config.setIdleStateRetention(Duration.ofSeconds(10));{code}
 

 

I used RocksDBStateBackend 

 
{code:java}
env.enableCheckpointing(4 * 6, CheckpointingMode.EXACTLY_ONCE);
env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
env.getCheckpointConfig().setCheckpointTimeout(20 * 6);
env.disableOperatorChaining();
env.setStateBackend(new 
FsStateBackend("hdfs:///user/flink_state/order_detail_realtime"));
{code}
But I found it seems didnt work,because in flink web ui,managed memory is full 
used

 

!image-2022-06-02-17-59-38-542.png!

 

> Flink SQL state TTL has no effect when using non-incremental 
> RocksDBStateBackend
> 
>
> Key: FLINK-23721
> URL: https://issues.apache.org/jira/browse/FLINK-23721
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends, Table SQL / Runtime
>Affects Versions: 1.13.0
>Reporter: Q Kang
>Priority: Major
> Attachments: image-2022-06-02-17-59-38-542.png
>
>
> Take the following deduplication SQL program as an example:
> {code:java}
> SET table.exec.state.ttl=30s;
> INSERT INTO tmp.blackhole_order_done_log
> SELECT t.* FROM (
>   SELECT *,ROW_NUMBER() OVER(PARTITION BY suborderid ORDER BY procTime ASC) 
> AS rn
>   FROM rtdw_ods.kafka_order_done_log
> ) AS t WHERE rn = 1;
> {code}
> When using RocksDBStateBackend with incremental checkpoint enabled, the size 
> of deduplication state seems OK.
> FlinkCompactionFilter is also working, as shown by the logs below:
> {code:java}
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Call 
> FlinkCompactionFilter::FilterV2 - Key: , Data: 017B3481026D01, Value 
> type: 0, State type: 1, TTL: 3 ms, timestamp_offset: 0
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 
>[] - RocksDB filter native code log: Last access timestamp: 
> 1628673475181 ms, ttlWithoutOverflow: 3 ms, Current timestamp: 
> 1628673701905 ms
> 21-08-11 17:21:42 DEBUG org.rocksdb.FlinkCompactionFilter 

[GitHub] [flink] flinkbot commented on pull request #19868: [FLINK-12398][table] Support partitioned view in catalog API

2022-06-02 Thread GitBox


flinkbot commented on PR #19868:
URL: https://github.com/apache/flink/pull/19868#issuecomment-1144684661

   
   ## CI report:
   
   * c60564e99454778f2ecf768ac13c0cd02d2a551f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27721) Slack: set up archive

2022-06-02 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545404#comment-17545404
 ] 

Robert Metzger commented on FLINK-27721:


flink-slack.org sounds good! Would be nice if you could register it!

> How does flink-packages backup databases?

It runs a cron job every day creating a dump of the database (just locally). 
Every now and then I downloaded a dump. The problem is that this server is 
owned by Ververica (in Google Cloud), so I don't have access to it anymore.

> Can we use the server which hosts https://flink-packages.org/ ? 

In principle yes. The server has very little resources, but upgrading it to a 
bigger machine should be simple.



> Slack: set up archive
> -
>
> Key: FLINK-27721
> URL: https://issues.apache.org/jira/browse/FLINK-27721
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27881) The key(String) in PulsarMessageBuilder returns null

2022-06-02 Thread Shuiqiang Chen (Jira)
Shuiqiang Chen created FLINK-27881:
--

 Summary: The key(String) in PulsarMessageBuilder returns null
 Key: FLINK-27881
 URL: https://issues.apache.org/jira/browse/FLINK-27881
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Pulsar
Affects Versions: 1.15.0
Reporter: Shuiqiang Chen


PulsarMessageBuilder.key(String) always returns null, which might cause an NPE 
in later calls.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] pengmide opened a new pull request, #19869: [FLINK-21996][python] Support Kinesis connector in Python DataStream API

2022-06-02 Thread GitBox


pengmide opened a new pull request, #19869:
URL: https://github.com/apache/flink/pull/19869

   ## What is the purpose of the change
   
   Support Kinesis connector in Python DataStream API
   
   ## Brief change log
   
 - Introduces the `KinesisConsumer`  and `KinesisSink` to support source 
and sink of Kinesis connector in PyFlink.
 
   ## Verifying this change
   
 - Adds the `FlinkKinesisTest` to verify the `KinesisConsumer` and 
`KinesisSink`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #19869: [FLINK-21996][python] Support Kinesis connector in Python DataStream API

2022-06-02 Thread GitBox


flinkbot commented on PR #19869:
URL: https://github.com/apache/flink/pull/19869#issuecomment-1144750309

   
   ## CI report:
   
   * 1e0b00790175a9fbd9a6ec043b69a30f55c9473d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] echauchot commented on a diff in pull request #19680: [FLINK-27457] Implement flush() logic in Cassandra output formats

2022-06-02 Thread GitBox


echauchot commented on code in PR #19680:
URL: https://github.com/apache/flink/pull/19680#discussion_r887784305


##
flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/batch/connectors/cassandra/CassandraOutputFormatBase.java:
##
@@ -17,130 +17,160 @@
 
 package org.apache.flink.batch.connectors.cassandra;
 
+import org.apache.flink.annotation.VisibleForTesting;
 import org.apache.flink.api.common.io.RichOutputFormat;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connectors.cassandra.utils.SinkUtils;
 import org.apache.flink.streaming.connectors.cassandra.ClusterBuilder;
 import org.apache.flink.util.Preconditions;
 
 import com.datastax.driver.core.Cluster;
-import com.datastax.driver.core.PreparedStatement;
-import com.datastax.driver.core.ResultSet;
-import com.datastax.driver.core.ResultSetFuture;
 import com.datastax.driver.core.Session;
-import com.google.common.base.Strings;
 import com.google.common.util.concurrent.FutureCallback;
 import com.google.common.util.concurrent.Futures;
+import com.google.common.util.concurrent.ListenableFuture;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
+import java.time.Duration;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.AtomicReference;
 
 /**
- * CassandraOutputFormatBase is the common abstract class for writing into 
Apache Cassandra.
+ * CassandraOutputFormatBase is the common abstract class for writing into 
Apache Cassandra using
+ * output formats.
  *
 * @param <OUT> Type of the elements to write.
  */
-public abstract class CassandraOutputFormatBase<OUT> extends 
RichOutputFormat<OUT> {
+public abstract class CassandraOutputFormatBase<OUT, V> extends 
RichOutputFormat<OUT> {
 private static final Logger LOG = 
LoggerFactory.getLogger(CassandraOutputFormatBase.class);
 
-private final String insertQuery;
 private final ClusterBuilder builder;
+private Semaphore semaphore;
+private Duration maxConcurrentRequestsTimeout = 
Duration.ofMillis(Long.MAX_VALUE);
+private int maxConcurrentRequests = Integer.MAX_VALUE;
 
 private transient Cluster cluster;
-private transient Session session;
-private transient PreparedStatement prepared;
-private transient FutureCallback<ResultSet> callback;
-private transient Throwable exception = null;
-
-public CassandraOutputFormatBase(String insertQuery, ClusterBuilder 
builder) {
-Preconditions.checkArgument(
-!Strings.isNullOrEmpty(insertQuery), "Query cannot be null or 
empty");
+protected transient Session session;
+private transient FutureCallback callback;
+private AtomicReference throwable;
+
+public CassandraOutputFormatBase(
+ClusterBuilder builder,
+int maxConcurrentRequests,
+Duration maxConcurrentRequestsTimeout) {
 Preconditions.checkNotNull(builder, "Builder cannot be null");
-
-this.insertQuery = insertQuery;
 this.builder = builder;
+Preconditions.checkArgument(
+maxConcurrentRequests > 0, "Max concurrent requests is 
expected to be positive");
+this.maxConcurrentRequests = maxConcurrentRequests;
+Preconditions.checkNotNull(
+maxConcurrentRequestsTimeout, "Max concurrent requests timeout 
cannot be null");
+Preconditions.checkArgument(
+!maxConcurrentRequestsTimeout.isNegative(),
+"Max concurrent requests timeout is expected to be positive");
+this.maxConcurrentRequestsTimeout = maxConcurrentRequestsTimeout;
 }
 
+/** Configure the connection to Cassandra. */
 @Override
 public void configure(Configuration parameters) {
 this.cluster = builder.getCluster();
 }
 
-/**
- * Opens a Session to Cassandra and initializes the prepared statement.
- *
- * @param taskNumber The number of the parallel instance.
- * @throws IOException Thrown, if the output could not be opened due to an 
I/O problem.
- */
+/** Opens a Session to Cassandra . */
 @Override
 public void open(int taskNumber, int numTasks) throws IOException {
+throwable = new AtomicReference<>();
+this.semaphore = new Semaphore(maxConcurrentRequests);
 this.session = cluster.connect();

Review Comment:
   I agree it is a pity to use a mock only because of the Cassandra 
session/cluster. I'll try to refactor to avoid mocking and code duplication. 
I'll propose something in an isolated commit for ease of review.
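
   For readers following along, the back-pressure pattern this diff introduces 
(a `Semaphore` bounding in-flight requests, with the first async failure captured 
in an `AtomicReference` and rethrown on the next write) can be sketched in 
isolation as follows. This is an illustrative class, not Flink's actual API; it 
uses `CompletableFuture` instead of Guava's `ListenableFuture` to stay 
dependency-free:

```java
import java.time.Duration;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

/** Illustrative sketch of the throttling pattern; not Flink's actual class. */
class ThrottledAsyncWriter {
    private final Semaphore semaphore;                 // bounds in-flight requests
    private final Duration acquireTimeout;             // max wait for a free slot
    private final AtomicReference<Throwable> firstError = new AtomicReference<>();

    ThrottledAsyncWriter(int maxConcurrentRequests, Duration acquireTimeout) {
        this.semaphore = new Semaphore(maxConcurrentRequests);
        this.acquireTimeout = acquireTimeout;
    }

    /** Rethrows any recorded async failure, waits for a permit, then registers the send. */
    void write(CompletableFuture<Void> asyncSend) throws Exception {
        checkError();
        if (!semaphore.tryAcquire(acquireTimeout.toMillis(), TimeUnit.MILLISECONDS)) {
            throw new IllegalStateException("Timed out waiting for an in-flight request slot");
        }
        asyncSend.whenComplete((ignored, error) -> {
            if (error != null) {
                firstError.compareAndSet(null, error); // remember only the first failure
            }
            semaphore.release();                       // free the slot either way
        });
    }

    /** Surfaces the first async failure, if any, on the caller thread. */
    void checkError() throws Exception {
        Throwable t = firstError.get();
        if (t != null) {
            throw new RuntimeException("An async write failed", t);
        }
    }
}
```

   The point of the `AtomicReference` is that the driver's callback runs on an 
I/O thread, so the failure must be handed back to the task thread safely.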



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27872) Allow KafkaBuilder to set arbitrary subscribers

2022-06-02 Thread Salva (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Salva updated FLINK-27872:
--
Description: 
Currently, 
[KafkaSourceBuilder|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java]
 has two setters:
 * 
[setTopics|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L157]
 * 
[setTopicPattern|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L168]

which under the hood instantiate the corresponding (concrete) subscribers. This 
covers the most common needs, I agree, but it might fall short in some cases. 
Why not add a more generic setter:
 * {{setSubscriber (???)}}

Otherwise, how can users read from kafka in combination with custom subscribing 
logic? Without looking much into it, it seems that they basically cannot, at 
least without having to replicate some parts of the connector, which seems 
rather inconvenient.

  was:
Currently, 
[KafkaBuilder|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java]
 has two setters:
 * 
[setTopics|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L157]
 * 
[setTopicPattern|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L168]

which under the hood instantiate the corresponding (concrete) subscribers. This 
covers the most common needs, I agree, but it might fall short in some cases. 
Why not add a more generic setter:
 * {{setSubscriber (???)}}

Otherwise, how can users read from kafka in combination with custom subscribing 
logic? Without looking much into it, it seems that they basically cannot, at 
least without having to replicate some parts of the connector, which seems 
rather inconvenient.


> Allow KafkaBuilder to set arbitrary subscribers
> ---
>
> Key: FLINK-27872
> URL: https://issues.apache.org/jira/browse/FLINK-27872
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Reporter: Salva
>Priority: Major
>
> Currently, 
> [KafkaSourceBuilder|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java]
>  has two setters:
>  * 
> [setTopics|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L157]
>  * 
> [setTopicPattern|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L168]
> which under the hood instantiate the corresponding (concrete) subscribers. 
> This covers the most common needs, I agree, but it might fall short in some 
> cases. Why not add a more generic setter:
>  * {{setSubscriber (???)}}
> Otherwise, how can users read from kafka in combination with custom 
> subscribing logic? Without looking much into it, it seems that they basically 
> cannot, at least without having to replicate some parts of the connector, 
> which seems rather inconvenient.
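
A generic setter of this kind can be sketched with a minimal stand-alone builder 
in the same spirit. Names here are illustrative only; `Subscriber` below is not 
Flink's internal `KafkaSubscriber` interface, and `setSubscriber` does not exist 
in `KafkaSourceBuilder` at the time of writing. The idea is that concrete 
setters like `setTopics` become thin wrappers over one generic `setSubscriber`, 
so users can plug in custom subscription logic without touching the connector:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

/** Minimal stand-in for a subscription strategy; NOT Flink's KafkaSubscriber. */
interface Subscriber {
    Set<String> getSubscribedTopics(List<String> allTopics);
}

/** Sketch of a builder exposing concrete setters on top of one generic setter. */
class SourceBuilder {
    private Subscriber subscriber;

    /** Concrete convenience setter, implemented via the generic one. */
    SourceBuilder setTopics(String... topics) {
        return setSubscriber(all -> new TreeSet<>(Arrays.asList(topics)));
    }

    /** Generic setter: any custom subscription logic plugs in here. */
    SourceBuilder setSubscriber(Subscriber s) {
        if (this.subscriber != null) {
            throw new IllegalStateException("subscriber already set");
        }
        this.subscriber = s;
        return this;
    }

    Subscriber build() {
        return subscriber;
    }
}
```

With this shape, a user who needs, say, prefix-based topic selection passes a 
lambda to `setSubscriber` instead of replicating connector internals.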



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-24229) [FLIP-171] DynamoDB implementation of Async Sink

2022-06-02 Thread Yuri Gusev (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545434#comment-17545434
 ] 

Yuri Gusev commented on FLINK-24229:


Thanks [~dannycranmer].

We fixed most of the comments; there is also a PR to hide the ElementConverter 
for review separately, but we can merge it into this one.

One last missing part is client creation, which I'll try to fix soon. Most of 
it is available for re-review. It would be nice to get it out before too many 
changes pile up again; we re-wrote it a couple of times already :)

> [FLIP-171] DynamoDB implementation of Async Sink
> 
>
> Key: FLINK-24229
> URL: https://issues.apache.org/jira/browse/FLINK-24229
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Zichen Liu
>Assignee: Yuri Gusev
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> h2. Motivation
> *User stories:*
>  As a Flink user, I’d like to use DynamoDB as sink for my data pipeline.
> *Scope:*
>  * Implement an asynchronous sink for DynamoDB by inheriting the 
> AsyncSinkBase class. The implementation can for now reside in its own module 
> in flink-connectors.
>  * Implement an asynchronous sink writer for DynamoDB by extending the 
> AsyncSinkWriter. The implementation must deal with failed requests and retry 
> them using the {{requeueFailedRequestEntry}} method. If possible, the 
> implementation should batch multiple requests (PutRecordsRequestEntry 
> objects) to Firehose for increased throughput. The implemented Sink Writer 
> will be used by the Sink class that will be created as part of this story.
>  * Java / code-level docs.
>  * End to end testing: add tests that hit a real AWS instance. (How to best 
> donate resources to the Flink project to allow this to happen?)
> h2. References
> More details to be found 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]
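
The retry contract described in the scope above (failed entries are re-queued 
for a later attempt rather than failing the job) can be illustrated with a 
generic batching writer. This is a sketch only: it is not Flink's actual 
`AsyncSinkWriter` API, and `BatchClient` stands in for the real DynamoDB client:

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative batching writer that re-queues failed entries; not Flink's AsyncSinkWriter API. */
class BatchingWriter<T> {

    /** Stand-in for the batch client; returns the entries that failed and should be retried. */
    interface BatchClient<T> {
        List<T> putBatch(List<T> batch);
    }

    private final BatchClient<T> client;
    private final List<T> buffer = new ArrayList<>();
    private final int batchSize;

    BatchingWriter(BatchClient<T> client, int batchSize) {
        this.client = client;
        this.batchSize = batchSize;
    }

    /** Buffers an entry and flushes once a full batch is collected. */
    void write(T entry) {
        buffer.add(entry);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    /** Sends the current buffer as one batch; failed entries go back for the next attempt. */
    void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        List<T> batch = new ArrayList<>(buffer);
        buffer.clear();
        buffer.addAll(client.putBatch(batch)); // re-queue failures instead of failing the job
    }

    int pending() {
        return buffer.size();
    }
}
```

Batching multiple entries per request is what gives the throughput win the 
ticket mentions; the re-queue step is what keeps partial batch failures from 
losing data.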





[jira] [Comment Edited] (FLINK-24229) [FLIP-171] DynamoDB implementation of Async Sink

2022-06-02 Thread Yuri Gusev (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545434#comment-17545434
 ] 

Yuri Gusev edited comment on FLINK-24229 at 6/2/22 11:51 AM:
-

Thanks [~dannycranmer].

We fixed most of the comments; there is also a PR to hide the ElementConverter 
for review separately, but we can merge it into this one.

One last missing part is client creation, which I'll try to fix soon. Most of 
it is available for re-review. It would be nice to get it out before too many 
changes pile up again; we re-wrote it a couple of times already (the earlier 
implementation had no shared async connector base class). :)


was (Author: gusev):
Thanks [~dannycranmer]. 

We fixed most of the comments, there is also a PR to hide ElementConverter for 
review separately,but we can merge into this one.

One last missing part is client creation, I'll try to fix it soon. But most of 
it available for re-review. Would be nice to get it out soon before too many 
changes again, we re-wrote it couple of times already :)

> [FLIP-171] DynamoDB implementation of Async Sink
> 
>
> Key: FLINK-24229
> URL: https://issues.apache.org/jira/browse/FLINK-24229
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Zichen Liu
>Assignee: Yuri Gusev
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> h2. Motivation
> *User stories:*
>  As a Flink user, I’d like to use DynamoDB as sink for my data pipeline.
> *Scope:*
>  * Implement an asynchronous sink for DynamoDB by inheriting the 
> AsyncSinkBase class. The implementation can for now reside in its own module 
> in flink-connectors.
>  * Implement an asynchronous sink writer for DynamoDB by extending the 
>  * AsyncSinkWriter. The implementation must deal with failed requests and retry 
> them using the {{requeueFailedRequestEntry}} method. If possible, the 
> implementation should batch multiple requests (PutRecordsRequestEntry 
> objects) to Firehose for increased throughput. The implemented Sink Writer 
> will be used by the Sink class that will be created as part of this story.
>  * Java / code-level docs.
>  * End to end testing: add tests that hit a real AWS instance. (How to best 
> donate resources to the Flink project to allow this to happen?)
> h2. References
> More details to be found 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]





[jira] [Created] (FLINK-27882) [JUnit5 Migration] Module: flink-scala

2022-06-02 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-27882:
---

 Summary: [JUnit5 Migration] Module: flink-scala
 Key: FLINK-27882
 URL: https://issues.apache.org/jira/browse/FLINK-27882
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Reporter: Sergey Nuyanzin








[jira] [Commented] (FLINK-26721) PulsarSourceITCase.testSavepoint failed on azure pipeline

2022-06-02 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545439#comment-17545439
 ] 

Huang Xingbo commented on FLINK-26721:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36262&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203

> PulsarSourceITCase.testSavepoint failed on azure pipeline
> -
>
> Key: FLINK-26721
> URL: https://issues.apache.org/jira/browse/FLINK-26721
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.16.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: build-stability
>
> {code:java}
> Mar 18 05:49:52 [ERROR] Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 315.581 s <<< FAILURE! - in 
> org.apache.flink.connector.pulsar.source.PulsarSourceITCase
> Mar 18 05:49:52 [ERROR] 
> org.apache.flink.connector.pulsar.source.PulsarSourceITCase.testSavepoint(TestEnvironment,
>  DataStreamSourceExternalContext, CheckpointingMode)[1]  Time elapsed: 
> 140.803 s  <<< FAILURE!
> Mar 18 05:49:52 java.lang.AssertionError: 
> Mar 18 05:49:52 
> Mar 18 05:49:52 Expecting
> Mar 18 05:49:52   
> Mar 18 05:49:52 to be completed within 2M.
> Mar 18 05:49:52 
> Mar 18 05:49:52 exception caught while trying to get the future result: 
> java.util.concurrent.TimeoutException
> Mar 18 05:49:52   at 
> java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
> Mar 18 05:49:52   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
> Mar 18 05:49:52   at 
> org.assertj.core.internal.Futures.assertSucceededWithin(Futures.java:109)
> Mar 18 05:49:52   at 
> org.assertj.core.api.AbstractCompletableFutureAssert.internalSucceedsWithin(AbstractCompletableFutureAssert.java:400)
> Mar 18 05:49:52   at 
> org.assertj.core.api.AbstractCompletableFutureAssert.succeedsWithin(AbstractCompletableFutureAssert.java:396)
> Mar 18 05:49:52   at 
> org.apache.flink.connector.testframe.testsuites.SourceTestSuiteBase.checkResultWithSemantic(SourceTestSuiteBase.java:766)
> Mar 18 05:49:52   at 
> org.apache.flink.connector.testframe.testsuites.SourceTestSuiteBase.restartFromSavepoint(SourceTestSuiteBase.java:399)
> Mar 18 05:49:52   at 
> org.apache.flink.connector.testframe.testsuites.SourceTestSuiteBase.testSavepoint(SourceTestSuiteBase.java:241)
> Mar 18 05:49:52   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Mar 18 05:49:52   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 18 05:49:52   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 18 05:49:52   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 18 05:49:52   at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambd

[jira] [Commented] (FLINK-27869) AdaptiveSchedulerITCase. testStopWithSavepointFailOnStop failed with FAIL_ON_CHECKPOINT_COMPLETE

2022-06-02 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545442#comment-17545442
 ] 

Huang Xingbo commented on FLINK-27869:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36262&view=logs&j=a57e0635-3fad-5b08-57c7-a4142d7d6fa9&t=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7

> AdaptiveSchedulerITCase. testStopWithSavepointFailOnStop failed with 
> FAIL_ON_CHECKPOINT_COMPLETE 
> -
>
> Key: FLINK-27869
> URL: https://issues.apache.org/jira/browse/FLINK-27869
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> 8.6667579Z May 31 01:18:28 [ERROR] 
> org.apache.flink.test.scheduling.AdaptiveSchedulerITCase.testStopWithSavepointFailOnStop
>   Time elapsed: 0.235 s  <<< ERROR!
> 2022-05-31T01:18:28.6668521Z May 31 01:18:28 
> org.apache.flink.util.FlinkException: Stop with savepoint operation could not 
> be completed.
> 2022-05-31T01:18:28.6669435Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.StopWithSavepoint.onLeave(StopWithSavepoint.java:125)
> 2022-05-31T01:18:28.6670470Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.transitionToState(AdaptiveScheduler.java:1171)
> 2022-05-31T01:18:28.6671487Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.goToRestarting(AdaptiveScheduler.java:849)
> 2022-05-31T01:18:28.6672481Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.FailureResultUtil.restartOrFail(FailureResultUtil.java:28)
> 2022-05-31T01:18:28.6673459Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.StopWithSavepoint.onFailure(StopWithSavepoint.java:151)
> 2022-05-31T01:18:28.6674502Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.StateWithExecutionGraph.updateTaskExecutionState(StateWithExecutionGraph.java:363)
> 2022-05-31T01:18:28.6675603Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.lambda$updateTaskExecutionState$4(AdaptiveScheduler.java:496)
> 2022-05-31T01:18:28.6677238Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.State.tryCall(State.java:137)
> 2022-05-31T01:18:28.6678573Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.updateTaskExecutionState(AdaptiveScheduler.java:493)
> 2022-05-31T01:18:28.6679517Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
> 2022-05-31T01:18:28.6680538Z May 31 01:18:28  at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:443)
> 2022-05-31T01:18:28.6681304Z May 31 01:18:28  at 
> sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
> 2022-05-31T01:18:28.6682058Z May 31 01:18:28  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-05-31T01:18:28.6682800Z May 31 01:18:28  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-05-31T01:18:28.6683611Z May 31 01:18:28  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:304)
> 2022-05-31T01:18:28.6684559Z May 31 01:18:28  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
> 2022-05-31T01:18:28.6685483Z May 31 01:18:28  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:302)
> 2022-05-31T01:18:28.6686343Z May 31 01:18:28  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
> 2022-05-31T01:18:28.6687224Z May 31 01:18:28  at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
> 2022-05-31T01:18:28.6688093Z May 31 01:18:28  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
> 2022-05-31T01:18:28.6688877Z May 31 01:18:28  at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
> 2022-05-31T01:18:28.6689602Z May 31 01:18:28  at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
> 2022-05-31T01:18:28.6690313Z May 31 01:18:28  at 
> scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
> 2022-05-31T01:18:28.6691045Z May 31 01:18:28  at 
> scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
> 2022-05-31T01:18:28.6691782Z May 31 01:18:28  at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
> 2022-05-31T01:18:28.6692535Z May 31 01:18:28  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> 2022-05

[jira] [Updated] (FLINK-27869) AdaptiveSchedulerITCase. testStopWithSavepointFailOnStop failed with FAIL_ON_CHECKPOINT_COMPLETE

2022-06-02 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-27869:
-
Priority: Critical  (was: Major)

> AdaptiveSchedulerITCase. testStopWithSavepointFailOnStop failed with 
> FAIL_ON_CHECKPOINT_COMPLETE 
> -
>
> Key: FLINK-27869
> URL: https://issues.apache.org/jira/browse/FLINK-27869
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 8.6667579Z May 31 01:18:28 [ERROR] 
> org.apache.flink.test.scheduling.AdaptiveSchedulerITCase.testStopWithSavepointFailOnStop
>   Time elapsed: 0.235 s  <<< ERROR!
> 2022-05-31T01:18:28.6668521Z May 31 01:18:28 
> org.apache.flink.util.FlinkException: Stop with savepoint operation could not 
> be completed.
> 2022-05-31T01:18:28.6669435Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.StopWithSavepoint.onLeave(StopWithSavepoint.java:125)
> 2022-05-31T01:18:28.6670470Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.transitionToState(AdaptiveScheduler.java:1171)
> 2022-05-31T01:18:28.6671487Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.goToRestarting(AdaptiveScheduler.java:849)
> 2022-05-31T01:18:28.6672481Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.FailureResultUtil.restartOrFail(FailureResultUtil.java:28)
> 2022-05-31T01:18:28.6673459Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.StopWithSavepoint.onFailure(StopWithSavepoint.java:151)
> 2022-05-31T01:18:28.6674502Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.StateWithExecutionGraph.updateTaskExecutionState(StateWithExecutionGraph.java:363)
> 2022-05-31T01:18:28.6675603Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.lambda$updateTaskExecutionState$4(AdaptiveScheduler.java:496)
> 2022-05-31T01:18:28.6677238Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.State.tryCall(State.java:137)
> 2022-05-31T01:18:28.6678573Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.updateTaskExecutionState(AdaptiveScheduler.java:493)
> 2022-05-31T01:18:28.6679517Z May 31 01:18:28  at 
> org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
> 2022-05-31T01:18:28.6680538Z May 31 01:18:28  at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:443)
> 2022-05-31T01:18:28.6681304Z May 31 01:18:28  at 
> sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
> 2022-05-31T01:18:28.6682058Z May 31 01:18:28  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-05-31T01:18:28.6682800Z May 31 01:18:28  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-05-31T01:18:28.6683611Z May 31 01:18:28  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:304)
> 2022-05-31T01:18:28.6684559Z May 31 01:18:28  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
> 2022-05-31T01:18:28.6685483Z May 31 01:18:28  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:302)
> 2022-05-31T01:18:28.6686343Z May 31 01:18:28  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
> 2022-05-31T01:18:28.6687224Z May 31 01:18:28  at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
> 2022-05-31T01:18:28.6688093Z May 31 01:18:28  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
> 2022-05-31T01:18:28.6688877Z May 31 01:18:28  at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
> 2022-05-31T01:18:28.6689602Z May 31 01:18:28  at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
> 2022-05-31T01:18:28.6690313Z May 31 01:18:28  at 
> scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
> 2022-05-31T01:18:28.6691045Z May 31 01:18:28  at 
> scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
> 2022-05-31T01:18:28.6691782Z May 31 01:18:28  at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
> 2022-05-31T01:18:28.6692535Z May 31 01:18:28  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> 2022-05-31T01:18:28.6693283Z May 31 01:18:28  at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
> 2022-05-31T01:18:28.6694031Z May 31 01:18:28  at 
> scala.PartialFun

[GitHub] [flink] snuyanzin commented on pull request #18601: [FLINK-25699][table] Use HashMap for MAP value constructors

2022-06-02 Thread GitBox


snuyanzin commented on PR #18601:
URL: https://github.com/apache/flink/pull/18601#issuecomment-1144786922

   @MartijnVisser, @twalthr sorry for the poke.
   Since the PR has already been approved, could you please help with merging 
this?





[jira] [Commented] (FLINK-27791) SlotCountExceedingParallelismTest tests failed with NoResourceAvailableException

2022-06-02 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545443#comment-17545443
 ] 

Huang Xingbo commented on FLINK-27791:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36270&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8

> SlotCountExceedingParallelismTest tests failed with 
> NoResourceAvailableException
> 
>
> Key: FLINK-27791
> URL: https://issues.apache.org/jira/browse/FLINK-27791
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: test-stability
> Attachments: error.log
>
>
> {code:java}
> 2022-05-25T12:16:09.2562348Z May 25 12:16:09 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2022-05-25T12:16:09.2563741Z May 25 12:16:09  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2022-05-25T12:16:09.2565457Z May 25 12:16:09  at 
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:982)
> 2022-05-25T12:16:09.2567245Z May 25 12:16:09  at 
> org.apache.flink.runtime.jobmanager.SlotCountExceedingParallelismTest.submitJobGraphAndWait(SlotCountExceedingParallelismTest.java:101)
> 2022-05-25T12:16:09.2569329Z May 25 12:16:09  at 
> org.apache.flink.runtime.jobmanager.SlotCountExceedingParallelismTest.testNoSlotSharingAndBlockingResultBoth(SlotCountExceedingParallelismTest.java:94)
> 2022-05-25T12:16:09.2571889Z May 25 12:16:09  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-05-25T12:16:09.2573109Z May 25 12:16:09  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-05-25T12:16:09.2574528Z May 25 12:16:09  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-05-25T12:16:09.2575657Z May 25 12:16:09  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-05-25T12:16:09.2581380Z May 25 12:16:09  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-05-25T12:16:09.2582747Z May 25 12:16:09  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-05-25T12:16:09.2583600Z May 25 12:16:09  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-05-25T12:16:09.2584455Z May 25 12:16:09  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-05-25T12:16:09.2585172Z May 25 12:16:09  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-05-25T12:16:09.2585792Z May 25 12:16:09  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-05-25T12:16:09.2586376Z May 25 12:16:09  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-05-25T12:16:09.2587035Z May 25 12:16:09  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-05-25T12:16:09.2587682Z May 25 12:16:09  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-05-25T12:16:09.2588589Z May 25 12:16:09  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-05-25T12:16:09.2589623Z May 25 12:16:09  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-05-25T12:16:09.2590262Z May 25 12:16:09  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-05-25T12:16:09.2590856Z May 25 12:16:09  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-05-25T12:16:09.2591453Z May 25 12:16:09  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-05-25T12:16:09.2592063Z May 25 12:16:09  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-05-25T12:16:09.2592673Z May 25 12:16:09  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-05-25T12:16:09.2593288Z May 25 12:16:09  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-05-25T12:16:09.2595864Z May 25 12:16:09  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2022-05-25T12:16:09.2596521Z May 25 12:16:09  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-05-25T12:16:09.2597144Z May 25 12:16:09  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-05-25T12:16:09.2597703Z May 25 12:16:09  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022-05-25T12:16:09.2598247Z May 25 12:16:09  at 
> org.junit.runner.JUnitCore.run(JUni

[GitHub] [flink] wuchong merged pull request #19746: [FLINK-27630][table-planner] Add maven-source-plugin for table planne…

2022-06-02 Thread GitBox


wuchong merged PR #19746:
URL: https://github.com/apache/flink/pull/19746


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-27630) maven-source-plugin for table planner values connector for debug

2022-06-02 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-27630.
---
Fix Version/s: 1.16.0
 Assignee: jackylau
   Resolution: Fixed

Fixed in master: 5493a386ae4f3cfbcbe1c19444130f788392860c

> maven-source-plugin for table planner values connector for debug
> 
>
> Key: FLINK-27630
> URL: https://issues.apache.org/jira/browse/FLINK-27630
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> Add the test-sources jar to the repository when users use the values 
> connector in flink-table-planner, just like the kafka/pulsar connectors, so 
> that users can find the values connector source code.
> We only need to upload **/factories/** because uploading all of the 
> flink-table-planner test sources would be too large.
>  
> {code:java}
> <!-- just like kafka/pulsar; the XML tags below were stripped by the mail
>      archiver, so this is a best-effort reconstruction of the
>      maven-source-plugin configuration -->
> <plugin>
>    <groupId>org.apache.maven.plugins</groupId>
>    <artifactId>maven-source-plugin</artifactId>
>    <executions>
>       <execution>
>          <id>attach-test-sources</id>
>          <goals>
>             <goal>test-jar-no-fork</goal>
>          </goals>
>          <configuration>
>             <!-- a boolean option whose enclosing tag was stripped: "false" -->
>             <includes>
>                <include>**/factories/**</include>
>                <include>META-INF/LICENSE</include>
>                <include>META-INF/NOTICE</include>
>             </includes>
>          </configuration>
>       </execution>
>    </executions>
> </plugin>
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] snuyanzin opened a new pull request, #19870: [FLINK-27882][tests][table] Migrate flink-scala to JUnit5

2022-06-02 Thread GitBox


snuyanzin opened a new pull request, #19870:
URL: https://github.com/apache/flink/pull/19870

   ## What is the purpose of the change
   
   Update the flink-scala module to AssertJ and JUnit 5 following the [JUnit 5 
Migration 
Guide](https://docs.google.com/document/d/1514Wa_aNB9bJUen4xm5uiuXOooOJTtXqS_Jqk9KJitU/edit)
   
   
   ## Brief change log
   
   Use JUnit5 and AssertJ in tests instead of JUnit4 and Hamcrest
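
   As a purely hypothetical illustration of this kind of migration (the class and method names below are invented, not taken from flink-scala; the JUnit/AssertJ calls are shown in comments so the snippet compiles without test dependencies):

   ```java
   // Hypothetical sketch of a JUnit4+Hamcrest -> JUnit5+AssertJ migration.
   public class MigrationSketch {

       // Before (JUnit 4 + Hamcrest):
       //   @Test(expected = IllegalArgumentException.class)
       //   public void testParse() { parse("x"); }
       //   assertThat(parse("3"), is(3));
       //
       // After (JUnit 5 + AssertJ):
       //   @Test
       //   void testParse() {
       //       assertThat(parse("3")).isEqualTo(3);
       //       assertThatThrownBy(() -> parse("x"))
       //               .isInstanceOf(IllegalArgumentException.class);
       //   }

       // Invented method under test, used only to make the sketch concrete.
       static int parse(String s) {
           if (s.isEmpty() || !s.chars().allMatch(Character::isDigit)) {
               throw new IllegalArgumentException("not a number: " + s);
           }
           return Integer.parseInt(s);
       }
   }
   ```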
   
   ## Verifying this change
   
   This change is a code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): ( no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: ( no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)
 - If yes, how is the feature documented? (not applicable)
   





[jira] [Updated] (FLINK-27882) [JUnit5 Migration] Module: flink-scala

2022-06-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27882:
---
Labels: pull-request-available  (was: )

> [JUnit5 Migration] Module: flink-scala
> --
>
> Key: FLINK-27882
> URL: https://issues.apache.org/jira/browse/FLINK-27882
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>






[GitHub] [flink] flinkbot commented on pull request #19870: [FLINK-27882][tests][table] Migrate flink-scala to JUnit5

2022-06-02 Thread GitBox


flinkbot commented on PR #19870:
URL: https://github.com/apache/flink/pull/19870#issuecomment-1144792109

   
   ## CI report:
   
   * 65bb90423100074cd2aef8ad64075c99a1b735f2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Created] (FLINK-27883) KafkaSubscriberTest failed with NPE

2022-06-02 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-27883:


 Summary: KafkaSubscriberTest failed with NPE
 Key: FLINK-27883
 URL: https://issues.apache.org/jira/browse/FLINK-27883
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.16.0
Reporter: Huang Xingbo



{code:java}
2022-06-02T01:42:45.0924799Z Jun 02 01:42:45 [ERROR] Tests run: 1, Failures: 1, 
Errors: 0, Skipped: 0, Time elapsed: 66.787 s <<< FAILURE! - in 
org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriberTest
2022-06-02T01:42:45.0926067Z Jun 02 01:42:45 [ERROR] 
org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriberTest
  Time elapsed: 66.787 s  <<< FAILURE!
2022-06-02T01:42:45.0926867Z Jun 02 01:42:45 
org.opentest4j.MultipleFailuresError: 
2022-06-02T01:42:45.0927608Z Jun 02 01:42:45 Multiple Failures (2 failures)
2022-06-02T01:42:45.0928626Z Jun 02 01:42:45java.lang.AssertionError: 
Create test topic : topic1 failed, 
org.apache.kafka.common.errors.TimeoutException: The request timed out.
2022-06-02T01:42:45.0929717Z Jun 02 01:42:45java.lang.NullPointerException: 

2022-06-02T01:42:45.0930482Z Jun 02 01:42:45at 
org.junit.vintage.engine.execution.TestRun.getStoredResultOrSuccessful(TestRun.java:196)
2022-06-02T01:42:45.0931579Z Jun 02 01:42:45at 
org.junit.vintage.engine.execution.RunListenerAdapter.fireExecutionFinished(RunListenerAdapter.java:226)
2022-06-02T01:42:45.0932685Z Jun 02 01:42:45at 
org.junit.vintage.engine.execution.RunListenerAdapter.testRunFinished(RunListenerAdapter.java:93)
2022-06-02T01:42:45.0933736Z Jun 02 01:42:45at 
org.junit.runner.notification.SynchronizedRunListener.testRunFinished(SynchronizedRunListener.java:42)
2022-06-02T01:42:45.0934600Z Jun 02 01:42:45at 
org.junit.runner.notification.RunNotifier$2.notifyListener(RunNotifier.java:103)
2022-06-02T01:42:45.0935437Z Jun 02 01:42:45at 
org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
2022-06-02T01:42:45.0936147Z Jun 02 01:42:45at 
org.junit.runner.notification.RunNotifier.fireTestRunFinished(RunNotifier.java:100)
2022-06-02T01:42:45.0936807Z Jun 02 01:42:45at 
org.junit.runner.JUnitCore.run(JUnitCore.java:138)
2022-06-02T01:42:45.0937370Z Jun 02 01:42:45at 
org.junit.runner.JUnitCore.run(JUnitCore.java:115)
2022-06-02T01:42:45.0938011Z Jun 02 01:42:45at 
org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
2022-06-02T01:42:45.0938756Z Jun 02 01:42:45at 
org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
2022-06-02T01:42:45.0939480Z Jun 02 01:42:45at 
org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
2022-06-02T01:42:45.0940304Z Jun 02 01:42:45at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
2022-06-02T01:42:45.0941136Z Jun 02 01:42:45at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
2022-06-02T01:42:45.0942000Z Jun 02 01:42:45at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
2022-06-02T01:42:45.0943056Z Jun 02 01:42:45at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
2022-06-02T01:42:45.0944171Z Jun 02 01:42:45at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
2022-06-02T01:42:45.0944945Z Jun 02 01:42:45at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
2022-06-02T01:42:45.0945756Z Jun 02 01:42:45at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
2022-06-02T01:42:45.0946607Z Jun 02 01:42:45at 
org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
2022-06-02T01:42:45.0947618Z Jun 02 01:42:45at 
org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
2022-06-02T01:42:45.0948525Z Jun 02 01:42:45at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.lambda$execute$1(JUnitPlatformProvider.java:199)
2022-06-02T01:42:45.0949401Z Jun 02 01:42:45at 
java.util.Iterator.forEachRemaining(Iterator.java:116)
2022-06-02T01:42:45.0950119Z Jun 02 01:42:45at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:193)
2022-06-02T01:42:45.0950950Z Jun 02 01:42:45at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
2022-06-02T01:42:45.0951783Z Jun 02 01:42:45at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:120)
20

[jira] [Commented] (FLINK-27883) KafkaSubscriberTest failed with NPE

2022-06-02 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545450#comment-17545450
 ] 

Huang Xingbo commented on FLINK-27883:
--

cc [~renqs]

> KafkaSubscriberTest failed with NPE
> ---
>
> Key: FLINK-27883
> URL: https://issues.apache.org/jira/browse/FLINK-27883
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.16.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> 2022-06-02T01:42:45.0924799Z Jun 02 01:42:45 [ERROR] Tests run: 1, Failures: 
> 1, Errors: 0, Skipped: 0, Time elapsed: 66.787 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriberTest
> 2022-06-02T01:42:45.0926067Z Jun 02 01:42:45 [ERROR] 
> org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriberTest
>   Time elapsed: 66.787 s  <<< FAILURE!
> 2022-06-02T01:42:45.0926867Z Jun 02 01:42:45 
> org.opentest4j.MultipleFailuresError: 
> 2022-06-02T01:42:45.0927608Z Jun 02 01:42:45 Multiple Failures (2 failures)
> 2022-06-02T01:42:45.0928626Z Jun 02 01:42:45  java.lang.AssertionError: 
> Create test topic : topic1 failed, 
> org.apache.kafka.common.errors.TimeoutException: The request timed out.
> 2022-06-02T01:42:45.0929717Z Jun 02 01:42:45  java.lang.NullPointerException: 
> 
> 2022-06-02T01:42:45.0930482Z Jun 02 01:42:45  at 
> org.junit.vintage.engine.execution.TestRun.getStoredResultOrSuccessful(TestRun.java:196)
> 2022-06-02T01:42:45.0931579Z Jun 02 01:42:45  at 
> org.junit.vintage.engine.execution.RunListenerAdapter.fireExecutionFinished(RunListenerAdapter.java:226)
> 2022-06-02T01:42:45.0932685Z Jun 02 01:42:45  at 
> org.junit.vintage.engine.execution.RunListenerAdapter.testRunFinished(RunListenerAdapter.java:93)
> 2022-06-02T01:42:45.0933736Z Jun 02 01:42:45  at 
> org.junit.runner.notification.SynchronizedRunListener.testRunFinished(SynchronizedRunListener.java:42)
> 2022-06-02T01:42:45.0934600Z Jun 02 01:42:45  at 
> org.junit.runner.notification.RunNotifier$2.notifyListener(RunNotifier.java:103)
> 2022-06-02T01:42:45.0935437Z Jun 02 01:42:45  at 
> org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72)
> 2022-06-02T01:42:45.0936147Z Jun 02 01:42:45  at 
> org.junit.runner.notification.RunNotifier.fireTestRunFinished(RunNotifier.java:100)
> 2022-06-02T01:42:45.0936807Z Jun 02 01:42:45  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:138)
> 2022-06-02T01:42:45.0937370Z Jun 02 01:42:45  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> 2022-06-02T01:42:45.0938011Z Jun 02 01:42:45  at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
> 2022-06-02T01:42:45.0938756Z Jun 02 01:42:45  at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
> 2022-06-02T01:42:45.0939480Z Jun 02 01:42:45  at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
> 2022-06-02T01:42:45.0940304Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
> 2022-06-02T01:42:45.0941136Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
> 2022-06-02T01:42:45.0942000Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
> 2022-06-02T01:42:45.0943056Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
> 2022-06-02T01:42:45.0944171Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
> 2022-06-02T01:42:45.0944945Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
> 2022-06-02T01:42:45.0945756Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
> 2022-06-02T01:42:45.0946607Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
> 2022-06-02T01:42:45.0947618Z Jun 02 01:42:45  at 
> org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
> 2022-06-02T01:42:45.0948525Z Jun 02 01:42:45  at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.lambda$execute$1(JUnitPlatformProvider.java:199)
> 2022-06-02T01:42:45.0949401Z Jun 02 01:42:45  at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> 2022-06-02T01:42:45.0950119Z Jun 02 01:42:45  at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformP

[jira] [Updated] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2022-06-02 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-27756:
-
Priority: Critical  (was: Major)

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.16.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on the build 
> pipeline, blocking new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]





[jira] [Commented] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2022-06-02 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545453#comment-17545453
 ] 

Huang Xingbo commented on FLINK-27756:
--

another instance 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36272&view=logs&j=fc5181b0-e452-5c8f-68de-1097947f6483&t=995c650b-6573-581c-9ce6-7ad4cc038461

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: test-stability
> Fix For: 1.16.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on build 
> pipeline causing blocking of new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]





[GitHub] [flink] echauchot commented on a diff in pull request #19680: [FLINK-27457] Implement flush() logic in Cassandra output formats

2022-06-02 Thread GitBox


echauchot commented on code in PR #19680:
URL: https://github.com/apache/flink/pull/19680#discussion_r887888690


##
flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/batch/connectors/cassandra/CassandraOutputFormatBase.java:
##
@@ -17,130 +17,160 @@
 
 package org.apache.flink.batch.connectors.cassandra;
 
+import org.apache.flink.annotation.VisibleForTesting;
 import org.apache.flink.api.common.io.RichOutputFormat;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connectors.cassandra.utils.SinkUtils;
 import org.apache.flink.streaming.connectors.cassandra.ClusterBuilder;
 import org.apache.flink.util.Preconditions;
 
 import com.datastax.driver.core.Cluster;
-import com.datastax.driver.core.PreparedStatement;
-import com.datastax.driver.core.ResultSet;
-import com.datastax.driver.core.ResultSetFuture;
 import com.datastax.driver.core.Session;
-import com.google.common.base.Strings;
 import com.google.common.util.concurrent.FutureCallback;
 import com.google.common.util.concurrent.Futures;
+import com.google.common.util.concurrent.ListenableFuture;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
+import java.time.Duration;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.AtomicReference;
 
 /**
- * CassandraOutputFormatBase is the common abstract class for writing into 
Apache Cassandra.
+ * CassandraOutputFormatBase is the common abstract class for writing into 
Apache Cassandra using
+ * output formats.
  *
  * @param  Type of the elements to write.
  */
-public abstract class CassandraOutputFormatBase extends 
RichOutputFormat {
+public abstract class CassandraOutputFormatBase extends 
RichOutputFormat {
 private static final Logger LOG = 
LoggerFactory.getLogger(CassandraOutputFormatBase.class);
 
-private final String insertQuery;
 private final ClusterBuilder builder;
+private Semaphore semaphore;
+private Duration maxConcurrentRequestsTimeout = 
Duration.ofMillis(Long.MAX_VALUE);
+private int maxConcurrentRequests = Integer.MAX_VALUE;
 
 private transient Cluster cluster;
-private transient Session session;
-private transient PreparedStatement prepared;
-private transient FutureCallback callback;
-private transient Throwable exception = null;
-
-public CassandraOutputFormatBase(String insertQuery, ClusterBuilder 
builder) {
-Preconditions.checkArgument(
-!Strings.isNullOrEmpty(insertQuery), "Query cannot be null or 
empty");
+protected transient Session session;
+private transient FutureCallback callback;
+private AtomicReference throwable;
+
+public CassandraOutputFormatBase(
+ClusterBuilder builder,
+int maxConcurrentRequests,
+Duration maxConcurrentRequestsTimeout) {
 Preconditions.checkNotNull(builder, "Builder cannot be null");
-
-this.insertQuery = insertQuery;
 this.builder = builder;
+Preconditions.checkArgument(
+maxConcurrentRequests > 0, "Max concurrent requests is 
expected to be positive");
+this.maxConcurrentRequests = maxConcurrentRequests;
+Preconditions.checkNotNull(
+maxConcurrentRequestsTimeout, "Max concurrent requests timeout 
cannot be null");
+Preconditions.checkArgument(
+!maxConcurrentRequestsTimeout.isNegative(),
+"Max concurrent requests timeout is expected to be positive");
+this.maxConcurrentRequestsTimeout = maxConcurrentRequestsTimeout;
 }
 
+/** Configure the connection to Cassandra. */
 @Override
 public void configure(Configuration parameters) {
 this.cluster = builder.getCluster();
 }
 
-/**
- * Opens a Session to Cassandra and initializes the prepared statement.
- *
- * @param taskNumber The number of the parallel instance.
- * @throws IOException Thrown, if the output could not be opened due to an 
I/O problem.
- */
+/** Opens a Session to Cassandra . */
 @Override
 public void open(int taskNumber, int numTasks) throws IOException {
+throwable = new AtomicReference<>();
+this.semaphore = new Semaphore(maxConcurrentRequests);
 this.session = cluster.connect();

Review Comment:
   I'm thinking that if we remove everything Cassandra-related from 
`CassandraOutputFormatBase`, this class could become the base class for all 
the output formats that want to leverage the flush logic. We could then 
promote the resulting class to the `org.apache.flink.api.common.io` package 
as `OutputFormatBase`; `CassandraOutputFormatBase` would extend it and 
contain the Cassandra dependencies.
   
   I'll write the code for the above so that we can discuss on the code 
itself if needed.
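
   A minimal stand-alone sketch of the flush idea being discussed, assuming a generic base class (called `OutputFormatBaseSketch` here) that caps in-flight asynchronous writes with a `Semaphore`; all names and signatures are illustrative, not Flink's actual API:

   ```java
   import java.time.Duration;
   import java.util.concurrent.CompletableFuture;
   import java.util.concurrent.Semaphore;
   import java.util.concurrent.TimeUnit;

   // Hypothetical base class: a Semaphore limits concurrent async writes,
   // and flush() re-acquires every permit, i.e. waits for all in-flight
   // writes to complete.
   abstract class OutputFormatBaseSketch<OUT> {
       private final int maxConcurrentRequests;
       private final Duration maxConcurrentRequestsTimeout;
       private final Semaphore semaphore;

       OutputFormatBaseSketch(int maxConcurrentRequests, Duration timeout) {
           this.maxConcurrentRequests = maxConcurrentRequests;
           this.maxConcurrentRequestsTimeout = timeout;
           this.semaphore = new Semaphore(maxConcurrentRequests);
       }

       // A concrete format (e.g. a Cassandra one) would start the actual
       // asynchronous write here.
       protected abstract CompletableFuture<Void> send(OUT record);

       public void writeRecord(OUT record) throws InterruptedException {
           // Block when maxConcurrentRequests writes are already in flight.
           if (!semaphore.tryAcquire(
                   maxConcurrentRequestsTimeout.toMillis(), TimeUnit.MILLISECONDS)) {
               throw new IllegalStateException("timed out waiting for a permit");
           }
           send(record).whenComplete((ignored, error) -> semaphore.release());
       }

       /** Waits until every outstanding write has completed. */
       public void flush() throws InterruptedException {
           semaphore.acquire(maxConcurrentRequests); // drains all permits
           semaphore.release(maxConcurrentRequests); // restore for reuse
       }
   }
   ```

   Promoting such a class to a common package would let any output format reuse the flush/backpressure logic while the Cassandra subclass keeps only driver-specific code.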




[GitHub] [flink] wuchong commented on a diff in pull request #18958: [FLINK-15854][hive] Use the new type inference for Hive UDTF

2022-06-02 Thread GitBox


wuchong commented on code in PR #18958:
URL: https://github.com/apache/flink/pull/18958#discussion_r887910344


##
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/functions/hive/HiveFunctionArguments.java:
##
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.functions.hive;
+
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.runtime.types.ClassLogicalTypeConverter;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.CallContext;
+
+import java.io.Serializable;
+import java.util.BitSet;
+
+/** Stores arguments information for a Hive function . */
+public class HiveFunctionArguments implements Serializable {
+
+private static final long serialVersionUID = 1L;
+
+// input arguments -- store the value if an argument is literal, null 
otherwise
+private final Object[] args;
+// date type of each argument
+private final DataType[] argTypes;
+// store the indices of literal arguments, so that we can support null 
literals
+private final BitSet literalIndices;
+
+private HiveFunctionArguments(Object[] args, DataType[] argTypes, BitSet 
literalIndices) {
+this.args = args;
+this.argTypes = argTypes;
+this.literalIndices = literalIndices;
+}
+
+public int size() {
+return args.length;
+}
+
+public boolean isLiteral(int pos) {
+return pos >= 0 && pos < args.length && literalIndices.get(pos);
+}
+
+public Object getArg(int pos) {
+return args[pos];
+}
+
+public DataType getDataType(int pos) {
+return argTypes[pos];
+}
+
+// create from a CallContext
+public static HiveFunctionArguments create(CallContext callContext) {
+DataType[] argTypes = callContext.getArgumentDataTypes().toArray(new 
DataType[0]);
+Object[] args = new Object[argTypes.length];
+BitSet literalIndices = new BitSet(args.length);
+for (int i = 0; i < args.length; i++) {
+if (callContext.isArgumentLiteral(i)) {
+literalIndices.set(i);
+args[i] =
+callContext
+.getArgumentValue(
+i,
+
ClassLogicalTypeConverter.getDefaultExternalClassForType(

Review Comment:
   This method is deprecated and is in `flink-table-runtime`.
   I think you can simply call 
`argTypes[i].getLogicalType().getDefaultConversion()`? 
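
   To illustrate the suggestion without pulling in Flink dependencies, here is a sketch with an invented stand-in for `LogicalType`: the point is that each logical type can report its own default Java conversion class, so no external (deprecated) converter utility is needed:

   ```java
   // Invented stand-in for Flink's LogicalType#getDefaultConversion();
   // the enum and its mappings are for illustration only.
   enum LogicalTypeSketch {
       INT(Integer.class),
       BIGINT(Long.class),
       VARCHAR(String.class);

       private final Class<?> defaultConversion;

       LogicalTypeSketch(Class<?> defaultConversion) {
           this.defaultConversion = defaultConversion;
       }

       // The reviewer's point: ask the type itself for the conversion class,
       // i.e. argTypes[i].getLogicalType().getDefaultConversion() in Flink.
       Class<?> getDefaultConversion() {
           return defaultConversion;
       }
   }
   ```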



##
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/functions/hive/HiveFunction.java:
##
@@ -19,38 +19,104 @@
 package org.apache.flink.table.functions.hive;
 
 import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.functions.FunctionDefinition;
 import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.ConstantArgumentCount;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.inference.TypeInference;
+import org.apache.flink.table.types.inference.TypeStrategy;
 
-/**
- * Interface for Hive simple udf, generic udf, and generic udtf. TODO: Note: 
this is only a
- * temporary interface for workaround when Flink type system and udf system 
rework is not finished.
- * Should adapt to Flink type system and Flink UDF framework later on.
- */
+import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.Optional;
+
+/** Interface for Hive UDF, UDTF, UDAF. */
 @Internal
 public interface HiveFunction {
 
-/**
- * Set arguments and argTypes for Function instance. In this way, the 
correct method can be
- * really deduced by the function instance.
- *
- * @param constantArguments arguments of a function call (only literal 
arguments are passed,
- * nulls for non-literal ones)
-  

[GitHub] [flink] wuchong commented on pull request #18958: [FLINK-15854][hive] Use the new type inference for Hive UDTF

2022-06-02 Thread GitBox


wuchong commented on PR #18958:
URL: https://github.com/apache/flink/pull/18958#issuecomment-1144858790

   Another question: did we enable the CI tests for Hive 3.x? 





[jira] [Commented] (FLINK-27881) The key(String) in PulsarMessageBuilder returns null

2022-06-02 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545471#comment-17545471
 ] 

Martijn Visser commented on FLINK-27881:


[~syhily] Any thoughts on this? 

> The key(String) in PulsarMessageBuilder returns null
> 
>
> Key: FLINK-27881
> URL: https://issues.apache.org/jira/browse/FLINK-27881
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.15.0
>Reporter: Shuiqiang Chen
>Priority: Major
>
> The PulsarMessageBuilder.key(String) always returns null, which might cause 
> an NPE in later calls.





[jira] [Updated] (FLINK-27592) WebUI - Status CANCELING not showing in overview

2022-06-02 Thread Peter Schrott (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Schrott updated FLINK-27592:
--
Priority: Major  (was: Minor)

> WebUI - Status CANCELING not showing in overview
> 
>
> Key: FLINK-27592
> URL: https://issues.apache.org/jira/browse/FLINK-27592
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.0
>Reporter: Peter Schrott
>Priority: Major
> Attachments: image-2022-05-12-13-01-25-266.png
>
>
> Cluster Version: 1.15.0, Standalone
> Job Version: 1.15.0
> Status "CANCELING" is not showing / visible in WebUI jobs overview. Compare 
> attached screenshot.
> Status of the Job was actually "CANCELING" when checking with Flink CLI list 
> command.
> !image-2022-05-12-13-01-25-266.png|width=919,height=164!





[GitHub] [flink] echauchot commented on a diff in pull request #19680: [FLINK-27457] Implement flush() logic in Cassandra output formats

2022-06-02 Thread GitBox


echauchot commented on code in PR #19680:
URL: https://github.com/apache/flink/pull/19680#discussion_r887888690


##
flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/batch/connectors/cassandra/CassandraOutputFormatBase.java:
##
@@ -17,130 +17,160 @@
 
 package org.apache.flink.batch.connectors.cassandra;
 
+import org.apache.flink.annotation.VisibleForTesting;
 import org.apache.flink.api.common.io.RichOutputFormat;
 import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connectors.cassandra.utils.SinkUtils;
 import org.apache.flink.streaming.connectors.cassandra.ClusterBuilder;
 import org.apache.flink.util.Preconditions;
 
 import com.datastax.driver.core.Cluster;
-import com.datastax.driver.core.PreparedStatement;
-import com.datastax.driver.core.ResultSet;
-import com.datastax.driver.core.ResultSetFuture;
 import com.datastax.driver.core.Session;
-import com.google.common.base.Strings;
 import com.google.common.util.concurrent.FutureCallback;
 import com.google.common.util.concurrent.Futures;
+import com.google.common.util.concurrent.ListenableFuture;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
+import java.time.Duration;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.atomic.AtomicReference;
 
 /**
- * CassandraOutputFormatBase is the common abstract class for writing into 
Apache Cassandra.
+ * CassandraOutputFormatBase is the common abstract class for writing into 
Apache Cassandra using
+ * output formats.
  *
  * @param  Type of the elements to write.
  */
-public abstract class CassandraOutputFormatBase extends 
RichOutputFormat {
+public abstract class CassandraOutputFormatBase extends 
RichOutputFormat {
 private static final Logger LOG = 
LoggerFactory.getLogger(CassandraOutputFormatBase.class);
 
-private final String insertQuery;
 private final ClusterBuilder builder;
+private Semaphore semaphore;
+private Duration maxConcurrentRequestsTimeout = 
Duration.ofMillis(Long.MAX_VALUE);
+private int maxConcurrentRequests = Integer.MAX_VALUE;
 
 private transient Cluster cluster;
-private transient Session session;
-private transient PreparedStatement prepared;
-private transient FutureCallback callback;
-private transient Throwable exception = null;
-
-public CassandraOutputFormatBase(String insertQuery, ClusterBuilder 
builder) {
-Preconditions.checkArgument(
-!Strings.isNullOrEmpty(insertQuery), "Query cannot be null or 
empty");
+protected transient Session session;
+private transient FutureCallback<V> callback;
+private AtomicReference<Throwable> throwable;
+
+public CassandraOutputFormatBase(
+ClusterBuilder builder,
+int maxConcurrentRequests,
+Duration maxConcurrentRequestsTimeout) {
 Preconditions.checkNotNull(builder, "Builder cannot be null");
-
-this.insertQuery = insertQuery;
 this.builder = builder;
+Preconditions.checkArgument(
+maxConcurrentRequests > 0, "Max concurrent requests is 
expected to be positive");
+this.maxConcurrentRequests = maxConcurrentRequests;
+Preconditions.checkNotNull(
+maxConcurrentRequestsTimeout, "Max concurrent requests timeout 
cannot be null");
+Preconditions.checkArgument(
+!maxConcurrentRequestsTimeout.isNegative(),
+"Max concurrent requests timeout is expected to be positive");
+this.maxConcurrentRequestsTimeout = maxConcurrentRequestsTimeout;
 }
 
+/** Configure the connection to Cassandra. */
 @Override
 public void configure(Configuration parameters) {
 this.cluster = builder.getCluster();
 }
 
-/**
- * Opens a Session to Cassandra and initializes the prepared statement.
- *
- * @param taskNumber The number of the parallel instance.
- * @throws IOException Thrown, if the output could not be opened due to an 
I/O problem.
- */
+/** Opens a Session to Cassandra. */
 @Override
 public void open(int taskNumber, int numTasks) throws IOException {
+throwable = new AtomicReference<>();
+this.semaphore = new Semaphore(maxConcurrentRequests);
 this.session = cluster.connect();

Review Comment:
   I'm thinking that if we remove everything Cassandra-related from 
`CassandraOutputFormatBase`, then this class can become the base class for all 
the output formats that want to leverage the flush logic. We could then promote 
the resulting class to the `org.apache.flink.api.common.io` package as 
`OutputFormatBase`; `CassandraOutputFormatBase` would extend this class and 
contain the Cassandra dependencies.
   
   I think this also requires creating a dedicated ticket.
   
   I'll write the code above so that we can discuss the code itself if needed.
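The diff above replaces the old per-statement callback handling with a permit-based throttle: each asynchronous write takes a `Semaphore` permit (waiting up to `maxConcurrentRequestsTimeout`), releases it when the request completes, and records the first failure in an `AtomicReference`. A minimal standalone sketch of that pattern, with illustrative names rather than Flink's actual API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

/** Illustrative permit-based throttle, not the actual Flink class. */
public class ThrottledWriter {
    private final int maxConcurrentRequests;
    private final long timeoutMillis;
    private final Semaphore semaphore;
    private final AtomicReference<Throwable> throwable = new AtomicReference<>();

    public ThrottledWriter(int maxConcurrentRequests, long timeoutMillis) {
        this.maxConcurrentRequests = maxConcurrentRequests;
        this.timeoutMillis = timeoutMillis;
        this.semaphore = new Semaphore(maxConcurrentRequests);
    }

    /** Blocks until a permit is available (backpressure), then issues the async write. */
    public void write(Runnable asyncWrite) throws Exception {
        if (!semaphore.tryAcquire(timeoutMillis, TimeUnit.MILLISECONDS)) {
            throw new IllegalStateException("timed out waiting for an in-flight request slot");
        }
        CompletableFuture.runAsync(asyncWrite)
                .whenComplete(
                        (ignored, error) -> {
                            if (error != null) {
                                // remember only the first failure, like the AtomicReference in the diff
                                throwable.compareAndSet(null, error);
                            }
                            semaphore.release();
                        });
    }

    /** Waits until no request is in flight, then rethrows any recorded failure. */
    public void flush() throws Exception {
        semaphore.acquire(maxConcurrentRequests); // all permits back => nothing in flight
        semaphore.release(maxConcurrentRequests);
        Throwable t = throwable.get();
        if (t != null) {
            throw new RuntimeException("a write failed", t);
        }
    }
}
```

On close, an output format built this way would call such a flush before shutting the session down, which is what makes the logic reusable outside Cassandra, as the review comment below suggests.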



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Created] (FLINK-27884) Introduce a flush mechanism for OutputFormats

2022-06-02 Thread Etienne Chauchot (Jira)
Etienne Chauchot created FLINK-27884:


 Summary: Introduce a flush mechanism for OutputFormats
 Key: FLINK-27884
 URL: https://issues.apache.org/jira/browse/FLINK-27884
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / Common
Reporter: Etienne Chauchot


This ticket is a generalization of 
[FLINK-27457|https://issues.apache.org/jira/browse/FLINK-27457]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27884) Introduce a flush mechanism for OutputFormats

2022-06-02 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545486#comment-17545486
 ] 

Etienne Chauchot commented on FLINK-27884:
--

[~chesnay] As discussed in [this PR|https://github.com/apache/flink/pull/19680] 
I thought the flush mechanism could be interesting for the other output 
formats, even though output formats are being deprecated. Can you assign this 
ticket to me if you agree, or close it otherwise?






[GitHub] [flink] luoyuxia commented on a diff in pull request #18958: [FLINK-15854][hive] Use the new type inference for Hive UDTF

2022-06-02 Thread GitBox


luoyuxia commented on code in PR #18958:
URL: https://github.com/apache/flink/pull/18958#discussion_r888005853


##
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveParserCalcitePlanner.java:
##
@@ -2539,9 +2539,9 @@ private RelNode genUDTFPlan(
 
 SqlOperator convertedOperator = convertedCall.getOperator();
 Preconditions.checkState(
-convertedOperator instanceof SqlUserDefinedTableFunction,
+convertedOperator instanceof BridgingSqlFunction,
 "Expect operator to be "
-+ SqlUserDefinedTableFunction.class.getSimpleName()
++ BridgingSqlFunction.class.getSimpleName()

Review Comment:
   Before this PR, Hive's UDTF was converted to `HiveTableSqlFunction`, which 
is an instance of `SqlUserDefinedTableFunction`. The old code logic is 
[here](https://github.com/apache/flink/pull/18958/files#diff-ca5db8275fd03a414acc2af23a9114f47791506fb0542986f55f31865f1dbb87L130).
   After this PR, it is converted to `BridgingSqlFunction` instead, so we need 
to change the check.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] luoyuxia commented on a diff in pull request #18958: [FLINK-15854][hive] Use the new type inference for Hive UDTF

2022-06-02 Thread GitBox


luoyuxia commented on code in PR #18958:
URL: https://github.com/apache/flink/pull/18958#discussion_r888006464


##
flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/TableEnvHiveConnectorITCase.java:
##
@@ -288,6 +290,15 @@ public void testUDTF() throws Exception {
 .collect());
 assertThat(results.toString()).isEqualTo("[+I[{1=a, 2=b}], 
+I[{3=c}]]");
 
+assertThat(results.toString()).isEqualTo("[+I[{1=a, 2=b}], 
+I[{3=c}]]");

Review Comment:
   Mistakenly added when resolving conflicts.






[GitHub] [flink] luoyuxia commented on pull request #18958: [FLINK-15854][hive] Use the new type inference for Hive UDTF

2022-06-02 Thread GitBox


luoyuxia commented on PR #18958:
URL: https://github.com/apache/flink/pull/18958#issuecomment-1144930976

   @wuchong We haven't enabled the CI tests for Hive 3.x. Usually I will run 
the tests locally for Hive 3.x with the command `mvn test -Phive-3.1.1`.





[jira] [Created] (FLINK-27885) [JUnit5 Migration] Module: flink-csv

2022-06-02 Thread Ryan Skraba (Jira)
Ryan Skraba created FLINK-27885:
---

 Summary: [JUnit5 Migration] Module: flink-csv
 Key: FLINK-27885
 URL: https://issues.apache.org/jira/browse/FLINK-27885
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Reporter: Ryan Skraba








[jira] [Commented] (FLINK-27885) [JUnit5 Migration] Module: flink-csv

2022-06-02 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545506#comment-17545506
 ] 

Ryan Skraba commented on FLINK-27885:
-

Can you assign this to me please?







[jira] [Created] (FLINK-27886) Error way for skipping shade plugin in sub-modules

2022-06-02 Thread Ran Tao (Jira)
Ran Tao created FLINK-27886:
---

 Summary: Error way for skipping shade plugin in sub-modules
 Key: FLINK-27886
 URL: https://issues.apache.org/jira/browse/FLINK-27886
 Project: Flink
  Issue Type: Improvement
  Components: BuildSystem / Shaded
Affects Versions: 1.14.4, 1.13.6, 1.15.0
Reporter: Ran Tao


Currently some sub-modules, for example flink-quickstart-java, 
flink-quickstart-scala, and flink-walkthrough, try to skip shading but do it 
the wrong way:

{code:java}
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>none</phase>
        </execution>
    </executions>
</plugin>
{code}

This only sets the none phase for the module's own execution; it cannot 
disable the shade-flink execution inherited from the parent POM.

The correct way is:

{code:java}
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <id>shade-flink</id>
            <phase>none</phase>
        </execution>
    </executions>
</plugin>
{code}

This disables the inherited shade-flink execution as well as the module's own 
shading, because the module defines no extra shade executions of its own.






[jira] [Updated] (FLINK-27886) Error way for skipping shade plugin in sub-modules

2022-06-02 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-27886:

Attachment: screenshot-2.png






[jira] [Updated] (FLINK-27886) Error way for skipping shade plugin in sub-modules

2022-06-02 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-27886:

Attachment: screenshot-1.png






[jira] [Updated] (FLINK-27886) Error way for skipping shade plugin in sub-modules

2022-06-02 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-27886:

Description: 
Currently some sub-modules, for example flink-quickstart-java, 
flink-quickstart-scala, and flink-walkthrough, try to skip shading but do it 
the wrong way:

{code:java}
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>none</phase>
        </execution>
    </executions>
</plugin>
{code}

This only sets the none phase for the module's own execution; it cannot 
disable the shade-flink execution inherited from the parent POM.

 !screenshot-1.png! 
 !screenshot-2.png! 

The correct way is:

{code:java}
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <id>shade-flink</id>
            <phase>none</phase>
        </execution>
    </executions>
</plugin>
{code}

This disables the inherited shade-flink execution as well as the module's own 
shading, because the module defines no extra shade executions of its own.










[jira] [Updated] (FLINK-27886) Error way for skipping shade plugin in sub-modules

2022-06-02 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-27886:

Description: 
Currently some sub-modules, for example flink-quickstart-java, 
flink-quickstart-scala, and flink-walkthrough, try to skip shading but do it 
the wrong way:

{code:java}
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>none</phase>
        </execution>
    </executions>
</plugin>
{code}

This only sets the none phase for the module's own execution; it cannot 
disable the shade-flink execution inherited from the parent POM.

 !screenshot-1.png! 
 !screenshot-2.png! 

The correct way is:

{code:java}
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <id>shade-flink</id>
            <phase>none</phase>
        </execution>
    </executions>
</plugin>
{code}

This disables the inherited shade-flink execution as well as the module's own 
shading, because the module defines no extra shade executions of its own.










[GitHub] [flink] chucheng92 opened a new pull request, #19871: [FLINK-27886][pom] Fix the error way for skipping shade plugin in sub-modules

2022-06-02 Thread GitBox


chucheng92 opened a new pull request, #19871:
URL: https://github.com/apache/flink/pull/19871

   ## What is the purpose of the change
   
   Fix the incorrect way of skipping the shade plugin in sub-modules, e.g. 
flink-quickstart, flink-walkthroughs.
   
   ## Brief change log
   
   Fix the incorrect way of skipping the shade plugin in sub-modules.
   
   ## Verifying this change
   
   The sub-modules' Maven build logs are clean and contain no shade plugin 
output.
   
   ## Does this pull request potentially affect one of the following parts:
   
   - Dependencies (does it add or upgrade a dependency): no
   - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
   - The serializers: no
   - The runtime per-record code paths (performance sensitive): no
   - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
   - The S3 file system connector: no
   
   ## Documentation
   
   - Does this pull request introduce a new feature?  no
   - If yes, how is the feature documented? no
   





[jira] [Updated] (FLINK-27886) Error way for skipping shade plugin in sub-modules

2022-06-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27886:
---
Labels: pull-request-available  (was: )






[jira] [Assigned] (FLINK-25931) Add projection pushdown support for CsvFormatFactory

2022-06-02 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-25931:
--

Assignee: Alexander Fedulov

> Add projection pushdown support for CsvFormatFactory
> 
>
> Key: FLINK-25931
> URL: https://issues.apache.org/jira/browse/FLINK-25931
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Alexander Fedulov
>Assignee: Alexander Fedulov
>Priority: Major
>  Labels: pull-request-available
>
> FLINK-24703 added support for projection pushdown in 
> {_}CsvFileFormatFactory{_}. The same functionality should be added for 
> non-file-based connectors based on _CsvFormatFactory_ and on 
> _Ser/DeSerialization_ schemas.





[jira] [Assigned] (FLINK-27871) Configuration change is undedected on config removal

2022-06-02 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-27871:
--

Assignee: Matyas Orhidi

> Configuration change is undedected on config removal
> 
>
> Key: FLINK-27871
> URL: https://issues.apache.org/jira/browse/FLINK-27871
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.0.0
>Reporter: Matyas Orhidi
>Assignee: Matyas Orhidi
>Priority: Major
> Fix For: kubernetes-operator-1.1.0
>
>
> The Operator does not detect when a configuration entry is removed from the 
> configmap. The equals check in *FlinkConfigManager.updateDefaultConfig* 
> returns *true* incorrectly:
>  
> {{if (newConf.equals(defaultConfig)) {}}
> {{LOG.info("Default configuration did not change, nothing to do...");}}
> {{return;}}
> {{}}}
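One plausible reproduction of this failure mode is building the new configuration by overlaying the configmap entries onto the previous defaults, so a removed key survives the merge and `equals` passes. The class and method names below are hypothetical, not the operator's actual code:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigDiffDemo {
    /** Buggy pattern: overlaying new entries onto the old defaults keeps removed keys alive. */
    static Map<String, String> mergeIntoDefaults(
            Map<String, String> defaults, Map<String, String> configMap) {
        Map<String, String> merged = new HashMap<>(defaults);
        merged.putAll(configMap); // never deletes keys absent from configMap
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> defaults = new HashMap<>();
        defaults.put("taskmanager.memory", "2g");
        defaults.put("custom.option", "on");

        Map<String, String> configMap = new HashMap<>();
        configMap.put("taskmanager.memory", "2g"); // "custom.option" was removed

        // equals() is true, so the update is skipped even though a key was removed
        System.out.println(mergeIntoDefaults(defaults, configMap).equals(defaults));
        // Comparing the new map wholesale (no merging) detects the removal
        System.out.println(configMap.equals(defaults));
    }
}
```

Under this hypothesis, the fix is to compare the freshly parsed configuration against the old defaults directly instead of comparing a merged view.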





[GitHub] [flink] flinkbot commented on pull request #19871: [FLINK-27886][pom] Fix the error way for skipping shade plugin in sub-modules

2022-06-02 Thread GitBox


flinkbot commented on PR #19871:
URL: https://github.com/apache/flink/pull/19871#issuecomment-1144992312

   
   ## CI report:
   
   * 4e65649614e328b185312aad386b77a0bd348abd UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Comment Edited] (FLINK-27886) Error way for skipping shade plugin in sub-modules

2022-06-02 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545516#comment-17545516
 ] 

Ran Tao edited comment on FLINK-27886 at 6/2/22 3:24 PM:
-

[~gaoyunhaii], [~jark] Hi gaoyun, jark, can you help me review this issue?


was (Author: lemonjing):
[~gaoyunhaii][~jark] Hi, gaoyun, jark, can u help me to review this issue?






[jira] [Commented] (FLINK-27886) Error way for skipping shade plugin in sub-modules

2022-06-02 Thread Ran Tao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545516#comment-17545516
 ] 

Ran Tao commented on FLINK-27886:
-

[~gaoyunhaii][~jark] Hi, gaoyun, jark, can u help me to review this issue?






[jira] [Updated] (FLINK-27886) Error way for skipping shade plugin in sub-modules

2022-06-02 Thread Ran Tao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ran Tao updated FLINK-27886:

Description: 
Currently some sub-modules, for example flink-quickstart-java, 
flink-quickstart-scala, and flink-walkthrough, try to skip shading but do it 
the wrong way:

{code:java}
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>none</phase>
        </execution>
    </executions>
</plugin>
{code}

This only sets the none phase for the module's own execution; it cannot 
disable the shade-flink execution inherited from the parent POM.

 !screenshot-1.png! 
 !screenshot-2.png! 

The correct way is:

{code:java}
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <id>shade-flink</id>
            <phase>none</phase>
        </execution>
    </executions>
</plugin>
{code}

This disables the inherited shade-flink execution as well as the module's own 
shading, because the module defines no extra shade executions of its own.










[jira] [Commented] (FLINK-27886) Error way for skipping shade plugin in sub-modules

2022-06-02 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545519#comment-17545519
 ] 

Martijn Visser commented on FLINK-27886:


[~chesnay] any thoughts on this?






[GitHub] [flink] godfreyhe commented on pull request #19286: [FLINK-25931] Add projection pushdown support for CsvFormatFactory

2022-06-02 Thread GitBox


godfreyhe commented on PR #19286:
URL: https://github.com/apache/flink/pull/19286#issuecomment-1145020743

   > @AHeise I am not sure that the proposed alternative approach makes things 
much easier. I personally find the implementation of the projection 
specification as an array of arrays somewhat confusing and would prefer not to 
spread this abstraction to the lower level interfaces, if possible. I find the 
code of the `CsvRowDataDeserializationSchema` that just operates on input (read 
from source) and output (converted/projected) data types easier to understand. 
That way someone who needs to maintain this class does not need to decipher 
what exactly ` int[][] projections` means. @godfreyhe please let me know if you 
also think it is worth pursuing the proposed alternative approach to what is 
currently implemented in the PR.
   
   Thanks @afedulov for the contribution. I find there is no difference in the 
JSON node deserialized from the message whether the `csvSchema` is changed or 
not. See the debug info with the original schema:
   
![image](https://user-images.githubusercontent.com/8777671/171669068-db8c58f0-0b0f-4501-9b88-6147067b299b.png)
   
   Do you have any performance test to verify the change?
   
   One more thing about the code in the `optimizeCsvRead` method: I find 
`filteredFieldsNames` and `filteredFieldsIndices` are not needed. The code can 
be:
   
   ```java
   final VarCharType varCharType = new VarCharType();
   final List<RowType.RowField> optimizedRowFields =
           fields.stream()
                   .map(
                           field -> {
                               if (rowResultTypeFields.contains(field.getName())) {
                                   return field;
                               } else {
                                   return new RowType.RowField(
                                           field.getName(),
                                           varCharType,
                                           field.getDescription().orElse(null));
                               }
                           })
                   .collect(Collectors.toList());
   ```
 
   
   






[GitHub] [flink] afedulov commented on pull request #19856: [FLINK-27854][tests] Refactor FlinkImageBuilder and FlinkContainerBuilder

2022-06-02 Thread GitBox


afedulov commented on PR #19856:
URL: https://github.com/apache/flink/pull/19856#issuecomment-1145055226

   There seems to be some flakiness in the Pulsar tests. The same commit passed 
CI on my branch: 
https://dev.azure.com/alexanderfedulov/Flink/_build/results?buildId=291&view=results





[GitHub] [flink] afedulov commented on pull request #19856: [FLINK-27854][tests] Refactor FlinkImageBuilder and FlinkContainerBuilder

2022-06-02 Thread GitBox


afedulov commented on PR #19856:
URL: https://github.com/apache/flink/pull/19856#issuecomment-1145055445

   @flinkbot run azure





[GitHub] [flink] afedulov commented on pull request #19286: [FLINK-25931] Add projection pushdown support for CsvFormatFactory

2022-06-02 Thread GitBox


afedulov commented on PR #19286:
URL: https://github.com/apache/flink/pull/19286#issuecomment-1145107709

   @godfreyhe thanks for the feedback.
   >  I find there is no difference in the JSON node deserialized from the message, whether the csvSchema is changed or not. See the debug info with the original schema:
   
   I can confirm your findings - something I did not notice after Arvid 
proposed the optimization. It seems that Jackson ignores the schema when 
parsing into a generic JsonNode type rather than a fixed POJO, even when the 
reader is initialized with the schema via 
`new CsvMapper().readerFor(JsonNode.class).with(csvSchema);`. This is also 
indirectly implied 
[here](https://stackoverflow.com/questions/52384169/is-it-possible-to-parse-the-csv-rows-into-jsonnodes-having-the-correct-value-typ).
   I propose to remove the optimization part in this case and stick to the 
original scope of this PR. What do you think?





[jira] [Commented] (FLINK-27872) Allow KafkaBuilder to set arbitrary subscribers

2022-06-02 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545583#comment-17545583
 ] 

Martijn Visser commented on FLINK-27872:


[~renqs] What do you think?

> Allow KafkaBuilder to set arbitrary subscribers
> ---
>
> Key: FLINK-27872
> URL: https://issues.apache.org/jira/browse/FLINK-27872
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Reporter: Salva
>Priority: Major
>
> Currently, 
> [KafkaSourceBuilder|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java]
>  has two setters:
>  * 
> [setTopics|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L157]
>  * 
> [setTopicPattern|https://github.com/apache/flink/blob/586715f23ef49939ab74e4736c58d71c643a64ba/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java#L168]
> which under the hood instantiate the corresponding (concrete) subscribers. 
> This covers the most common needs, I agree, but it might fall short in some 
> cases. Why not add a more generic setter:
>  * {{setSubscriber (???)}}
> Otherwise, how can users read from Kafka in combination with custom 
> subscribing logic? Without looking much into it, it seems that they basically 
> cannot, at least without having to replicate some parts of the connector, 
> which seems rather inconvenient.
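
A minimal sketch of what the proposed generic setter could look like. Everything here is illustrative — the `Subscriber` interface and `Builder` class are invented stand-ins, not the actual Flink `KafkaSourceBuilder` API — but it shows how one generic entry point can subsume the two concrete setters:

```java
import java.util.List;
import java.util.Set;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class SubscriberSketch {
    // Illustrative stand-in for a subscriber abstraction: given the
    // topics available on the cluster, decide which ones to read.
    interface Subscriber {
        Set<String> subscribedTopics(List<String> allTopics);
    }

    // Hypothetical builder: the two existing setters become thin
    // wrappers around the proposed generic setSubscriber(...).
    static class Builder {
        private Subscriber subscriber;

        Builder setTopics(List<String> topics) {
            return setSubscriber(all -> Set.copyOf(topics));
        }

        Builder setTopicPattern(Pattern pattern) {
            return setSubscriber(all -> all.stream()
                    .filter(t -> pattern.matcher(t).matches())
                    .collect(Collectors.toSet()));
        }

        // The proposed generic entry point: arbitrary subscribing logic.
        Builder setSubscriber(Subscriber subscriber) {
            this.subscriber = subscriber;
            return this;
        }

        Set<String> resolve(List<String> allTopics) {
            return subscriber.subscribedTopics(allTopics);
        }
    }

    public static void main(String[] args) {
        List<String> cluster = List.of("orders", "orders-dlq", "payments");
        Set<String> picked = new Builder()
                .setTopicPattern(Pattern.compile("orders.*"))
                .resolve(cluster);
        System.out.println(picked); // set containing "orders" and "orders-dlq"
    }
}
```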



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] snuyanzin commented on pull request #19870: [FLINK-27882][tests][table] Migrate flink-scala to JUnit5

2022-06-02 Thread GitBox


snuyanzin commented on PR #19870:
URL: https://github.com/apache/flink/pull/19870#issuecomment-1145189697

   @flinkbot run azure





[jira] [Updated] (FLINK-27887) Unable to access https://nightlies.apache.org/flink/

2022-06-02 Thread Ben Carlton (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Carlton updated FLINK-27887:

Labels: important  (was: )

> Unable to access https://nightlies.apache.org/flink/
> 
>
> Key: FLINK-27887
> URL: https://issues.apache.org/jira/browse/FLINK-27887
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Ben Carlton
>Priority: Blocker
>  Labels: important
>
> My organization is unable to access nightlies.apache.org/flink
> We believe we may have been mistakenly placed on a deny list. Need someone to 
> contact me for resolution at: 
> [benjamin.carl...@ibm.com|mailto:benjamin.carl...@ibm.com]
> If there is another preferred method of resolution, please let me know how to 
> resolve. Thank you.





[jira] [Updated] (FLINK-27887) Unable to access https://nightlies.apache.org/flink/

2022-06-02 Thread Ben Carlton (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Carlton updated FLINK-27887:

Description: 
My organization is unable to access nightlies.apache.org/flink

We believe we may have been mistakenly placed on a deny list. Need someone to 
contact me for resolution at: 
[benjamin.carl...@ibm.com|mailto:benjamin.carl...@ibm.com]

If there is another preferred method of resolution, please let me know how to 
resolve. I have emailed the webmaster already but we did not receive a 
response. Thank you.

  was:
My organization is unable to access nightlies.apache.org/flink

We believe we may have been mistakenly placed on a deny list. Need someone to 
contact me for resolution at: 
[benjamin.carl...@ibm.com|mailto:benjamin.carl...@ibm.com]

If there is another preferred method of resolution, please let me know how to 
resolve. Thank you.


> Unable to access https://nightlies.apache.org/flink/
> 
>
> Key: FLINK-27887
> URL: https://issues.apache.org/jira/browse/FLINK-27887
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Ben Carlton
>Priority: Blocker
>  Labels: important
>
> My organization is unable to access nightlies.apache.org/flink
> We believe we may have been mistakenly placed on a deny list. Need someone to 
> contact me for resolution at: 
> [benjamin.carl...@ibm.com|mailto:benjamin.carl...@ibm.com]
> If there is another preferred method of resolution, please let me know how to 
> resolve. I have emailed the webmaster already but we did not receive a 
> response. Thank you.





[jira] [Created] (FLINK-27887) Unable to access https://nightlies.apache.org/flink/

2022-06-02 Thread Ben Carlton (Jira)
Ben Carlton created FLINK-27887:
---

 Summary: Unable to access https://nightlies.apache.org/flink/
 Key: FLINK-27887
 URL: https://issues.apache.org/jira/browse/FLINK-27887
 Project: Flink
  Issue Type: Bug
  Components: Project Website
Reporter: Ben Carlton


My organization is unable to access nightlies.apache.org/flink

We believe we may have been mistakenly placed on a deny list. Need someone to 
contact me for resolution at: 
[benjamin.carl...@ibm.com|mailto:benjamin.carl...@ibm.com]

If there is another preferred method of resolution, please let me know how to 
resolve. Thank you.





[GitHub] [flink] snuyanzin commented on pull request #19870: [FLINK-27882][tests][table] Migrate flink-scala to JUnit5

2022-06-02 Thread GitBox


snuyanzin commented on PR #19870:
URL: https://github.com/apache/flink/pull/19870#issuecomment-1145303034

   The PR is based on PR/commit https://github.com/apache/flink/pull/19780, 
which migrates `TypeSerializerUpgradeTestBase` to JUnit5





[jira] [Commented] (FLINK-27887) Unable to access https://nightlies.apache.org/flink/

2022-06-02 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545660#comment-17545660
 ] 

Martijn Visser commented on FLINK-27887:


[~carlton.ibm] Do you have any error message or more details when you're trying 
to access this page? 

> Unable to access https://nightlies.apache.org/flink/
> 
>
> Key: FLINK-27887
> URL: https://issues.apache.org/jira/browse/FLINK-27887
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Ben Carlton
>Priority: Blocker
>  Labels: important
>
> My organization is unable to access nightlies.apache.org/flink
> We believe we may have been mistakenly placed on a deny list. Need someone to 
> contact me for resolution at: 
> [benjamin.carl...@ibm.com|mailto:benjamin.carl...@ibm.com]
> If there is another preferred method of resolution, please let me know how to 
> resolve. I have emailed the webmaster already but we did not receive a 
> response. Thank you.





[jira] [Updated] (FLINK-27887) Unable to access https://nightlies.apache.org/flink/

2022-06-02 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-27887:
---
Priority: Major  (was: Blocker)

> Unable to access https://nightlies.apache.org/flink/
> 
>
> Key: FLINK-27887
> URL: https://issues.apache.org/jira/browse/FLINK-27887
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Ben Carlton
>Priority: Major
>  Labels: important
>
> My organization is unable to access nightlies.apache.org/flink
> We believe we may have been mistakenly placed on a deny list. Need someone to 
> contact me for resolution at: 
> [benjamin.carl...@ibm.com|mailto:benjamin.carl...@ibm.com]
> If there is another preferred method of resolution, please let me know how to 
> resolve. I have emailed the webmaster already but we did not receive a 
> response. Thank you.





[jira] [Commented] (FLINK-27762) Kafka WakeupException during handling splits changes

2022-06-02 Thread Mason Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545668#comment-17545668
 ] 

Mason Chen commented on FLINK-27762:


Hi [~renqs], I also recently encountered a similar issue internally.

> It could only happen if the wakeup operation is triggered after 
> KafkaConsumer#poll returns and the flow is still in 
> KafkaPartitionSplitReader#fetch. 

I don't think so, because the exception is thrown from handling the split 
changes.
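
The mechanism quoted above can be illustrated with a toy model. This is not Kafka's actual code — `ToyConsumer` is a deliberately simplified stand-in for the `KafkaConsumer` wakeup-flag semantics, where `wakeup()` sets a latch and the *next* blocking call throws, whichever call that happens to be:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class WakeupRaceSketch {
    // Toy stand-in for KafkaConsumer's wakeup semantics: wakeup()
    // arms a flag, and whichever blocking call runs next trips it.
    static class ToyConsumer {
        private final AtomicBoolean wakeup = new AtomicBoolean(false);

        void wakeup() { wakeup.set(true); }

        private void checkWakeup() {
            if (wakeup.getAndSet(false)) {
                throw new RuntimeException("WakeupException");
            }
        }

        void poll()     { checkWakeup(); /* fetch records */ }
        long position() { checkWakeup(); return 0L; /* blocking metadata call */ }
    }

    public static void main(String[] args) {
        ToyConsumer consumer = new ToyConsumer();
        consumer.poll();          // poll returns normally...
        consumer.wakeup();        // ...then the wakeup arrives "too late"
        try {
            // ...so the flag trips the next blocking call instead,
            // analogous to position() inside handleSplitsChanges.
            consumer.position();
        } catch (RuntimeException e) {
            System.out.println("caught: " + e.getMessage());
            // prints: caught: WakeupException
        }
    }
}
```

In the stack trace below, the `WakeupException` indeed surfaces from `KafkaConsumer.position` inside `removeEmptySplits`, which is the `position()`-style call in this sketch.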

 

> Kafka WakeupException during handling splits changes
> 
>
> Key: FLINK-27762
> URL: https://issues.apache.org/jira/browse/FLINK-27762
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.3
>Reporter: zoucan
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
>
>  
> We enable dynamic partition discovery in our flink job, but job failed when 
> kafka partition is changed. 
> Exception detail is shown as follows:
> {code:java}
> java.lang.RuntimeException: One or more fetchers have encountered exception
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:225)
>   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:169)
>   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:130)
>   at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:350)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:496)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: SplitFetcher thread 0 received 
> unexpected exception while polling the records
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:150)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:105)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   ... 1 more
> Caused by: org.apache.kafka.common.errors.WakeupException
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:511)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:275)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1726)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1684)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.removeEmptySplits(KafkaPartitionSplitReader.java:315)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.handleSplitsChanges(KafkaPartitionSplitReader.java:200)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.AddSplitsTask.run(AddSplitsTask.java:51)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:142)
>   ... 6 more {code}
>  
> After a preliminary investigation, according to the source code of KafkaSource:
> At first: 
> the method *org.apache.kafka.clients.consumer.KafkaConsumer.wakeup()* will be 
> called if the consumer is polling data.
> Later: 
> metho

[jira] [Commented] (FLINK-27887) Unable to access https://nightlies.apache.org/flink/

2022-06-02 Thread Ben Carlton (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545674#comment-17545674
 ] 

Ben Carlton commented on FLINK-27887:
-

[~mvis...@pivotal.io] No error messages; it just never connects from any AP within 
IBM as a whole. VPN or hotspots work fine. After internal investigations, the 
network team has asked me to inquire here to see whether IBM's addresses were 
accidentally placed on a deny list. If so, we would like them to be placed on 
an allow list instead.

> Unable to access https://nightlies.apache.org/flink/
> 
>
> Key: FLINK-27887
> URL: https://issues.apache.org/jira/browse/FLINK-27887
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Ben Carlton
>Priority: Major
>  Labels: important
>
> My organization is unable to access nightlies.apache.org/flink
> We believe we may have been mistakenly placed on a deny list. Need someone to 
> contact me for resolution at: 
> [benjamin.carl...@ibm.com|mailto:benjamin.carl...@ibm.com]
> If there is another preferred method of resolution, please let me know how to 
> resolve. I have emailed the webmaster already but we did not receive a 
> response. Thank you.





[GitHub] [flink] luoyuxia commented on pull request #18958: [FLINK-15854][hive] Use the new type inference for Hive UDTF

2022-06-02 Thread GitBox


luoyuxia commented on PR #18958:
URL: https://github.com/apache/flink/pull/18958#issuecomment-1145440808

   @flinkbot run azure





[GitHub] [flink-table-store] JingsongLi merged pull request #145: [FLINK-27875] Introduce TableScan and TableRead as an abstraction layer above FileStore for reading RowData

2022-06-02 Thread GitBox


JingsongLi merged PR #145:
URL: https://github.com/apache/flink-table-store/pull/145





[jira] [Closed] (FLINK-27875) Introduce TableScan and TableRead as an abstraction layer above FileStore for reading RowData

2022-06-02 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-27875.

Fix Version/s: table-store-0.2.0
 Assignee: Caizhi Weng
   Resolution: Fixed

master:

1012ea20cecbd6237acefc6427769bec164d5076

3ec19cd53e7db287028c142fb5754d73353ff1af

> Introduce TableScan and TableRead as an abstraction layer above FileStore for 
> reading RowData
> -
>
> Key: FLINK-27875
> URL: https://issues.apache.org/jira/browse/FLINK-27875
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> In this step we introduce {{TableScan}} and {{TableRead}}. They are an 
> abstraction layer above {{FileStoreScan}} and {{FileStoreRead}} to provide 
> {{RowData}} reading.
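
The shape of such an abstraction layer can be sketched as follows. All types here are simplified stand-ins for illustration only (the real `FileStoreScan`/`FileStoreRead` and `RowData` interfaces in flink-table-store look different); the point is that the table-level types hide the store-level record representation:

```java
import java.util.Collections;
import java.util.List;

public class TableReadSketch {
    // Hypothetical low-level store types (stand-ins for FileStoreScan /
    // FileStoreRead, which operate on store-internal records).
    record Split(String file) {}
    interface FileStoreScan { List<Split> plan(); }
    interface FileStoreRead { List<Object[]> read(Split split); }

    // Illustrative RowData stand-in.
    record RowData(Object[] fields) {}

    // The abstraction layer: TableScan plans splits, TableRead converts
    // store records into RowData, so callers never see store internals.
    static class TableScan {
        private final FileStoreScan scan;
        TableScan(FileStoreScan scan) { this.scan = scan; }
        List<Split> plan() { return scan.plan(); }
    }

    static class TableRead {
        private final FileStoreRead read;
        TableRead(FileStoreRead read) { this.read = read; }
        List<RowData> read(Split split) {
            return read.read(split).stream().map(RowData::new).toList();
        }
    }

    public static void main(String[] args) {
        FileStoreScan scan = () -> List.of(new Split("data-0.orc"));
        FileStoreRead store = s -> Collections.singletonList(new Object[]{1, "a"});
        TableScan tableScan = new TableScan(scan);
        TableRead tableRead = new TableRead(store);
        for (Split split : tableScan.plan()) {
            System.out.println(tableRead.read(split).size() + " rows from " + split.file());
        }
        // prints: 1 rows from data-0.orc
    }
}
```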




