[jira] [Commented] (FLINK-27058) Add support for Python 3.9

2022-05-23 Thread LuNng Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541279#comment-17541279
 ] 

LuNng Wang commented on FLINK-27058:


Need to bump fastavro

> Add support for Python 3.9
> --
>
> Key: FLINK-27058
> URL: https://issues.apache.org/jira/browse/FLINK-27058
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Martijn Visser
>Assignee: LuNng Wang
>Priority: Major
>
> As mentioned on the Dev mailing list 
> https://lists.apache.org/thread/oq6p1zqnvk9m240j4nlbot7gw0grcvvh and in the 
> documentation 
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/python/installation/
>  PyFlink currently requires Python version (3.6, 3.7 or 3.8). We should make 
> sure that PyFlink also works with Python version 3.9



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27750) The configuration of JobManagerOptions.TOTAL_PROCESS_MEMORY (jobmanager.memory.process.size) does not work

2022-05-23 Thread dong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dong updated FLINK-27750:
-
Description: 
{code:java}
Configuration flinkConfiguration = GlobalConfiguration.loadConfiguration();

flinkConfiguration
        .set(DeploymentOptions.TARGET, KubernetesDeploymentTarget.APPLICATION.getName())
        .set(PipelineOptions.JARS, Collections.singletonList(flinkDistJar))
        .set(KubernetesConfigOptions.CLUSTER_ID, "APPLICATION1")
        .set(KubernetesConfigOptions.CONTAINER_IMAGE, "img_url")
        .set(KubernetesConfigOptions.CONTAINER_IMAGE_PULL_POLICY,
                KubernetesConfigOptions.ImagePullPolicy.Always)
        .set(JobManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("4096M"))
        .set...;

KubernetesClusterDescriptor kubernetesClusterDescriptor =
        new KubernetesClusterDescriptor(
                flinkConfiguration,
                new Fabric8FlinkKubeClient(
                        flinkConfiguration,
                        new DefaultKubernetesClient(),
                        Executors.newFixedThreadPool(2)));

ApplicationConfiguration applicationConfiguration =
        new ApplicationConfiguration(execArgs, null);

ClusterClient clusterClient =
        kubernetesClusterDescriptor
                .deployApplicationCluster(
                        new ClusterSpecification.ClusterSpecificationBuilder()
                                .createClusterSpecification(),
                        applicationConfiguration)
                .getClusterClient();
{code}
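A quick sanity check before deploying (a sketch reusing the objects above; my illustration, not part of the report):
{code:java}
// Hypothetical check, placed just before deployApplicationCluster(...):
MemorySize jmMemory = flinkConfiguration.get(JobManagerOptions.TOTAL_PROCESS_MEMORY);
// Expected to print 4096m; if the deployed JobManager still reports a different
// value, the option is being overridden later in the deployment path.
System.out.println("jobmanager.memory.process.size = " + jmMemory);
{code}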

> The configuration of JobManagerOptions.TOTAL_PROCESS_MEMORY 
> (jobmanager.memory.process.size) does not work
> ---
>
> Key: FLINK-27750
> URL: https://issues.apache.org/jira/browse/FLINK-27750
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.14.4
>Reporter: dong
>Priority: Major
>
> {code:java}
> Configuration flinkConfiguration = GlobalConfiguration.loadConfiguration();
> flinkConfiguration
>     .set(DeploymentOptions.TARGET, KubernetesDeploymentTarget.APPLICATION.getName())
>     .set(PipelineOptions.JARS, Collections.singletonList(flinkDistJar))
>     .set(KubernetesConfigOptions.CLUSTER_ID, "APPLICATION1")
>     .set(KubernetesConfigOptions.CONTAINER_IMAGE, "img_url")
>     .set(KubernetesConfigOptions.CONTAINER_IMAGE_PULL_POLICY, KubernetesConfigOptions.ImagePullPolicy.Always)
>     .set(JobManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("4096M"))
>     .set...;
> KubernetesClusterDescriptor kubernetesClusterDescriptor = new KubernetesClusterDescriptor(
>     flinkConfiguration,
>     new Fabric8FlinkKubeClient(
>         flinkConfiguration,
>         new DefaultKubernetesClient(),
>         Executors.newFixedThreadPool(2)));
> ApplicationConfiguration applicationConfiguration = new ApplicationConfiguration(execArgs, null);
> ClusterClient clusterClient = kubernetesClusterDescriptor.deployApplicationCluster(
>     new ClusterSpecification.ClusterSpecificationBuilder().createClusterSpecification(),
>     applicationConfiguration).getClusterClient();
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-ml] lindong28 closed pull request #103: [FLINK-27742] Fix Compatibility Issue Between Stages

2022-05-23 Thread GitBox


lindong28 closed pull request #103: [FLINK-27742] Fix Compatibility Issue 
Between Stages
URL: https://github.com/apache/flink-ml/pull/103


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-ml] lindong28 commented on pull request #103: [FLINK-27742] Fix Compatibility Issue Between Stages

2022-05-23 Thread GitBox


lindong28 commented on PR #103:
URL: https://github.com/apache/flink-ml/pull/103#issuecomment-1135426603

   @yunfengzhou-hub Could you help update the PR title and the commit message 
so that it follows the same style as existing titles/messages? For example, not 
every word needs to be capitalized. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-ml] lindong28 commented on a diff in pull request #103: [FLINK-27742] Fix Compatibility Issue Between Stages

2022-05-23 Thread GitBox


lindong28 commented on code in PR #103:
URL: https://github.com/apache/flink-ml/pull/103#discussion_r880067284


##
flink-ml-lib/src/test/java/org/apache/flink/ml/classification/KnnTest.java:
##
@@ -121,8 +124,9 @@ private static void verifyPredictionResult(Table output, 
String labelCol, String
 @Override
 public Tuple2 map(Row row) 
{
 return Tuple2.of(
-(Double) 
row.getField(labelCol),
-(Double) 
row.getField(predictionCol));
+((Number) 
row.getField(labelCol)).doubleValue(),

Review Comment:
   The test code should basically mimic the user code's behavior.
   
   Since we have updated the stage to output double-typed label column, do we 
still need to do type conversion here using `Number::doubleValue()`?
   
   Same for prediction column.
   
   Same for `LinearSVCTest`.



##
flink-ml-lib/src/test/java/org/apache/flink/ml/classification/KnnTest.java:
##
@@ -179,6 +183,42 @@ public void testFitAndPredict() throws Exception {
 verifyPredictionResult(output, knn.getLabelCol(), 
knn.getPredictionCol());
 }
 
+@Test
+public void testDataTypes() throws Exception {

Review Comment:
   nit: would it be more intuitive to rename this test to 
`testInputTypeConversion()`?
   
   Same for `LinearSVCTest`.



##
flink-ml-core/src/main/java/org/apache/flink/ml/linalg/DenseVector.java:
##
@@ -51,6 +51,35 @@ public double[] toArray() {
 return values;
 }
 
+@Override
+public DenseVector toDense() {
+return this;
+}
+
+@Override
+public SparseVector toSparse() {
+int nnz = 0;

Review Comment:
   I am not sure `nnz` is an intuitive name. The general rule is to make 
variable names either self-explanatory or simple.
   
   Could we change this to either `non_zero_count` or `count`?
   
   Same for `ii` and `vv`.
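   For reference, a typical dense-to-sparse conversion along these lines, using 
the suggested self-explanatory naming (a sketch, not necessarily the PR's final 
code):
   
   ```java
   @Override
   public SparseVector toSparse() {
       // First pass: count the non-zero entries.
       int nonZeroCount = 0;
       for (double value : values) {
           if (value != 0.0) {
               nonZeroCount++;
           }
       }
       // Second pass: collect the indices and values of the non-zero entries.
       int[] nonZeroIndices = new int[nonZeroCount];
       double[] nonZeroValues = new double[nonZeroCount];
       int pos = 0;
       for (int i = 0; i < values.length; i++) {
           if (values[i] != 0.0) {
               nonZeroIndices[pos] = i;
               nonZeroValues[pos++] = values[i];
           }
       }
       return new SparseVector(values.length, nonZeroIndices, nonZeroValues);
   }
   ```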



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-18640) Fix PostgresDialect doesn't quote the identifiers

2022-05-23 Thread Yaroslav Tkachenko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541275#comment-17541275
 ] 

Yaroslav Tkachenko commented on FLINK-18640:


I'd love to address this issue because, currently, PostgresDialect is not very 
useful in real production scenarios; e.g. keywords used as column names are 
quite common.

I want to propose a simple and (in my opinion) elegant change to address the 
issue (I can create a PR if we agree here):
{code:java}
public String quoteIdentifier(String identifier) {
    // skip if already quoted
    if (identifier.startsWith("\"")) {
        return identifier;
    } else {
        return "\"" + identifier + "\"";
    }
}
{code}
Why I consider it the best solution:
 * It's very simple, with minimal changes.
 * Since Postgres identifiers can contain dots, it's very hard to know when 
it's safe to split them for escaping. E.g., schema.table_name is fine to escape 
as "schema"."table_name", but you can't do the same for schema.table.name. So 
my proposal is to just let the user deal with it; they know how to escape 
properly. That's why _quoteIdentifier_ checks for existing quoting first (see 
the sketch below).
 * In most scenarios, though, identifiers won't have dots, so it's safe to 
quote them.
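For illustration, the expected behavior under this proposal (hypothetical identifiers, assuming the quoteIdentifier method above):
{code:java}
quoteIdentifier("order");                     // -> "order"  (keyword wrapped in quotes, usable as a column name)
quoteIdentifier("\"schema\".\"table_name\""); // -> "schema"."table_name"  (already quoted, returned unchanged)
{code}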

Thoughts?

> Fix PostgresDialect doesn't quote the identifiers
> -
>
> Key: FLINK-18640
> URL: https://issues.apache.org/jira/browse/FLINK-18640
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.9.1, 1.10.1
>Reporter: 毛宗良
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
>
> Flink jdbc throw exceptions when read a postgresql table with scheam, like 
> "ods.t_test". BY debugging the source code, I found a bug about dealing the 
> table name.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-table-store] ajian2002 commented on pull request #121: [FLINK-27626] Introduce AggregationMergeFunction

2022-05-23 Thread GitBox


ajian2002 commented on PR #121:
URL: 
https://github.com/apache/flink-table-store/pull/121#issuecomment-1135419375

   > You can try this test case if you're interested.
   > 
   > ```java
   > @Test
   > public void myTest() throws Exception {
   > String ddl3 =
   > "CREATE TABLE IF NOT EXISTS T5 ( dt STRING, hr INT, price INT, 
PRIMARY KEY (dt, hr) NOT ENFORCED ) WITH ( 'merge-engine' = 'aggregation', 
'price.aggregate-function' = 'sum' );";
   > String tmpPath;
   > try {
   > tmpPath = TEMPORARY_FOLDER.newFolder().toURI().toString();
   > } catch (Exception e) {
   > throw new RuntimeException(e);
   > }
   > String ddl4 =
   > "CREATE TABLE IF NOT EXISTS A ( dt STRING, hr INT, price INT ) 
WITH ( 'connector' = 'filesystem', 'path' = '"
   > + tmpPath
   > + "', 'format' = 'avro' );";
   > String ddl5 =
   > "CREATE TABLE IF NOT EXISTS P ( dt STRING, hr INT, price INT ) 
WITH ( 'connector' = 'print' );";
   > bEnv.executeSql(ddl3).await();
   > bEnv.executeSql(ddl4).await();
   > sEnv.executeSql(ddl3).await();
   > sEnv.executeSql(ddl4).await();
   > sEnv.executeSql(ddl5).await();
   > bEnv.executeSql(
   > "INSERT INTO A VALUES ('20220101', 8, 100), 
('20220101', 8, 300), ('20220101', 8, 200), ('20220101', 8, 400), ('20220101', 
9, 100)")
   > .await();
   > sEnv.executeSql(
   > "INSERT INTO T5 SELECT dt, hr, price FROM ("
   > + "  SELECT dt, hr, price, ROW_NUMBER() OVER 
(PARTITION BY dt, hr ORDER BY price desc) AS rn FROM A"
   > + ") WHERE rn <= 2")
   > .await();
   > sEnv.executeSql(
   > "INSERT INTO P SELECT dt, hr, price FROM ("
   > + "  SELECT dt, hr, price, ROW_NUMBER() OVER 
(PARTITION BY dt, hr ORDER BY price desc) AS rn FROM A"
   > + ") WHERE rn <= 2")
   > .await();
   > List result = iteratorToList(bEnv.from("T5").execute().collect());
   > System.out.println(result);
   > }
   > ```
   > 
   > This SQL script calculates the sum of the top 2 largest price in each 
hour. I've also created a print sink so that you can see what's going into the 
sink.
   > 
   > ```
   > 1> +I[20220101, 8, 100]
   > 1> +I[20220101, 8, 300] # insert price 100 and 300
   > 1> -D[20220101, 8, 100]
   > 1> +I[20220101, 8, 200] # price 200 comes, so 100 should be removed out of 
the result
   > 1> -D[20220101, 8, 200]
   > 1> +I[20220101, 8, 400] # price 400 comes, so 200 should be removed out of 
the result
   > 1> +I[20220101, 9, 100] # 100 in another hour, not affected
   > ```
   > 
   > The expected result of our aggregate function should be `[+I[20220101, 8, 
700], +I[20220101, 9, 100]]` but sadly current implementation prints out 
`[+I[20220101, 8, 1300], +I[20220101, 9, 100]]`.
   > 
   > Sorry that my previous comments on row kinds are sort of misleading. Flink 
does have 4 row kinds but in Table Store we only consider key-value pairs 
instead of rows. Key-value pairs only have two value kinds (provided by 
`KeyValue#valueKind`): `ValueKind.ADD` means updating the corresponding key 
with the value (you can think of it as `Map#compute` in Java) and 
`ValueKind.DELETE` means removing the corresponding key (like `Map#remove` in 
Java). Although each key and each value is represented by a `RowData`, their 
row kind are meaningless and are always `INSERT`. Only their value kinds are 
important.
   > 
   > To connect table store with Flink, we parse `RowData` from Flink into 
`KeyValue` according to both its row kind and the merge function used in 
`flink-table-store-connector` module. As the number of merge functions are 
growing we should consider extracting a common method to complete this parsing. 
But that is out of the scope of this PR.
   > 
   > For this PR I suggest that we only support `INSERT` row kind and leave the 
support for other row kind later. All in all each PR should be as small as 
possible and only concentrate on one thing.
   
   
   Sorry, I didn't understand what you meant. Your test executes the SQL
   > "INSERT INTO A VALUES ('20220101', 8, 100), ('20220101', 8, 300), 
('20220101', 8, 200), ('20220101', 8, 400), ('20220101', 9, 100)"
   
   but I didn't find where `-D[20220101, 8, 200]` is executed.
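   For what it's worth, the ADD/DELETE value-kind semantics described in the 
quoted comment can be sketched with a plain Java map (an analogy only, not the 
Table Store API):
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   public class ValueKindAnalogy {
       public static void main(String[] args) {
           Map<String, Integer> store = new HashMap<>();
           // ValueKind.ADD: update the key's value, combining it with the
           // previous value according to the merge function (here: sum).
           store.merge("20220101|8", 100, Integer::sum); // ADD 100
           store.merge("20220101|8", 300, Integer::sum); // ADD 300 -> key now maps to 400
           // ValueKind.DELETE: remove the key entirely, like Map#remove.
           store.remove("20220101|8");
           System.out.println(store); // {}
       }
   }
   ```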


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27750) The configuration of JobManagerOptions.TOTAL_PROCESS_MEMORY (jobmanager.memory.process.size) does not work

2022-05-23 Thread dong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dong updated FLINK-27750:
-
  Component/s: Deployment / Kubernetes
 Language: Java
Affects Version/s: 1.14.4
  Summary: The configuration of 
JobManagerOptions.TOTAL_PROCESS_MEMORY (jobmanager.memory.process.size) does 
not work  (was: The configuration of)

> The configuration of JobManagerOptions.TOTAL_PROCESS_MEMORY 
> (jobmanager.memory.process.size) does not work
> ---
>
> Key: FLINK-27750
> URL: https://issues.apache.org/jira/browse/FLINK-27750
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.14.4
>Reporter: dong
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27750) The configuration of

2022-05-23 Thread dong (Jira)
dong created FLINK-27750:


 Summary: The configuration of
 Key: FLINK-27750
 URL: https://issues.apache.org/jira/browse/FLINK-27750
 Project: Flink
  Issue Type: Bug
Reporter: dong






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27683) Insert into (column1, column2) Values(.....) can't work with sql Hints

2022-05-23 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541262#comment-17541262
 ] 

Shengkai Fang commented on FLINK-27683:
---

You can open a PR and ping me when you are ready.

> Insert into (column1, column2) Values(.....) can't work with sql Hints
> --
>
> Key: FLINK-27683
> URL: https://issues.apache.org/jira/browse/FLINK-27683
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.14.0, 1.16.0, 1.15.1
>Reporter: Xin Yang
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> When I try to use the statement `Insert into (column1, column2) 
> Values(.....)` with SQL hints, it throws an exception, which is certainly a 
> bug.
>  
>  * Sql 1
> {code:java}
> INSERT INTO `tidb`.`%s`.`%s` /*+ OPTIONS('tidb.sink.update-columns'='c2, 
> c13')*/   (c2, c13) values(1, 12.12) {code}
>  * 
>  ** result 1
> !screenshot-1.png!
>  * Sql 2
> {code:java}
> INSERT INTO `tidb`.`%s`.`%s` (c2, c13) /*+ 
> OPTIONS('tidb.sink.update-columns'='c2, c13')*/  values(1, 12.12)
> {code}
>  * 
>  ** result 2
> !screenshot-2.png!
>  * Sql 3
> {code:java}
> INSERT INTO `tidb`.`%s`.`%s` (c2, c13)  values(1, 12.12)
> {code}
>  * 
>  ** result3 : success



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] 1996fanrui commented on pull request #19781: Just for test : output buffer aligned to uc

2022-05-23 Thread GitBox


1996fanrui commented on PR #19781:
URL: https://github.com/apache/flink/pull/19781#issuecomment-1135382292

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27464) java.lang.NoClassDefFoundError

2022-05-23 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541255#comment-17541255
 ] 

Shengkai Fang commented on FLINK-27464:
---

+1 to use Flink CDC as the source, which is much more powerful.

I notice you use parent-first as the resolve order. For connectors, you should 
use child-first (i.e. classloader.resolve-order: child-first).

> java.lang.NoClassDefFoundError
> --
>
> Key: FLINK-27464
> URL: https://issues.apache.org/jira/browse/FLINK-27464
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.13.6
>Reporter: zhangxin
>Priority: Major
>
> used config  classloader.resolve-order: parent-first
> at:
>  
> Caused by: org.apache.flink.util.FlinkException: Global failure triggered by 
> OperatorCoordinator for 'Source: Flink-IMS -> Map -> Sink: Unnamed' (operator 
> cbc357ccb763df2852fee8c4fc7d55f2).
>     at 
> org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder$LazyInitializedCoordinatorContext.failJob(OperatorCoordinatorHolder.java:553)
>     at 
> org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$QuiesceableContext.failJob(RecreateOnResetOperatorCoordinator.java:223)
>     at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorContext.failJob(SourceCoordinatorContext.java:285)
>     at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:133)
>     at 
> org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$DeferrableCoordinator.resetAndStart(RecreateOnResetOperatorCoordinator.java:381)
>     at 
> org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator.lambda$resetToCheckpoint$6(RecreateOnResetOperatorCoordinator.java:136)
>     at 
> java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:719)
>     at 
> java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:701)
>     at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>     at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>     at 
> org.apache.flink.runtime.operators.coordination.ComponentClosingUtils.lambda$closeAsyncWithTimeout$0(ComponentClosingUtils.java:71)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NoClassDefFoundError: com/zaxxer/hikari/HikariConfig
>     at 
> com.ververica.cdc.connectors.mysql.source.connection.PooledDataSourceFactory.createPooledDataSource(PooledDataSourceFactory.java:38)
>     at 
> com.ververica.cdc.connectors.mysql.source.connection.JdbcConnectionPools.getOrCreateConnectionPool(JdbcConnectionPools.java:51)
>     at 
> com.ververica.cdc.connectors.mysql.source.connection.JdbcConnectionFactory.connect(JdbcConnectionFactory.java:53)
>     at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:872)
>     at io.debezium.jdbc.JdbcConnection.connection(JdbcConnection.java:867)
>     at io.debezium.jdbc.JdbcConnection.connect(JdbcConnection.java:413)
>     at 
> com.ververica.cdc.connectors.mysql.debezium.DebeziumUtils.openJdbcConnection(DebeziumUtils.java:62)
>     at 
> com.ververica.cdc.connectors.mysql.MySqlValidator.validate(MySqlValidator.java:68)
>     at 
> com.ververica.cdc.connectors.mysql.source.MySqlSource.createEnumerator(MySqlSource.java:156)
>     at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:129)
>     ... 8 more



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (FLINK-27746) Flink kubernetes operator docker image could not build with source release

2022-05-23 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang reassigned FLINK-27746:
-

Assignee: Nicholas Jiang

> Flink kubernetes operator docker image could not build with source release
> --
>
> Key: FLINK-27746
> URL: https://issues.apache.org/jira/browse/FLINK-27746
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Assignee: Nicholas Jiang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>
> Could not build the Docker image from the source release, getting the
> following error:
> > [build 11/14] COPY .git ./.git:
> --
> failed to compute cache key: "/.git" not found: not found



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] flinkbot commented on pull request #19801: AdaptiveScheduler support operator fixed parallelism

2022-05-23 Thread GitBox


flinkbot commented on PR #19801:
URL: https://github.com/apache/flink/pull/19801#issuecomment-1135360046

   
   ## CI report:
   
   * dfcd0cc45784de66c3e2f47a02eee1b022c38d14 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] jlon opened a new pull request, #19801: AdaptiveScheduler support operator fixed parallelism

2022-05-23 Thread GitBox


jlon opened a new pull request, #19801:
URL: https://github.com/apache/flink/pull/19801

   ## What is the purpose of the change
   
   In the job topology, if the user specifies the parallelism of an operator, 
AdaptiveScheduler should treat the operator's maximum parallelism as equal to 
the user-specified parallelism during scheduling, and the minimum parallelism 
as equal to the number of slots available in the cluster. 
   This is especially useful in certain scenarios: for example, when the 
parallelism of an operator that consumes Kafka is set equal to the number of 
Kafka partitions, or when you want to control the write rate of an operator.
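   For illustration, the user-facing side is just the usual fixed parallelism 
on an operator (a sketch; `kafkaSource` and `kafkaPartitionCount` are 
placeholders):
   
   ```java
   // Under this change, AdaptiveScheduler would treat the value passed to
   // setParallelism(...) as the operator's maximum parallelism.
   DataStream<String> stream =
           env.fromSource(kafkaSource, WatermarkStrategy.noWatermarks(), "kafka-source")
                   .setParallelism(kafkaPartitionCount);
   ```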
   
   ## Verifying this change
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (docs)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #19800: [docs]update User-defined Sources & Sinks document

2022-05-23 Thread GitBox


flinkbot commented on PR #19800:
URL: https://github.com/apache/flink/pull/19800#issuecomment-1135358517

   
   ## CI report:
   
   * efd52e5aa85e96b5551b5cea374eddf40523ed95 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] senlizishi opened a new pull request, #19800: [docs]update User-defined Sources & Sinks document

2022-05-23 Thread GitBox


senlizishi opened a new pull request, #19800:
URL: https://github.com/apache/flink/pull/19800

   
   
   ## What is the purpose of the change
   
   *Some of the code in the document does not work, and it is not in sync with 
the latest code*
   
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27738) instance KafkaSink support config topic properties

2022-05-23 Thread LCER (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541244#comment-17541244
 ] 

LCER commented on FLINK-27738:
--

[~fsk119], thank you for your reply. If the topic does not exist when KafkaSink 
sends a message, it is created automatically, so I want to modify the topic 
properties. For now I use pre-created topics to work around this problem, but I 
don't think that is a good idea.

 

> instance KafkaSink support config topic properties
> --
>
> Key: FLINK-27738
> URL: https://issues.apache.org/jira/browse/FLINK-27738
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: LCER
>Priority: Major
>
> I use KafkaSink to configure Kafka as follows:
> {code:java}
> KafkaSink.builder()
>     .setBootstrapServers(brokers)
>     .setRecordSerializer(
>         KafkaRecordSerializationSchema.builder()
>             .setTopicSelector(topicSelector)
>             .setValueSerializationSchema(new SimpleStringSchema())
>             .build())
>     .setDeliverGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
>     .setKafkaProducerConfig(properties)
>     .build();
> {code}
> I can't find any method to configure topic properties.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-kubernetes-operator] wangyang0918 commented on pull request #240: [FLINK-27746] Use tar instead of helm package when creating source release

2022-05-23 Thread GitBox


wangyang0918 commented on PR #240:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/240#issuecomment-1135349398

   cc @mbalassi @gyfora WDYT?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-27747) Flink kubernetes operator helm chart release the Chart.yaml file doesn't have an apache license header

2022-05-23 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang reassigned FLINK-27747:
-

Assignee: Yang Wang

> Flink kubernetes operator helm chart release the Chart.yaml file doesn't have 
> an apache license header
> --
>
> Key: FLINK-27747
> URL: https://issues.apache.org/jira/browse/FLINK-27747
> Project: Flink
>  Issue Type: Bug
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
> Fix For: kubernetes-operator-1.0.0
>
>
> When verifying the 1.0.0-rc1, [~gyfora] found that the Chart.yaml file 
> doesn't have an apache license header.
> It seems this is caused by {{helm package}} in the 
> {{create_source_release.sh}}.
> We also have this issue in the 0.1.0 release[1].
> [1]. 
> https://dist.apache.org/repos/dist/release/flink/flink-kubernetes-operator-0.1.0/



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27746) Flink kubernetes operator docker image could not build with source release

2022-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27746:
---
Labels: pull-request-available  (was: )

> Flink kubernetes operator docker image could not build with source release
> --
>
> Key: FLINK-27746
> URL: https://issues.apache.org/jira/browse/FLINK-27746
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>
> Could not build the Docker image from the source release, getting the
> following error:
> > [build 11/14] COPY .git ./.git:
> --
> failed to compute cache key: "/.git" not found: not found



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-web] wangyang0918 commented on pull request #542: Add Kubernetes Operator 1.0.0 release

2022-05-23 Thread GitBox


wangyang0918 commented on PR #542:
URL: https://github.com/apache/flink-web/pull/542#issuecomment-1135349037

   cc @mbalassi @gyfora WDYT?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] wangyang0918 opened a new pull request, #240: [FLINK-27746] Use tar instead of helm package when creating source release

2022-05-23 Thread GitBox


wangyang0918 opened a new pull request, #240:
URL: https://github.com/apache/flink-kubernetes-operator/pull/240

   The `helm package` command removes the Apache license header in 
`Chart.yaml`. Given that we already update the version in 
`update_branch_version`, it is unnecessary to specify `--app-version` and 
`--version` again.
   
   So we can simply use `tar` to prepare the helm package.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27219) CliClientITCase.testSqlStatements failed on azure with jdk11

2022-05-23 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541243#comment-17541243
 ] 

Huang Xingbo commented on FLINK-27219:
--

[~fsk119] Thanks a lot for the investigation. As far as I know, the machines in 
our private Azure pipeline are provided by Azure, but the tests of the Flink 
Azure pipeline run on Alibaba Cloud machines. I am not sure if this is related, 
but I have encountered this difference before, where an unstable test failed 
only on the Azure machines. Let's merge this commit into the master branch 
first to see if we can get more specific stack information.

Merged the "print stack info" commit in 3c5d3bacda580de1dc4986909a0aa5fc30e37885

> CliClientITCase.testSqlStatements failed on azure with jdk11
> 
>
> Key: FLINK-27219
> URL: https://issues.apache.org/jira/browse/FLINK-27219
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Yun Gao
>Assignee: Shengkai Fang
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 # test "ctas" only supported in Hive Dialect
> Apr 13 04:56:44 CREATE TABLE foo as select 1;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 # list the configured configuration
> Apr 13 04:56:44 set;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 # reset the configuration
> Apr 13 04:56:44 reset;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 set;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> ...
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 org.apache.flink.sql.parser.impl.ParseException: Encountered 
> "STRING" at line 10, column 27.
> Apr 13 04:56:44 Was expecting one of:
> Apr 13 04:56:44 ")" ...
> Apr 13 04:56:44 "," ...
> Apr 13 04:56:44 
> Apr 13 04:56:44 !error
> ...
> Apr 13 04:56:44 SHOW JARS;
> Apr 13 04:56:44 Empty set
> Apr 13 04:56:44 !ok
> Apr 13 04:56:44 "
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 13 04:56:44   at 
> org.apache.flink.table.client.cli.CliClientITCase.testSqlStatements(CliClientITCase.java:139)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 13 04:56:44   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> Apr 13 

[GitHub] [flink] 1996fanrui commented on a diff in pull request #19723: [FLINK-27251][checkpoint] Timeout aligned to unaligned checkpoint barrier in the output buffers

2022-05-23 Thread GitBox


1996fanrui commented on code in PR #19723:
URL: https://github.com/apache/flink/pull/19723#discussion_r880013345


##
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/PipelinedSubpartition.java:
##
@@ -240,9 +260,158 @@ private boolean processPriorityBuffer(BufferConsumer 
bufferConsumer, int partial
 inflightBuffers.toArray(new Buffer[0]));
 }
 }
-return numPriorityElements == 1
-&& !isBlocked; // if subpartition is blocked then downstream 
doesn't expect any
-// notifications
+return needNotifyPriorityEvent();
+}
+
+// This should only be called after a priority event has been added.
+private boolean needNotifyPriorityEvent() {
+assert Thread.holdsLock(buffers);
+// if subpartition is blocked then downstream doesn't expect any 
notifications
+return buffers.getNumPriorityElements() == 1 && !isBlocked;
+}
+
+private void processTimeoutableCheckpointBarrier(BufferConsumer 
bufferConsumer) {
+CheckpointBarrier barrier = 
parseAndCheckTimeoutableCheckpointBarrier(bufferConsumer);
+channelStateWriter.addOutputDataFuture(
+barrier.getId(),
+subpartitionInfo,
+ChannelStateWriter.SEQUENCE_NUMBER_UNKNOWN,
+createChannelStateFuture(barrier.getId()));
+}
+
+private void completeTimeoutableCheckpointBarrier(BufferConsumer 
bufferConsumer) {
+CheckpointBarrier barrier = 
parseAndCheckTimeoutableCheckpointBarrier(bufferConsumer);
+if (channelStateFutureIsAvailable(barrier.getId())) {
+completeChannelStateFuture(Collections.emptyList(), null);
+}
+}

Review Comment:
   Actually, it may also be false here.
   
   You mentioned this in a [previous 
comment](https://github.com/apache/flink/pull/19723#discussion_r875800201): it 
happens when a checkpoint has been aborted previously.
   
   BTW, for these 2 comments, I'll add a brief description to the code 
explaining why we can't use checkState, to prevent it from being "corrected" 
later.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] HuangXingBo closed pull request #19796: [FLINK-27219][sql-client] Print exception stack when get errors

2022-05-23 Thread GitBox


HuangXingBo closed pull request #19796: [FLINK-27219][sql-client] Print 
exception stack when get errors
URL: https://github.com/apache/flink/pull/19796


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27738) instance KafkaSink support config topic properties

2022-05-23 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541242#comment-17541242
 ] 

Shengkai Fang commented on FLINK-27738:
---

Do you mean you want to use the KafkaSink to modify the config of the Kafka 
topic? If so, I think it's not a good idea. The connector API just inserts, 
modifies, or deletes the content in the topic, which does not influence the 
topic itself. In the Flink world, the Catalog is used to manage metadata, 
including altering the topic config.

If you want to set the KafkaProducer properties, I think setProperties[1] is 
enough for you?

[1] 
https://github.com/apache/flink/blob/b4bb9c8bffe1e37ad6912348d8b3bef89af42286/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/KafkaSinkBuilder.java#L122
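For completeness, topic-level settings are changed outside the sink, e.g. with Kafka's AdminClient (a sketch, not part of the Flink API; the broker address and topic name are placeholders):
{code:java}
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class TopicConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            // Raise the topic's max message size (example value: 10 MiB).
            AlterConfigOp raiseLimit = new AlterConfigOp(
                    new ConfigEntry("max.message.bytes", "10485760"),
                    AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(topic, List.of(raiseLimit))).all().get();
        }
    }
}
{code}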

> instance KafkaSink support config topic properties
> --
>
> Key: FLINK-27738
> URL: https://issues.apache.org/jira/browse/FLINK-27738
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: LCER
>Priority: Major
>
> I use KafkaSink to configure Kafka as follows:
> {code:java}
> KafkaSink.builder()
>     .setBootstrapServers(brokers)
>     .setRecordSerializer(
>         KafkaRecordSerializationSchema.builder()
>             .setTopicSelector(topicSelector)
>             .setValueSerializationSchema(new SimpleStringSchema())
>             .build())
>     .setDeliverGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
>     .setKafkaProducerConfig(properties)
>     .build();
> {code}
> I can't find any method to configure topic properties.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] 1996fanrui commented on a diff in pull request #19723: [FLINK-27251][checkpoint] Timeout aligned to unaligned checkpoint barrier in the output buffers

2022-05-23 Thread GitBox


1996fanrui commented on code in PR #19723:
URL: https://github.com/apache/flink/pull/19723#discussion_r879691899


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorImpl.java:
##
@@ -335,6 +380,34 @@ public void checkpointState(
 }
 }
 
+private void registerAlignmentTimer(
+long checkpointId,
+OperatorChain operatorChain,
+CheckpointBarrier checkpointBarrier) {
+if (alignmentTimer != null) {

Review Comment:
   Actually, it may not be null.
   
   For a job without back pressure, the checkpoint completes quickly but the 
timer is not triggered; then the next checkpoint proceeds, so the timer may not 
be null here. It should be reasonable to cancel the previous timer at this 
point.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-27733) Rework on_timer output behind watermark bug fix

2022-05-23 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo resolved FLINK-27733.
--
  Assignee: Juntao Hu
Resolution: Fixed

merged into master via a0ef9eb46ad3896d6d87595dbe364f69d583794c
merged into release-1.15 via f413c40c8ab8145d3bdea8dbc6372961a598be37
merged into release-1.14 via 945c15341b93a9bfadc7b6ce239a96c2b7baf592

> Rework on_timer output behind watermark bug fix
> ---
>
> Key: FLINK-27733
> URL: https://issues.apache.org/jira/browse/FLINK-27733
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.0, 1.14.4
>Reporter: Juntao Hu
>Assignee: Juntao Hu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.14.5, 1.15.1
>
>
> The fix for FLINK-27676 can be simplified by just checking isBundleFinished() 
> before emitting the watermark in AbstractPythonFunctionOperator; this also 
> fixes FLINK-27676 in the Python group window aggregate.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27733) Rework on_timer output behind watermark bug fix

2022-05-23 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-27733:
-
Fix Version/s: 1.14.5
   1.15.1

> Rework on_timer output behind watermark bug fix
> ---
>
> Key: FLINK-27733
> URL: https://issues.apache.org/jira/browse/FLINK-27733
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.0, 1.14.4
>Reporter: Juntao Hu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.14.5, 1.15.1
>
>
> The fix for FLINK-27676 can be simplified by just checking isBundleFinished() 
> before emitting the watermark in AbstractPythonFunctionOperator; this also 
> fixes FLINK-27676 in the Python group window aggregate.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27733) Rework on_timer output behind watermark bug fix

2022-05-23 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-27733:
-
Affects Version/s: 1.14.4

> Rework on_timer output behind watermark bug fix
> ---
>
> Key: FLINK-27733
> URL: https://issues.apache.org/jira/browse/FLINK-27733
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.0, 1.14.4
>Reporter: Juntao Hu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> The fix for FLINK-27676 can be simplified by just checking isBundleFinished() 
> before emitting the watermark in AbstractPythonFunctionOperator; this also 
> fixes FLINK-27676 in the Python group window aggregate.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27733) Rework on_timer output behind watermark bug fix

2022-05-23 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-27733:
-
Priority: Critical  (was: Minor)

> Rework on_timer output behind watermark bug fix
> ---
>
> Key: FLINK-27733
> URL: https://issues.apache.org/jira/browse/FLINK-27733
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.0
>Reporter: Juntao Hu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> The fix for FLINK-27676 can be simplified by just checking isBundleFinished() 
> before emitting the watermark in AbstractPythonFunctionOperator; this also 
> fixes FLINK-27676 in the Python group window aggregate.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27733) Rework on_timer output behind watermark bug fix

2022-05-23 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-27733:
-
Issue Type: Bug  (was: Improvement)

> Rework on_timer output behind watermark bug fix
> ---
>
> Key: FLINK-27733
> URL: https://issues.apache.org/jira/browse/FLINK-27733
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.0
>Reporter: Juntao Hu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> The fix for FLINK-27676 can be simplified by just checking isBundleFinished() 
> before emitting the watermark in AbstractPythonFunctionOperator; this also 
> fixes FLINK-27676 in the Python group window aggregate.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] HuangXingBo closed pull request #19788: [FLINK-27733][python] Rework on_timer output behind watermark bug fix

2022-05-23 Thread GitBox


HuangXingBo closed pull request #19788: [FLINK-27733][python] Rework on_timer 
output behind watermark bug fix
URL: https://github.com/apache/flink/pull/19788


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27219) CliClientITCase.testSqlStatements failed on azure with jdk11

2022-05-23 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541237#comment-17541237
 ] 

Shengkai Fang commented on FLINK-27219:
---

Hi, all. I still can't reproduce the exception in my local environment or in 
the Azure pipelines (in [~hxbks2ks]'s environment). In my local environment, I 
use the following command (it's almost the same as the command that starts the 
tests):

```
mvn -Dmaven.wagon.http.pool=false -Dorg.slf4j.simpleLogger.showDateTime=true 
-Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS 
-Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn
 --no-snapshot-updates -B -Dhadoop.version=2.8.5 -Dinclude_hadoop_aws 
-Dscala-2.12 -Djdk11 -Pjava11-target -Duse-alibaba-mirror  -Dflink.forkCount=2 
-Dflink.forkCountTestPackage=2 -Dfast -Pskip-webui-build 
-Dlog.dir=/__w/_temp/debug_files 
-Dlog4j.configurationFile=file:///__w/2/s/tools/ci/log4j.properties 
-Dflink.tests.with-openssl -Dflink.tests.check-segment-multiple-free 
-Darchunit.freeze.store.default.allowStoreUpdate=false -Dhadoop.version=2.8.5 
-Dinclude_hadoop_aws -Dscala-2.12 -Djdk11 -Pjava11-target -pl 
flink-table/flink-sql-client verify
```

So I think we can print the exception stack when getting errors in the sql 
client test. At least then we will know what the problem is.
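For context, the reported ClassCastException is the well-known JDK 9+ behavior change around the system class loader; a minimal reproduction independent of Flink (my illustration):
{code:java}
import java.net.URLClassLoader;

public class AppClassLoaderCast {
    public static void main(String[] args) {
        ClassLoader cl = ClassLoader.getSystemClassLoader();
        // On JDK 8 the system class loader is a URLClassLoader; on JDK 9+ it is
        // jdk.internal.loader.ClassLoaders$AppClassLoader, so this cast throws
        // exactly the ClassCastException quoted in this issue.
        URLClassLoader urlClassLoader = (URLClassLoader) cl;
        System.out.println(urlClassLoader);
    }
}
{code}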



> CliClientITCase.testSqlStatements failed on azure with jdk11
> 
>
> Key: FLINK-27219
> URL: https://issues.apache.org/jira/browse/FLINK-27219
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Yun Gao
>Assignee: Shengkai Fang
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 # test "ctas" only supported in Hive Dialect
> Apr 13 04:56:44 CREATE TABLE foo as select 1;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 # list the configured configuration
> Apr 13 04:56:44 set;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 # reset the configuration
> Apr 13 04:56:44 reset;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 set;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> ...
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 org.apache.flink.sql.parser.impl.ParseException: Encountered 
> "STRING" at line 10, column 27.
> Apr 13 04:56:44 Was expecting one of:
> Apr 13 04:56:44 ")" ...
> Apr 13 04:56:44 "," ...
> Apr 13 04:56:44 
> Apr 13 04:56:44 !error
> ...
> Apr 13 04:56:44 SHOW JARS;
> Apr 13 04:56:44 Empty set
> Apr 13 04:56:44 !ok
> Apr 13 04:56:44 "
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 13 04:56:44   at 
> 

[GitHub] [flink-kubernetes-operator] SteNicholas commented on pull request #237: [FLINK-27257] Flink kubernetes operator triggers savepoint failed because of not all tasks running

2022-05-23 Thread GitBox


SteNicholas commented on PR #237:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/237#issuecomment-1135319512

   @wangyang0918, PTAL.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Reopened] (FLINK-25188) Cannot install PyFlink on MacOS with M1 chip

2022-05-23 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo reopened FLINK-25188:
--

Tests failed with Python 3.6 (which only runs in the nightly pipeline). 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35985=logs=fe7ebddc-3e2f-5c50-79ee-226c8653f218=fa5bbfc5-a952-5977-5beb-055415f4bd3d=113

I reverted the commit in 99c74d5b4301436fbaf597dc24ff852428243a8d

> Cannot install PyFlink on MacOS with M1 chip
> 
>
> Key: FLINK-25188
> URL: https://issues.apache.org/jira/browse/FLINK-25188
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Affects Versions: 1.14.0
>Reporter: LuNng Wang
>Assignee: LuNng Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
> Attachments: image-2022-01-04-11-36-20-090.png
>
>
> Need to update dependencies: numpy>=1.20.3, pyarrow>=5.0.0, pandas>=1.3.0, 
> apache-beam==2.36.0
> The following is some information on these dependencies' M1-chip support:
> Numpy version:
> [https://stackoverflow.com/questions/65336789/numpy-build-fail-in-m1-big-sur-11-1]
> [https://github.com/numpy/numpy/releases/tag/v1.21.4]
> pyarrow version:
> [https://stackoverflow.com/questions/68385728/installing-pyarrow-cant-copy-build-lib-macosx-11-arm64-3-9-pyarrow-include-ar]
> pandas version:
> [https://github.com/pandas-dev/pandas/issues/40611#issuecomment-901569655]
> Apache beam:
> https://issues.apache.org/jira/browse/BEAM-12957
> https://issues.apache.org/jira/browse/BEAM-11703
> The following is the dependency tree after a successful install. 
> Although Beam needs numpy<1.21.0 and M1 needs numpy>=1.21.4, the install 
> succeeded on the M1 chip when I used numpy 1.20.3.
> {code:java}
> apache-flink==1.14.dev0
>   - apache-beam [required: ==2.34.0, installed: 2.34.0]
>     - avro-python3 [required: >=1.8.1,<1.10.0,!=1.9.2, installed: 1.9.2.1]
>     - crcmod [required: >=1.7,<2.0, installed: 1.7]
>     - dill [required: >=0.3.1.1,<0.3.2, installed: 0.3.1.1]
>     - fastavro [required: >=0.21.4,<2, installed: 0.23.6]
>       - pytz [required: Any, installed: 2021.3]
>     - future [required: >=0.18.2,<1.0.0, installed: 0.18.2]
>     - grpcio [required: >=1.29.0,<2, installed: 1.42.0]
>       - six [required: >=1.5.2, installed: 1.16.0]
>     - hdfs [required: >=2.1.0,<3.0.0, installed: 2.6.0]
>       - docopt [required: Any, installed: 0.6.2]
>       - requests [required: >=2.7.0, installed: 2.26.0]
>         - certifi [required: >=2017.4.17, installed: 2021.10.8]
>         - charset-normalizer [required: ~=2.0.0, installed: 2.0.9]
>         - idna [required: >=2.5,<4, installed: 3.3]
>         - urllib3 [required: >=1.21.1,<1.27, installed: 1.26.7]
>       - six [required: >=1.9.0, installed: 1.16.0]
>     - httplib2 [required: >=0.8,<0.20.0, installed: 0.19.1]
>       - pyparsing [required: >=2.4.2,<3, installed: 2.4.7]
>     - numpy [required: >=1.14.3,<1.21.0, installed: 1.20.3]
>     - oauth2client [required: >=2.0.1,<5, installed: 4.1.3]
>       - httplib2 [required: >=0.9.1, installed: 0.19.1]
>         - pyparsing [required: >=2.4.2,<3, installed: 2.4.7]
>       - pyasn1 [required: >=0.1.7, installed: 0.4.8]
>       - pyasn1-modules [required: >=0.0.5, installed: 0.2.8]
>         - pyasn1 [required: >=0.4.6,<0.5.0, installed: 0.4.8]
>       - rsa [required: >=3.1.4, installed: 4.8]
>         - pyasn1 [required: >=0.1.3, installed: 0.4.8]
>       - six [required: >=1.6.1, installed: 1.16.0]
>     - orjson [required: <4.0, installed: 3.6.5]
>     - protobuf [required: >=3.12.2,<4, installed: 3.17.3]
>       - six [required: >=1.9, installed: 1.16.0]
>     - pyarrow [required: >=0.15.1,<6.0.0, installed: 5.0.0]
>       - numpy [required: >=1.16.6, installed: 1.20.3]
>     - pydot [required: >=1.2.0,<2, installed: 1.4.2]
>       - pyparsing [required: >=2.1.4, installed: 2.4.7]
>     - pymongo [required: >=3.8.0,<4.0.0, installed: 3.12.2]
>     - python-dateutil [required: >=2.8.0,<3, installed: 2.8.0]
>       - six [required: >=1.5, installed: 1.16.0]
>     - pytz [required: >=2018.3, installed: 2021.3]
>     - requests [required: >=2.24.0,<3.0.0, installed: 2.26.0]
>       - certifi [required: >=2017.4.17, installed: 2021.10.8]
>       - charset-normalizer [required: ~=2.0.0, installed: 2.0.9]
>       - idna [required: >=2.5,<4, installed: 3.3]
>       - urllib3 [required: >=1.21.1,<1.27, installed: 1.26.7]
>     - typing-extensions [required: >=3.7.0,<4, installed: 3.10.0.2]
>   - apache-flink-libraries [required: ==1.14.dev0, installed: 1.14.dev0]
>   - avro-python3 [required: >=1.8.1,<1.10.0,!=1.9.2, installed: 1.9.2.1]
>   - cloudpickle [required: ==1.2.2, installed: 1.2.2]
>   - fastavro [required: >=0.21.4,<0.24, installed: 0.23.6]
>     - pytz [required: Any, 

[jira] [Commented] (FLINK-27738) instance KafkaSink support config topic properties

2022-05-23 Thread LCER (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541231#comment-17541231
 ] 

LCER commented on FLINK-27738:
--

[~martijnvisser], I use Flink MySQL CDC to collect data from MySQL into Kafka, but 
for some tables the row data is larger than Kafka's default limit; in this 
case it throws an exception:
org.apache.kafka.common.errors.RecordTooLargeException: The message is XXX 
bytes when serialized which is larger than XXX, which is the value of the 
max.request.size configuration.
So I want to use KafkaSinkBuilder to configure topic properties to resolve the 
problem.
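
For context, a minimal sketch of the producer-side setting (broker and topic names are placeholders); note that the topic-level max.message.bytes limit is a separate broker-side setting:
{code:java}
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

import org.apache.kafka.clients.producer.ProducerConfig;

public class LargeRecordSinkSketch {
    public static KafkaSink<String> build() {
        Properties props = new Properties();
        // Allow producer requests up to 10 MiB instead of the 1 MiB default.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "10485760");
        return KafkaSink.<String>builder()
                .setBootstrapServers("broker:9092") // placeholder
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.builder()
                                .setTopic("cdc-topic") // placeholder
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                .setKafkaProducerConfig(props)
                .build();
    }
}
{code}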
 

> instance KafkaSink support config topic properties
> --
>
> Key: FLINK-27738
> URL: https://issues.apache.org/jira/browse/FLINK-27738
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: LCER
>Priority: Major
>
> I use KafkaSink to config Kafka information as following:
> *KafkaSink.builder()*
>  *.setBootstrapServers(brokers)*
>  *.setRecordSerializer(KafkaRecordSerializationSchema.builder()*
>  *.setTopicSelector(topicSelector)*
>  *.setValueSerializationSchema(new SimpleStringSchema())*
>  *.build()*
>  *)*
>  *.setDeliverGuarantee(DeliveryGuarantee.EXACTLY_ONCE)*
>  *.setKafkaProducerConfig(properties)*
>  *.build();*
> **
> *I can't find any method to configure topic properties*



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (FLINK-27746) Flink kubernetes operator docker image could not build with source release

2022-05-23 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541228#comment-17541228
 ] 

Yang Wang edited comment on FLINK-27746 at 5/24/22 1:32 AM:


cc [~nicholasjiang] would you like to have a look?

We should skip copying the .git directory when it does not exist.


was (Author: fly_in_gis):
cc [~nicholasjiang] would you like to have a look?

> Flink kubernetes operator docker image could not build with source release
> --
>
> Key: FLINK-27746
> URL: https://issues.apache.org/jira/browse/FLINK-27746
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Priority: Blocker
> Fix For: kubernetes-operator-1.0.0
>
>
> Could not build the Docker image from the source release, getting the
> following error:
> > [build 11/14] COPY .git ./.git:
> --
> failed to compute cache key: "/.git" not found: not found



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-table-store] JingsongLi commented on pull request #134: [FLINK-27207] Support built-in parquet format

2022-05-23 Thread GitBox


JingsongLi commented on PR #134:
URL: 
https://github.com/apache/flink-table-store/pull/134#issuecomment-1135303088

   Thanks @liyubin117 for the contribution!
   @tsreaper Can you take a look? You have researched parquet.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-27748) AdaptiveScheduler should support operator fixed parallelism

2022-05-23 Thread john (Jira)
john created FLINK-27748:


 Summary: AdaptiveScheduler should support operator fixed 
parallelism
 Key: FLINK-27748
 URL: https://issues.apache.org/jira/browse/FLINK-27748
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Task
Reporter: john


In the job topology, if the user specifies the parallelism of an operator, 
AdaptiveScheduler should treat the user-specified parallelism as the operator's 
maximum parallelism during scheduling, and the minimum parallelism as the 
number of slots available to the cluster. 
This is especially useful in certain scenarios: for example, when the 
parallelism of an operator that consumes Kafka is set equal to the number of 
partitions, or when you want to control the write rate of the operator.
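
A sketch of the Kafka scenario (broker address, topic name, and the partition count are illustrative placeholders):
{code:java}
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FixedSourceParallelism {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092") // placeholder
                .setTopics("events")                // placeholder
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
                .setParallelism(8) // fixed, e.g. equal to the topic's partition count
                .print();
        env.execute();
    }
}
{code}
AdaptiveScheduler should then respect such a user-specified parallelism as the operator's upper bound when rescaling.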



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27746) Flink kubernetes operator docker image could not build with source release

2022-05-23 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541228#comment-17541228
 ] 

Yang Wang commented on FLINK-27746:
---

cc [~nicholasjiang] would you like to have a look?

> Flink kubernetes operator docker image could not build with source release
> --
>
> Key: FLINK-27746
> URL: https://issues.apache.org/jira/browse/FLINK-27746
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Priority: Blocker
> Fix For: kubernetes-operator-1.0.0
>
>
> Could not build the Docker image from the source release, getting the
> following error:
> > [build 11/14] COPY .git ./.git:
> --
> failed to compute cache key: "/.git" not found: not found



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27747) Flink kubernetes operator helm chart release the Chart.yaml file doesn't have an apache license header

2022-05-23 Thread Yang Wang (Jira)
Yang Wang created FLINK-27747:
-

 Summary: Flink kubernetes operator helm chart release the 
Chart.yaml file doesn't have an apache license header
 Key: FLINK-27747
 URL: https://issues.apache.org/jira/browse/FLINK-27747
 Project: Flink
  Issue Type: Bug
Reporter: Yang Wang
 Fix For: kubernetes-operator-1.0.0


When verifying 1.0.0-rc1, [~gyfora] found that the Chart.yaml file doesn't 
have an Apache license header.

It seems this is caused by {{helm package}} in {{create_source_release.sh}}.

We also have this issue in the 0.1.0 release[1].

[1]. 
https://dist.apache.org/repos/dist/release/flink/flink-kubernetes-operator-0.1.0/



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27746) Flink kubernetes operator docker image could not build with source release

2022-05-23 Thread Yang Wang (Jira)
Yang Wang created FLINK-27746:
-

 Summary: Flink kubernetes operator docker image could not build 
with source release
 Key: FLINK-27746
 URL: https://issues.apache.org/jira/browse/FLINK-27746
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Reporter: Yang Wang
 Fix For: kubernetes-operator-1.0.0


Could not build the Docker image from the source release, getting the
following error:


> [build 11/14] COPY .git ./.git:

--

failed to compute cache key: "/.git" not found: not found



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27714) Migrate to java-operator-sdk v3

2022-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27714:
---
Labels: pull-request-available  (was: )

> Migrate to java-operator-sdk v3
> ---
>
> Key: FLINK-27714
> URL: https://issues.apache.org/jira/browse/FLINK-27714
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
>Reporter: Matyas Orhidi
>Assignee: Matyas Orhidi
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.1.0
>
>
> There are a few features we are planning to add to the operator:
>  * Dynamic change of watched namespaces and automatic adjustment of the 
> related {{EventSources}}
>  * Improved Error Handling API
> Also worth evaluating:
>  * Dependent resources management. See the 
> [documentation|https://javaoperatorsdk.io/docs/dependent-resources] for more 
> information
>  * Support for following a set of namespaces in {{InformerEventSource}} and 
> other related improvements.
>  * Removal of the need for {{PrimaryToSecondaryMapper}} - now handled 
> automatically
> https://github.com/java-operator-sdk/java-operator-sdk/releases/tag/v3.0.0



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-kubernetes-operator] morhidi opened a new pull request, #239: [FLINK-27714] Migrate to java-operator-sdk v3

2022-05-23 Thread GitBox


morhidi opened a new pull request, #239:
URL: https://github.com/apache/flink-kubernetes-operator/pull/239

   Hi Folks, started looking at the challenges of a java-operator-sdk v3 
migration.  
   
   My main motivation:
   - Support dynamic namespaces (without operator restart)
- Embrace newer features that eliminate some existing hacks from the code 
(error handling / status patch / controller configs / etc.)
   
   Opening this PR as a draft for the time being, as a conversation starter. 
cc @wangyang0918 @Aitozi @tweise 
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-web] rkhachatryan commented on a diff in pull request #540: Add Changelog State Backend blog post

2022-05-23 Thread GitBox


rkhachatryan commented on code in PR #540:
URL: https://github.com/apache/flink-web/pull/540#discussion_r879905279


##
_posts/2022-05-20-changelog-state-backend.md:
##
@@ -0,0 +1,353 @@
+---
+layout: post 
+title:  "Improving speed and stability of checkpointing with generic log-based 
incremental checkpoints"
+date: 2022-05-20T00:00:00.000Z 
+authors:
+- Roman Khachatryan:
+  name: "Roman Khachatryan"
+- Yuan Mei:
+  name: "Yuan Mei"
+excerpt: This post describes the mechanism introduced in Flink 1.15 that 
continuously uploads state changes to a durable storage while performing 
materialization in the background
+
+---
+
+# Introduction
+
+One of the most important characteristics of stream processing systems is 
end-to-end latency, i.e. the time it takes for the results of processing an 
input record to reach the outputs. In the case of Flink, end-to-end latency 
mostly depends on the checkpointing mechanism, because processing results 
should only become visible after the state of the stream is persisted to 
non-volatile storage (this is assuming exactly-once mode; in other modes, 
results can be published immediately).
+
+Furthermore, checkpoint duration also defines the reasonable interval with 
which checkpoints are made. A shorter interval provides the following 
advantages:
+
+* Lower latency for transactional sinks: Transactional sinks commit on 
checkpoints, so faster checkpoints mean more frequent commits.
+* More predictable checkpoint intervals: Currently the length of the 
checkpoint depends on the size of the artifacts that need to be persisted in 
the checkpoint storage.
+* Less work on recovery. The more frequently the checkpoint, the fewer events 
need to be re-processed after recovery.
+
+Following are the main factors affecting checkpoint duration in Flink:
+
+1. Barrier travel time and alignment duration
+1. Time to take state snapshot and persist it onto the non-volatile 
highly-available storage (such as S3)
+
+Recent improvements such as [Unaligned 
checkpoints](https://flink.apache.org/2020/10/15/from-aligned-to-unaligned-checkpoints-part-1.html)
 and [ Buffer debloating 
](https://cwiki.apache.org/confluence/display/FLINK/FLIP-183%3A+Dynamic+buffer+size+adjustment)
 try to address (1), especially in the presence of back-pressure. Previously, [ 
Incremental checkpoints 
](https://flink.apache.org/features/2018/01/30/incremental-checkpointing.html) 
were introduced to reduce the size of a snapshot, thereby reducing the time 
required to store it (2).
+
+However, there are still some cases when this duration is high:
+
+### Every checkpoint is delayed by at least one task with high parallelism
+
+With the existing incremental checkpoint implementation of the RocksDB state 
backend, every subtask needs to periodically perform some form of compaction. 
That compaction results in new, relatively big files, which in turn increase 
the upload time (2). The probability of at least one node performing such 
compaction and thus slowing down the whole checkpoint grows proportionally to 
the number of nodes. In large deployments, almost every checkpoint becomes 
delayed by some node.
+
+### Unnecessary delay before uploading state snapshot
+
+State backends don't start any snapshotting work until the task receives at 
least one checkpoint barrier, increasing the effective checkpoint duration. 
This is suboptimal if the upload time is comparable to the checkpoint interval; 
instead, a snapshot could be uploaded continuously throughout the interval.
+
+This work discusses the mechanism introduced in Flink 1.15 to address the 
above cases by continuously persisting state changes on non-volatile storage 
while performing materialization in the background. The basic idea is described 
in the following section, and then important implementation details are 
highlighted. Subsequent sections discuss benchmarking results, limitations, and 
future work.
+
+# High-level Overview
+
+The core idea is to introduce a state changelog (a log that records state 
changes); this changelog allows operators to persist state changes in a very 
fine-grained manner, as described below:
+
+* Stateful operators write the state changes to the state changelog, in 
addition to applying them to the state tables in RocksDB or the in-mem 
Hashtable.
+* An operator can acknowledge a checkpoint as soon as the changes in the log 
have reached the durable checkpoint storage.
+* The state tables are persisted periodically as well, independent of the 
checkpoints. We call this procedure the materialization of the state on the 
durable checkpoint storage.
+* Once the state is materialized on the checkpoint storage, the state 
changelog can be truncated to the point where the state is materialized.
+
+This can be illustrated as follows:
+
+This approach mirrors what database systems do, 

[GitHub] [flink-web] rkhachatryan commented on a diff in pull request #540: Add Changelog State Backend blog post

2022-05-23 Thread GitBox


rkhachatryan commented on code in PR #540:
URL: https://github.com/apache/flink-web/pull/540#discussion_r879903747


##
_posts/2022-05-20-changelog-state-backend.md:
##
@@ -0,0 +1,353 @@
+---
+layout: post 
+title:  "Improving speed and stability of checkpointing with generic log-based 
incremental checkpoints"
+date: 2022-05-20T00:00:00.000Z 
+authors:
+- Roman Khachatryan:
+  name: "Roman Khachatryan"
+- Yuan Mei:
+  name: "Yuan Mei"
+excerpt: This post describes the mechanism introduced in Flink 1.15 that 
continuously uploads state changes to a durable storage while performing 
materialization in the background
+
+---
+
+# Introduction
+
+One of the most important characteristics of stream processing systems is 
end-to-end latency, i.e. the time it takes for the results of processing an 
input record to reach the outputs. In the case of Flink, end-to-end latency 
mostly depends on the checkpointing mechanism, because processing results 
should only become visible after the state of the stream is persisted to 
non-volatile storage (this is assuming exactly-once mode; in other modes, 
results can be published immediately).
+
+Furthermore, checkpoint duration also defines the reasonable interval with 
which checkpoints are made. A shorter interval provides the following 
advantages:
+
+* Lower latency for transactional sinks: Transactional sinks commit on 
checkpoints, so faster checkpoints mean more frequent commits.
+* More predictable checkpoint intervals: Currently the length of the 
checkpoint depends on the size of the artifacts that need to be persisted in 
the checkpoint storage.

Review Comment:
   Agreed, replaced "length" with "duration" (uploading is mentioned right 
after it).



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #19799: [Flink-26011][test] backport 1.15 - ArchUnit test for formats test code

2022-05-23 Thread GitBox


flinkbot commented on PR #19799:
URL: https://github.com/apache/flink/pull/19799#issuecomment-1135186061

   
   ## CI report:
   
   * 2c9f68b0cc4a4b008eeb0f3349a2f2a565167868 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-web] rkhachatryan commented on a diff in pull request #540: Add Changelog State Backend blog post

2022-05-23 Thread GitBox


rkhachatryan commented on code in PR #540:
URL: https://github.com/apache/flink-web/pull/540#discussion_r879902366


##
_posts/2022-05-20-changelog-state-backend.md:
##
@@ -0,0 +1,353 @@
+---
+layout: post 
+title:  "Improving speed and stability of checkpointing with generic log-based 
incremental checkpoints"
+date: 2022-05-20T00:00:00.000Z 
+authors:
+- Roman Khachatryan:
+  name: "Roman Khachatryan"
+- Yuan Mei:
+  name: "Yuan Mei"
+excerpt: This post describes the mechanism introduced in Flink 1.15 that 
continuously uploads state changes to a durable storage while performing 
materialization in the background

Review Comment:
   This excerpt is rendered together with the title, so "generic log-based 
incremental checkpoints" would be duplicated, and adding more text makes it too 
verbose for the list of blog posts (https://flink.apache.org/blog/).



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] JingGe opened a new pull request, #19799: [Flink-26011][test] backport 1.15 - ArchUnit test for formats test code

2022-05-23 Thread GitBox


JingGe opened a new pull request, #19799:
URL: https://github.com/apache/flink/pull/19799

   backport to 1.15  #19357


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-web] rkhachatryan commented on a diff in pull request #540: Add Changelog State Backend blog post

2022-05-23 Thread GitBox


rkhachatryan commented on code in PR #540:
URL: https://github.com/apache/flink-web/pull/540#discussion_r879897974


##
_posts/2022-05-20-changelog-state-backend.md:
##
@@ -338,6 +336,10 @@ We encourage you to try out this feature and assess the 
pros and cons of using i
 
 Please see the full documentation 
[here](https://nightlies.apache.org/flink/flink-docs-master/docs/ops/state/state_backends/#enabling-changelog).
 
+# Acknowledgments
+
+We thank Stephan Ewen and all the engineers who contributed to the project.

Review Comment:
   I think this is rather a matter of taste 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] pscls commented on pull request #15140: [FLINK-20628][connectors/rabbitmq2] RabbitMQ connector using new connector API

2022-05-23 Thread GitBox


pscls commented on PR #15140:
URL: https://github.com/apache/flink/pull/15140#issuecomment-1135014178

   > @pscls There's now 
[apache/flink-connector-rabbitmq](https://github.com/apache/flink-connector-rabbitmq)
 - Would you like to move this PR to that repo, so we can merge it there?
   
   @MartijnVisser This PR has now been moved into the new repository 
[apache/flink-connector-rabbitmq](https://github.com/apache/flink-connector-rabbitmq)
 and can be found here: 
[flink-connector-rabbitmq/pull/1](https://github.com/apache/flink-connector-rabbitmq/pull/1)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] pnowojski commented on a diff in pull request #19723: [FLINK-27251][checkpoint] Timeout aligned to unaligned checkpoint barrier in the output buffers

2022-05-23 Thread GitBox


pnowojski commented on code in PR #19723:
URL: https://github.com/apache/flink/pull/19723#discussion_r879764102


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorImpl.java:
##
@@ -335,6 +380,34 @@ public void checkpointState(
 }
 }
 
+private void registerAlignmentTimer(
+long checkpointId,
+OperatorChain operatorChain,
+CheckpointBarrier checkpointBarrier) {
+if (alignmentTimer != null) {

Review Comment:
   I see that in the end you have added the `checkState(alignmentTimer == 
null)` that I suggested?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] pnowojski commented on a diff in pull request #19723: [FLINK-27251][checkpoint] Timeout aligned to unaligned checkpoint barrier in the output buffers

2022-05-23 Thread GitBox


pnowojski commented on code in PR #19723:
URL: https://github.com/apache/flink/pull/19723#discussion_r879592425


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorImpl.java:
##
@@ -234,6 +265,11 @@ public void abortCheckpointOnBarrier(
 () -> operatorChain.broadcastEvent(new 
CancelCheckpointMarker(checkpointId)));

Review Comment:
   >  Should we move these 4 lines into runThrowing?
   
   That's a good question. This method `abortCheckpointOnBarrier` is called 
only from the `CheckpointBarrierHandler`, which is only used with network input 
tasks, for which `actionExecutor` is always a no-op (it's being used only for 
legacy sources in `SourceStreamTask`, which doesn't have network input). So I 
think it doesn't really matter, but conceptually yes, it would be 
technically more correct to wrap all of those lines with `actionExecutor`. 
   
   > And should we call cancelAlignmentTimer(); in close method?
   
   Probably yes, to wake up anyone that might be waiting on the timer.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] gyfora merged pull request #238: [hotfix] Make image tag in helm chart values as an explicit string

2022-05-23 Thread GitBox


gyfora merged PR #238:
URL: https://github.com/apache/flink-kubernetes-operator/pull/238


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] 1996fanrui commented on a diff in pull request #19723: [FLINK-27251][checkpoint] Timeout aligned to unaligned checkpoint barrier in the output buffers

2022-05-23 Thread GitBox


1996fanrui commented on code in PR #19723:
URL: https://github.com/apache/flink/pull/19723#discussion_r879691899


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorImpl.java:
##
@@ -335,6 +380,34 @@ public void checkpointState(
 }
 }
 
+private void registerAlignmentTimer(
+long checkpointId,
+OperatorChain operatorChain,
+CheckpointBarrier checkpointBarrier) {
+if (alignmentTimer != null) {

Review Comment:
   Actually, it may not be null.
   
   For a job without back pressure, the checkpoint completes quickly but the timer 
is never triggered, so when the next checkpoint starts, the timer from the 
previous one may still be registered. It should be reasonable to cancel the 
previous timer here.
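   
   For illustration, a generic sketch of the cancel-before-register pattern being discussed (hypothetical names, not the actual Flink classes):
   ```java
   import java.util.concurrent.Executors;
   import java.util.concurrent.ScheduledExecutorService;
   import java.util.concurrent.ScheduledFuture;
   import java.util.concurrent.TimeUnit;

   // A completed checkpoint may leave its timer registered, so the leftover
   // timer is cancelled before a new one is registered.
   class AlignmentTimerHolder {
       private final ScheduledExecutorService timerService =
               Executors.newSingleThreadScheduledExecutor();
       private ScheduledFuture<?> alignmentTimer;

       synchronized void register(Runnable timeoutAction, long delayMs) {
           if (alignmentTimer != null) {
               alignmentTimer.cancel(false); // previous checkpoint finished before its timer fired
           }
           alignmentTimer = timerService.schedule(timeoutAction, delayMs, TimeUnit.MILLISECONDS);
       }
   }
   ```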



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27738) instance KafkaSink support config topic properties

2022-05-23 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541038#comment-17541038
 ] 

Martijn Visser commented on FLINK-27738:


[~LCER] That class is usually used with the AdminClient; what is the use case 
that you would like to access the TopicConfig while working from a Flink 
KafkaSink perspective? 
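
For illustration, a minimal AdminClient sketch for a topic-level setting such as max.message.bytes (broker address and topic name are placeholders):
{code:java}
import java.util.Map;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;
import org.apache.kafka.common.config.TopicConfig;

public class TopicConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "cdc-topic");
            AlterConfigOp raiseLimit = new AlterConfigOp(
                    new ConfigEntry(TopicConfig.MAX_MESSAGE_BYTES_CONFIG, "10485760"),
                    AlterConfigOp.OpType.SET);
            // Apply the topic-level override on the broker.
            admin.incrementalAlterConfigs(Map.of(topic, Set.of(raiseLimit))).all().get();
        }
    }
}
{code}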

> instance KafkaSink support config topic properties
> --
>
> Key: FLINK-27738
> URL: https://issues.apache.org/jira/browse/FLINK-27738
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: LCER
>Priority: Major
>
> I use KafkaSink to config Kafka information as following:
> *KafkaSink.builder()*
>  *.setBootstrapServers(brokers)*
>  *.setRecordSerializer(KafkaRecordSerializationSchema.builder()*
>  *.setTopicSelector(topicSelector)*
>  *.setValueSerializationSchema(new SimpleStringSchema())*
>  *.build()*
>  *)*
>  *.setDeliverGuarantee(DeliveryGuarantee.EXACTLY_ONCE)*
>  *.setKafkaProducerConfig(properties)*
>  *.build();*
> **
> *I can't find any method to configure topic properties*



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] flinkbot commented on pull request #19798: [FLINK-27299][Runtime/Configuration] flink parsing parameter bug fixed.

2022-05-23 Thread GitBox


flinkbot commented on PR #19798:
URL: https://github.com/apache/flink/pull/19798#issuecomment-1134890110

   
   ## CI report:
   
   * 07cd9648b6ebef44b068d2b793727e7e53f33611 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] wolfboys opened a new pull request, #19798: [FLINK-27299][Runtime/Configuration] flink parsing parameter bug fixed.

2022-05-23 Thread GitBox


wolfboys opened a new pull request, #19798:
URL: https://github.com/apache/flink/pull/19798

   ## What is the purpose of the change
   
   When I run a Flink job and specify a parameter containing a "#" sign, parsing fails.
   
   e.g.: flink run com.myJob --sink.password db@123#123 
   
   Only the content in front of "#" is parsed. After reading the source code, it 
turns out that parameters are truncated at "#" in the loadYAMLResource method of 
GlobalConfiguration. This part needs to be improved; see the sketch below.
   
   ## Brief change log
 -  before: `line.split("#", 2)`, after: `line.split("\\s+#", 2)`. Following 
YAML's comment rules, only the content after " #" (whitespace + "#") is a real 
comment and should be discarded when parsing parameters.
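   
   A standalone sketch of the behavior change (key and value are made-up examples):
   ```java
   public class SplitBehaviorSketch {
       public static void main(String[] args) {
           String line = "sink.password: db@123#123  # trailing comment";
           // Old: any '#' starts a comment, so the value is truncated.
           System.out.println(line.split("#", 2)[0].trim());     // sink.password: db@123
           // New: only whitespace followed by '#' starts a comment.
           System.out.println(line.split("\\s+#", 2)[0].trim()); // sink.password: db@123#123
       }
   }
   ```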
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): don't know
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature?  no
 - If yes, how is the feature documented?  not applicable


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27738) instance KafkaSink support config topic properties

2022-05-23 Thread LCER (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541029#comment-17541029
 ] 

LCER commented on FLINK-27738:
--

[~martijnvisser], thank you for your reply. In the URL you provided, the Kafka Sink 
section doesn't describe how to add such properties. I find that 
org.apache.flink.connector.kafka.sink.KafkaSinkBuilder only declares the 
setKafkaProducerConfig method, which supports the properties declared in 
org.apache.kafka.clients.producer.ProducerConfig; but I want to set custom 
topic properties, which are declared in org.apache.kafka.common.config.TopicConfig, 
so I don't know how to implement my requirement.

> instance KafkaSink support config topic properties
> --
>
> Key: FLINK-27738
> URL: https://issues.apache.org/jira/browse/FLINK-27738
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: LCER
>Priority: Major
>
> I use KafkaSink to config Kafka information as following:
> *KafkaSink.builder()*
>  *.setBootstrapServers(brokers)*
>  *.setRecordSerializer(KafkaRecordSerializationSchema.builder()*
>  *.setTopicSelector(topicSelector)*
>  *.setValueSerializationSchema(new SimpleStringSchema())*
>  *.build()*
>  *)*
>  *.setDeliverGuarantee(DeliveryGuarantee.EXACTLY_ONCE)*
>  *.setKafkaProducerConfig(properties)*
>  *.build();*
> **
> *I can't find any method to configure topic properties*



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] afedulov commented on pull request #19228: [FLINK-26074] Improve FlameGraphs scalability for high parallelism jobs

2022-05-23 Thread GitBox


afedulov commented on PR #19228:
URL: https://github.com/apache/flink/pull/19228#issuecomment-1134873831

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] usamj commented on pull request #187: [FLINK-27443] Create standalone mode parameters and decorators for JM and TMs

2022-05-23 Thread GitBox


usamj commented on PR #187:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/187#issuecomment-1134862767

   Updated the module names


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] wolfboys closed pull request #19795: [FLINK-27299][Runtime/Configuration] flink parsing parameter bug fixed.

2022-05-23 Thread GitBox


wolfboys closed pull request #19795: [FLINK-27299][Runtime/Configuration] flink 
parsing parameter bug fixed.
URL: https://github.com/apache/flink/pull/19795


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #19797: [FLINK-27741][table-planner] Fix NPE when use dense_rank() and rank()…

2022-05-23 Thread GitBox


flinkbot commented on PR #19797:
URL: https://github.com/apache/flink/pull/19797#issuecomment-1134835423

   
   ## CI report:
   
   * e57aed17c6d78b4c304d92cf0733d89c4e24edf3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] 1996fanrui commented on a diff in pull request #19723: [FLINK-27251][checkpoint] Timeout aligned to unaligned checkpoint barrier in the output buffers

2022-05-23 Thread GitBox


1996fanrui commented on code in PR #19723:
URL: https://github.com/apache/flink/pull/19723#discussion_r879608448


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorImpl.java:
##
@@ -234,6 +265,11 @@ public void abortCheckpointOnBarrier(
 () -> operatorChain.broadcastEvent(new 
CancelCheckpointMarker(checkpointId)));

Review Comment:
   Hi @pnowojski , thanks for your quick feedback, I have addressed all 
comments. 
   
   And I chose Option C in `ChannelStateWriteRequest#buildFutureWriteRequest`; 
it's more correct.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27741) Fix NPE when use dense_rank() and rank() in over aggregation

2022-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27741:
---
Labels: pull-request-available  (was: )

> Fix NPE when use dense_rank() and rank() in over aggregation
> 
>
> Key: FLINK-27741
> URL: https://issues.apache.org/jira/browse/FLINK-27741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: chenzihao
>Priority: Major
>  Labels: pull-request-available
>
> There is a 'NullPointerException' when using RANK() and DENSE_RANK() in an 
> over window.
> {code:java}
> @Test
>   def testDenseRankOnOver(): Unit = {
> val t = failingDataSource(TestData.tupleData5)
>   .toTable(tEnv, 'a, 'b, 'c, 'd, 'e, 'proctime.proctime)
> tEnv.registerTable("MyTable", t)
> val sqlQuery = "SELECT a, DENSE_RANK() OVER (PARTITION BY a ORDER BY 
> proctime) FROM MyTable"
> val sink = new TestingAppendSink
> tEnv.sqlQuery(sqlQuery).toAppendStream[Row].addSink(sink)
> env.execute()
>   }
> {code}
> {code:java}
> @Test
>   def testRankOnOver(): Unit = {
> val t = failingDataSource(TestData.tupleData5)
>   .toTable(tEnv, 'a, 'b, 'c, 'd, 'e, 'proctime.proctime)
> tEnv.registerTable("MyTable", t)
> val sqlQuery = "SELECT a, RANK() OVER (PARTITION BY a ORDER BY proctime) 
> FROM MyTable"
> val sink = new TestingAppendSink
> tEnv.sqlQuery(sqlQuery).toAppendStream[Row].addSink(sink)
> env.execute()
>   }
> {code}
> Exception Info:
> {code:java}
> java.lang.NullPointerException
>   at 
> scala.collection.mutable.ArrayOps$ofInt$.length$extension(ArrayOps.scala:248)
>   at scala.collection.mutable.ArrayOps$ofInt.length(ArrayOps.scala:248)
>   at scala.collection.SeqLike.size(SeqLike.scala:104)
>   at scala.collection.SeqLike.size$(SeqLike.scala:104)
>   at scala.collection.mutable.ArrayOps$ofInt.size(ArrayOps.scala:242)
>   at 
> scala.collection.IndexedSeqLike.sizeHintIfCheap(IndexedSeqLike.scala:95)
>   at 
> scala.collection.IndexedSeqLike.sizeHintIfCheap$(IndexedSeqLike.scala:95)
>   at 
> scala.collection.mutable.ArrayOps$ofInt.sizeHintIfCheap(ArrayOps.scala:242)
>   at scala.collection.mutable.Builder.sizeHint(Builder.scala:77)
>   at scala.collection.mutable.Builder.sizeHint$(Builder.scala:76)
>   at scala.collection.mutable.ArrayBuilder.sizeHint(ArrayBuilder.scala:21)
>   at scala.collection.TraversableLike.builder$1(TraversableLike.scala:229)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:232)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>   at scala.collection.mutable.ArrayOps$ofInt.map(ArrayOps.scala:242)
>   at 
> org.apache.flink.table.planner.plan.utils.AggFunctionFactory.createDenseRankAggFunction(AggFunctionFactory.scala:454)
>   at 
> org.apache.flink.table.planner.plan.utils.AggFunctionFactory.createAggFunction(AggFunctionFactory.scala:94)
>   at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$.$anonfun$transformToAggregateInfoList$1(AggregateUtil.scala:445)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$.transformToAggregateInfoList(AggregateUtil.scala:435)
>   at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$.transformToStreamAggregateInfoList(AggregateUtil.scala:381)
>   at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil$.transformToStreamAggregateInfoList(AggregateUtil.scala:361)
>   at 
> org.apache.flink.table.planner.plan.utils.AggregateUtil.transformToStreamAggregateInfoList(AggregateUtil.scala)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecOverAggregate.createUnboundedOverProcessFunction(StreamExecOverAggregate.java:279)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecOverAggregate.translateToPlanInternal(StreamExecOverAggregate.java:198)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:148)
>   at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:249)
>   at 
> 

[GitHub] [flink] chenzihao5 opened a new pull request, #19797: [FLINK-27741][table-planner] Fix NPE when use dense_rank() and rank()…

2022-05-23 Thread GitBox


chenzihao5 opened a new pull request, #19797:
URL: https://github.com/apache/flink/pull/19797

   Fix NPE when using dense_rank() and rank() in over aggregation.
   
   
   
   ## What is the purpose of the change
   
   This pull request fixes a NullPointerException when using DENSE_RANK() and 
RANK() on an append stream. 
   
   
   ## Brief change log
   
 -  Guard against the null value
 - Add ITCase for DENSE_RANK and RANK
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-23143) Support state migration

2022-05-23 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17539537#comment-17539537
 ] 

Hangxiang Yu edited comment on FLINK-23143 at 5/23/22 3:20 PM:
---

I have updated the PR.
 > Does that mean postponing state migration until user access?
I think ideally, state should be migrated before modifying it (i.e. on reading 
metadata records from the changelog); otherwise, there might be data loss or an 
exception when serializing state changes in RocksDB. WDYT?

Currently, state migration may happen not only on user access, but also 
before modifying it, as you said (considering the materialization part may not be 
included in the snapshot).

> Besides that, what about updating TTL? If we return existing state, then TTL 
> settings won't be updated, right?

Currently, ChangelogStateFactory is disposed while finishing the restore, so 
there is no state cache for ChangelogKeyedStateBackend, and the TTL of all states 
will be updated.

> ChangelogBackend metaInfo and e.g. RocksDBBackend metaInfo don't have to be 
> the same; and the former shouldn't know how to create metaInfo for the latter.

Sure, I agree. So I have used "upgradeKeyedState" and "upgrade" so that the 
inner keyed state backends have their own "upgrade" logic (recreating their 
own metaInfo).
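
For illustration, a minimal sketch of the wrapping idea from the ticket (class names are hypothetical, not actual Flink API; only TypeSerializer is real):
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.flink.api.common.typeutils.TypeSerializer;

// One mutable holder per state name is passed to the delegated backend;
// on first user access the wrapped serializer is upgraded in place.
final class SerializerHolder<T> {
    private volatile TypeSerializer<T> current;
    SerializerHolder(TypeSerializer<T> initial) { this.current = initial; }
    TypeSerializer<T> get() { return current; }
    void upgrade(TypeSerializer<T> upgraded) { this.current = upgraded; }
}

final class SerializerRegistry {
    private final Map<String, SerializerHolder<?>> byStateName = new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    <T> SerializerHolder<T> wrap(String stateName, TypeSerializer<T> serializer) {
        return (SerializerHolder<T>)
                byStateName.computeIfAbsent(stateName, n -> new SerializerHolder<>(serializer));
    }
}
{code}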


was (Author: masteryhx):
I have updated the PR.
 > Does that mean postponing state migration until user access?
I think ideally, state should be migrated before modifying it (i.e. on reading 
metadata records from the changelog); otherwise, there might be data loss or an 
exception when serializing state changes in RocksDB. WDYT?

Currently, state migration may happen not only on user access, but also 
before modifying it, as you said (considering the materialization part may not be 
included in the snapshot).

> Besides that, what about updating TTL? If we return existing state, then TTL 
> settings won't be updated, right?

Currently, ChangelogStateFactory is disposed while finishing the restore, so 
there is no state cache for ChangelogKeyedStateBackend, and the TTL of all states 
will be updated.

> ChangelogBackend metaInfo and e.g. RocksDBBackend metaInfo don't have to be 
> the same; and the former shouldn't know how to create metaInfo for the latter.

Sure, I agree. So I have changed the methods (createInternalState and create) to 
add an extra parameter. But I am not sure whether there is a better solution for 
the interface change.

> Support state migration
> ---
>
> Key: FLINK-23143
> URL: https://issues.apache.org/jira/browse/FLINK-23143
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> ChangelogKeyedStateBackend.getOrCreateKeyedState is currently used during 
> recovery; on 1st user access, it doesn't update metadata nor migrate state 
> (as opposed to other backends).
>  
> The proposed solution is to
>  # wrap serializers (and maybe other objects) in getOrCreateKeyedState
>  # store wrapping objects in a new map keyed by state name
>  # pass wrapped objects to delegatedBackend.createInternalState
>  # on 1st user access, lookup wrapper and upgrade its wrapped serializer
> This should be done for both KV/PQ states.
>  
> See also [https://github.com/apache/flink/pull/15420#discussion_r656934791]
>  
> cc: [~yunta]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (FLINK-27724) The SubtaskCheckpointCoordinatorImpl#close() isn't called in anywhere

2022-05-23 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski reassigned FLINK-27724:
--

Assignee: fanrui

> The SubtaskCheckpointCoordinatorImpl#close() isn't called in anywhere
> -
>
> Key: FLINK-27724
> URL: https://issues.apache.org/jira/browse/FLINK-27724
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.0, 1.15.0
>Reporter: fanrui
>Assignee: fanrui
>Priority: Major
> Fix For: 1.16.0
>
>
> The SubtaskCheckpointCoordinatorImpl#close() isn't called in anywhere



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] 1996fanrui commented on a diff in pull request #19723: [FLINK-27251][checkpoint] Timeout aligned to unaligned checkpoint barrier in the output buffers

2022-05-23 Thread GitBox


1996fanrui commented on code in PR #19723:
URL: https://github.com/apache/flink/pull/19723#discussion_r879558979


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorImpl.java:
##
@@ -234,6 +265,11 @@ public void abortCheckpointOnBarrier(
 () -> operatorChain.broadcastEvent(new 
CancelCheckpointMarker(checkpointId)));

Review Comment:
   Hi @pnowojski , thanks for your review again. I have another question.
   
   Currently, cancelAlignmentTimer should be called in TaskThread. Should we 
move these 4 lines into runThrowing? Like this:
   ```java
   actionExecutor.runThrowing(
   () -> {
   operatorChain.abortCheckpoint(checkpointId, cause);
   if (alignmentTimer != null && checkpointId == 
alignmentCheckpointId) {
   cancelAlignmentTimer();
   }
   operatorChain.broadcastEvent(new 
CancelCheckpointMarker(checkpointId));
   });
   ``` 
   
   And should we call cancelAlignmentTimer(); in close method? In 
[FLINK-27724](https://issues.apache.org/jira/browse/FLINK-27724), I will fix 
the bug.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-table-store] ajian2002 commented on pull request #121: [FLINK-27626] Introduce AggregatuibMergeFunction

2022-05-23 Thread GitBox


ajian2002 commented on PR #121:
URL: 
https://github.com/apache/flink-table-store/pull/121#issuecomment-1134781989

   I just rebased onto master; sorry, please ignore this push.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] fredia commented on pull request #19448: [FLINK-25872][state] Support restore from non-changelog checkpoint with changelog enabled in CLAIM mode

2022-05-23 Thread GitBox


fredia commented on PR #19448:
URL: https://github.com/apache/flink/pull/19448#issuecomment-1134778763

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] pnowojski commented on a diff in pull request #19723: [FLINK-27251][checkpoint] Timeout aligned to unaligned checkpoint barrier in the output buffers

2022-05-23 Thread GitBox


pnowojski commented on code in PR #19723:
URL: https://github.com/apache/flink/pull/19723#discussion_r879507376


##
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/PipelinedSubpartition.java:
##
@@ -112,6 +115,20 @@ public class PipelinedSubpartition extends 
ResultSubpartition
 
 private int bufferSize = Integer.MAX_VALUE;
 
+/**
+ * The channelState Future of unaligned checkpoint. Access to the 
channelStateFutures is
+ * synchronized on buffers.

Review Comment:
   I would drop:
   > Access to the channelStateFutures is synchronized on buffers.
   
   It duplicates `GuardedBy` annotation. 



##
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/PipelinedSubpartition.java:
##
@@ -240,9 +260,158 @@ private boolean processPriorityBuffer(BufferConsumer 
bufferConsumer, int partial
 inflightBuffers.toArray(new Buffer[0]));
 }
 }
-return numPriorityElements == 1
-&& !isBlocked; // if subpartition is blocked then downstream 
doesn't expect any
-// notifications
+return needNotifyPriorityEvent();
+}
+
+// It just be called after add priorityEvent.
+private boolean needNotifyPriorityEvent() {
+assert Thread.holdsLock(buffers);
+// if subpartition is blocked then downstream doesn't expect any 
notifications
+return buffers.getNumPriorityElements() == 1 && !isBlocked;
+}
+
+private void processTimeoutableCheckpointBarrier(BufferConsumer 
bufferConsumer) {
+CheckpointBarrier barrier = 
parseAndCheckTimeoutableCheckpointBarrier(bufferConsumer);
+channelStateWriter.addOutputDataFuture(
+barrier.getId(),
+subpartitionInfo,
+ChannelStateWriter.SEQUENCE_NUMBER_UNKNOWN,
+createChannelStateFuture(barrier.getId()));
+}
+
+private void completeTimeoutableCheckpointBarrier(BufferConsumer 
bufferConsumer) {
+CheckpointBarrier barrier = 
parseAndCheckTimeoutableCheckpointBarrier(bufferConsumer);
+if (channelStateFutureIsAvailable(barrier.getId())) {
+completeChannelStateFuture(Collections.emptyList(), null);
+}
+}
+
+private CompletableFuture> createChannelStateFuture(long 
checkpointId) {
+assert Thread.holdsLock(buffers);
+if (channelStateFuture != null) {
+completeChannelStateFuture(
+null,
+new IllegalStateException(
+String.format(
+"%s has uncompleted channelStateFuture of 
checkpointId=%s, but it received "
++ "a new timeoutable checkpoint 
barrier of checkpointId=%s, it maybe "
++ "a bug due to currently does not 
support concurrent unaligned checkpoints.",

Review Comment:
   nit:
   > a bug due to currently not supported concurrent unaligned checkpoint



##
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/PipelinedSubpartition.java:
##
@@ -240,9 +260,158 @@ private boolean processPriorityBuffer(BufferConsumer 
bufferConsumer, int partial
 inflightBuffers.toArray(new Buffer[0]));
 }
 }
-return numPriorityElements == 1
-&& !isBlocked; // if subpartition is blocked then downstream 
doesn't expect any
-// notifications
+return needNotifyPriorityEvent();
+}
+
+// It just be called after add priorityEvent.
+private boolean needNotifyPriorityEvent() {
+assert Thread.holdsLock(buffers);
+// if subpartition is blocked then downstream doesn't expect any 
notifications
+return buffers.getNumPriorityElements() == 1 && !isBlocked;
+}
+
+private void processTimeoutableCheckpointBarrier(BufferConsumer 
bufferConsumer) {
+CheckpointBarrier barrier = 
parseAndCheckTimeoutableCheckpointBarrier(bufferConsumer);
+channelStateWriter.addOutputDataFuture(
+barrier.getId(),
+subpartitionInfo,
+ChannelStateWriter.SEQUENCE_NUMBER_UNKNOWN,
+createChannelStateFuture(barrier.getId()));
+}
+
+private void completeTimeoutableCheckpointBarrier(BufferConsumer 
bufferConsumer) {
+CheckpointBarrier barrier = 
parseAndCheckTimeoutableCheckpointBarrier(bufferConsumer);
+if (channelStateFutureIsAvailable(barrier.getId())) {
+completeChannelStateFuture(Collections.emptyList(), null);
+}
+}

Review Comment:
   Shouldn't we `checkState` here actually that 
`channelStateFutureIsAvailable(barrier.getId())` is true? Is there a valid 
scenario where this method should return false?



##
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/PipelinedSubpartition.java:

[jira] [Updated] (FLINK-27725) Create tests for TypeInformation and TypeSerializer classes in table

2022-05-23 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss updated FLINK-27725:
-
Description: 
During the implementation of FLINK-27527 we had to add `flink-table-planner` as 
a test dependency to `flink-tests`.

This created test failures for the reflection tests checking test coverage for 
`TypeInformation` and `TypeSerializer` classes as shown here:

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35851=logs=56de72e1-1902-5ae5-06bd-77ee907eed59=237d25ca-be06-5918-2b0a-41d0694dace8=6738]

To mitigate the issue and unblock FLINK-27527 we extended the whitelist as a 
temporary solution but there should be a better solution in place. Some of the 
classes are deprecated but others are not and should probably have tests 
created to improve the coverage.

  was:
During the implementation of FLINK-27527 we had to add `flink-table-planner` as 
a dependency to `flink-tests`.

This created test failures for the reflection tests checking test coverage for 
`TypeInformation` and `TypeSerializer` classes as shown here:


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35851=logs=56de72e1-1902-5ae5-06bd-77ee907eed59=237d25ca-be06-5918-2b0a-41d0694dace8=6738]



To mitigate the issue and unblock FLINK-27527 we extended the whitelist as a 
temporary solution but there should be a better solution in place. Some of the 
classes are deprecated but others are not and should probably have tests 
created to improve the coverage.


> Create tests for TypeInformation and TypeSerializer classes in table
> 
>
> Key: FLINK-27725
> URL: https://issues.apache.org/jira/browse/FLINK-27725
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Priority: Major
>
> During the implementation of FLINK-27527 we had to add `flink-table-planner` 
> as a test dependency to `flink-tests`.
> This created test failures for the reflection tests checking test coverage 
> for `TypeInformation` and `TypeSerializer` classes as shown here:
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35851=logs=56de72e1-1902-5ae5-06bd-77ee907eed59=237d25ca-be06-5918-2b0a-41d0694dace8=6738]
> To mitigate the issue and unblock FLINK-27527 we extended the whitelist as a 
> temporary solution but there should be a better solution in place. Some of 
> the classes are deprecated but others are not and should probably have tests 
> created to improve the coverage.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27724) The SubtaskCheckpointCoordinatorImpl#close() isn't called in anywhere

2022-05-23 Thread fanrui (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17540973#comment-17540973
 ] 

fanrui commented on FLINK-27724:


Hi [~pnowojski] [~Wencong Liu], it's never closed on the clean shutdown path. 
Could you assign it to me? I will fix it, thanks.

> The SubtaskCheckpointCoordinatorImpl#close() isn't called in anywhere
> -
>
> Key: FLINK-27724
> URL: https://issues.apache.org/jira/browse/FLINK-27724
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.0, 1.15.0
>Reporter: fanrui
>Priority: Major
> Fix For: 1.16.0
>
>
> The SubtaskCheckpointCoordinatorImpl#close() isn't called in anywhere



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27724) The SubtaskCheckpointCoordinatorImpl#close() isn't called in anywhere

2022-05-23 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17540947#comment-17540947
 ] 

Piotr Nowojski commented on FLINK-27724:


{quote}
In StreamTask#cancel(), SubtaskCheckpointCoordinatorImpl#close() should 
probably be called.
{quote}
Yes, but I think on the clean shutdown path 
{{SubtaskCheckpointCoordinatorImpl}} is never closed, right?
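
A rough sketch of the kind of fix being discussed, assuming a cleanup hook on 
the task's shutdown path (the hook and field names here are hypothetical, not 
the actual Flink code):

{code:java}
// Hypothetical cleanup hook; a real fix must cover both the clean shutdown
// path and the cancellation path.
protected void cleanUp() throws Exception {
    // closeQuietly avoids masking the primary shutdown failure with a
    // secondary exception thrown while closing the coordinator.
    org.apache.flink.util.IOUtils.closeQuietly(subtaskCheckpointCoordinator);
}
{code}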

> The SubtaskCheckpointCoordinatorImpl#close() isn't called in anywhere
> -
>
> Key: FLINK-27724
> URL: https://issues.apache.org/jira/browse/FLINK-27724
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.0, 1.15.0
>Reporter: fanrui
>Priority: Major
> Fix For: 1.16.0
>
>
> The SubtaskCheckpointCoordinatorImpl#close() isn't called in anywhere



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] flinkbot commented on pull request #19796: [FLINK-27219][sql-client] Print exception stack when get errors

2022-05-23 Thread GitBox


flinkbot commented on PR #19796:
URL: https://github.com/apache/flink/pull/19796#issuecomment-1134642417

   
   ## CI report:
   
   * 8e580dd89081d2190b1ac5fa880277390a56ecbc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27219) CliClientITCase.testSqlStatements failed on azure with jdk11

2022-05-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27219:
---
Labels: pull-request-available test-stability  (was: test-stability)

> CliClientITCase.testSqlStatements failed on azure with jdk11
> 
>
> Key: FLINK-27219
> URL: https://issues.apache.org/jira/browse/FLINK-27219
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Yun Gao
>Assignee: Shengkai Fang
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 # test "ctas" only supported in Hive Dialect
> Apr 13 04:56:44 CREATE TABLE foo as select 1;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 # list the configured configuration
> Apr 13 04:56:44 set;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 # reset the configuration
> Apr 13 04:56:44 reset;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 set;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> ...
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 org.apache.flink.sql.parser.impl.ParseException: Encountered 
> "STRING" at line 10, column 27.
> Apr 13 04:56:44 Was expecting one of:
> Apr 13 04:56:44 ")" ...
> Apr 13 04:56:44 "," ...
> Apr 13 04:56:44 
> Apr 13 04:56:44 !error
> ...
> Apr 13 04:56:44 SHOW JARS;
> Apr 13 04:56:44 Empty set
> Apr 13 04:56:44 !ok
> Apr 13 04:56:44 "
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 13 04:56:44   at 
> org.apache.flink.table.client.cli.CliClientITCase.testSqlStatements(CliClientITCase.java:139)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 13 04:56:44   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> Apr 13 04:56:44   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Apr 13 04:56:44   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 13 04:56:44   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Apr 13 04:56:44   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Apr 13 04:56:44   at 
> 

[GitHub] [flink] fsk119 opened a new pull request, #19796: [FLINK-27219][sql-client] Print exception stack when get errors

2022-05-23 Thread GitBox


fsk119 opened a new pull request, #19796:
URL: https://github.com/apache/flink/pull/19796

   
   
   ## What is the purpose of the change
   
   *Print the exception stack when errors occur. Currently only the root cause 
of the exception is printed, which makes debugging hard.*
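   
   A minimal sketch of the difference (illustrative only; these are not the SQL 
client's actual code paths):
   
   ```java
   import org.apache.flink.util.ExceptionUtils;
   
   final class ErrorPrinting {
       // Before: walk to the root cause and print only its message,
       // losing the intermediate frames and the chain of causes.
       static String rootCauseOnly(Throwable t) {
           Throwable root = t;
           while (root.getCause() != null) {
               root = root.getCause();
           }
           return root.getMessage();
       }
   
       // After: print the full stack trace, preserving the cause chain.
       static String fullStack(Throwable t) {
           return ExceptionUtils.stringifyException(t);
       }
   }
   ```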
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented**)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on pull request #19792: [FLINK-27740][tests] Migrate flink-test-utils-junit to JUnit5

2022-05-23 Thread GitBox


snuyanzin commented on PR #19792:
URL: https://github.com/apache/flink/pull/19792#issuecomment-1134631993

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27667) YARNHighAvailabilityITCase fails with "Failed to delete temp directory /tmp/junit1681"

2022-05-23 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17540933#comment-17540933
 ] 

Huang Xingbo commented on FLINK-27667:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35960=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461

> YARNHighAvailabilityITCase fails with "Failed to delete temp directory 
> /tmp/junit1681"
> --
>
> Key: FLINK-27667
> URL: https://issues.apache.org/jira/browse/FLINK-27667
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.16.0
>Reporter: Martijn Visser
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35733=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=29208
>  
> {code:bash}
> May 17 08:36:22 [INFO] Results: 
> May 17 08:36:22 [INFO] 
> May 17 08:36:22 [ERROR] Errors: 
> May 17 08:36:22 [ERROR] YARNHighAvailabilityITCase » IO Failed to delete temp 
> directory /tmp/junit1681... 
> May 17 08:36:22 [INFO] 
> May 17 08:36:22 [ERROR] Tests run: 28, Failures: 0, Errors: 1, Skipped: 0 
> May 17 08:36:22 [INFO] 
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27745) ClientUtilsTest.uploadAndSetUserArtifacts failed with NoClassDefFoundError

2022-05-23 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-27745:


 Summary: ClientUtilsTest.uploadAndSetUserArtifacts failed with 
NoClassDefFoundError
 Key: FLINK-27745
 URL: https://issues.apache.org/jira/browse/FLINK-27745
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.16.0
Reporter: Huang Xingbo



{code:java}
2022-05-23T10:27:20.0131798Z May 23 10:27:20 [ERROR] Tests run: 2, Failures: 0, 
Errors: 1, Skipped: 0, Time elapsed: 0.729 s <<< FAILURE! - in 
org.apache.flink.runtime.client.ClientUtilsTest
2022-05-23T10:27:20.0133550Z May 23 10:27:20 [ERROR] 
org.apache.flink.runtime.client.ClientUtilsTest.uploadAndSetUserArtifacts  Time 
elapsed: 0.639 s  <<< ERROR!
2022-05-23T10:27:20.0134569Z May 23 10:27:20 
org.apache.flink.util.FlinkException: Could not upload job files.
2022-05-23T10:27:20.0135587Z May 23 10:27:20at 
org.apache.flink.runtime.client.ClientUtils.uploadJobGraphFiles(ClientUtils.java:86)
2022-05-23T10:27:20.0136861Z May 23 10:27:20at 
org.apache.flink.runtime.client.ClientUtils.extractAndUploadJobGraphFiles(ClientUtils.java:62)
2022-05-23T10:27:20.0138163Z May 23 10:27:20at 
org.apache.flink.runtime.client.ClientUtilsTest.uploadAndSetUserArtifacts(ClientUtilsTest.java:137)
2022-05-23T10:27:20.0139618Z May 23 10:27:20at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2022-05-23T10:27:20.0140639Z May 23 10:27:20at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2022-05-23T10:27:20.0142022Z May 23 10:27:20at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2022-05-23T10:27:20.0144222Z May 23 10:27:20at 
java.lang.reflect.Method.invoke(Method.java:498)
2022-05-23T10:27:20.0145368Z May 23 10:27:20at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
2022-05-23T10:27:20.0146856Z May 23 10:27:20at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2022-05-23T10:27:20.0147934Z May 23 10:27:20at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
2022-05-23T10:27:20.0148815Z May 23 10:27:20at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2022-05-23T10:27:20.0149537Z May 23 10:27:20at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
2022-05-23T10:27:20.0150204Z May 23 10:27:20at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
2022-05-23T10:27:20.0150848Z May 23 10:27:20at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
2022-05-23T10:27:20.0151599Z May 23 10:27:20at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
2022-05-23T10:27:20.0152293Z May 23 10:27:20at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
2022-05-23T10:27:20.0153073Z May 23 10:27:20at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
2022-05-23T10:27:20.0153876Z May 23 10:27:20at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
2022-05-23T10:27:20.0154555Z May 23 10:27:20at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
2022-05-23T10:27:20.0155189Z May 23 10:27:20at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
2022-05-23T10:27:20.0155846Z May 23 10:27:20at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
2022-05-23T10:27:20.0156708Z May 23 10:27:20at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
2022-05-23T10:27:20.0157380Z May 23 10:27:20at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
2022-05-23T10:27:20.0158056Z May 23 10:27:20at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2022-05-23T10:27:20.0158760Z May 23 10:27:20at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
2022-05-23T10:27:20.0159493Z May 23 10:27:20at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
2022-05-23T10:27:20.0160124Z May 23 10:27:20at 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2022-05-23T10:27:20.0160740Z May 23 10:27:20at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
2022-05-23T10:27:20.0161649Z May 23 10:27:20at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
2022-05-23T10:27:20.0162267Z May 23 10:27:20at 
org.junit.runner.JUnitCore.run(JUnitCore.java:137)
2022-05-23T10:27:20.0162936Z May 23 10:27:20at 
org.junit.runner.JUnitCore.run(JUnitCore.java:115)
2022-05-23T10:27:20.0163607Z May 23 10:27:20at 
org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
2022-05-23T10:27:20.0164434Z May 23 10:27:20at 

[GitHub] [flink-ml] yunfengzhou-hub commented on pull request #103: [FLINK-27742] Fix Compatibility Issue Between Stages

2022-05-23 Thread GitBox


yunfengzhou-hub commented on PR #103:
URL: https://github.com/apache/flink-ml/pull/103#issuecomment-1134617209

   Hi @lindong28, could you please help review this PR? I have changed just 
three algorithms for now; if we agree the changes are proper, I'll apply them 
to all the algorithms.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-25188) Cannot install PyFlink on MacOS with M1 chip

2022-05-23 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo closed FLINK-25188.

Resolution: Done

Merged into master via 7e9be78974f95e4eb3c8bb442564c81ea61c563e

> Cannot install PyFlink on MacOS with M1 chip
> 
>
> Key: FLINK-25188
> URL: https://issues.apache.org/jira/browse/FLINK-25188
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Affects Versions: 1.14.0
>Reporter: LuNng Wang
>Assignee: LuNng Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
> Attachments: image-2022-01-04-11-36-20-090.png
>
>
> Need to update the dependencies: numpy>=1.20.3, pyarrow>=5.0.0, 
> pandas>=1.3.0, apache-beam==2.36.0
> The following links describe the dependencies' support for the M1 chip.
> Numpy version:
> [https://stackoverflow.com/questions/65336789/numpy-build-fail-in-m1-big-sur-11-1]
> [https://github.com/numpy/numpy/releases/tag/v1.21.4]
> pyarrow version:
> [https://stackoverflow.com/questions/68385728/installing-pyarrow-cant-copy-build-lib-macosx-11-arm64-3-9-pyarrow-include-ar]
> pandas version:
> [https://github.com/pandas-dev/pandas/issues/40611#issuecomment-901569655]
> Apache beam:
> https://issues.apache.org/jira/browse/BEAM-12957
> https://issues.apache.org/jira/browse/BEAM-11703
> The following is the dependency tree after a successful installation.
> Although Beam needs numpy<1.21.0 and M1 reportedly needs numpy>=1.21.4, the 
> installation succeeded on an M1 chip with numpy 1.20.3.
> {code:java}
> apache-flink==1.14.dev0
>   - apache-beam [required: ==2.34.0, installed: 2.34.0]
>     - avro-python3 [required: >=1.8.1,<1.10.0,!=1.9.2, installed: 1.9.2.1]
>     - crcmod [required: >=1.7,<2.0, installed: 1.7]
>     - dill [required: >=0.3.1.1,<0.3.2, installed: 0.3.1.1]
>     - fastavro [required: >=0.21.4,<2, installed: 0.23.6]
>       - pytz [required: Any, installed: 2021.3]
>     - future [required: >=0.18.2,<1.0.0, installed: 0.18.2]
>     - grpcio [required: >=1.29.0,<2, installed: 1.42.0]
>       - six [required: >=1.5.2, installed: 1.16.0]
>     - hdfs [required: >=2.1.0,<3.0.0, installed: 2.6.0]
>       - docopt [required: Any, installed: 0.6.2]
>       - requests [required: >=2.7.0, installed: 2.26.0]
>         - certifi [required: >=2017.4.17, installed: 2021.10.8]
>         - charset-normalizer [required: ~=2.0.0, installed: 2.0.9]
>         - idna [required: >=2.5,<4, installed: 3.3]
>         - urllib3 [required: >=1.21.1,<1.27, installed: 1.26.7]
>       - six [required: >=1.9.0, installed: 1.16.0]
>     - httplib2 [required: >=0.8,<0.20.0, installed: 0.19.1]
>       - pyparsing [required: >=2.4.2,<3, installed: 2.4.7]
>     - numpy [required: >=1.14.3,<1.21.0, installed: 1.20.3]
>     - oauth2client [required: >=2.0.1,<5, installed: 4.1.3]
>       - httplib2 [required: >=0.9.1, installed: 0.19.1]
>         - pyparsing [required: >=2.4.2,<3, installed: 2.4.7]
>       - pyasn1 [required: >=0.1.7, installed: 0.4.8]
>       - pyasn1-modules [required: >=0.0.5, installed: 0.2.8]
>         - pyasn1 [required: >=0.4.6,<0.5.0, installed: 0.4.8]
>       - rsa [required: >=3.1.4, installed: 4.8]
>         - pyasn1 [required: >=0.1.3, installed: 0.4.8]
>       - six [required: >=1.6.1, installed: 1.16.0]
>     - orjson [required: <4.0, installed: 3.6.5]
>     - protobuf [required: >=3.12.2,<4, installed: 3.17.3]
>       - six [required: >=1.9, installed: 1.16.0]
>     - pyarrow [required: >=0.15.1,<6.0.0, installed: 5.0.0]
>       - numpy [required: >=1.16.6, installed: 1.20.3]
>     - pydot [required: >=1.2.0,<2, installed: 1.4.2]
>       - pyparsing [required: >=2.1.4, installed: 2.4.7]
>     - pymongo [required: >=3.8.0,<4.0.0, installed: 3.12.2]
>     - python-dateutil [required: >=2.8.0,<3, installed: 2.8.0]
>       - six [required: >=1.5, installed: 1.16.0]
>     - pytz [required: >=2018.3, installed: 2021.3]
>     - requests [required: >=2.24.0,<3.0.0, installed: 2.26.0]
>       - certifi [required: >=2017.4.17, installed: 2021.10.8]
>       - charset-normalizer [required: ~=2.0.0, installed: 2.0.9]
>       - idna [required: >=2.5,<4, installed: 3.3]
>       - urllib3 [required: >=1.21.1,<1.27, installed: 1.26.7]
>     - typing-extensions [required: >=3.7.0,<4, installed: 3.10.0.2]
>   - apache-flink-libraries [required: ==1.14.dev0, installed: 1.14.dev0]
>   - avro-python3 [required: >=1.8.1,<1.10.0,!=1.9.2, installed: 1.9.2.1]
>   - cloudpickle [required: ==1.2.2, installed: 1.2.2]
>   - fastavro [required: >=0.21.4,<0.24, installed: 0.23.6]
>     - pytz [required: Any, installed: 2021.3]
>   - numpy [required: >=1.20.3, installed: 1.20.3]
>   - pandas [required: >=1.3.0, installed: 1.3.0]
>     - numpy [required: >=1.17.3, installed: 1.20.3]
>     - python-dateutil [required: 

[jira] [Assigned] (FLINK-25188) Cannot install PyFlink on MacOS with M1 chip

2022-05-23 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo reassigned FLINK-25188:


Assignee: LuNng Wang

> Cannot install PyFlink on MacOS with M1 chip
> 
>
> Key: FLINK-25188
> URL: https://issues.apache.org/jira/browse/FLINK-25188
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Affects Versions: 1.14.0
>Reporter: LuNng Wang
>Assignee: LuNng Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-01-04-11-36-20-090.png
>
>
> Need to update the dependencies: numpy>=1.20.3, pyarrow>=5.0.0, 
> pandas>=1.3.0, apache-beam==2.36.0
> The following links describe the dependencies' support for the M1 chip.
> Numpy version:
> [https://stackoverflow.com/questions/65336789/numpy-build-fail-in-m1-big-sur-11-1]
> [https://github.com/numpy/numpy/releases/tag/v1.21.4]
> pyarrow version:
> [https://stackoverflow.com/questions/68385728/installing-pyarrow-cant-copy-build-lib-macosx-11-arm64-3-9-pyarrow-include-ar]
> pandas version:
> [https://github.com/pandas-dev/pandas/issues/40611#issuecomment-901569655]
> Apache beam:
> https://issues.apache.org/jira/browse/BEAM-12957
> https://issues.apache.org/jira/browse/BEAM-11703
> The following is the dependency tree after a successful installation.
> Although Beam needs numpy<1.21.0 and M1 reportedly needs numpy>=1.21.4, the 
> installation succeeded on an M1 chip with numpy 1.20.3.
> {code:java}
> apache-flink==1.14.dev0
>   - apache-beam [required: ==2.34.0, installed: 2.34.0]
>     - avro-python3 [required: >=1.8.1,<1.10.0,!=1.9.2, installed: 1.9.2.1]
>     - crcmod [required: >=1.7,<2.0, installed: 1.7]
>     - dill [required: >=0.3.1.1,<0.3.2, installed: 0.3.1.1]
>     - fastavro [required: >=0.21.4,<2, installed: 0.23.6]
>       - pytz [required: Any, installed: 2021.3]
>     - future [required: >=0.18.2,<1.0.0, installed: 0.18.2]
>     - grpcio [required: >=1.29.0,<2, installed: 1.42.0]
>       - six [required: >=1.5.2, installed: 1.16.0]
>     - hdfs [required: >=2.1.0,<3.0.0, installed: 2.6.0]
>       - docopt [required: Any, installed: 0.6.2]
>       - requests [required: >=2.7.0, installed: 2.26.0]
>         - certifi [required: >=2017.4.17, installed: 2021.10.8]
>         - charset-normalizer [required: ~=2.0.0, installed: 2.0.9]
>         - idna [required: >=2.5,<4, installed: 3.3]
>         - urllib3 [required: >=1.21.1,<1.27, installed: 1.26.7]
>       - six [required: >=1.9.0, installed: 1.16.0]
>     - httplib2 [required: >=0.8,<0.20.0, installed: 0.19.1]
>       - pyparsing [required: >=2.4.2,<3, installed: 2.4.7]
>     - numpy [required: >=1.14.3,<1.21.0, installed: 1.20.3]
>     - oauth2client [required: >=2.0.1,<5, installed: 4.1.3]
>       - httplib2 [required: >=0.9.1, installed: 0.19.1]
>         - pyparsing [required: >=2.4.2,<3, installed: 2.4.7]
>       - pyasn1 [required: >=0.1.7, installed: 0.4.8]
>       - pyasn1-modules [required: >=0.0.5, installed: 0.2.8]
>         - pyasn1 [required: >=0.4.6,<0.5.0, installed: 0.4.8]
>       - rsa [required: >=3.1.4, installed: 4.8]
>         - pyasn1 [required: >=0.1.3, installed: 0.4.8]
>       - six [required: >=1.6.1, installed: 1.16.0]
>     - orjson [required: <4.0, installed: 3.6.5]
>     - protobuf [required: >=3.12.2,<4, installed: 3.17.3]
>       - six [required: >=1.9, installed: 1.16.0]
>     - pyarrow [required: >=0.15.1,<6.0.0, installed: 5.0.0]
>       - numpy [required: >=1.16.6, installed: 1.20.3]
>     - pydot [required: >=1.2.0,<2, installed: 1.4.2]
>       - pyparsing [required: >=2.1.4, installed: 2.4.7]
>     - pymongo [required: >=3.8.0,<4.0.0, installed: 3.12.2]
>     - python-dateutil [required: >=2.8.0,<3, installed: 2.8.0]
>       - six [required: >=1.5, installed: 1.16.0]
>     - pytz [required: >=2018.3, installed: 2021.3]
>     - requests [required: >=2.24.0,<3.0.0, installed: 2.26.0]
>       - certifi [required: >=2017.4.17, installed: 2021.10.8]
>       - charset-normalizer [required: ~=2.0.0, installed: 2.0.9]
>       - idna [required: >=2.5,<4, installed: 3.3]
>       - urllib3 [required: >=1.21.1,<1.27, installed: 1.26.7]
>     - typing-extensions [required: >=3.7.0,<4, installed: 3.10.0.2]
>   - apache-flink-libraries [required: ==1.14.dev0, installed: 1.14.dev0]
>   - avro-python3 [required: >=1.8.1,<1.10.0,!=1.9.2, installed: 1.9.2.1]
>   - cloudpickle [required: ==1.2.2, installed: 1.2.2]
>   - fastavro [required: >=0.21.4,<0.24, installed: 0.23.6]
>     - pytz [required: Any, installed: 2021.3]
>   - numpy [required: >=1.20.3, installed: 1.20.3]
>   - pandas [required: >=1.3.0, installed: 1.3.0]
>     - numpy [required: >=1.17.3, installed: 1.20.3]
>     - python-dateutil [required: >=2.7.3, installed: 2.8.0]
>       - six [required: >=1.5, installed: 1.16.0]
>     - pytz 

[jira] [Updated] (FLINK-25188) Cannot install PyFlink on MacOS with M1 chip

2022-05-23 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-25188:
-
Fix Version/s: 1.16.0

> Cannot install PyFlink on MacOS with M1 chip
> 
>
> Key: FLINK-25188
> URL: https://issues.apache.org/jira/browse/FLINK-25188
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Affects Versions: 1.14.0
>Reporter: LuNng Wang
>Assignee: LuNng Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
> Attachments: image-2022-01-04-11-36-20-090.png
>
>
> Need to update the dependencies: numpy>=1.20.3, pyarrow>=5.0.0, 
> pandas>=1.3.0, apache-beam==2.36.0
> The following links describe the dependencies' support for the M1 chip.
> Numpy version:
> [https://stackoverflow.com/questions/65336789/numpy-build-fail-in-m1-big-sur-11-1]
> [https://github.com/numpy/numpy/releases/tag/v1.21.4]
> pyarrow version:
> [https://stackoverflow.com/questions/68385728/installing-pyarrow-cant-copy-build-lib-macosx-11-arm64-3-9-pyarrow-include-ar]
> pandas version:
> [https://github.com/pandas-dev/pandas/issues/40611#issuecomment-901569655]
> Apache beam:
> https://issues.apache.org/jira/browse/BEAM-12957
> https://issues.apache.org/jira/browse/BEAM-11703
> The following is the dependency tree after a successful installation.
> Although Beam needs numpy<1.21.0 and M1 reportedly needs numpy>=1.21.4, the 
> installation succeeded on an M1 chip with numpy 1.20.3.
> {code:java}
> apache-flink==1.14.dev0
>   - apache-beam [required: ==2.34.0, installed: 2.34.0]
>     - avro-python3 [required: >=1.8.1,<1.10.0,!=1.9.2, installed: 1.9.2.1]
>     - crcmod [required: >=1.7,<2.0, installed: 1.7]
>     - dill [required: >=0.3.1.1,<0.3.2, installed: 0.3.1.1]
>     - fastavro [required: >=0.21.4,<2, installed: 0.23.6]
>       - pytz [required: Any, installed: 2021.3]
>     - future [required: >=0.18.2,<1.0.0, installed: 0.18.2]
>     - grpcio [required: >=1.29.0,<2, installed: 1.42.0]
>       - six [required: >=1.5.2, installed: 1.16.0]
>     - hdfs [required: >=2.1.0,<3.0.0, installed: 2.6.0]
>       - docopt [required: Any, installed: 0.6.2]
>       - requests [required: >=2.7.0, installed: 2.26.0]
>         - certifi [required: >=2017.4.17, installed: 2021.10.8]
>         - charset-normalizer [required: ~=2.0.0, installed: 2.0.9]
>         - idna [required: >=2.5,<4, installed: 3.3]
>         - urllib3 [required: >=1.21.1,<1.27, installed: 1.26.7]
>       - six [required: >=1.9.0, installed: 1.16.0]
>     - httplib2 [required: >=0.8,<0.20.0, installed: 0.19.1]
>       - pyparsing [required: >=2.4.2,<3, installed: 2.4.7]
>     - numpy [required: >=1.14.3,<1.21.0, installed: 1.20.3]
>     - oauth2client [required: >=2.0.1,<5, installed: 4.1.3]
>       - httplib2 [required: >=0.9.1, installed: 0.19.1]
>         - pyparsing [required: >=2.4.2,<3, installed: 2.4.7]
>       - pyasn1 [required: >=0.1.7, installed: 0.4.8]
>       - pyasn1-modules [required: >=0.0.5, installed: 0.2.8]
>         - pyasn1 [required: >=0.4.6,<0.5.0, installed: 0.4.8]
>       - rsa [required: >=3.1.4, installed: 4.8]
>         - pyasn1 [required: >=0.1.3, installed: 0.4.8]
>       - six [required: >=1.6.1, installed: 1.16.0]
>     - orjson [required: <4.0, installed: 3.6.5]
>     - protobuf [required: >=3.12.2,<4, installed: 3.17.3]
>       - six [required: >=1.9, installed: 1.16.0]
>     - pyarrow [required: >=0.15.1,<6.0.0, installed: 5.0.0]
>       - numpy [required: >=1.16.6, installed: 1.20.3]
>     - pydot [required: >=1.2.0,<2, installed: 1.4.2]
>       - pyparsing [required: >=2.1.4, installed: 2.4.7]
>     - pymongo [required: >=3.8.0,<4.0.0, installed: 3.12.2]
>     - python-dateutil [required: >=2.8.0,<3, installed: 2.8.0]
>       - six [required: >=1.5, installed: 1.16.0]
>     - pytz [required: >=2018.3, installed: 2021.3]
>     - requests [required: >=2.24.0,<3.0.0, installed: 2.26.0]
>       - certifi [required: >=2017.4.17, installed: 2021.10.8]
>       - charset-normalizer [required: ~=2.0.0, installed: 2.0.9]
>       - idna [required: >=2.5,<4, installed: 3.3]
>       - urllib3 [required: >=1.21.1,<1.27, installed: 1.26.7]
>     - typing-extensions [required: >=3.7.0,<4, installed: 3.10.0.2]
>   - apache-flink-libraries [required: ==1.14.dev0, installed: 1.14.dev0]
>   - avro-python3 [required: >=1.8.1,<1.10.0,!=1.9.2, installed: 1.9.2.1]
>   - cloudpickle [required: ==1.2.2, installed: 1.2.2]
>   - fastavro [required: >=0.21.4,<0.24, installed: 0.23.6]
>     - pytz [required: Any, installed: 2021.3]
>   - numpy [required: >=1.20.3, installed: 1.20.3]
>   - pandas [required: >=1.3.0, installed: 1.3.0]
>     - numpy [required: >=1.17.3, installed: 1.20.3]
>     - python-dateutil [required: >=2.7.3, installed: 2.8.0]
>       - six [required: >=1.5, 

[GitHub] [flink] HuangXingBo closed pull request #18769: [FLINK-25188][python][build] Support m1 chip.

2022-05-23 Thread GitBox


HuangXingBo closed pull request #18769: [FLINK-25188][python][build] Support m1 
chip.
URL: https://github.com/apache/flink/pull/18769


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #19795: [FLINK-27299][Runtime/Configuration] flink parsing parameter bug fixed.

2022-05-23 Thread GitBox


flinkbot commented on PR #19795:
URL: https://github.com/apache/flink/pull/19795#issuecomment-1134610208

   
   ## CI report:
   
   * c37a2c48908e08d0a2aca55196b853d2668cd6b6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] wolfboys opened a new pull request, #19795: [FLINK-27299][Runtime/Configuration] flink parsing parameter bug fixed.

2022-05-23 Thread GitBox


wolfboys opened a new pull request, #19795:
URL: https://github.com/apache/flink/pull/19795

   ## What is the purpose of the change
   
   When running a Flink job, I specified a runtime parameter containing a "#" 
sign, and parsing failed.
   
   e.g.: flink run com.myJob --sink.password db@123#123 
   
   Only the content in front of the "#" is parsed. After reading the source 
code, it turns out that parameters are truncated at "#" in the 
loadYAMLResource method of GlobalConfiguration. This part needs to be improved.
   
   ## Brief change log
 -  before: `line.split("#", 2)`, after: `line.split("\\s+#", 2)`. Following 
YAML's parsing rules, only the content after " #" (space + #) is a real 
comment and should be discarded when parsing parameters; see the sketch below.
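   
   A minimal, self-contained sketch of the difference (the config line is a 
hypothetical stand-in for what loadYAMLResource reads):
   
   ```java
   public class SplitDemo {
       public static void main(String[] args) {
           String line = "sink.password: db@123#123   # database credential";
   
           // Old behavior: split at the first '#', truncating the value.
           System.out.println(line.split("#", 2)[0].trim());
           // -> sink.password: db@123
   
           // Proposed behavior: split only at whitespace followed by '#',
           // so a '#' embedded in the value survives.
           System.out.println(line.split("\\s+#", 2)[0].trim());
           // -> sink.password: db@123#123
       }
   }
   ```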
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): don't know
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature?  no
 - If yes, how is the feature documented?  not applicable


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-web] deadwind4 commented on pull request #537: [hotfix] Fix hugo default port error

2022-05-23 Thread GitBox


deadwind4 commented on PR #537:
URL: https://github.com/apache/flink-web/pull/537#issuecomment-1134594448

   @MartijnVisser Please review it, thanks a lot.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] dianfu commented on a diff in pull request #19775: [FLINK-27690][python][connector/pulsar] Add Python documentation and examples for Pulsar connector

2022-05-23 Thread GitBox


dianfu commented on code in PR #19775:
URL: https://github.com/apache/flink/pull/19775#discussion_r879373137


##
docs/content/docs/connectors/datastream/pulsar.md:
##
@@ -169,13 +224,32 @@ you can use the predefined `PulsarDeserializationSchema`. 
Pulsar connector provi
   PulsarDeserializationSchema.pulsarSchema(Schema, Class, Class);
   ```
 - Decode the message by using Flink's `DeserializationSchema`
+  {{< tabs "pulsar-deserializer-deserialization-schema" >}}
+  {{< tab "Java" >}}
   ```java
   PulsarDeserializationSchema.flinkSchema(DeserializationSchema);
   ```
+  {{< /tab >}}
+  {{< tab "Python" >}}
+  ```python
+  PulsarDeserializationSchema.flink_schema(DeserializationSchema)
+  ```
+  {{< /tab >}}
+  {{< /tabs >}}
+
 - Decode the message by using Flink's `TypeInformation`
+  {{< tabs "pulsar-deserializer-type-information" >}}
+  {{< tab "Java" >}}
   ```java
   PulsarDeserializationSchema.flinkTypeInfo(TypeInformation, ExecutionConfig);
   ```
+  {{< /tab >}}
+  {{< tab "Python" >}}
+  ```python
+  PulsarDeserializationSchema.flink_type_info(TypeInformation, ExecutionConfig)

Review Comment:
   Currently, it's not possible to create an ExecutionConfig in Python, so I 
have merged one 
commit (https://github.com/apache/flink/commit/73590f432d8dc4cd53f0357025e250a823e6b857) 
to allow creating a PulsarDeserializationSchema without specifying an 
ExecutionConfig. So what about suggesting that users use 
`PulsarDeserializationSchema.flink_type_info(TypeInformation)` here? 
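
   For context, a minimal Java sketch of the `DeserializationSchema`-based 
alternative, which needs no ExecutionConfig (it uses only calls already shown 
in the doc under review; illustrative only):

   ```java
   import org.apache.flink.api.common.serialization.SimpleStringSchema;
   import org.apache.flink.connector.pulsar.source.reader.deserializer.PulsarDeserializationSchema;

   // Wrap a plain Flink DeserializationSchema; no ExecutionConfig is involved
   // on this code path.
   PulsarDeserializationSchema<String> schema =
           PulsarDeserializationSchema.flinkSchema(new SimpleStringSchema());
   ```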



##
docs/content/docs/connectors/datastream/pulsar.md:
##
@@ -335,12 +486,27 @@ job, the Pulsar source periodically discover new 
partitions under a provided
 topic-partition subscription pattern. To enable partition discovery, you can 
set a non-negative value for
 the `PulsarSourceOptions.PULSAR_PARTITION_DISCOVERY_INTERVAL_MS` option:
 
+{{< tabs "pulsar-dynamic-partition-discovery" >}}
+{{< tab "Java" >}}
+
 ```java
 // discover new partitions every 10 seconds
 PulsarSource.builder()
 .setConfig(PulsarSourceOptions.PULSAR_PARTITION_DISCOVERY_INTERVAL_MS, 10000);
 ```
 
+{{< /tab >}}
+{{< tab "Python" >}}
+
+```python
+# discover new partitions every 10 seconds
+PulsarSource.builder()
+.set_config(PulsarSourceOptions.PULSAR_PARTITION_DISCOVERY_INTERVAL_MS, 10000)

Review Comment:
   ```suggestion
   .set_config("pulsar.source.partitionDiscoveryIntervalMs", 1)
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-26621) flink-tests failed on azure due to Error occurred in starting fork

2022-05-23 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17540916#comment-17540916
 ] 

Huang Xingbo commented on FLINK-26621:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35945=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca

> flink-tests failed on azure due to Error occurred in starting fork
> --
>
> Key: FLINK-26621
> URL: https://issues.apache.org/jira/browse/FLINK-26621
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.15.0, 1.14.4, 1.16.0
>Reporter: Yun Gao
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-03-11T16:20:12.6929558Z Mar 11 16:20:12 [WARNING] The requested profile 
> "skip-webui-build" could not be activated because it does not exist.
> 2022-03-11T16:20:12.6939269Z Mar 11 16:20:12 [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test 
> (integration-tests) on project flink-tests: There are test failures.
> 2022-03-11T16:20:12.6940062Z Mar 11 16:20:12 [ERROR] 
> 2022-03-11T16:20:12.6940954Z Mar 11 16:20:12 [ERROR] Please refer to 
> /__w/2/s/flink-tests/target/surefire-reports for the individual test results.
> 2022-03-11T16:20:12.6941875Z Mar 11 16:20:12 [ERROR] Please refer to dump 
> files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> 2022-03-11T16:20:12.6942966Z Mar 11 16:20:12 [ERROR] ExecutionException Error 
> occurred in starting fork, check output in log
> 2022-03-11T16:20:12.6943919Z Mar 11 16:20:12 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException Error occurred in starting fork, check output in log
> 2022-03-11T16:20:12.6945023Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:532)
> 2022-03-11T16:20:12.6945878Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:479)
> 2022-03-11T16:20:12.6946761Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:322)
> 2022-03-11T16:20:12.6947532Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:266)
> 2022-03-11T16:20:12.6953051Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1314)
> 2022-03-11T16:20:12.6954035Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1159)
> 2022-03-11T16:20:12.6954917Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:932)
> 2022-03-11T16:20:12.6955749Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> 2022-03-11T16:20:12.6956542Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> 2022-03-11T16:20:12.6957456Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> 2022-03-11T16:20:12.6958232Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> 2022-03-11T16:20:12.6959038Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
> 2022-03-11T16:20:12.6960553Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
> 2022-03-11T16:20:12.6962116Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
> 2022-03-11T16:20:12.6963009Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120)
> 2022-03-11T16:20:12.6963737Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:355)
> 2022-03-11T16:20:12.6964644Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
> 2022-03-11T16:20:12.6965647Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
> 2022-03-11T16:20:12.6966732Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.cli.MavenCli.doMain(MavenCli.java:216)
> 2022-03-11T16:20:12.6967818Z Mar 11 16:20:12 [ERROR] at 
> org.apache.maven.cli.MavenCli.main(MavenCli.java:160)
> 2022-03-11T16:20:12.6968857Z Mar 11 

[jira] [Closed] (FLINK-21446) KafkaProducerTestBase.testExactlyOnce Fail

2022-05-23 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-21446.
--
Resolution: Cannot Reproduce

> KafkaProducerTestBase.testExactlyOnce Fail
> --
>
> Key: FLINK-21446
> URL: https://issues.apache.org/jira/browse/FLINK-21446
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.11.3
>Reporter: Guowei Ma
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13612=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8]
>  
> {code:java}
> 22:52:12,032 [ Checkpoint Timer] INFO 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Checkpoint 18 
> of job 84164b7efcb41ca524cc7076030d1f91 expired before completing. 
> 22:52:12,033 [ Checkpoint Timer] INFO 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Triggering 
> checkpoint 19 (type=CHECKPOINT) @ 1614034332033 for job 
> 84164b7efcb41ca524cc7076030d1f91. 22:52:12,036 [Source: Custom Source -> Map 
> -> Sink: Unnamed (1/1)] INFO 
> org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaProducer [] - 
> Flushing new partitions 22:52:12,036 
> [flink-akka.actor.default-dispatcher-172] INFO 
> org.apache.flink.runtime.jobmaster.JobMaster [] - Trying to recover from a 
> global failure. org.apache.flink.util.FlinkRuntimeException: Exceeded 
> checkpoint tolerable failure threshold. at 
> org.apache.flink.runtime.checkpoint.CheckpointFailureManager.handleJobLevelCheckpointException(CheckpointFailureManager.java:68)
>  ~[flink-runtime_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT] at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.abortPendingCheckpoint(CheckpointCoordinator.java:1892)
>  ~[flink-runtime_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT] at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.abortPendingCheckpoint(CheckpointCoordinator.java:1869)
>  ~[flink-runtime_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT] at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.access$600(CheckpointCoordinator.java:93)
>  ~[flink-runtime_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT] at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator$CheckpointCanceller.run(CheckpointCoordinator.java:2006)
>  ~[flink-runtime_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT] at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_275] at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_275] at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  ~[?:1.8.0_275] at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  ~[?:1.8.0_275] at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_275] at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_275] at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_275]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Closed] (FLINK-21533) Kafka011ITCase#testAllDeletes fails on azure

2022-05-23 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-21533.
--
Resolution: Cannot Reproduce

> Kafka011ITCase#testAllDeletes fails on azure
> 
>
> Key: FLINK-21533
> URL: https://issues.apache.org/jira/browse/FLINK-21533
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.3
>Reporter: Dawid Wysakowicz
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13867=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=e4f347ab-2a29-5d7c-3685-b0fcd2b6b051
> {code}
> 2021-02-26T22:27:56.9286925Z [ERROR] 
> testAllDeletes(org.apache.flink.streaming.connectors.kafka.Kafka011ITCase)  
> Time elapsed: 3.228 s  <<< ERROR!
> 2021-02-26T22:27:56.9287994Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2021-02-26T22:27:56.9288805Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2021-02-26T22:27:56.9290091Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:762)
> 2021-02-26T22:27:56.9290978Z  at 
> org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:77)
> 2021-02-26T22:27:56.9291926Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1651)
> 2021-02-26T22:27:56.9293538Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runAllDeletesTest(KafkaConsumerTestBase.java:1649)
> 2021-02-26T22:27:56.9294944Z  at 
> org.apache.flink.streaming.connectors.kafka.Kafka011ITCase.testAllDeletes(Kafka011ITCase.java:130)
> 2021-02-26T22:27:56.9295702Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-02-26T22:27:56.9296370Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-02-26T22:27:56.9299360Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-02-26T22:27:56.9299955Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-02-26T22:27:56.9300402Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-02-26T22:27:56.9300897Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-02-26T22:27:56.9301387Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-02-26T22:27:56.9301851Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-02-26T22:27:56.9302471Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2021-02-26T22:27:56.9325899Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2021-02-26T22:27:56.9327852Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2021-02-26T22:27:56.9328934Z  at java.lang.Thread.run(Thread.java:748)
> 2021-02-26T22:27:56.9329795Z Caused by: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> 2021-02-26T22:27:56.9330778Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:118)
> 2021-02-26T22:27:56.9331904Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:80)
> 2021-02-26T22:27:56.9333126Z  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:206)
> 2021-02-26T22:27:56.9334090Z  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:197)
> 2021-02-26T22:27:56.9335043Z  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:189)
> 2021-02-26T22:27:56.9335946Z  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:639)
> 2021-02-26T22:27:56.9336834Z  at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:397)
> 2021-02-26T22:27:56.9337698Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-02-26T22:27:56.9338398Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-02-26T22:27:56.9339236Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-02-26T22:27:56.9339949Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-02-26T22:27:56.9340681Z  at 
> 

[jira] [Closed] (FLINK-21832) Avro Confluent Schema Registry nightly end-to-end fail

2022-05-23 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-21832.
--
Resolution: Cannot Reproduce

> Avro Confluent Schema Registry nightly end-to-end fail
> 
>
> Key: FLINK-21832
> URL: https://issues.apache.org/jira/browse/FLINK-21832
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.2, 1.13.0
>Reporter: Guowei Ma
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14793=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=20126
> Watchdog could not kill the download successfully.
> {code:java}
>  60  296M   60  179M0 0   235k  0  0:21:28  0:13:01  0:08:27  
> 238kMar 16 13:33:35 Test (pid: 13982) did not finish after 900 seconds.
> Mar 16 13:33:35 Printing Flink logs and killing it:
> cat: 
> '/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
>  No such file or directory
> {code}
> Because the watchdog exit so the case fail
> {code:java}
> Mar 16 13:42:37 Stopping job timeout watchdog (with pid=13983)
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 809: 
> kill: (13983) - No such process
> Mar 16 13:42:37 [FAIL] Test script contains errors.
> Mar 16 13:42:37 Checking for errors...
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Closed] (FLINK-21652) Elasticsearch6DynamicSinkITCase.testWritingDocumentsFromTableApi failed because of throwing SocketTimeoutException during the closing stage.

2022-05-23 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-21652.
--
Resolution: Cannot Reproduce

> Elasticsearch6DynamicSinkITCase.testWritingDocumentsFromTableApi failed 
> because of throwing SocketTimeoutException during the closing stage.
> 
>
> Key: FLINK-21652
> URL: https://issues.apache.org/jira/browse/FLINK-21652
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14263=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361
> {code:java}
> 2021-03-07T23:12:11.0702985Z [ERROR] 
> testWritingDocumentsFromTableApi(org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase)
>   Time elapsed: 31.256 s  <<< ERROR!
> 2021-03-07T23:12:11.0704247Z java.util.concurrent.ExecutionException: 
> org.apache.flink.table.api.TableException: Failed to wait job finish
> 2021-03-07T23:12:11.0705203Z  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2021-03-07T23:12:11.0705982Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2021-03-07T23:12:11.0706791Z  at 
> org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:123)
> 2021-03-07T23:12:11.0707583Z  at 
> org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:86)
> 2021-03-07T23:12:11.0708850Z  at 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase.testWritingDocumentsFromTableApi(Elasticsearch6DynamicSinkITCase.java:205)
> 2021-03-07T23:12:11.0709804Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-03-07T23:12:11.0710479Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-03-07T23:12:11.0711251Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-03-07T23:12:11.0711974Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-03-07T23:12:11.0712663Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-03-07T23:12:11.0713466Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-03-07T23:12:11.0715464Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-03-07T23:12:11.0716070Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-03-07T23:12:11.0716576Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2021-03-07T23:12:11.0717278Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2021-03-07T23:12:11.0717873Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2021-03-07T23:12:11.0718540Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-03-07T23:12:11.0718940Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-03-07T23:12:11.0719362Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-03-07T23:12:11.0719792Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-03-07T23:12:11.0720207Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-03-07T23:12:11.0720728Z  at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> 2021-03-07T23:12:11.0721353Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-03-07T23:12:11.0721940Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2021-03-07T23:12:11.0753352Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2021-03-07T23:12:11.0754394Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2021-03-07T23:12:11.0755260Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2021-03-07T23:12:11.0756131Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2021-03-07T23:12:11.0757021Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2021-03-07T23:12:11.0758317Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 
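The failure propagates through TableResultImpl.await(), which blocks until the submitted job reaches a terminal state. The public API also offers a bounded variant, TableResult.await(timeout, unit), which turns a hang like the one above into a diagnosable TimeoutException. A hedged sketch of that pattern (the table names are illustrative stand-ins, not the ones the test actually registers):

{code:java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.TableResult;

public class BoundedAwait {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // "sink_table" / "source_table" stand in for the Elasticsearch
        // sink and the source the test registers via the Table API.
        TableResult result =
                tEnv.executeSql("INSERT INTO sink_table SELECT * FROM source_table");

        try {
            // Bound the wait instead of blocking indefinitely in await().
            result.await(30, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            // E.g. the sink's HTTP client hit a SocketTimeoutException
            // while closing and the job never reached a terminal state.
            System.err.println("Job did not finish within 30s: " + e);
        }
    }
}
{code}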

[jira] [Commented] (FLINK-27219) CliClientITCase.testSqlStatements failed on Azure with JDK 11

2022-05-23 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17540915#comment-17540915
 ] 

Huang Xingbo commented on FLINK-27219:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35945&view=logs&j=ce3801ad-3bd5-5f06-d165-34d37e757d90&t=5e4d9387-1dcc-5885-a901-90469b7e6d2f

> CliClientITCase.testSqlStatements failed on Azure with JDK 11
> 
>
> Key: FLINK-27219
> URL: https://issues.apache.org/jira/browse/FLINK-27219
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Yun Gao
>Assignee: Shengkai Fang
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 # test "ctas" only supported in Hive Dialect
> Apr 13 04:56:44 CREATE TABLE foo as select 1;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 # list the configured configuration
> Apr 13 04:56:44 set;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 # reset the configuration
> Apr 13 04:56:44 reset;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> Apr 13 04:56:44 
> Apr 13 04:56:44 set;
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 java.lang.ClassCastException: class 
> jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class 
> java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and 
> java.net.URLClassLoader are in module java.base of loader 'bootstrap')
> Apr 13 04:56:44 !error
> ...
> Apr 13 04:56:44 [ERROR] Could not execute SQL statement. Reason:
> Apr 13 04:56:44 org.apache.flink.sql.parser.impl.ParseException: Encountered 
> "STRING" at line 10, column 27.
> Apr 13 04:56:44 Was expecting one of:
> Apr 13 04:56:44 ")" ...
> Apr 13 04:56:44 "," ...
> Apr 13 04:56:44 
> Apr 13 04:56:44 !error
> ...
> Apr 13 04:56:44 SHOW JARS;
> Apr 13 04:56:44 Empty set
> Apr 13 04:56:44 !ok
> Apr 13 04:56:44 "
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Apr 13 04:56:44   at 
> org.apache.flink.table.client.cli.CliClientITCase.testSqlStatements(CliClientITCase.java:139)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 13 04:56:44   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 13 04:56:44   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> Apr 13 04:56:44   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Apr 13 04:56:44   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 13 04:56:44   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Apr 13 04:56:44   at 
> 
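The repeated ClassCastException is a known JDK 9+ migration pitfall rather than anything SQL-specific: since Java 9 the system class loader is an internal AppClassLoader, so code that casts it to java.net.URLClassLoader (a common Java 8 trick for adding jars at runtime) fails under JDK 11. A minimal sketch of the pitfall and a version-safe alternative (the jar path and class name are hypothetical; this is not the actual SQL client fix):

{code:java}
import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderCast {
    public static void main(String[] args) throws Exception {
        ClassLoader system = ClassLoader.getSystemClassLoader();

        // Fails on JDK 9+ with exactly the exception in the log:
        // URLClassLoader broken = (URLClassLoader) system;

        // Version-safe alternative: load extra jars through a child
        // URLClassLoader instead of mutating the system loader.
        URL extraJar = new URL("file:///tmp/extra.jar"); // hypothetical jar
        try (URLClassLoader child = new URLClassLoader(new URL[] {extraJar}, system)) {
            // User classes would then be resolved via the child, e.g.:
            // Class<?> udf = Class.forName("com.example.MyUdf", true, child);
            System.out.println("Child loader ready: " + child);
        }
    }
}
{code}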

[jira] [Closed] (FLINK-21379) Elasticsearch (v6.3.1) sink end-to-end test fails on Azure

2022-05-23 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-21379.
--
Resolution: Cannot Reproduce

> Elasticsearch (v6.3.1) sink end-to-end test fails on Azure
> --
>
> Key: FLINK-21379
> URL: https://issues.apache.org/jira/browse/FLINK-21379
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: 1.11.3
>Reporter: Dawid Wysakowicz
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13356&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=2b7514ee-e706-5046-657b-3430666e7bd9
> {code}
> 2021-02-15 23:15:39,939 ERROR 
> org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase [] 
> - Failed Elasticsearch item request: [:intentional invalid index:] 
> ElasticsearchException[Elasticsearch exception 
> [type=invalid_index_name_exception, reason=Invalid index name [:intentional 
> invalid index:], must not contain the following characters [ , ", *, \, <, |, 
> ,, >, /, ?]]]
> org.elasticsearch.ElasticsearchException: Elasticsearch exception 
> [type=invalid_index_name_exception, reason=Invalid index name [:intentional 
> invalid index:], must not contain the following characters [ , ", *, \, <, |, 
> ,, >, /, ?]]
>   at 
> org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:510)
>  
> ~[blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at 
> org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:421)
>  
> ~[blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at 
> org.elasticsearch.action.bulk.BulkItemResponse.fromXContent(BulkItemResponse.java:135)
>  
> ~[blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at 
> org.elasticsearch.action.bulk.BulkResponse.fromXContent(BulkResponse.java:198)
>  
> ~[blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at 
> org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:653)
>  
> ~[blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at 
> org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAsyncAndParseEntity$3(RestHighLevelClient.java:549)
>  
> ~[blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at 
> org.elasticsearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:580)
>  
> [blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at 
> org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:621)
>  
> [blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at org.elasticsearch.client.RestClient$1.completed(RestClient.java:375) 
> [blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at org.elasticsearch.client.RestClient$1.completed(RestClient.java:366) 
> [blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at 
> org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:119) 
> [blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at 
> org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:177)
>  
> [blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436)
>  
> [blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:326)
>  
> [blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at 
> org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
>  
> [blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
>  
> [blob_p-1cb3213b045f842a05dea3132500fab00b6335e6-dcafee2d0ae53e4014367dc963a9c5ba:1.11-SNAPSHOT]
>   at 
> 
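For context, ":intentional invalid index:" is written on purpose so the test can exercise the sink's failure handling; Elasticsearch rejects it with the character list quoted in the exception. A small sketch that pre-checks a name against exactly that list (hedged: real Elasticsearch validation is stricter, also rejecting uppercase letters, a leading '_', '-' or '+', the names '.' and '..', and names longer than 255 bytes):

{code:java}
import java.util.Set;

public class IndexNameCheck {
    // The characters listed in the invalid_index_name_exception above.
    private static final Set<Character> FORBIDDEN =
            Set.of(' ', '"', '*', '\\', '<', '|', ',', '>', '/', '?');

    static boolean passesCharacterCheck(String index) {
        return !index.isEmpty()
                && index.chars().noneMatch(c -> FORBIDDEN.contains((char) c));
    }

    public static void main(String[] args) {
        System.out.println(passesCharacterCheck("flink-sink-2021"));             // true
        System.out.println(passesCharacterCheck(":intentional invalid index:")); // false
    }
}
{code}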
