[GitHub] [flink] HuangZhenQiu commented on pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-30 Thread GitBox


HuangZhenQiu commented on pull request #8952:
URL: https://github.com/apache/flink/pull/8952#issuecomment-752880906


   Thanks for the suggestion of using a separate commit. I will follow that practice 
in future PRs. Happy new year.
   
   







[jira] [Updated] (FLINK-20822) Don't check whether a function is generic in hive catalog

2020-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20822:
---
Labels: pull-request-available  (was: )

> Don't check whether a function is generic in hive catalog
> -
>
> Key: FLINK-20822
> URL: https://issues.apache.org/jira/browse/FLINK-20822
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> We just store the function identifier and class name in the hive metastore, so it 
> seems there's no need to differentiate generic functions from hive functions. 
> Besides, users might not want us to validate the function class when creating 
> the function, e.g. the function jar might not be on the classpath at this point.





[GitHub] [flink] lirui-apache opened a new pull request #14534: [FLINK-20822][hive] Don't check whether a function is generic in hive…

2020-12-30 Thread GitBox


lirui-apache opened a new pull request #14534:
URL: https://github.com/apache/flink/pull/14534


   … catalog
   
   
   
   ## What is the purpose of the change
   
   Don't check whether a function is generic in HiveCatalog
   
   
   ## Brief change log
   
 - Stop calling `CatalogFunction::isGeneric` in HiveCatalog.
 - Add test case to verify
   
   
   ## Verifying this change
   
   New test case
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? NA
   







[jira] [Updated] (FLINK-20772) RocksDBValueState with TTL occurs NullPointerException when calling update(null) method

2020-12-30 Thread Seongbae Chang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seongbae Chang updated FLINK-20772:
---
Summary: RocksDBValueState with TTL occurs NullPointerException when 
calling update(null) method   (was: [DISCUSS] RocksDBValueState with TTL occurs 
NullPointerException when calling update(null) method )

> RocksDBValueState with TTL occurs NullPointerException when calling 
> update(null) method 
> 
>
> Key: FLINK-20772
> URL: https://issues.apache.org/jira/browse/FLINK-20772
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.11.2
> Environment: Flink version: 1.11.2
> Flink Cluster: Standalone cluster with 3 Job managers and Task managers on 
> CentOS 7
>Reporter: Seongbae Chang
>Priority: Major
>  Labels: beginner
>
> h2. Problem
>  * I use ValueState for my custom trigger and set a TTL for these ValueStates 
> in a RocksDB backend environment.
>  * I found an error when I used the code below. I know that 
> ValueState.update(null) generally works the same as ValueState.clear(). 
> Unfortunately, this error occurs once TTL is enabled.
> {code:java}
> // My Code
> ctx.getPartitionedState(batchTotalSizeStateDesc).update(null);
> {code}
>  * I tested this in Flink 1.11.2, but I think it would be a problem in later 
> versions as well.
>  * Plus, I'm a beginner, so if there is any problem in this issue, please give 
> me advice and I'll fix it!
> {code:java}
> // Error Stacktrace
> Caused by: TimerException{org.apache.flink.util.FlinkRuntimeException: Error 
> while adding data to RocksDB}
>   ... 12 more
> Caused by: org.apache.flink.util.FlinkRuntimeException: Error while adding 
> data to RocksDB
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBValueState.update(RocksDBValueState.java:108)
>   at 
> org.apache.flink.runtime.state.ttl.TtlValueState.update(TtlValueState.java:50)
>   at .onProcessingTime(ActionBatchTimeTrigger.java:102)
>   at .onProcessingTime(ActionBatchTimeTrigger.java:29)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onProcessingTime(WindowOperator.java:902)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onProcessingTime(WindowOperator.java:498)
>   at 
> org.apache.flink.streaming.api.operators.InternalTimerServiceImpl.onProcessingTime(InternalTimerServiceImpl.java:260)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invokeProcessingTimeCallback(StreamTask.java:1220)
>   ... 11 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.api.common.typeutils.base.IntSerializer.serialize(IntSerializer.java:69)
>   at 
> org.apache.flink.api.common.typeutils.base.IntSerializer.serialize(IntSerializer.java:32)
>   at 
> org.apache.flink.api.common.typeutils.CompositeSerializer.serialize(CompositeSerializer.java:142)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValueInternal(AbstractRocksDBState.java:158)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValue(AbstractRocksDBState.java:178)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValue(AbstractRocksDBState.java:167)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBValueState.update(RocksDBValueState.java:106)
>   ... 18 more
> {code}
>  
> h2. Reason
>  * It relates to RocksDBValueState wrapped with TtlValueState.
>  * In RocksDBValueState (as well as other types of ValueState), 
> *.update(null)* should be caught by the null check in the if-clause. However, 
> with TTL it skips the null check and then tries to serialize the null value.
> {code:java}
> // 
> https://github.com/apache/flink/blob/release-1.11/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBValueState.java#L96-L110
> @Override
> public void update(V value) { 
> if (value == null) { 
> clear(); 
> return; 
> }
>  
> try { 
> backend.db.put(columnFamily, writeOptions, 
> serializeCurrentKeyWithGroupAndNamespace(), serializeValue(value)); 
> } catch (Exception e) { 
> throw new FlinkRuntimeException("Error while adding data to RocksDB", 
> e);  
> }
> }{code}
>  * This is because TtlValueState wraps the value (null) with the last access 
> time, creating a new TtlValue object that carries the null value.
> {code:java}
> // 
> https://github.com/apache/flink/blob/release-1.11/flink-runtime/src/main/java/org/apache/flink/runtime/state/ttl/TtlValueState.java#L47-L51
> @Override
> public void update(T value) throws IOException { 
> 
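
A rough sketch of the wrapping described above (helper and field names here are illustrative, not quoted from the linked source):

{code:java}
// Sketch only: TtlValueState delegates to the wrapped state, bundling the user
// value with the last-access timestamp. A null user value thus becomes a
// non-null TtlValue carrying a null payload, so the null check in
// RocksDBValueState.update() never fires and serialization hits the null.
@Override
public void update(T value) throws IOException {
    original.update(new TtlValue<>(value, currentTimestampMillis())); // value may be null here
}
{code}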

[GitHub] [flink] HuangZhenQiu commented on a change in pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-30 Thread GitBox


HuangZhenQiu commented on a change in pull request #8952:
URL: https://github.com/apache/flink/pull/8952#discussion_r550421330



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/active/ActiveResourceManager.java
##
 @@ -213,14 +235,33 @@ public void onPreviousAttemptWorkersRecovered(Collection<WorkerType> recoveredWorkers)
 }
 }
 
+/**
+ * Records a worker failure in the ResourceManager and returns whether the
+ * maximum failure rate is detected.
+ *
+ * @return whether a new container/worker should only be acquired after a stop interval
+ */
+public boolean recordWorkerFailure() {
+failureRater.markEvent();
+
+try {
+failureRater.checkAgainstThreshold();
+} catch (ThresholdExceedException e) {
+log.warn(e.getMessage() + " in resource manager failure rater.");
+return true;
+}
+
+return false;
+}
+
 @Override
 public void onWorkerTerminated(ResourceID resourceId, String diagnostics) {
 if (clearStateForWorker(resourceId)) {
 log.info(
 "Worker {} is terminated. Diagnostics: {}",
 resourceId.getStringWithMetadata(),
 diagnostics);
-requestWorkerIfRequired();
+recordWorkerFailureAndPauseWorkerCreationIfNeeded();

Review comment:
   Yes, we should record failures only for `currentAttemptUnregisteredWorkers`.
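
A minimal sketch of that guard, reusing only names already mentioned in this PR (illustrative, not the final diff):

    // Count a termination as a failure only if the worker belongs to the
    // current attempt and has not registered yet.
    if (currentAttemptUnregisteredWorkers.contains(resourceId)) {
        recordWorkerFailureAndPauseWorkerCreationIfNeeded();
    }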









[GitHub] [flink] flinkbot edited a comment on pull request #14533: [hotfix][docs]fix typo in Create statements page

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14533:
URL: https://github.com/apache/flink/pull/14533#issuecomment-752874534


   
   ## CI report:
   
   * 3e017300a6f8f5bf807493a83eb46230af0a4827 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11533)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14532: Draft for Web-UI

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14532:
URL: https://github.com/apache/flink/pull/14532#issuecomment-752840091


   
   ## CI report:
   
   * 91f08cb08fae546730ce88f9e0127e082a1e1a84 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11527)
 
   * 6799c6817ba7fc5af0a23996e62b30e7fb2b56b5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11532)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14524: [FLINK-20769][python] Support minibatch to optimize Python UDAF

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14524:
URL: https://github.com/apache/flink/pull/14524#issuecomment-752383882


   
   ## CI report:
   
   * 87f9a7aac8cbd730781b7cdcbacaa30b76f506fe Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11488)
 
   * e8975d6f66798fc42dca329b2019782367dc6686 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11531)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14399: [FLINK-17827] [scala-shell] scala-shell.sh should fail early if no mo…

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14399:
URL: https://github.com/apache/flink/pull/14399#issuecomment-745854885


   
   ## CI report:
   
   * b77668657f364d87999556ee746e5d995b79443f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10921)
 
   * 5d12faf957a2e129df5a32031e9a88e09f7b6ec7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] HuangZhenQiu commented on a change in pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-30 Thread GitBox


HuangZhenQiu commented on a change in pull request #8952:
URL: https://github.com/apache/flink/pull/8952#discussion_r550418267



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/MetricNames.java
##
@@ -62,6 +62,8 @@ private MetricNames() {}
 public static final String CHECKPOINT_ALIGNMENT_TIME = 
"checkpointAlignmentTime";
 public static final String CHECKPOINT_START_DELAY_TIME = 
"checkpointStartDelayNanos";
 
+public static final String WORKER_FAILURE_RATE = "startWorkFailure" + 
SUFFIX_RATE;

Review comment:
   Yes, it is my mistake. There was a big code rebase after FLINK-20651 was 
landed. I will be more careful next time.









[GitHub] [flink] paul8263 commented on pull request #14399: [FLINK-17827] [scala-shell] scala-shell.sh should fail early if no mo…

2020-12-30 Thread GitBox


paul8263 commented on pull request #14399:
URL: https://github.com/apache/flink/pull/14399#issuecomment-752876101


   @zentol Thank you for your suggestion; I have now updated my code. I would be 
glad if you could take some time to review the updated solution.







[GitHub] [flink] HuangZhenQiu commented on a change in pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-30 Thread GitBox


HuangZhenQiu commented on a change in pull request #8952:
URL: https://github.com/apache/flink/pull/8952#discussion_r550417892



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/active/ActiveResourceManager.java
##
@@ -230,6 +271,12 @@ public void onError(Throwable exception) {
 onFatalError(exception);
 }
 
+public static ThresholdMeter createFailureRater(Configuration 
configuration) {
+double rate = 
configuration.getDouble(ResourceManagerOptions.MAXIMUM_WORKERS_FAILURE_RATE);
+Preconditions.checkArgument(rate > 0, "Failure rate should be larger 
than 0");
+return new ThresholdMeter(rate, Duration.ofMinutes(1));
+}

Review comment:
   It is also used in `ActiveResourceManagerTest`. Perhaps keeping it here would 
be better?









[GitHub] [flink] HuangZhenQiu commented on a change in pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-30 Thread GitBox


HuangZhenQiu commented on a change in pull request #8952:
URL: https://github.com/apache/flink/pull/8952#discussion_r550417531



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/active/ActiveResourceManager.java
##
@@ -301,7 +353,25 @@ private boolean clearStateForWorker(ResourceID resourceId) 
{
 return true;
 }
 
-private void requestWorkerIfRequired() {
+private void tryResetWorkerCreationCoolDown() {
+if (workerCreationCoolDown.isDone()) {
+log.info(
+"Reaching max start worker failure rate. Will not retry 
creating worker in {}.",
+workerCreationRetryInterval);
+workerCreationCoolDown = new CompletableFuture<>();
+getMainThreadExecutor()

Review comment:
   OK, I just found the function with a different parameter.









[GitHub] [flink] HuangZhenQiu commented on a change in pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-30 Thread GitBox


HuangZhenQiu commented on a change in pull request #8952:
URL: https://github.com/apache/flink/pull/8952#discussion_r550417478



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/active/ActiveResourceManager.java
##
@@ -301,7 +353,25 @@ private boolean clearStateForWorker(ResourceID resourceId) 
{
 return true;
 }
 
-private void requestWorkerIfRequired() {
+private void tryResetWorkerCreationCoolDown() {
+if (workerCreationCoolDown.isDone()) {
+log.info(
+"Reaching max start worker failure rate. Will not retry 
creating worker in {}.",
+workerCreationRetryInterval);
+workerCreationCoolDown = new CompletableFuture<>();
+getMainThreadExecutor()
+.scheduleRunAsync(
+() -> workerCreationCoolDown.complete(null),
+workerCreationRetryInterval.getSize());

Review comment:
   Yes, then what's the concern?









[GitHub] [flink] HuangZhenQiu commented on a change in pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-30 Thread GitBox


HuangZhenQiu commented on a change in pull request #8952:
URL: https://github.com/apache/flink/pull/8952#discussion_r550417341



##
File path: 
flink-metrics/flink-metrics-core/src/test/java/org/apache/flink/metrics/ThresholdMeterTest.java
##
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.metrics;
+
+import org.apache.flink.metrics.ThresholdMeter.ThresholdExceedException;
+import org.apache.flink.util.TestLogger;
+
+import org.junit.Test;
+
+import java.time.Duration;
+
+/** Tests for the timestamp-based threshold meter. */
+public class ThresholdMeterTest extends TestLogger {

Review comment:
   A test on the per-second rate would be flaky, but I added assertions on 
`getCount` to the tests.
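
For example, a timing-independent assertion (sketch, using only the constructor and methods shown in this PR):

    ThresholdMeter meter = new ThresholdMeter(5.0, Duration.ofMinutes(1));
    meter.markEvent();
    meter.markEvent(2);
    assertEquals(3L, meter.getCount()); // count-based, no wall-clock dependency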









[GitHub] [flink] HuangZhenQiu commented on a change in pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-30 Thread GitBox


HuangZhenQiu commented on a change in pull request #8952:
URL: https://github.com/apache/flink/pull/8952#discussion_r550417182



##
File path: 
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/ThresholdMeter.java
##
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.metrics;
+
+import java.time.Duration;
+import java.util.ArrayDeque;
+import java.util.Queue;
+import java.util.function.Supplier;
+
+/** A timestamp queue based threshold meter. */
+public class ThresholdMeter implements Meter {
+private static final Double MILLISECONDS_PER_SECOND = 1000.0;
+private final Supplier<Long> currentTimeMillisSupplier;
+private final double maxEventsPerInterval;
+private final Duration interval;
+private final Queue<Long> failureTimestamps;
+private long failureCounter = 0;
+
+public ThresholdMeter(double maximumFailureRate, Duration interval) {
+this(maximumFailureRate, interval, System::currentTimeMillis);
+}
+
+public ThresholdMeter(

Review comment:
   As ManualClock would need an extra dependency on flink-core, I just removed 
the clock operation and simply rely on Thread.sleep. The constructor can be 
removed.
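
For reference, if the `Supplier<Long>` constructor were kept, a test could stay deterministic without `Thread.sleep` (sketch, assuming events older than the interval drop out of the recent count):

    AtomicLong now = new AtomicLong(0L);
    ThresholdMeter meter = new ThresholdMeter(2.0, Duration.ofSeconds(10), now::get);
    meter.markEvent();
    now.addAndGet(11_000L);        // advance the supplied "time" past the interval
    meter.checkAgainstThreshold(); // should not throw: the event has aged out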









[GitHub] [flink] HuangZhenQiu commented on a change in pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-30 Thread GitBox


HuangZhenQiu commented on a change in pull request #8952:
URL: https://github.com/apache/flink/pull/8952#discussion_r550416903



##
File path: 
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/ThresholdMeter.java
##
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.metrics;
+
+import java.time.Duration;
+import java.util.ArrayDeque;
+import java.util.Queue;
+import java.util.function.Supplier;
+
+/** A timestamp queue based threshold meter. */
+public class ThresholdMeter implements Meter {
+private static final Double MILLISECONDS_PER_SECOND = 1000.0;
+private final Supplier<Long> currentTimeMillisSupplier;
+private final double maxEventsPerInterval;
+private final Duration interval;
+private final Queue<Long> failureTimestamps;
+private long failureCounter = 0;
+
+public ThresholdMeter(double maximumFailureRate, Duration interval) {
+this(maximumFailureRate, interval, System::currentTimeMillis);
+}
+
+public ThresholdMeter(
+double maxEventsPerInterval, Duration interval, Supplier<Long> customSupplier) {
+this.maxEventsPerInterval = maxEventsPerInterval;
+this.interval = interval;
+this.failureTimestamps = new ArrayDeque<>();
+this.currentTimeMillisSupplier = customSupplier;
+}
+
+@Override
+public void markEvent() {
+failureTimestamps.add(currentTimeMillisSupplier.get());
+failureCounter++;
+}
+
+@Override
+public void markEvent(long n) {
+long timestamp = currentTimeMillisSupplier.get();
+for (int i = 0; i < n; i++) {
+failureTimestamps.add(timestamp);
+}
+failureCounter = failureCounter + n;
+}
+
+@Override
+public double getRate() {
+return getEventCountsRecentInterval() / (interval.toMillis() / 
MILLISECONDS_PER_SECOND);

Review comment:
   Agreed. This is checked in the constructor.
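
A sketch of such a constructor-side check (plain `IllegalArgumentException`, since flink-metrics-core cannot use flink-core's `Preconditions`):

    if (maxEventsPerInterval <= 0 || interval.isZero() || interval.isNegative()) {
        throw new IllegalArgumentException(
                "maxEventsPerInterval and interval must both be positive");
    }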

##
File path: 
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/ThresholdMeter.java
##
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.metrics;
+
+import java.time.Duration;
+import java.util.ArrayDeque;
+import java.util.Queue;
+import java.util.function.Supplier;
+
+/** A timestamp queue based threshold meter. */
+public class ThresholdMeter implements Meter {
+private static final Double MILLISECONDS_PER_SECOND = 1000.0;
+private final Supplier<Long> currentTimeMillisSupplier;
+private final double maxEventsPerInterval;
+private final Duration interval;
+private final Queue<Long> failureTimestamps;
+private long failureCounter = 0;
+
+public ThresholdMeter(double maximumFailureRate, Duration interval) {
+this(maximumFailureRate, interval, System::currentTimeMillis);
+}
+
+public ThresholdMeter(
+double maxEventsPerInterval, Duration interval, Supplier<Long> customSupplier) {
+this.maxEventsPerInterval = maxEventsPerInterval;
+this.interval = interval;
+this.failureTimestamps = new ArrayDeque<>();
+this.currentTimeMillisSupplier = customSupplier;
+}
+
+@Override
+public void markEvent() {
+failureTimestamps.add(currentTimeMillisSupplier.get());
+failureCounter++;
+}
+
+@Override
+public void markEvent(long n) {
+long timestamp = currentTimeMillisSupplier.get();
+for (int i = 0; i < n; i++) {
+failureTimestamps.add(timestamp);
+}
+failureCounter = failureCounter + n;
+}

[GitHub] [flink] flinkbot commented on pull request #14533: [hotfix][docs]fix typo in Create statements page

2020-12-30 Thread GitBox


flinkbot commented on pull request #14533:
URL: https://github.com/apache/flink/pull/14533#issuecomment-752874534


   
   ## CI report:
   
   * 3e017300a6f8f5bf807493a83eb46230af0a4827 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14532: Draft for Web-UI

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14532:
URL: https://github.com/apache/flink/pull/14532#issuecomment-752840091


   
   ## CI report:
   
   * 91f08cb08fae546730ce88f9e0127e082a1e1a84 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11527)
 
   * 6799c6817ba7fc5af0a23996e62b30e7fb2b56b5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14524: [FLINK-20769][python] Support minibatch to optimize Python UDAF

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14524:
URL: https://github.com/apache/flink/pull/14524#issuecomment-752383882


   
   ## CI report:
   
   * 87f9a7aac8cbd730781b7cdcbacaa30b76f506fe Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11488)
 
   * e8975d6f66798fc42dca329b2019782367dc6686 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] HuangXingBo commented on pull request #14524: [FLINK-20769][python] Support minibatch to optimize Python UDAF

2020-12-30 Thread GitBox


HuangXingBo commented on pull request #14524:
URL: https://github.com/apache/flink/pull/14524#issuecomment-752873339


   @WeiZhong94 Thanks a lot for the review. I have addressed the comment in the 
latest commit.







[GitHub] [flink] flinkbot commented on pull request #14533: [hotfix][docs]fix typo in Create statements page

2020-12-30 Thread GitBox


flinkbot commented on pull request #14533:
URL: https://github.com/apache/flink/pull/14533#issuecomment-752869503


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 3e017300a6f8f5bf807493a83eb46230af0a4827 (Thu Dec 31 
07:01:20 UTC 2020)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Closed] (FLINK-19962) fix doc: more examples for expanding arrays into a relation

2020-12-30 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-19962.
---
Fix Version/s: 1.13.0
   Resolution: Fixed

Fixed in master: f198ebffdf0c543a667b656dc67b904289a9b63f

> fix doc: more examples for expanding arrays into a relation
> ---
>
> Key: FLINK-19962
> URL: https://issues.apache.org/jira/browse/FLINK-19962
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Reporter: Wong Mulan
>Assignee: Wong Mulan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html
> The 'Expanding arrays into a relation' section is too brief. I cannot 
> understand the usage from this section alone.
> More examples should be added, such as the following:
> {quote}SELECT a, b, v FROM T CROSS JOIN UNNEST(c) as f (k, v){quote}
> {quote}SELECT a, s FROM t2 LEFT JOIN UNNEST(t2.`set`) AS A(s){quote}
> {quote}SELECT a, k, v FROM t2 CROSS JOIN UNNEST(t2.c) AS A(k, v){quote}
> Or add some table field descriptions, e.g. that field 'set' of table 
> t2 has type array or map.





[GitHub] [flink] zhuxiaoshang opened a new pull request #14533: [hotfix][docs]fix typo in Create statements page

2020-12-30 Thread GitBox


zhuxiaoshang opened a new pull request #14533:
URL: https://github.com/apache/flink/pull/14533


   
   ## What is the purpose of the change
   
   *fix typo in Create statements page*
   
   
   ## Brief change log
   
   fix typo in Create statements page
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   







[jira] [Commented] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source

2020-12-30 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256853#comment-17256853
 ] 

Qingsheng Ren commented on FLINK-20777:
---

[~jark] Partition discovery is disabled by default in the 
{{FlinkKafkaConsumer}}. Under the new design of the Kafka source based on FLIP-27, 
I think it's better to enable it by default, as [~becket_qin] and I explained 
above. 

> Default value of property "partition.discovery.interval.ms" is not as 
> documented in new Kafka Source
> 
>
> Key: FLINK-20777
> URL: https://issues.apache.org/jira/browse/FLINK-20777
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.1
>
>
> The default value of the property "partition.discovery.interval.ms" is documented 
> as 30 seconds in {{KafkaSourceOptions}}, but it will be set to -1 in 
> {{KafkaSourceBuilder}} if the user doesn't pass this property explicitly. 
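
Until the default is aligned with the documentation, the behavior can be pinned explicitly on the builder; a rough sketch (surrounding builder calls elided, option name as quoted in this issue):

{code:java}
KafkaSource<String> source =
        KafkaSource.<String>builder()
                // ... bootstrap servers, topics, deserializer setup elided ...
                .setProperty("partition.discovery.interval.ms", "30000") // 30s, as documented
                .build();
{code}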





[GitHub] [flink] wuchong merged pull request #14527: [FLINK-19962][docs] Add examples and explanations for expanding arrays into a relation

2020-12-30 Thread GitBox


wuchong merged pull request #14527:
URL: https://github.com/apache/flink/pull/14527


   







[jira] [Updated] (FLINK-20105) Temporal Table does not work when Kafka is used as the versioned side (planner PK problem)

2020-12-30 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20105:

Summary: Temporal Table does not work when Kafka is used as the versioned 
side (planner PK problem)  (was: Temporary Table does not work when Kafka is 
used as the versioned side (planner PK problem))

> Temporal Table does not work when Kafka is used as the versioned side 
> (planner PK problem)
> --
>
> Key: FLINK-20105
> URL: https://issues.apache.org/jira/browse/FLINK-20105
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Planner
>Affects Versions: 1.12.0
>Reporter: Benoît Paris
>Assignee: Leonard Xu
>Priority: Minor
> Fix For: 1.13.0
>
> Attachments: flink-test-temporal-table.zip
>
>
>  
> This is probably an early bug report that I'm making before 1.12 is out. 
> In 1.12-SNAPSHOT, doing this:
>  
> {code:java}
> INSERT INTO output_table
> SELECT
>  o.f_sequence * r.f_int,
>  o.f_random_str
> FROM datagen_1 AS o
> LEFT JOIN input_2 FOR SYSTEM_TIME AS OF o.ts r
>  ON o.f_random = r.f_int_pk{code}
>  
> works when input_2 is built with datagen, but fails when the data comes from 
> Kafka, yielding the following error from planner code:
>  
> {code:java}
> Type INT NOT NULL of table field 'f_int_pk' does not match with the physical 
> type INT of the 'f_int_pk' field of the TableSource return type.{code}
>  
> Included is code for a complete reproduction, with:
>  * docker-compose file for ZooKeeper and Kafka (latest)
>  * pom.xml
>  * OK_TempTableSQLTestDatagen.java: it works with the datagen
>  * KO_TempTableSQLTestKafka.java: fails with Kafka
>  * KO_TempTableSQLTestKafkaNoPK.java: I tried having no PK; it fails
>  * KO_TempTableSQLTestKafkaNull.java: I tried with the PK being nullable; it 
> fails
>  * KO_TempTableSQLTestKafkaNullif.java: I tried using the PK in a 
> NULLIF(pk, '') as [advertised 
> here|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Inserting-nullable-data-into-NOT-NULL-columns-td34198.html],
>  but it fails (loses PK powers)
> I just can't think of a workaround. I even tried to GROUP BY on the PK.
> From memory, the Temporal Table Function suffers from a similar problem; my 
> usual workaround is a 
> COALESCE(problematic_temp_table_function_primary_key, null), but it fails 
> here as well (interestingly, it does not fail because of losing PK powers, 
> but because of the NOT NULL planner difference).
> It seems to be the same underlying problem: transformations of the same field 
> end up NULL and NOT NULL between planner transformations.
> 
>  
> This issue is probably related to the ongoing FLIP-132 Temporal Table DDL 
> and Temporal Table Join developments.
>  





[GitHub] [flink] wuchong edited a comment on pull request #14530: [FLINK-20348][kafka] Make "schema-registry.subject" optional for Kafka sink with avro-confluent format

2020-12-30 Thread GitBox


wuchong edited a comment on pull request #14530:
URL: https://github.com/apache/flink/pull/14530#issuecomment-752864472


   Besides, we can remove the optional `schema-registry.subject` option for 
some tests now, e.g. the `SQLClientSchemaRegistryITCase`. Please also update 
the documentation. 







[GitHub] [flink] wuchong commented on pull request #14530: [FLINK-20348][kafka] Make "schema-registry.subject" optional for Kafka sink with avro-confluent format

2020-12-30 Thread GitBox


wuchong commented on pull request #14530:
URL: https://github.com/apache/flink/pull/14530#issuecomment-752864472


   Besides, we can remove the optional `schema-registry.subject` option for 
some tests now, e.g. the `SQLClientSchemaRegistryITCase`.







[GitHub] [flink] wuchong edited a comment on pull request #14530: [FLINK-20348][kafka] Make "schema-registry.subject" optional for Kafka sink with avro-confluent format

2020-12-30 Thread GitBox


wuchong edited a comment on pull request #14530:
URL: https://github.com/apache/flink/pull/14530#issuecomment-752863258


   We should also take the key format into account and support this feature for 
upsert-kafka too. We may also be able to make 
`debezium-avro-confluent.schema-registry.subject` optional. 







[GitHub] [flink] flinkbot edited a comment on pull request #14530: [FLINK-20348][kafka] Make "schema-registry.subject" optional for Kafka sink with avro-confluent format

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14530:
URL: https://github.com/apache/flink/pull/14530#issuecomment-752828495


   
   ## CI report:
   
   * 43ac067ac627cea436169ee50d8cf4769e7ccdce Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11524)
 
   * 2f237865183b4f66838366b4d7114515a612ed81 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11529)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] wuchong commented on pull request #14530: [FLINK-20348][kafka] Make "schema-registry.subject" optional for Kafka sink with avro-confluent format

2020-12-30 Thread GitBox


wuchong commented on pull request #14530:
URL: https://github.com/apache/flink/pull/14530#issuecomment-752863258


   We should also take the key format into account and support this feature for 
upsert-kafka too. 







[GitHub] [flink] wuchong removed a comment on pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" should be enabled by default for unbounded mode, and disab

2020-12-30 Thread GitBox


wuchong removed a comment on pull request #14531:
URL: https://github.com/apache/flink/pull/14531#issuecomment-752863052


   @PatrickRen  will help to review this PR. 







[GitHub] [flink] wuchong commented on pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" should be enabled by default for unbounded mode, and disable for b

2020-12-30 Thread GitBox


wuchong commented on pull request #14531:
URL: https://github.com/apache/flink/pull/14531#issuecomment-752863080


   cc @becketqin 







[GitHub] [flink] wuchong commented on pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" should be enabled by default for unbounded mode, and disable for b

2020-12-30 Thread GitBox


wuchong commented on pull request #14531:
URL: https://github.com/apache/flink/pull/14531#issuecomment-752863052


   @PatrickRen  will help to review this PR. 







[GitHub] [flink] flinkbot edited a comment on pull request #14530: [FLINK-20348][Table SQL / Ecosystem]Make "schema-registry.subject" op…

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14530:
URL: https://github.com/apache/flink/pull/14530#issuecomment-752828495


   
   ## CI report:
   
   * 43ac067ac627cea436169ee50d8cf4769e7ccdce Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11524)
 
   * 2f237865183b4f66838366b4d7114515a612ed81 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14529: FLINK-20359 Added Owner Reference to Job Manager in native kubernetes

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14529:
URL: https://github.com/apache/flink/pull/14529#issuecomment-752798222


   
   ## CI report:
   
   * 5789d2337a68e2ffd53419a0e317cf3f272348ec Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11519)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14508:
URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163


   
   ## CI report:
   
   * f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN
   * f4d02e921d2641fc5692617a4dd50ba2fda1128c UNKNOWN
   * fd8cbf90a807292b0db7b85bda26f1e717b87767 UNKNOWN
   * 5a2d0c87af6448ab36e312a79124f053f3880c12 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11526)
 
   * 3c03189754755222ce29f4d17485c91532da4a8b UNKNOWN
   * 5663475ec56efe4b84e6ae2e6cabd6d58db34bf2 UNKNOWN
   * f5f641fcde2b6d33b89c640994fcb69cafaa00a8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11528)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14508:
URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163


   
   ## CI report:
   
   * f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN
   * f4d02e921d2641fc5692617a4dd50ba2fda1128c UNKNOWN
   * fd8cbf90a807292b0db7b85bda26f1e717b87767 UNKNOWN
   * 5a2d0c87af6448ab36e312a79124f053f3880c12 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11526)
 
   * 3c03189754755222ce29f4d17485c91532da4a8b UNKNOWN
   * 5663475ec56efe4b84e6ae2e6cabd6d58db34bf2 UNKNOWN
   * f5f641fcde2b6d33b89c640994fcb69cafaa00a8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14508:
URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163


   
   ## CI report:
   
   * f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN
   * f4d02e921d2641fc5692617a4dd50ba2fda1128c UNKNOWN
   * fd8cbf90a807292b0db7b85bda26f1e717b87767 UNKNOWN
   * 5a2d0c87af6448ab36e312a79124f053f3880c12 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11526)
 
   * 3c03189754755222ce29f4d17485c91532da4a8b UNKNOWN
   * 5663475ec56efe4b84e6ae2e6cabd6d58db34bf2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] xintongsong commented on a change in pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-30 Thread GitBox


xintongsong commented on a change in pull request #8952:
URL: https://github.com/apache/flink/pull/8952#discussion_r550381861



##
File path: 
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/ThresholdMeter.java
##
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.metrics;
+
+import java.time.Duration;
+import java.util.ArrayDeque;
+import java.util.Queue;
+import java.util.function.Supplier;
+
+/** A timestamp queue based threshold meter. */
+public class ThresholdMeter implements Meter {
+private static final Double MILLISECONDS_PER_SECOND = 1000.0;
+private final Supplier<Long> currentTimeMillisSupplier;
+private final double maxEventsPerInterval;
+private final Duration interval;
+private final Queue<Long> failureTimestamps;
+private long failureCounter = 0;
+
+public ThresholdMeter(double maximumFailureRate, Duration interval) {
+this(maximumFailureRate, interval, System::currentTimeMillis);
+}
+
+public ThresholdMeter(
+double maxEventsPerInterval, Duration interval, Supplier<Long> customSupplier) {
+this.maxEventsPerInterval = maxEventsPerInterval;
+this.interval = interval;
+this.failureTimestamps = new ArrayDeque<>();
+this.currentTimeMillisSupplier = customSupplier;
+}
+
+@Override
+public void markEvent() {
+failureTimestamps.add(currentTimeMillisSupplier.get());
+failureCounter++;
+}
+
+@Override
+public void markEvent(long n) {
+long timestamp = currentTimeMillisSupplier.get();
+for (int i = 0; i < n; i++) {
+failureTimestamps.add(timestamp);
+}
+failureCounter = failureCounter + n;
+}
+
+@Override
+public double getRate() {
+return getEventCountsRecentInterval() / (interval.toMillis() / 
MILLISECONDS_PER_SECOND);
+}
+
+@Override
+public long getCount() {
+return failureCounter;
+}
+
+public void checkAgainstThreshold() throws ThresholdExceedException {
+if (getEventCountsRecentInterval() >= maxEventsPerInterval) {
+throw new ThresholdExceedException(
+String.format(
+"Maximum number of events %f is detected",
+getEventCountsRecentInterval()));
+}
+}
+
+private double getEventCountsRecentInterval() {

Review comment:
   Return type should not be `double`
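
For example (sketch; the pruning logic is inferred from the class's design, not quoted in this thread):

    private int getEventCountsRecentInterval() {
        long windowStart = currentTimeMillisSupplier.get() - interval.toMillis();
        // Drop timestamps that have fallen out of the sliding interval.
        while (!failureTimestamps.isEmpty() && failureTimestamps.peek() < windowStart) {
            failureTimestamps.poll();
        }
        return failureTimestamps.size();
    }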

##
File path: 
flink-metrics/flink-metrics-core/src/main/java/org/apache/flink/metrics/ThresholdMeter.java
##
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.metrics;
+
+import java.time.Duration;
+import java.util.ArrayDeque;
+import java.util.Queue;
+import java.util.function.Supplier;
+
+/** A timestamp queue based threshold meter. */
+public class ThresholdMeter implements Meter {
+private static final Double MILLISECONDS_PER_SECOND = 1000.0;

Review comment:
   The primitive type `double` is preferred.
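
   That is, as a sketch: the constant is only used in arithmetic, so boxing it
   as `Double` buys nothing and costs an unboxing on every `getRate()` call.

   private static final double MILLISECONDS_PER_SECOND = 1000.0;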

##
File path: 
flink-metrics/flink-metrics-core/src/test/java/org/apache/flink/metrics/ThresholdMeterTest.java
##
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding 

[GitHub] [flink] flinkbot edited a comment on pull request #14532: Draft for Web-UI

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14532:
URL: https://github.com/apache/flink/pull/14532#issuecomment-752840091


   
   ## CI report:
   
   * 91f08cb08fae546730ce88f9e0127e082a1e1a84 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11527)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14508:
URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163


   
   ## CI report:
   
   * f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN
   * f4d02e921d2641fc5692617a4dd50ba2fda1128c UNKNOWN
   * fd8cbf90a807292b0db7b85bda26f1e717b87767 UNKNOWN
   * 5a2d0c87af6448ab36e312a79124f053f3880c12 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11526)
 
   * 3c03189754755222ce29f4d17485c91532da4a8b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #14532: Draft for Web-UI

2020-12-30 Thread GitBox


flinkbot commented on pull request #14532:
URL: https://github.com/apache/flink/pull/14532#issuecomment-752840091


   
   ## CI report:
   
   * 91f08cb08fae546730ce88f9e0127e082a1e1a84 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14508:
URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163


   
   ## CI report:
   
   * f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN
   * f4d02e921d2641fc5692617a4dd50ba2fda1128c UNKNOWN
   * fd8cbf90a807292b0db7b85bda26f1e717b87767 UNKNOWN
   * 28364b82097029626bf4ad3d18ca14eb759d64db Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11521)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11509)
 
   * 5a2d0c87af6448ab36e312a79124f053f3880c12 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11526)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] zck573693104 commented on a change in pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" shoule be enabled by default for unbounded mode,

2020-12-30 Thread GitBox


zck573693104 commented on a change in pull request #14531:
URL: https://github.com/apache/flink/pull/14531#discussion_r550393596



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java
##
@@ -458,6 +458,29 @@ private boolean maybeOverride(String key, String value, 
boolean override) {
 return overridden;
 }
 
+private boolean maybeOverridePartitionDiscovery(String key, String value, 
boolean override) {
+boolean overridden = false;
+String userValue = props.getProperty(key);
+if (override) {
+LOG.warn(
+String.format(
+"Property %s is provided but will be overridden 
from %s to %s",
+key, userValue, value));
+props.setProperty(key, value);
+overridden = true;
+} else {
+if (userValue != null) {
+

Review comment:
   `maybeOverride` and your new method differ only in the default value.
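
   A sketch of the consolidation this observation suggests: fold the default
   into the existing `maybeOverride` via an extra, nullable parameter instead
   of duplicating the method (the `defaultValue` parameter is illustrative):

   private boolean maybeOverride(String key, String value, boolean override, String defaultValue) {
       boolean overridden = false;
       String userValue = props.getProperty(key);
       if (override) {
           LOG.warn(
                   String.format(
                           "Property %s is provided but will be overridden from %s to %s",
                           key, userValue, value));
           props.setProperty(key, value);
           overridden = true;
       } else if (userValue == null && defaultValue != null) {
           // Only fall back to the default when the user has not set the key.
           props.setProperty(key, defaultValue);
       }
       return overridden;
   }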









[GitHub] [flink] zck573693104 commented on a change in pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" shoule be enabled by default for unbounded mode,

2020-12-30 Thread GitBox


zck573693104 commented on a change in pull request #14531:
URL: https://github.com/apache/flink/pull/14531#discussion_r550393172



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java
##
@@ -458,6 +458,29 @@ private boolean maybeOverride(String key, String value, 
boolean override) {
 return overridden;
 }
 
+private boolean maybeOverridePartitionDiscovery(String key, String value, 
boolean override) {
+boolean overridden = false;
+String userValue = props.getProperty(key);
+if (override) {
+LOG.warn(
+String.format(
+"Property %s is provided but will be overridden 
from %s to %s",
+key, userValue, value));
+props.setProperty(key, value);
+overridden = true;
+} else {
+if (userValue != null) {
+

Review comment:
   Imitate the other method, `maybeOverride`, and set a default value.









[GitHub] [flink] flinkbot commented on pull request #14532: Draft for Web-UI

2020-12-30 Thread GitBox


flinkbot commented on pull request #14532:
URL: https://github.com/apache/flink/pull/14532#issuecomment-752837710


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 91f08cb08fae546730ce88f9e0127e082a1e1a84 (Thu Dec 31 
04:16:24 UTC 2020)
   
   **Warnings:**
* **1 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] curcur opened a new pull request #14532: Draft for Web-UI

2020-12-30 Thread GitBox


curcur opened a new pull request #14532:
URL: https://github.com/apache/flink/pull/14532


   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[jira] [Commented] (FLINK-20824) BlockingShuffleITCase. testSortMergeBlockingShuffle test failed with "Inconsistent availability: expected true"

2020-12-30 Thread Yingjie Cao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256822#comment-17256822
 ] 

Yingjie Cao commented on FLINK-20824:
-

[~hxbks2ks] Thanks for reporting this issue. I think it is the same issue as 
FLINK-20547.

> BlockingShuffleITCase. testSortMergeBlockingShuffle test failed with 
> "Inconsistent availability: expected true"
> ---
>
> Key: FLINK-20824
> URL: https://issues.apache.org/jira/browse/FLINK-20824
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.13.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11516=logs=219e462f-e75e-506c-3671-5017d866ccf6=4c5dc768-5c82-5ab0-660d-086cb90b76a0]
> {code:java}
> 2020-12-30T22:45:42.5933715Z [ERROR] 
> testSortMergeBlockingShuffle(org.apache.flink.test.runtime.BlockingShuffleITCase)
>   Time elapsed: 3.153 s  <<< FAILURE!
> 2020-12-30T22:45:42.5934985Z java.lang.AssertionError: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> 2020-12-30T22:45:42.5935943Z  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:60)
> 2020-12-30T22:45:42.5936979Z  at 
> org.apache.flink.test.runtime.BlockingShuffleITCase.testSortMergeBlockingShuffle(BlockingShuffleITCase.java:70)
> 2020-12-30T22:45:42.5937885Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-12-30T22:45:42.5938572Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-12-30T22:45:42.5939448Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-12-30T22:45:42.5940142Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-12-30T22:45:42.5940944Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-12-30T22:45:42.5942210Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-12-30T22:45:42.5943167Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-12-30T22:45:42.5943971Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-12-30T22:45:42.5944703Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-12-30T22:45:42.5945422Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-12-30T22:45:42.5946138Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-12-30T22:45:42.5946615Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-12-30T22:45:42.5947160Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-12-30T22:45:42.5947764Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-12-30T22:45:42.5948475Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-12-30T22:45:42.5948925Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-12-30T22:45:42.5949328Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-12-30T22:45:42.5949787Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-12-30T22:45:42.5950558Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-12-30T22:45:42.5951084Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-12-30T22:45:42.5951608Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-12-30T22:45:42.5952153Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-12-30T22:45:42.5952809Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 2020-12-30T22:45:42.5953323Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 2020-12-30T22:45:42.5953806Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> 2020-12-30T22:45:42.5954306Z Caused by: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> 2020-12-30T22:45:42.5954931Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:118)
> 2020-12-30T22:45:42.5955641Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:80)
> 

[GitHub] [flink] WeiZhong94 commented on a change in pull request #14524: [FLINK-20769][python] Support minibatch to optimize Python UDAF

2020-12-30 Thread GitBox


WeiZhong94 commented on a change in pull request #14524:
URL: https://github.com/apache/flink/pull/14524#discussion_r550390434



##
File path: flink-python/pyflink/fn_execution/aggregate_fast.pyx
##
@@ -564,50 +583,68 @@ cdef class GroupTableAggFunction(GroupAggFunctionBase):
 aggs_handle, key_selector, state_backend, state_value_coder, 
generate_update_before,
 state_cleaning_enabled, index_of_count_star)
 
-cpdef list process_element(self, InternalRow input_data):
+cpdef list finish_bundle(self):
 cdef bint first_row
 cdef list key, accumulators, input_value, results
 cdef SimpleTableAggsHandleFunction aggs_handle
 cdef InternalRowKind input_row_kind
+cdef tuple current_key
+cdef InternalRow input_data
+cdef size_t start_index, i, input_rows_num
+cdef object state_backend, accumulator_state
 results = []
-input_value = input_data.values
-input_row_kind = input_data.row_kind
+# input_value = input_data.values

Review comment:
   Unnecessary comment.









[GitHub] [flink] flinkbot edited a comment on pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" shoule be enabled by default for unbounded mode, and disab

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14531:
URL: https://github.com/apache/flink/pull/14531#issuecomment-752828536


   
   ## CI report:
   
   * 9dfbf0f8575de20c3c3d672ce1bf64f3c5e2ed93 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11525)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14530: [FLINK-20348][Table SQL / Ecosystem]Make "schema-registry.subject" op…

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14530:
URL: https://github.com/apache/flink/pull/14530#issuecomment-752828495


   
   ## CI report:
   
   * 43ac067ac627cea436169ee50d8cf4769e7ccdce Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11524)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14508:
URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163


   
   ## CI report:
   
   * f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN
   * f4d02e921d2641fc5692617a4dd50ba2fda1128c UNKNOWN
   * fd8cbf90a807292b0db7b85bda26f1e717b87767 UNKNOWN
   * 28364b82097029626bf4ad3d18ca14eb759d64db Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11521)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11509)
 
   * 5a2d0c87af6448ab36e312a79124f053f3880c12 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Comment Edited] (FLINK-19067) FileNotFoundException when run flink examples

2020-12-30 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256814#comment-17256814
 ] 

JieFang.He edited comment on FLINK-19067 at 12/31/20, 3:39 AM:
---

I think the reason is that the job files are uploaded to the dispatcher node, 
but the task gets the job files from the resource manager node. So in HA mode, 
it needs to be ensured that they are on one node.

When the job is submitted to the REST endpoint, the REST endpoint then submits 
it to the dispatcher, so the job files are put on the dispatcher node.

Here is the log
{code:java}
2020-12-31 00:42:29,255 DEBUG [BLOB connection for /127.0.0.1:43576] 
[FileSystemBlobStore]: Copying from 
/tmp/blobStore-5fe6b952-fece-4590-b637-04f19d8121dd/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109
 to 
/data2/zdh/flink/storageDir/default/blob/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109.
{code}
and here is the code in JobSubmitHandler
{code:java}
CompletableFuture<Acknowledge> jobSubmissionFuture = 
finalizedJobGraphFuture.thenCompose(jobGraph -> gateway.submitJob(jobGraph, 
timeout));
{code}
the gateway is defined in DefaultDispatcherResourceManagerComponentFactory
{code:java}
final LeaderGatewayRetriever<DispatcherGateway> dispatcherGatewayRetriever = 
new RpcGatewayRetriever<>(
   rpcService,
   DispatcherGateway.class,
   DispatcherId::fromUuid,
   10,
   Time.milliseconds(50L));
{code}
{code:java}
webMonitorEndpoint = restEndpointFactory.createRestEndpoint(
   configuration,
   dispatcherGatewayRetriever,
   resourceManagerGatewayRetriever,
   blobServer,
   executor,
   metricFetcher,
   highAvailabilityServices.getClusterRestEndpointLeaderElectionService(),
   fatalErrorHandler);
{code}
 

But the task gets the job files from the resource manager

The code of BlobClient.downloadFromBlobServer
{code:java}
static void downloadFromBlobServer(
  @Nullable JobID jobId,
  BlobKey blobKey,
  File localJarFile,
  InetSocketAddress serverAddress,
  Configuration blobClientConfig,
  int numFetchRetries) throws IOException {
..
  catch (Throwable t) {
 String message = "Failed to fetch BLOB " + jobId + "/" + blobKey + " 
from " + serverAddress +
" and store it under " + localJarFile.getAbsolutePath();
{code}
The serverAddress is defined in TaskExecutor
{code:java}
final InetSocketAddress blobServerAddress = new InetSocketAddress(
   clusterInformation.getBlobServerHostname(),
   clusterInformation.getBlobServerPort());

blobCacheService.setBlobServerAddress(blobServerAddress);
{code}
The clusterInformation is defined in ResourceManagerRegistrationListener
{code:java}
if (resourceManagerConnection == connection) {
   try {
  establishResourceManagerConnection(
 resourceManagerGateway,
 resourceManagerId,
 taskExecutorRegistrationId,
 clusterInformation);
{code}
Where ResourceManagerRegistrationListener is used
{code:java}
resourceManagerConnection =
   new TaskExecutorToResourceManagerConnection(
  log,
  getRpcService(),
  taskManagerConfiguration.getRetryingRegistrationConfiguration(),
  resourceManagerAddress.getAddress(),
  resourceManagerAddress.getResourceManagerId(),
  getMainThreadExecutor(),
  new ResourceManagerRegistrationListener(),
  taskExecutorRegistration);
{code}
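
A likely mitigation, sketched under the assumption that the HA storage directory 
here is a node-local path: point {{high-availability.storageDir}} at a filesystem 
that every JobManager node can read (e.g. HDFS), so BLOBs written through the 
dispatcher's BlobServer are visible on whichever node later serves them.

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.HighAvailabilityOptions;

// Illustrative only: a shared storageDir instead of a node-local path
// such as /data2/zdh/flink/storageDir.
Configuration conf = new Configuration();
conf.setString(HighAvailabilityOptions.HA_STORAGE_PATH, "hdfs:///flink/ha");
{code}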
 

 


was (Author: hejiefang):
I think the reason is that the jobFiles are upload to the dispatcher node,but 
the task get jobFiles from resource_manager node

When the job submit to the rest,the rest then submit it to dispatcher,the 
jobFiles are the put to dispatcher

Here is the log
{code:java}
2020-12-31 00:42:29,255 DEBUG [BLOB connection for /127.0.0.1:43576] 
[FileSystemBlobStore]: Copying from 
/tmp/blobStore-5fe6b952-fece-4590-b637-04f19d8121dd/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109
 to 
/data2/zdh/flink/storageDir/default/blob/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109.
{code}
and here is the code in JobSubmitHandler
{code:java}
CompletableFuture jobSubmissionFuture = 
finalizedJobGraphFuture.thenCompose(jobGraph -> gateway.submitJob(jobGraph, 
timeout));
{code}
the gateway is define in DefaultDispatcherResourceManagerComponentFactory
{code:java}
final LeaderGatewayRetriever dispatcherGatewayRetriever = 
new RpcGatewayRetriever<>(
   rpcService,
   DispatcherGateway.class,
   DispatcherId::fromUuid,
   10,
   Time.milliseconds(50L));
{code}
{code:java}
webMonitorEndpoint = restEndpointFactory.createRestEndpoint(
   configuration,
   dispatcherGatewayRetriever,
   resourceManagerGatewayRetriever,
   blobServer,
   executor,
   metricFetcher,
   highAvailabilityServices.getClusterRestEndpointLeaderElectionService(),
   fatalErrorHandler);
{code}
 


[jira] [Comment Edited] (FLINK-19067) FileNotFoundException when run flink examples

2020-12-30 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256814#comment-17256814
 ] 

JieFang.He edited comment on FLINK-19067 at 12/31/20, 3:36 AM:
---

I think the reason is that the job files are uploaded to the dispatcher node, 
but the task gets the job files from the resource manager node.

When the job is submitted to the REST endpoint, the REST endpoint then submits 
it to the dispatcher, so the job files are put on the dispatcher node.

Here is the log
{code:java}
2020-12-31 00:42:29,255 DEBUG [BLOB connection for /127.0.0.1:43576] 
[FileSystemBlobStore]: Copying from 
/tmp/blobStore-5fe6b952-fece-4590-b637-04f19d8121dd/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109
 to 
/data2/zdh/flink/storageDir/default/blob/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109.
{code}
and here is the code in JobSubmitHandler
{code:java}
CompletableFuture<Acknowledge> jobSubmissionFuture = 
finalizedJobGraphFuture.thenCompose(jobGraph -> gateway.submitJob(jobGraph, 
timeout));
{code}
the gateway is defined in DefaultDispatcherResourceManagerComponentFactory
{code:java}
final LeaderGatewayRetriever<DispatcherGateway> dispatcherGatewayRetriever = 
new RpcGatewayRetriever<>(
   rpcService,
   DispatcherGateway.class,
   DispatcherId::fromUuid,
   10,
   Time.milliseconds(50L));
{code}
{code:java}
webMonitorEndpoint = restEndpointFactory.createRestEndpoint(
   configuration,
   dispatcherGatewayRetriever,
   resourceManagerGatewayRetriever,
   blobServer,
   executor,
   metricFetcher,
   highAvailabilityServices.getClusterRestEndpointLeaderElectionService(),
   fatalErrorHandler);
{code}
 

But the task gets the job files from the resource manager

The code of BlobClient.downloadFromBlobServer
{code:java}
static void downloadFromBlobServer(
  @Nullable JobID jobId,
  BlobKey blobKey,
  File localJarFile,
  InetSocketAddress serverAddress,
  Configuration blobClientConfig,
  int numFetchRetries) throws IOException {
..
  catch (Throwable t) {
 String message = "Failed to fetch BLOB " + jobId + "/" + blobKey + " 
from " + serverAddress +
" and store it under " + localJarFile.getAbsolutePath();
{code}
The serverAddress is defined in TaskExecutor
{code:java}
final InetSocketAddress blobServerAddress = new InetSocketAddress(
   clusterInformation.getBlobServerHostname(),
   clusterInformation.getBlobServerPort());

blobCacheService.setBlobServerAddress(blobServerAddress);
{code}
The clusterInformation is defined in ResourceManagerRegistrationListener
{code:java}
if (resourceManagerConnection == connection) {
   try {
  establishResourceManagerConnection(
 resourceManagerGateway,
 resourceManagerId,
 taskExecutorRegistrationId,
 clusterInformation);
{code}
Where ResourceManagerRegistrationListener is used
{code:java}
resourceManagerConnection =
   new TaskExecutorToResourceManagerConnection(
  log,
  getRpcService(),
  taskManagerConfiguration.getRetryingRegistrationConfiguration(),
  resourceManagerAddress.getAddress(),
  resourceManagerAddress.getResourceManagerId(),
  getMainThreadExecutor(),
  new ResourceManagerRegistrationListener(),
  taskExecutorRegistration);
{code}
 

 


was (Author: hejiefang):
I think the reason is that the jobFiles are upload to the dispatcher node,but 
the task get jobFiles from resource_manager node

When the job submit to the rest,the rest then submit it to dispatcher,the 
jobFiles are the put to dispatcher

Here is the log

 
{code:java}
2020-12-31 00:42:29,255 DEBUG [BLOB connection for /127.0.0.1:43576] 
[FileSystemBlobStore]: Copying from 
/tmp/blobStore-5fe6b952-fece-4590-b637-04f19d8121dd/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109
 to 
/data2/zdh/flink/storageDir/default/blob/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109.
{code}
and here is the code in JobSubmitHandler

 

 
{code:java}
CompletableFuture jobSubmissionFuture = 
finalizedJobGraphFuture.thenCompose(jobGraph -> gateway.submitJob(jobGraph, 
timeout));
{code}
the gateway is define in DefaultDispatcherResourceManagerComponentFactory

 
{code:java}
final LeaderGatewayRetriever dispatcherGatewayRetriever = 
new RpcGatewayRetriever<>(
   rpcService,
   DispatcherGateway.class,
   DispatcherId::fromUuid,
   10,
   Time.milliseconds(50L));
{code}
 
{code:java}
webMonitorEndpoint = restEndpointFactory.createRestEndpoint(
   configuration,
   dispatcherGatewayRetriever,
   resourceManagerGatewayRetriever,
   blobServer,
   executor,
   metricFetcher,
   highAvailabilityServices.getClusterRestEndpointLeaderElectionService(),
   fatalErrorHandler);
{code}
 

 

But the Task get the jobFiles from 

[jira] [Commented] (FLINK-19067) FileNotFoundException when run flink examples

2020-12-30 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256814#comment-17256814
 ] 

JieFang.He commented on FLINK-19067:


I think the reason is that the job files are uploaded to the dispatcher node, 
but the task gets the job files from the resource manager node.

When the job is submitted to the REST endpoint, the REST endpoint then submits 
it to the dispatcher, so the job files are put on the dispatcher node.

Here is the log

 
{code:java}
2020-12-31 00:42:29,255 DEBUG [BLOB connection for /127.0.0.1:43576] 
[FileSystemBlobStore]: Copying from 
/tmp/blobStore-5fe6b952-fece-4590-b637-04f19d8121dd/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109
 to 
/data2/zdh/flink/storageDir/default/blob/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109.
{code}
and here is the code in JobSubmitHandler

 

 
{code:java}
CompletableFuture<Acknowledge> jobSubmissionFuture = 
finalizedJobGraphFuture.thenCompose(jobGraph -> gateway.submitJob(jobGraph, 
timeout));
{code}
the gateway is defined in DefaultDispatcherResourceManagerComponentFactory

 
{code:java}
final LeaderGatewayRetriever<DispatcherGateway> dispatcherGatewayRetriever = 
new RpcGatewayRetriever<>(
   rpcService,
   DispatcherGateway.class,
   DispatcherId::fromUuid,
   10,
   Time.milliseconds(50L));
{code}
 
{code:java}
webMonitorEndpoint = restEndpointFactory.createRestEndpoint(
   configuration,
   dispatcherGatewayRetriever,
   resourceManagerGatewayRetriever,
   blobServer,
   executor,
   metricFetcher,
   highAvailabilityServices.getClusterRestEndpointLeaderElectionService(),
   fatalErrorHandler);
{code}
 

 

But the task gets the job files from the resource manager

The code of BlobClient.downloadFromBlobServer

 
{code:java}
static void downloadFromBlobServer(
  @Nullable JobID jobId,
  BlobKey blobKey,
  File localJarFile,
  InetSocketAddress serverAddress,
  Configuration blobClientConfig,
  int numFetchRetries) throws IOException {
..
  catch (Throwable t) {
 String message = "Failed to fetch BLOB " + jobId + "/" + blobKey + " 
from " + serverAddress +
" and store it under " + localJarFile.getAbsolutePath();
{code}
The serverAddress is defined in TaskExecutor

 

 
{code:java}
final InetSocketAddress blobServerAddress = new InetSocketAddress(
   clusterInformation.getBlobServerHostname(),
   clusterInformation.getBlobServerPort());

blobCacheService.setBlobServerAddress(blobServerAddress);
{code}
The clusterInformation is defined in ResourceManagerRegistrationListener

 
{code:java}
if (resourceManagerConnection == connection) {
   try {
  establishResourceManagerConnection(
 resourceManagerGateway,
 resourceManagerId,
 taskExecutorRegistrationId,
 clusterInformation);
{code}
Where ResourceManagerRegistrationListener is used

 
{code:java}
resourceManagerConnection =
   new TaskExecutorToResourceManagerConnection(
  log,
  getRpcService(),
  taskManagerConfiguration.getRetryingRegistrationConfiguration(),
  resourceManagerAddress.getAddress(),
  resourceManagerAddress.getResourceManagerId(),
  getMainThreadExecutor(),
  new ResourceManagerRegistrationListener(),
  taskExecutorRegistration);
{code}
 

 

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> 1. When running examples/batch/WordCount.jar, it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> 

[GitHub] [flink] xiaoHoly commented on a change in pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" shoule be enabled by default for unbounded mode, and

2020-12-30 Thread GitBox


xiaoHoly commented on a change in pull request #14531:
URL: https://github.com/apache/flink/pull/14531#discussion_r550387370



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java
##
@@ -458,6 +458,29 @@ private boolean maybeOverride(String key, String value, 
boolean override) {
 return overridden;
 }
 
+private boolean maybeOverridePartitionDiscovery(String key, String value, 
boolean override) {
+boolean overridden = false;
+String userValue = props.getProperty(key);
+if (override) {
+LOG.warn(
+String.format(
+"Property %s is provided but will be overridden 
from %s to %s",
+key, userValue, value));
+props.setProperty(key, value);
+overridden = true;
+} else {
+if (userValue != null) {
+

Review comment:
   @zhuxiaoshang, thanks for the review. Yes, I should use a more elegant way.









[GitHub] [flink] flinkbot edited a comment on pull request #14530: [FLINK-20348][Table SQL / Ecosystem]Make "schema-registry.subject" op…

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14530:
URL: https://github.com/apache/flink/pull/14530#issuecomment-752828495


   
   ## CI report:
   
   * 43ac067ac627cea436169ee50d8cf4769e7ccdce Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11524)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" shoule be enabled by default for unbounded mode, and disab

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14531:
URL: https://github.com/apache/flink/pull/14531#issuecomment-752828536


   
   ## CI report:
   
   * 9dfbf0f8575de20c3c3d672ce1bf64f3c5e2ed93 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11525)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] xiaoHoly commented on a change in pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" shoule be enabled by default for unbounded mode, and

2020-12-30 Thread GitBox


xiaoHoly commented on a change in pull request #14531:
URL: https://github.com/apache/flink/pull/14531#discussion_r550387230



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java
##
@@ -458,6 +458,29 @@ private boolean maybeOverride(String key, String value, 
boolean override) {
 return overridden;
 }
 
+private boolean maybeOverridePartitionDiscovery(String key, String value, 
boolean override) {

Review comment:
   I think `override` is readable; "isBounded" also indicates that the 
property needs to be overridden.









[GitHub] [flink] xiaoHoly commented on a change in pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" shoule be enabled by default for unbounded mode, and

2020-12-30 Thread GitBox


xiaoHoly commented on a change in pull request #14531:
URL: https://github.com/apache/flink/pull/14531#discussion_r550387001



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java
##
@@ -458,6 +458,29 @@ private boolean maybeOverride(String key, String value, 
boolean override) {
 return overridden;
 }
 
+private boolean maybeOverridePartitionDiscovery(String key, String value, 
boolean override) {
+boolean overridden = false;
+String userValue = props.getProperty(key);
+if (override) {
+LOG.warn(
+String.format(
+"Property %s is provided but will be overridden 
from %s to %s",

Review comment:
   Thanks, you are right; the log here is redundant.









[GitHub] [flink] zhuxiaoshang commented on a change in pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" shoule be enabled by default for unbounded mode,

2020-12-30 Thread GitBox


zhuxiaoshang commented on a change in pull request #14531:
URL: https://github.com/apache/flink/pull/14531#discussion_r550384551



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java
##
@@ -458,6 +458,29 @@ private boolean maybeOverride(String key, String value, 
boolean override) {
 return overridden;
 }
 
+private boolean maybeOverridePartitionDiscovery(String key, String value, 
boolean override) {

Review comment:
   `override` -> `isBounded` might be more readable.

##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java
##
@@ -458,6 +458,29 @@ private boolean maybeOverride(String key, String value, 
boolean override) {
 return overridden;
 }
 
+private boolean maybeOverridePartitionDiscovery(String key, String value, 
boolean override) {
+boolean overridden = false;
+String userValue = props.getProperty(key);
+if (override) {
+LOG.warn(
+String.format(
+"Property %s is provided but will be overridden 
from %s to %s",

Review comment:
   L431 has already printed the log; is it still needed here?

##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/KafkaSourceBuilder.java
##
@@ -458,6 +458,29 @@ private boolean maybeOverride(String key, String value, 
boolean override) {
 return overridden;
 }
 
+private boolean maybeOverridePartitionDiscovery(String key, String value, 
boolean override) {
+boolean overridden = false;
+String userValue = props.getProperty(key);
+if (override) {
+LOG.warn(
+String.format(
+"Property %s is provided but will be overridden 
from %s to %s",
+key, userValue, value));
+props.setProperty(key, value);
+overridden = true;
+} else {
+if (userValue != null) {
+

Review comment:
   An `if` statement that does nothing looks weird.









[GitHub] [flink] flinkbot commented on pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" shoule be enabled by default for unbounded mode, and disable for

2020-12-30 Thread GitBox


flinkbot commented on pull request #14531:
URL: https://github.com/apache/flink/pull/14531#issuecomment-752828536


   
   ## CI report:
   
   * 9dfbf0f8575de20c3c3d672ce1bf64f3c5e2ed93 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #14530: [FLINK-20348][Table SQL / Ecosystem]Make "schema-registry.subject" op…

2020-12-30 Thread GitBox


flinkbot commented on pull request #14530:
URL: https://github.com/apache/flink/pull/14530#issuecomment-752828495


   
   ## CI report:
   
   * 43ac067ac627cea436169ee50d8cf4769e7ccdce UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-20329) Elasticsearch7DynamicSinkITCase hangs

2020-12-30 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256811#comment-17256811
 ] 

Huang Xingbo commented on FLINK-20329:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11518=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20

> Elasticsearch7DynamicSinkITCase hangs
> -
>
> Key: FLINK-20329
> URL: https://issues.apache.org/jira/browse/FLINK-20329
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10052=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-11-24T16:04:05.9260517Z [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
> 2020-11-24T16:19:25.5481231Z 
> ==
> 2020-11-24T16:19:25.5483549Z Process produced no output for 900 seconds.
> 2020-11-24T16:19:25.5484064Z 
> ==
> 2020-11-24T16:19:25.5484498Z 
> ==
> 2020-11-24T16:19:25.5484882Z The following Java processes are running (JPS)
> 2020-11-24T16:19:25.5485475Z 
> ==
> 2020-11-24T16:19:25.5694497Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:25.7263048Z 16192 surefirebooter5057948964630155904.jar
> 2020-11-24T16:19:25.7263515Z 18566 Jps
> 2020-11-24T16:19:25.7263709Z 959 Launcher
> 2020-11-24T16:19:25.7411148Z 
> ==
> 2020-11-24T16:19:25.7427013Z Printing stack trace of Java process 16192
> 2020-11-24T16:19:25.7427369Z 
> ==
> 2020-11-24T16:19:25.7484365Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:26.0848776Z 2020-11-24 16:19:26
> 2020-11-24T16:19:26.0849578Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.275-b01 mixed mode):
> 2020-11-24T16:19:26.0849831Z 
> 2020-11-24T16:19:26.0850185Z "Attach Listener" #32 daemon prio=9 os_prio=0 
> tid=0x7fc148001000 nid=0x48e7 waiting on condition [0x]
> 2020-11-24T16:19:26.0850595Zjava.lang.Thread.State: RUNNABLE
> 2020-11-24T16:19:26.0850814Z 
> 2020-11-24T16:19:26.0851375Z "testcontainers-ryuk" #31 daemon prio=5 
> os_prio=0 tid=0x7fc251232000 nid=0x3fb0 in Object.wait() 
> [0x7fc1012c4000]
> 2020-11-24T16:19:26.0854688Zjava.lang.Thread.State: TIMED_WAITING (on 
> object monitor)
> 2020-11-24T16:19:26.0855379Z  at java.lang.Object.wait(Native Method)
> 2020-11-24T16:19:26.0855844Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:142)
> 2020-11-24T16:19:26.0857272Z  - locked <0x8e2bd2d0> (a 
> java.util.ArrayList)
> 2020-11-24T16:19:26.0857977Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$93/1981729428.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0858471Z  at 
> org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
> 2020-11-24T16:19:26.0858961Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:133)
> 2020-11-24T16:19:26.0859422Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$92/40191541.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0859788Z  at java.lang.Thread.run(Thread.java:748)
> 2020-11-24T16:19:26.0860030Z 
> 2020-11-24T16:19:26.0860371Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7fc0f803b800 nid=0x3f92 waiting on condition [0x7fc10296e000]
> 2020-11-24T16:19:26.0860913Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-11-24T16:19:26.0861387Z  at sun.misc.Unsafe.park(Native Method)
> 2020-11-24T16:19:26.0862495Z  - parking to wait for  <0x8814bf30> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-11-24T16:19:26.0863253Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> 2020-11-24T16:19:26.0863760Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
> 2020-11-24T16:19:26.0864274Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
> 2020-11-24T16:19:26.0864762Z  at 
> java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
> 2020-11-24T16:19:26.0865299Z  at 
> 

[jira] [Commented] (FLINK-20824) BlockingShuffleITCase. testSortMergeBlockingShuffle test failed with "Inconsistent availability: expected true"

2020-12-30 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256809#comment-17256809
 ] 

Huang Xingbo commented on FLINK-20824:
--

cc [~kevin.cyj]

> BlockingShuffleITCase. testSortMergeBlockingShuffle test failed with 
> "Inconsistent availability: expected true"
> ---
>
> Key: FLINK-20824
> URL: https://issues.apache.org/jira/browse/FLINK-20824
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.13.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11516=logs=219e462f-e75e-506c-3671-5017d866ccf6=4c5dc768-5c82-5ab0-660d-086cb90b76a0]
> {code:java}
> 2020-12-30T22:45:42.5933715Z [ERROR] 
> testSortMergeBlockingShuffle(org.apache.flink.test.runtime.BlockingShuffleITCase)
>   Time elapsed: 3.153 s  <<< FAILURE!
> 2020-12-30T22:45:42.5934985Z java.lang.AssertionError: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> 2020-12-30T22:45:42.5935943Z  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:60)
> 2020-12-30T22:45:42.5936979Z  at 
> org.apache.flink.test.runtime.BlockingShuffleITCase.testSortMergeBlockingShuffle(BlockingShuffleITCase.java:70)
> 2020-12-30T22:45:42.5937885Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-12-30T22:45:42.5938572Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-12-30T22:45:42.5939448Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-12-30T22:45:42.5940142Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-12-30T22:45:42.5940944Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-12-30T22:45:42.5942210Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-12-30T22:45:42.5943167Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-12-30T22:45:42.5943971Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-12-30T22:45:42.5944703Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-12-30T22:45:42.5945422Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-12-30T22:45:42.5946138Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-12-30T22:45:42.5946615Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-12-30T22:45:42.5947160Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-12-30T22:45:42.5947764Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-12-30T22:45:42.5948475Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-12-30T22:45:42.5948925Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-12-30T22:45:42.5949328Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-12-30T22:45:42.5949787Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-12-30T22:45:42.5950558Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-12-30T22:45:42.5951084Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-12-30T22:45:42.5951608Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-12-30T22:45:42.5952153Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-12-30T22:45:42.5952809Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 2020-12-30T22:45:42.5953323Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 2020-12-30T22:45:42.5953806Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> 2020-12-30T22:45:42.5954306Z Caused by: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> 2020-12-30T22:45:42.5954931Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:118)
> 2020-12-30T22:45:42.5955641Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:80)
> 2020-12-30T22:45:42.5956279Z  at 
> 

[jira] [Closed] (FLINK-20704) Some rel data type does not implement the digest correctly

2020-12-30 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he closed FLINK-20704.
--
Resolution: Fixed

Fixed in 1.12.1: 712becde6f8f771a000f50a4ed0963d381431900
Fixed in 1.13.0: 64d9ea339fe6360eb9c59a6fd1946948c2fecbf9

> Some rel data type does not implement the digest correctly
> --
>
> Key: FLINK-20704
> URL: https://issues.apache.org/jira/browse/FLINK-20704
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.12.0, 1.11.3
>Reporter: Danny Chen
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.1
>
>
> Some of the rel data types for the legacy planner:
> - {{GenericRelDataType}}
> - {{ArrayRelDataType}}
> - {{MapRelDataType}}
> - {{MultisetRelDataType}}
> do not implement the digest correctly. This matters especially for 
> {{GenericRelDataType}}: the {{RelDataTypeFactory}} caches type instances 
> based on their digest, so a wrong digest implementation would mess up type 
> instance creation. E.g. without this patch, all {{GenericRelDataType}} 
> instances have the same digest `ANY`.
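
For context, the digest has to uniquely identify a type instance, since the 
factory caches by it. A minimal sketch of the kind of override that achieves 
this, assuming Calcite's {{RelDataTypeImpl}} contract (the field name 
{{typeInfo}} is an assumption, not the actual patch):

{code:java}
// a sketch: encode the wrapped TypeInformation into the type string, so that
// computeDigest() yields a distinct digest per generic type instance
@Override
protected void generateTypeString(StringBuilder sb, boolean withDetail) {
    sb.append("ANY('").append(typeInfo).append("')"); // was effectively: "ANY"
}
{code}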



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20824) BlockingShuffleITCase. testSortMergeBlockingShuffle test failed with "Inconsistent availability: expected true"

2020-12-30 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-20824:


 Summary: BlockingShuffleITCase. testSortMergeBlockingShuffle test 
failed with "Inconsistent availability: expected true"
 Key: FLINK-20824
 URL: https://issues.apache.org/jira/browse/FLINK-20824
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Network
Affects Versions: 1.13.0
Reporter: Huang Xingbo


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11516=logs=219e462f-e75e-506c-3671-5017d866ccf6=4c5dc768-5c82-5ab0-660d-086cb90b76a0]
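
For reproducing locally, the sort-merge path can be forced for every exchange 
by lowering the threshold option introduced in 1.12 (a sketch; the test 
presumably does the equivalent):

{code:java}
// force sort-merge blocking shuffle regardless of job parallelism
Configuration conf = new Configuration();
conf.setString("taskmanager.network.sort-shuffle.min-parallelism", "1");
{code}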
{code:java}
2020-12-30T22:45:42.5933715Z [ERROR] 
testSortMergeBlockingShuffle(org.apache.flink.test.runtime.BlockingShuffleITCase)
  Time elapsed: 3.153 s  <<< FAILURE!
2020-12-30T22:45:42.5934985Z java.lang.AssertionError: 
org.apache.flink.runtime.JobException: Recovery is suppressed by 
NoRestartBackoffTimeStrategy
2020-12-30T22:45:42.5935943Z  at 
org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:60)
2020-12-30T22:45:42.5936979Z  at 
org.apache.flink.test.runtime.BlockingShuffleITCase.testSortMergeBlockingShuffle(BlockingShuffleITCase.java:70)
2020-12-30T22:45:42.5937885Z  at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-12-30T22:45:42.5938572Z  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-12-30T22:45:42.5939448Z  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-12-30T22:45:42.5940142Z  at 
java.lang.reflect.Method.invoke(Method.java:498)
2020-12-30T22:45:42.5940944Z  at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-12-30T22:45:42.5942210Z  at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-12-30T22:45:42.5943167Z  at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-12-30T22:45:42.5943971Z  at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-12-30T22:45:42.5944703Z  at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2020-12-30T22:45:42.5945422Z  at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2020-12-30T22:45:42.5946138Z  at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2020-12-30T22:45:42.5946615Z  at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-12-30T22:45:42.5947160Z  at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-12-30T22:45:42.5947764Z  at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-12-30T22:45:42.5948475Z  at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-12-30T22:45:42.5948925Z  at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-12-30T22:45:42.5949328Z  at 
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-12-30T22:45:42.5949787Z  at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
2020-12-30T22:45:42.5950558Z  at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
2020-12-30T22:45:42.5951084Z  at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
2020-12-30T22:45:42.5951608Z  at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
2020-12-30T22:45:42.5952153Z  at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
2020-12-30T22:45:42.5952809Z  at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
2020-12-30T22:45:42.5953323Z  at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
2020-12-30T22:45:42.5953806Z  at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
2020-12-30T22:45:42.5954306Z Caused by: org.apache.flink.runtime.JobException: 
Recovery is suppressed by NoRestartBackoffTimeStrategy
2020-12-30T22:45:42.5954931Z  at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:118)
2020-12-30T22:45:42.5955641Z  at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:80)
2020-12-30T22:45:42.5956279Z  at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:230)
2020-12-30T22:45:42.5956925Z  at 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:221)
2020-12-30T22:45:42.5957624Z  at 
org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:212)
2020-12-30T22:45:42.5958234Z  at 

[GitHub] [flink] flinkbot commented on pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" should be enabled by default for unbounded mode, and disabled for
2020-12-30 Thread GitBox


flinkbot commented on pull request #14531:
URL: https://github.com/apache/flink/pull/14531#issuecomment-752826517


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9dfbf0f8575de20c3c3d672ce1bf64f3c5e2ed93 (Thu Dec 31 
03:04:48 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-20777).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20824) BlockingShuffleITCase. testSortMergeBlockingShuffle test failed with "Inconsistent availability: expected true"

2020-12-30 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-20824:
-
Labels: test-stability  (was: )

> BlockingShuffleITCase. testSortMergeBlockingShuffle test failed with 
> "Inconsistent availability: expected true"
> ---
>
> Key: FLINK-20824
> URL: https://issues.apache.org/jira/browse/FLINK-20824
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.13.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11516=logs=219e462f-e75e-506c-3671-5017d866ccf6=4c5dc768-5c82-5ab0-660d-086cb90b76a0]
> {code:java}
> 2020-12-30T22:45:42.5933715Z [ERROR] 
> testSortMergeBlockingShuffle(org.apache.flink.test.runtime.BlockingShuffleITCase)
>   Time elapsed: 3.153 s  <<< FAILURE!
> 2020-12-30T22:45:42.5934985Z java.lang.AssertionError: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> 2020-12-30T22:45:42.5935943Z  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:60)
> 2020-12-30T22:45:42.5936979Z  at 
> org.apache.flink.test.runtime.BlockingShuffleITCase.testSortMergeBlockingShuffle(BlockingShuffleITCase.java:70)
> 2020-12-30T22:45:42.5937885Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-12-30T22:45:42.5938572Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-12-30T22:45:42.5939448Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-12-30T22:45:42.5940142Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-12-30T22:45:42.5940944Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-12-30T22:45:42.5942210Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-12-30T22:45:42.5943167Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-12-30T22:45:42.5943971Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-12-30T22:45:42.5944703Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-12-30T22:45:42.5945422Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-12-30T22:45:42.5946138Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-12-30T22:45:42.5946615Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-12-30T22:45:42.5947160Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-12-30T22:45:42.5947764Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-12-30T22:45:42.5948475Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-12-30T22:45:42.5948925Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-12-30T22:45:42.5949328Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-12-30T22:45:42.5949787Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-12-30T22:45:42.5950558Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-12-30T22:45:42.5951084Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-12-30T22:45:42.5951608Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-12-30T22:45:42.5952153Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-12-30T22:45:42.5952809Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 2020-12-30T22:45:42.5953323Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 2020-12-30T22:45:42.5953806Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> 2020-12-30T22:45:42.5954306Z Caused by: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> 2020-12-30T22:45:42.5954931Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:118)
> 2020-12-30T22:45:42.5955641Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:80)
> 2020-12-30T22:45:42.5956279Z  at 
> 

[GitHub] [flink] xiaoHoly commented on pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" should be enabled by default for unbounded mode, and disabled for
2020-12-30 Thread GitBox


xiaoHoly commented on pull request #14531:
URL: https://github.com/apache/flink/pull/14531#issuecomment-752826319


   cc @wuchong 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14530: [FLINK-20348][Table SQL / Ecosystem]Make "schema-registry.subject" op…

2020-12-30 Thread GitBox


flinkbot commented on pull request #14530:
URL: https://github.com/apache/flink/pull/14530#issuecomment-752826258


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 43ac067ac627cea436169ee50d8cf4769e7ccdce (Thu Dec 31 
03:03:10 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source

2020-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20777:
---
Labels: pull-request-available  (was: )

> Default value of property "partition.discovery.interval.ms" is not as 
> documented in new Kafka Source
> 
>
> Key: FLINK-20777
> URL: https://issues.apache.org/jira/browse/FLINK-20777
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.1
>
>
> The default value of property "partition.discovery.interval.ms" is documented 
> as 30 seconds in {{KafkaSourceOptions}}, but it will be set to -1 in 
> {{KafkaSourceBuilder}} if the user doesn't pass in this property explicitly. 
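
A sketch of defaulting logic that would match the documentation (the 
surrounding builder code is an assumption for illustration, not the actual 
implementation):

{code:java}
// inside KafkaSourceBuilder (a sketch): fall back to the documented 30s
// default instead of -1 when the user has not set the property explicitly;
// bounded sources keep partition discovery disabled
String key = KafkaSourceOptions.PARTITION_DISCOVERY_INTERVAL_MS.key();
if (!props.containsKey(key)) {
    long defaultIntervalMs =
            boundedness == Boundedness.BOUNDED ? -1L : 30_000L;
    props.setProperty(key, String.valueOf(defaultIntervalMs));
}
{code}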



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xiaoHoly opened a new pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interva…

2020-12-30 Thread GitBox


xiaoHoly opened a new pull request #14531:
URL: https://github.com/apache/flink/pull/14531


   …l.ms" shoule be enabled by default for unbounded mode, and disable for 
bounded mode
   
   
   
   ## What is the purpose of the change
   
   Property "partition.discovery.interval.ms" shoule be enabled by default for 
unbounded mode, and disable for bounded mode
   
   ## Brief change log
   
   
   ## Verifying this change
   
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20348) Make "schema-registry.subject" optional for Kafka sink with avro-confluent format

2020-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20348:
---
Labels: pull-request-available sprint  (was: sprint)

> Make "schema-registry.subject" optional for Kafka sink with avro-confluent 
> format
> -
>
> Key: FLINK-20348
> URL: https://issues.apache.org/jira/browse/FLINK-20348
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: zhuxiaoshang
>Priority: Major
>  Labels: pull-request-available, sprint
> Fix For: 1.13.0
>
>
> Currently, configuration "schema-registry.subject" in avro-confluent format 
> is required by sink. However, this is quite verbose set it manually. By 
> default, it can be to set to {{-key}} and {{-value}} 
> if it works with kafka or upsert-kafka connector. This can also makes 
> 'avro-confluent' format to be more handy and works better with 
> Kafka/Confluent ecosystem. 
> {code:sql}
> CREATE TABLE kafka_gmv (
>   day_str STRING,
>   gmv BIGINT,
>   PRIMARY KEY (day_str) NOT ENFORCED
> ) WITH (
> 'connector' = 'upsert-kafka',
> 'topic' = 'kafka_gmv',
> 'properties.bootstrap.servers' = 'localhost:9092',
> -- 'key.format' = 'raw',
> 'key.format' = 'avro-confluent',
> 'key.avro-confluent.schema-registry.url' = 'http://localhost:8181',
> 'key.avro-confluent.schema-registry.subject' = 'kafka_gmv-key',
> 'value.format' = 'avro-confluent',
> 'value.avro-confluent.schema-registry.url' = 'http://localhost:8181',
> 'value.avro-confluent.schema-registry.subject' = 'kafka_gmv-value'
> );
> {code}
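
A sketch of the intended defaulting (the option constant and variable names 
here are assumptions for illustration, not the actual Flink code):

{code:java}
// derive the subject from the topic when 'schema-registry.subject' is unset
String subject = formatOptions.getOptional(SCHEMA_REGISTRY_SUBJECT)
        .orElseGet(() -> topic + (isValueFormat ? "-value" : "-key"));
{code}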



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhuxiaoshang opened a new pull request #14530: [FLINK-20348][Table SQL / Ecosystem]Make "schema-registry.subject" op…

2020-12-30 Thread GitBox


zhuxiaoshang opened a new pull request #14530:
URL: https://github.com/apache/flink/pull/14530


   …tional for Kafka sink with avro-confluent format
   
   
   ## What is the purpose of the change
   
   *Autocomplete the schema-registry.subject with '<topic>-value' if not set 
by the user in the kafka/upsert-kafka connector when using the avro-confluent 
format*
   
   
   ## Brief change log
   
 - *Autocomplete the schema-registry.subject with '<topic>-value' if not 
set by the user in the kafka/upsert-kafka connector when using the 
avro-confluent format*

   
   
   ## Verifying this change
   
   
   This change added tests and can be verified as follows:
   
 - testCompleteAvroConfluentSubject
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangxlong commented on pull request #14280: [FLINK-20110][table-planner] Support 'merge' method for first_value a…

2020-12-30 Thread GitBox


wangxlong commented on pull request #14280:
URL: https://github.com/apache/flink/pull/14280#issuecomment-752823642


   ping @leonardBang 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20385) Allow to read metadata for Canal-json format

2020-12-30 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256800#comment-17256800
 ] 

Nicholas Jiang commented on FLINK-20385:


[~wangfeiair2324], this has already been merged into the master branch. Please 
check it out.

> Allow to read metadata for Canal-json format
> 
>
> Key: FLINK-20385
> URL: https://issues.apache.org/jira/browse/FLINK-20385
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / API
>Reporter: Leonard Xu
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> In FLIP-107, we added support for reading metadata from the CDC format 
> Debezium. Canal-json is another widely used CDC format, so we need to support 
> reading metadata for it too.
>  
> The requirement comes from the user-zh mailing list, where a user wants to 
> read metadata (database and table name) from Canal-json.
> [1] [http://apache-flink.147419.n8.nabble.com/canal-json-tt8939.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18027) ROW value constructor cannot deal with complex expressions

2020-12-30 Thread Danny Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256799#comment-17256799
 ] 

Danny Chen commented on FLINK-18027:


Thanks for the check [~twalthr], let me take a look at the expression `ROW(f0 
+ 12, 'Hello world')` on the Calcite side. I will share the root cause and a 
solution soon ~

> ROW value constructor cannot deal with complex expressions
> --
>
> Key: FLINK-18027
> URL: https://issues.apache.org/jira/browse/FLINK-18027
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Benchao Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> {code:java}
> create table my_source (
> my_row row
> ) with (...);
> create table my_sink (
> my_row row
> ) with (...);
> insert into my_sink
> select ROW(my_row.a, my_row.b) 
> from my_source;{code}
> will throw excepions:
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.SqlParserException: SQL 
> parse failed. Encountered "." at line 1, column 18.Exception in thread "main" 
> org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered 
> "." at line 1, column 18.Was expecting one of:    ")" ...    "," ...     at 
> org.apache.flink.table.planner.calcite.CalciteParser.parse(CalciteParser.java:56)
>  at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:64)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:627)
>  at com.bytedance.demo.KafkaTableSource.main(KafkaTableSource.java:76)Caused 
> by: org.apache.calcite.sql.parser.SqlParseException: Encountered "." at line 
> 1, column 18.Was expecting one of:    ")" ...    "," ...     at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.convertException(FlinkSqlParserImpl.java:416)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.normalizeException(FlinkSqlParserImpl.java:201)
>  at 
> org.apache.calcite.sql.parser.SqlParser.handleException(SqlParser.java:148) 
> at org.apache.calcite.sql.parser.SqlParser.parseQuery(SqlParser.java:163) at 
> org.apache.calcite.sql.parser.SqlParser.parseStmt(SqlParser.java:188) at 
> org.apache.flink.table.planner.calcite.CalciteParser.parse(CalciteParser.java:54)
>  ... 3 moreCaused by: org.apache.flink.sql.parser.impl.ParseException: 
> Encountered "." at line 1, column 18.Was expecting one of:    ")" ...    "," 
> ...     at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.generateParseException(FlinkSqlParserImpl.java:36161)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.jj_consume_token(FlinkSqlParserImpl.java:35975)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.ParenthesizedSimpleIdentifierList(FlinkSqlParserImpl.java:21432)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.Expression3(FlinkSqlParserImpl.java:17164)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.Expression2b(FlinkSqlParserImpl.java:16820)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.Expression2(FlinkSqlParserImpl.java:16861)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.Expression(FlinkSqlParserImpl.java:16792)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SelectExpression(FlinkSqlParserImpl.java:11091)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SelectItem(FlinkSqlParserImpl.java:10293)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SelectList(FlinkSqlParserImpl.java:10267)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SqlSelect(FlinkSqlParserImpl.java:6943)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.LeafQuery(FlinkSqlParserImpl.java:658)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.LeafQueryOrExpr(FlinkSqlParserImpl.java:16775)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.QueryOrExpr(FlinkSqlParserImpl.java:16238)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.OrderedQueryOrExpr(FlinkSqlParserImpl.java:532)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SqlStmt(FlinkSqlParserImpl.java:3761)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SqlStmtEof(FlinkSqlParserImpl.java:3800)
>  at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.parseSqlStmtEof(FlinkSqlParserImpl.java:248)
>  at org.apache.calcite.sql.parser.SqlParser.parseQuery(SqlParser.java:161) 
> ... 5 more
> {code}
>  
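
A minimal reproduction of the asymmetry, combining the statement above with 
the ones quoted in FLINK-20821 (see the entries further below):

{code:sql}
-- fails to parse: field accesses inside the ROW value constructor
SELECT ROW(my_row.a, my_row.b) FROM my_source;
-- also fails: a complex expression as the first ROW operand (FLINK-20821)
SELECT ROW(MAP[1, 2], 'ab');
-- parses fine: a literal as the first operand
SELECT ROW('ab', MAP[1, 2]);
{code}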



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-20385) Allow to read metadata for Canal-json format

2020-12-30 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-20385.
---
Fix Version/s: 1.13.0
   Resolution: Fixed

Implemented in master: 814fe0eb06f1941247742de27e8150b7c9274b43


> Allow to read metadata for Canal-json format
> 
>
> Key: FLINK-20385
> URL: https://issues.apache.org/jira/browse/FLINK-20385
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / API
>Reporter: Leonard Xu
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> In FLIP-107, we added support for reading metadata from the CDC format 
> Debezium. Canal-json is another widely used CDC format, so we need to support 
> reading metadata for it too.
>  
> The requirement comes from the user-zh mailing list, where a user wants to 
> read metadata (database and table name) from Canal-json.
> [1] [http://apache-flink.147419.n8.nabble.com/canal-json-tt8939.html]
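
A sketch of the resulting usage, assuming the canal-json metadata keys mirror 
the debezium-json ones ({{database}} and {{table}}, prefixed with {{value.}} 
when read through the Kafka connector); topic and field names are made up for 
illustration:

{code:sql}
CREATE TABLE canal_source (
  origin_database STRING METADATA FROM 'value.database' VIRTUAL,
  origin_table    STRING METADATA FROM 'value.table' VIRTUAL,
  user_id BIGINT,
  item_id BIGINT
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders_binlog',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'canal-json'
);
{code}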



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #14464: [FLINK-20385][canal][json] Allow to read metadata for canal-json format

2020-12-30 Thread GitBox


wuchong merged pull request #14464:
URL: https://github.com/apache/flink/pull/14464


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14508:
URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163


   
   ## CI report:
   
   * f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN
   * f4d02e921d2641fc5692617a4dd50ba2fda1128c UNKNOWN
   * fd8cbf90a807292b0db7b85bda26f1e717b87767 UNKNOWN
   * 28364b82097029626bf4ad3d18ca14eb759d64db Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11521)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11509)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20822) Don't check whether a function is generic in hive catalog

2020-12-30 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20822:

Fix Version/s: 1.13.0

> Don't check whether a function is generic in hive catalog
> -
>
> Key: FLINK-20822
> URL: https://issues.apache.org/jira/browse/FLINK-20822
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
> Fix For: 1.13.0
>
>
> We just store the function identifier and class name in the hive metastore, so 
> it seems there's no need to differentiate generic functions from hive functions. 
> Besides, users might not want us to validate the function class when creating 
> the function, e.g. the function jar might not be on the classpath at this point.
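
A sketch of what dropping the check means for function creation (simplified; 
{{client}} stands for a hive metastore client, and the surrounding HiveCatalog 
details are omitted):

{code:java}
// persist only the identifier and the class name; do not load or validate the
// implementation class, and do not branch on isGeneric()
Function hiveFunction = new Function();
hiveFunction.setDbName(functionPath.getDatabaseName());
hiveFunction.setFunctionName(functionPath.getObjectName());
hiveFunction.setClassName(catalogFunction.getClassName());
client.createFunction(hiveFunction);
{code}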



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-20821) `select row(map[1,2],'ab')` parses failed

2020-12-30 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-20821.
---
Fix Version/s: (was: 1.13.0)
   Resolution: Duplicate

> `select row(map[1,2],'ab')` parses failed
> -
>
> Key: FLINK-20821
> URL: https://issues.apache.org/jira/browse/FLINK-20821
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Priority: Major
>
> when executing {{select row(map[1,2],'ab')}}, we encounter the following 
> error:
> {code:text}
> org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered 
> "[" at line 1, column 15.
> Was expecting one of:
> ")" ...
> "," ...
> 
>   at 
> org.apache.flink.table.planner.calcite.CalciteParser.parse(CalciteParser.java:56)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:74)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:640)
> {code}
> while the similar statement {{select row('ab',map[1,2])}} can be parsed 
> successfully.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20781) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-30 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256788#comment-17256788
 ] 

Huang Xingbo commented on FLINK-20781:
--

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11516=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56]

 

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20781
> URL: https://issues.apache.org/jira/browse/FLINK-20781
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Matthias
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.1
>
>
> {{UnalignedCheckpointITCase}} fails due to {{NullPointerException}} (e.g. in 
> [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11083=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a=8798]):
> {code:java}
> [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 152.186 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 34.869 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> 

[jira] [Created] (FLINK-20823) Update documentation to mention Table/SQL API doesn't provide cross-major-version state compatibility

2020-12-30 Thread Jark Wu (Jira)
Jark Wu created FLINK-20823:
---

 Summary: Update documentation to mention Table/SQL API doesn't 
provide cross-major-version state compatibility
 Key: FLINK-20823
 URL: https://issues.apache.org/jira/browse/FLINK-20823
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Table SQL / API
Reporter: Jark Wu


As discussed in the mailing list: 
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Did-Flink-1-11-break-backwards-compatibility-for-the-table-environment-tp47472p47492.html

The Flink Table/SQL API doesn't provide cross-major-version state compatibility; 
however, this is not documented anywhere. We should update the 
documentation. Besides, we should also mention that we do provide state 
compatibility across minor versions. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20822) Don't check whether a function is generic in hive catalog

2020-12-30 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-20822:
---
Description: We just store the function identifier and class name in the hive 
metastore, so it seems there's no need to differentiate generic functions from 
hive functions. Besides, users might not want us to validate the function class 
when creating the function, e.g. the function jar might not be on the classpath 
at this point.

> Don't check whether a function is generic in hive catalog
> -
>
> Key: FLINK-20822
> URL: https://issues.apache.org/jira/browse/FLINK-20822
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
>
> We just store the function identifier and class name in the hive metastore, so 
> it seems there's no need to differentiate generic functions from hive functions. 
> Besides, users might not want us to validate the function class when creating 
> the function, e.g. the function jar might not be on the classpath at this point.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20822) Don't check whether a function is generic in hive catalog

2020-12-30 Thread Rui Li (Jira)
Rui Li created FLINK-20822:
--

 Summary: Don't check whether a function is generic in hive catalog
 Key: FLINK-20822
 URL: https://issues.apache.org/jira/browse/FLINK-20822
 Project: Flink
  Issue Type: Task
  Components: Connectors / Hive
Reporter: Rui Li






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.

2020-12-30 Thread GitBox


flinkbot edited a comment on pull request #14508:
URL: https://github.com/apache/flink/pull/14508#issuecomment-751734163


   
   ## CI report:
   
   * f1332b021d33a6e4681b0a08ad1c5b58f153c417 UNKNOWN
   * f4d02e921d2641fc5692617a4dd50ba2fda1128c UNKNOWN
   * fd8cbf90a807292b0db7b85bda26f1e717b87767 UNKNOWN
   * 28364b82097029626bf4ad3d18ca14eb759d64db Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11509)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11521)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] V1ncentzzZ commented on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.

2020-12-30 Thread GitBox


V1ncentzzZ commented on pull request #14508:
URL: https://github.com/apache/flink/pull/14508#issuecomment-752816613


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] V1ncentzzZ edited a comment on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.

2020-12-30 Thread GitBox


V1ncentzzZ edited a comment on pull request #14508:
URL: https://github.com/apache/flink/pull/14508#issuecomment-751745476


   @flinkbot run azure.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20654) Unaligned checkpoint recovery may lead to corrupted data stream

2020-12-30 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256780#comment-17256780
 ] 

Huang Xingbo commented on FLINK-20654:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11500=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0

> Unaligned checkpoint recovery may lead to corrupted data stream
> ---
>
> Key: FLINK-20654
> URL: https://issues.apache.org/jira/browse/FLINK-20654
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Arvid Heise
>Assignee: Roman Khachatryan
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.13.0, 1.12.1
>
>
> The fix of FLINK-20433 shows potential corruption after recovery for all 
> variations of UnalignedCheckpointITCase.
> To reproduce, run UCITCase a couple hundred times. The issue showed up for me 
> in:
> - execute [Parallel union, p = 5]
> - execute [Parallel union, p = 10]
> - execute [Parallel cogroup, p = 5]
> - execute [parallel pipeline with remote channels, p = 5]
> with decreasing frequency.
> The issue manifests as one of the following issues:
> - stream corrupted exception
> - EOF exception
> - assertion failure in NUM_LOST or NUM_OUT_OF_ORDER
> - (for union) ArithmeticException overflow (because the number that should be 
> [0;10] has been mis-deserialized)
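
For context, a minimal configuration that exercises the unaligned checkpoint 
path (a sketch using the public DataStream API, not the ITCase code itself):

{code:java}
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(5);
env.enableCheckpointing(100);                           // checkpoint every 100 ms
env.getCheckpointConfig().enableUnalignedCheckpoints(); // the feature under test
{code}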



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20781) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-30 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256779#comment-17256779
 ] 

Huang Xingbo commented on FLINK-20781:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11501=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20781
> URL: https://issues.apache.org/jira/browse/FLINK-20781
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Matthias
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.1
>
>
> {{UnalignedCheckpointITCase}} fails due to {{NullPointerException}} (e.g. in 
> [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11083=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a=8798]):
> {code:java}
> [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 152.186 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 34.869 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> 

[jira] [Reopened] (FLINK-19962) fix doc: more examples for expanding arrays into a relation

2020-12-30 Thread Wong Mulan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wong Mulan reopened FLINK-19962:


> fix doc: more examples for expanding arrays into a relation
> ---
>
> Key: FLINK-19962
> URL: https://issues.apache.org/jira/browse/FLINK-19962
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Reporter: Wong Mulan
>Assignee: Wong Mulan
>Priority: Minor
>  Labels: pull-request-available
>
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html
> The 'Expanding arrays into a relation' section is too brief. I cannot 
> understand the usage from this section alone.
> More examples should be added.
> The following examples:
> {quote}SELECT a, b, v FROM T CROSS JOIN UNNEST(c) as f (k, v){quote}
> {quote}SELECT a, s FROM t2 LEFT JOIN UNNEST(t2.`set`) AS A(s){quote}
> {quote}SELECT a, k, v FROM t2 CROSS JOIN UNNEST(t2.c) AS A(k, v){quote}
> or add some table field descriptions. For example, the type of field 'set' in 
> table t2 is array or map.
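
A self-contained version of the requested examples (a sketch; the schema is 
assumed from the field descriptions above):

{code:sql}
-- assumed schema: `set` is ARRAY<STRING>, c is MAP<INT, INT>
CREATE TABLE t2 (
  a INT,
  `set` ARRAY<STRING>,
  c MAP<INT, INT>
) WITH (...);

SELECT a, s FROM t2 CROSS JOIN UNNEST(t2.`set`) AS A (s);
SELECT a, k, v FROM t2 CROSS JOIN UNNEST(t2.c) AS A (k, v);
{code}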



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20781) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-30 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256777#comment-17256777
 ] 

Huang Xingbo commented on FLINK-20781:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11498=logs=34f41360-6c0d-54d3-11a1-0292a2def1d9=2d56e022-1ace-542f-bf1a-b37dd63243f2

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20781
> URL: https://issues.apache.org/jira/browse/FLINK-20781
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Matthias
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.1
>
>
> {{UnalignedCheckpointITCase}} fails due to {{NullPointerException}} (e.g. in 
> [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11083=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a=8798]):
> {code:java}
> [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 152.186 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase
> [ERROR] execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 34.869 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> 

[jira] [Resolved] (FLINK-19962) fix doc: more examples for expanding arrays into a relation

2020-12-30 Thread Wong Mulan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wong Mulan resolved FLINK-19962.

Resolution: Fixed

> fix doc: more examples for expanding arrays into a relation
> ---
>
> Key: FLINK-19962
> URL: https://issues.apache.org/jira/browse/FLINK-19962
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Reporter: Wong Mulan
>Assignee: Wong Mulan
>Priority: Minor
>  Labels: pull-request-available
>
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html
> The 'Expanding arrays into a relation' section is too brief. I cannot 
> understand the usage from this section alone.
> More examples should be added.
> The following examples:
> {quote}SELECT a, b, v FROM T CROSS JOIN UNNEST(c) as f (k, v){quote}
> {quote}SELECT a, s FROM t2 LEFT JOIN UNNEST(t2.`set`) AS A(s){quote}
> {quote}SELECT a, k, v FROM t2 CROSS JOIN UNNEST(t2.c) AS A(k, v){quote}
> or add some table field descriptions. For example, the type of field 'set' in 
> table t2 is array or map.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] godfreyhe merged pull request #14451: [FLINK-20704][table-planner] Some rel data type does not implement th…

2020-12-30 Thread GitBox


godfreyhe merged pull request #14451:
URL: https://github.com/apache/flink/pull/14451


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20821) `select row(map[1,2],'ab')` parses failed

2020-12-30 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-20821:
---
Fix Version/s: 1.13.0

> `select row(map[1,2],'ab')` parses failed
> -
>
> Key: FLINK-20821
> URL: https://issues.apache.org/jira/browse/FLINK-20821
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Priority: Major
> Fix For: 1.13.0
>
>
> when executing {{select row(map[1,2],'ab')}}, we encounter the following 
> error:
> {code:text}
> org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered 
> "[" at line 1, column 15.
> Was expecting one of:
> ")" ...
> "," ...
> 
>   at 
> org.apache.flink.table.planner.calcite.CalciteParser.parse(CalciteParser.java:56)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:74)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:640)
> {code}
> while the similar statement {{select row('ab',map[1,2])}} can be parsed 
> successfully.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20821) `select row(map[1,2],'ab')` parses failed

2020-12-30 Thread godfrey he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256767#comment-17256767
 ] 

godfrey he commented on FLINK-20821:


cc [~danny0405]

> `select row(map[1,2],'ab')` parses failed
> -
>
> Key: FLINK-20821
> URL: https://issues.apache.org/jira/browse/FLINK-20821
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Priority: Major
>
> when executing {{select row(map[1,2],'ab')}}, we encounter the following 
> error:
> {code:text}
> org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered 
> "[" at line 1, column 15.
> Was expecting one of:
> ")" ...
> "," ...
> 
>   at 
> org.apache.flink.table.planner.calcite.CalciteParser.parse(CalciteParser.java:56)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:74)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:640)
> {code}
> while the similar statement {{select row('ab',map[1,2])}} can be parsed 
> successfully.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

