[GitHub] [flink] flinkbot edited a comment on pull request #16086: [FLINK-22585] Add deprecated message when -m yarn-cluster is used

2021-06-06 Thread GitBox


flinkbot edited a comment on pull request #16086:
URL: https://github.com/apache/flink/pull/16086#issuecomment-855570837


   
   ## CI report:
   
   * b3e6d58f57c43ab2debb3cf461520bad209c45db Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18715)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16060: [FLINK-22860][table sql-client] Supplement 'HELP' command prompt message for SQL-Cli.

2021-06-06 Thread GitBox


flinkbot edited a comment on pull request #16060:
URL: https://github.com/apache/flink/pull/16060#issuecomment-853560762


   
   ## CI report:
   
   * 4e5966fdfd351d5acb2ce04747d66b3f540cb12f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18686)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18694)
 
   * 648a118444caeafa957eac1b7f1767b002dcf9e1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18714)
 
   * 4924c2ff92a28960be1fea3af80c566029091fe7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18716)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on a change in pull request #15200: [FLINK-21355] Send changes to the state changelog

2021-06-06 Thread GitBox


curcur commented on a change in pull request #15200:
URL: https://github.com/apache/flink/pull/15200#discussion_r646271862



##
File path: 
flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/ChangelogMapState.java
##
@@ -96,12 +164,39 @@ public boolean isEmpty() throws Exception {
 
     @Override
     public void clear() {
+        try {
+            changeLogger.valueCleared(getCurrentNamespace());
+        } catch (IOException e) {
+            ExceptionUtils.rethrow(e);
+        }
         delegatedState.clear();
     }
 
+    private void serializeValue(
+            UV value, org.apache.flink.core.memory.DataOutputViewStreamWrapper out)
+            throws IOException {
+        getMapSerializer().getValueSerializer().serialize(value, out);
+    }
+
+    private void serializeKey(
+            UK key, org.apache.flink.core.memory.DataOutputViewStreamWrapper out)
+            throws IOException {
+        getMapSerializer().getKeySerializer().serialize(key, out);
+    }
+
+    private MapSerializer<UK, UV> getMapSerializer() {
+        return (MapSerializer<UK, UV>) getValueSerializer();
+    }
+
+    private final BiConsumerWithException<
+                    Map.Entry<UK, UV>, DataOutputViewStreamWrapper, IOException>
+            changeWriter = (entry, out) -> serializeKey(entry.getKey(), out);

Review comment:
   Move this to the top of this class...
   
   Confusing to read.

##
File path: 
flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/StateChangeLogger.java
##
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.state.changelog;
+
+import org.apache.flink.core.memory.DataOutputViewStreamWrapper;
+import org.apache.flink.util.ExceptionUtils;
+import org.apache.flink.util.function.BiConsumerWithException;
+import org.apache.flink.util.function.ThrowingConsumer;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+
+interface StateChangeLogger<State, Ns> {
+
+    static <StateElement, Namespace> Iterator<StateElement> loggingIterator(
+            @Nullable Iterator<StateElement> iterator,
+            StateChangeLogger<?, Namespace> changeLogger,
+            BiConsumerWithException<StateElement, DataOutputViewStreamWrapper, IOException>
+                    changeWriter,
+            Namespace ns) {
+        if (iterator == null) {
+            return null;
+        }
+        return new Iterator<StateElement>() {
+
+            @Nullable private StateElement lastReturned;
+
+            @Override
+            public boolean hasNext() {
+                return iterator.hasNext();
+            }
+
+            @Override
+            public StateElement next() {
+                return lastReturned = iterator.next();
+            }
+
+            @Override
+            public void remove() {
+                try {
+                    changeLogger.valueElementRemoved(
+                            out -> changeWriter.accept(lastReturned, out), ns);
+                } catch (IOException e) {
+                    ExceptionUtils.rethrow(e);
+                }
+                iterator.remove();
+            }
+        };
+    }
+
+    static <StateElement, Namespace> Iterable<StateElement> loggingIterable(
+            @Nullable Iterable<StateElement> iterable,
+            KvStateChangeLogger changeLogger,
+            BiConsumerWithException<StateElement, DataOutputViewStreamWrapper, IOException>
+                    changeWriter,
+            Namespace ns) {
+        if (iterable == null) {
+            return null;
+        }
+        return () -> loggingIterator(iterable.iterator(), changeLogger, changeWriter, ns);
+    }
+
+    static <UK, UV, Namespace> Map.Entry<UK, UV> loggingMapEntry(
+            Map.Entry<UK, UV> entry,
+            KvStateChangeLogger changeLogger,
+            BiConsumerWithException<Map.Entry<UK, UV>, DataOutputViewStreamWrapper, IOException>
+                    changeWriter,
+            Namespace ns) {
+        return new Map.Entry<UK, UV>() {
+            @Override
+            public UK getKey() {
+                return entry.getKey();
+            }
+
+            @Override
+            public UV getValue() {
+                return entry.getValue();
+            }
+
+            @Override
+            public UV setValue(UV value) {
+                try {
+                    changeLogger.valueElementChanged(out

[jira] [Closed] (FLINK-22855) Translate the 'Overview of Python API' page into Chinese.

2021-06-06 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-22855.
---
Resolution: Fixed

Merged to 
- master via 13324049ed5378ca2abae711c09b73498515
- release-1.13 via b73f6c40cbdbf8c50ea087644b34f8895cc733aa

> Translate the 'Overview of Python API' page into Chinese.
> -
>
> Key: FLINK-22855
> URL: https://issues.apache.org/jira/browse/FLINK-22855
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation
>Affects Versions: 1.14.0
>Reporter: Roc Marshal
>Assignee: Roc Marshal
>Priority: Minor
>  Labels: pull-request-available
>
> target link: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.13/zh/docs/dev/python/overview/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22855) Translate the 'Overview of Python API' page into Chinese.

2021-06-06 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-22855:

Priority: Major  (was: Minor)

> Translate the 'Overview of Python API' page into Chinese.
> -
>
> Key: FLINK-22855
> URL: https://issues.apache.org/jira/browse/FLINK-22855
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation
>Affects Versions: 1.14.0
>Reporter: Roc Marshal
>Assignee: Roc Marshal
>Priority: Major
>  Labels: pull-request-available
>
> target link: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.13/zh/docs/dev/python/overview/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu closed pull request #16061: [FLINK-22855][docs-zh] Translate the 'Overview of Python API' page into Chinese.

2021-06-06 Thread GitBox


dianfu closed pull request #16061:
URL: https://github.com/apache/flink/pull/16061


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16086: [FLINK-22585] Add deprecated message when -m yarn-cluster is used

2021-06-06 Thread GitBox


flinkbot edited a comment on pull request #16086:
URL: https://github.com/apache/flink/pull/16086#issuecomment-855570837


   
   ## CI report:
   
   * b3e6d58f57c43ab2debb3cf461520bad209c45db Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18715)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16061: [FLINK-22855][docs-zh] Translate the 'Overview of Python API' page into Chinese.

2021-06-06 Thread GitBox


flinkbot edited a comment on pull request #16061:
URL: https://github.com/apache/flink/pull/16061#issuecomment-853573483


   
   ## CI report:
   
   * 4075411aeeb5c59602864c346567dd2edab8a3b9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18602)
 
   * f715d52a892c452c0a31e91a3195ffb568753631 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16060: [FLINK-22860][table sql-client] Supplement 'HELP' command prompt message for SQL-Cli.

2021-06-06 Thread GitBox


flinkbot edited a comment on pull request #16060:
URL: https://github.com/apache/flink/pull/16060#issuecomment-853560762


   
   ## CI report:
   
   * 4e5966fdfd351d5acb2ce04747d66b3f540cb12f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18686)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18694)
 
   * 648a118444caeafa957eac1b7f1767b002dcf9e1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18714)
 
   * 4924c2ff92a28960be1fea3af80c566029091fe7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal commented on a change in pull request #16061: [FLINK-22855][docs-zh] Translate the 'Overview of Python API' page into Chinese.

2021-06-06 Thread GitBox


RocMarshal commented on a change in pull request #16061:
URL: https://github.com/apache/flink/pull/16061#discussion_r646268621



##
File path: docs/content.zh/docs/dev/python/overview.md
##
@@ -29,36 +29,33 @@ under the License.
 

Review comment:
   @dianfu Thank you for your suggestions. I made some changes.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-22885) Support 'SHOW COLUMNS'.

2021-06-06 Thread Roc Marshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358352#comment-17358352
 ] 

Roc Marshal commented on FLINK-22885:
-

[~jark]
There's no doubt that the "SHOW CREATE TABLE" command can show the information
of all the columns in a table.

However, when there are many table fields, I hope to use the command 'SHOW
COLUMNS ( FROM | IN ) <table> [LIKE <pattern>]' to quickly find information
about the target fields.

> Support 'SHOW COLUMNS'.
> ---
>
> Key: FLINK-22885
> URL: https://issues.apache.org/jira/browse/FLINK-22885
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Roc Marshal
>Priority: Major
>  Labels: SQL
>
> h1. Support 'SHOW COLUMNS'.
> SHOW COLUMNS ( FROM | IN ) <table> [LIKE <pattern>]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #16086: [FLINK-22585] Add deprecated message when -m yarn-cluster is used

2021-06-06 Thread GitBox


flinkbot commented on pull request #16086:
URL: https://github.com/apache/flink/pull/16086#issuecomment-855570837


   
   ## CI report:
   
   * b3e6d58f57c43ab2debb3cf461520bad209c45db UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on a change in pull request #15200: [FLINK-21355] Send changes to the state changelog

2021-06-06 Thread GitBox


curcur commented on a change in pull request #15200:
URL: https://github.com/apache/flink/pull/15200#discussion_r646260894



##
File path: 
flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/ChangelogMapState.java
##
@@ -71,22 +94,67 @@ public boolean contains(UK key) throws Exception {
 
     @Override
     public Iterable<Map.Entry<UK, UV>> entries() throws Exception {
-        return delegatedState.entries();
+        Iterator<Map.Entry<UK, UV>> iterator = delegatedState.entries().iterator();
+        final N currentNamespace = getCurrentNamespace();
+        return () ->
+                loggingIterator(
+                        new Iterator<Map.Entry<UK, UV>>() {
+                            @Override
+                            public Map.Entry<UK, UV> next() {
+                                return loggingMapEntry(
+                                        iterator.next(),
+                                        changeLogger,
+                                        changeWriter,
+                                        currentNamespace);
+                            }
+
+                            @Override
+                            public boolean hasNext() {
+                                return iterator.hasNext();
+                            }
+
+                            @Override
+                            public void remove() {
+                                iterator.remove();
+                            }
+                        },
+                        changeLogger,
+                        changeWriter,
+                        currentNamespace);
     }
 
     @Override
     public Iterable<UK> keys() throws Exception {
-        return delegatedState.keys();
+        return loggingIterable(
+                delegatedState.keys(), changeLogger, this::serializeKey, getCurrentNamespace());

Review comment:
   Sync up offline:
   
   Let me rephrase my question:
   
   1. Some backends support iterator change/remove and some do not, but that is 
not the focus; I am asking about the case where iterator change/remove is 
supported (RocksDB, for example).
   
   2. In the case where the iterator is supported:
   Let's say I get an iterator over the keys and an iterator over the values, 
and I remove some keys via the key iterator and some values via the value 
iterator.
   
   Is it guaranteed that such removes are always paired (key, value)?
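
For illustration, the iterator-wrapping pattern under review can be sketched in isolation. This is a simplified, hypothetical stand-in (plain `java.util` types, a `List` as the change log), not the actual Flink classes: `remove()` records the last returned element before delegating, which keeps the log and the backing state in sync.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class LoggingIteratorSketch {

    // Wrap an iterator so remove() first records the removed element,
    // then delegates to the backing iterator.
    static <T> Iterator<T> logging(Iterator<T> delegate, List<T> removalLog) {
        return new Iterator<T>() {
            private T lastReturned;

            @Override
            public boolean hasNext() {
                return delegate.hasNext();
            }

            @Override
            public T next() {
                return lastReturned = delegate.next();
            }

            @Override
            public void remove() {
                removalLog.add(lastReturned); // log the change first...
                delegate.remove();            // ...then mutate the backing state
            }
        };
    }

    public static void main(String[] args) {
        List<String> state = new ArrayList<>(List.of("a", "b", "c"));
        List<String> log = new ArrayList<>();
        Iterator<String> it = logging(state.iterator(), log);
        while (it.hasNext()) {
            if (it.next().equals("b")) {
                it.remove();
            }
        }
        System.out.println(state); // [a, c]
        System.out.println(log);   // [b]
    }
}
```

Because every removal passes through the wrapper, a removal observed by the log is always exactly the element removed from the delegated state, which is the pairing guarantee the question is probing.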




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on a change in pull request #15200: [FLINK-21355] Send changes to the state changelog

2021-06-06 Thread GitBox


curcur commented on a change in pull request #15200:
URL: https://github.com/apache/flink/pull/15200#discussion_r646242033



##
File path: 
flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/ChangelogKeyedStateBackend.java
##
@@ -328,14 +340,23 @@ public void notifyCheckpointAborted(long checkpointId) throws Exception {
             throw new FlinkRuntimeException(message);
         }
 
-        return stateFactory.create(
+        InternalKvState state =
                 keyedStateBackend.createInternalState(
-                        namespaceSerializer, stateDesc, snapshotTransformFactory));
+                        namespaceSerializer, stateDesc, snapshotTransformFactory);
+        KvStateChangeLoggerImpl logger =

Review comment:
   -> `KvStateChangeLogger`

##
File path: 
flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/AbstractStateChangeLogger.java
##
@@ -0,0 +1,150 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.state.changelog;
+
+import org.apache.flink.core.memory.DataOutputViewStreamWrapper;
+import org.apache.flink.runtime.state.changelog.StateChangelogWriter;
+import org.apache.flink.runtime.state.heap.InternalKeyContext;
+import org.apache.flink.util.function.ThrowingConsumer;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Map;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.state.changelog.AbstractStateChangeLogger.StateChangeOperation.ADD;
+import static org.apache.flink.state.changelog.AbstractStateChangeLogger.StateChangeOperation.CHANGE_ELEMENT;
+import static org.apache.flink.state.changelog.AbstractStateChangeLogger.StateChangeOperation.CLEAR;
+import static org.apache.flink.state.changelog.AbstractStateChangeLogger.StateChangeOperation.REMOVE_ELEMENT;
+import static org.apache.flink.state.changelog.AbstractStateChangeLogger.StateChangeOperation.SET;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+abstract class AbstractStateChangeLogger<Key, State, Ns> implements StateChangeLogger<State, Ns> {
+
+    protected final StateChangelogWriter<?> stateChangelogWriter;
+    protected final InternalKeyContext<Key> keyContext;
+
+    public AbstractStateChangeLogger(
+            StateChangelogWriter<?> stateChangelogWriter, InternalKeyContext<Key> keyContext) {
+        this.stateChangelogWriter = checkNotNull(stateChangelogWriter);
+        this.keyContext = checkNotNull(keyContext);
+    }
+
+    @Override
+    public void stateUpdated(State newState, Ns ns) throws IOException {
+        if (newState == null) {
+            stateCleared(ns);
+        } else {
+            log(SET, out -> serializeState(newState, out), ns);
+        }
+    }
+
+    protected abstract void serializeState(State state, DataOutputViewStreamWrapper out)
+            throws IOException;
+
+    @Override
+    public void stateAdded(State addedState, Ns ns) throws IOException {
+        log(ADD, out -> serializeState(addedState, out), ns);
+    }
+
+    @Override
+    public void stateCleared(Ns ns) throws IOException {
+        log(CLEAR, out -> {}, ns);
+    }
+
+    @Override
+    public void stateElementChanged(
+            ThrowingConsumer<DataOutputViewStreamWrapper, IOException> dataSerializer, Ns ns)
+            throws IOException {
+        log(CHANGE_ELEMENT, dataSerializer, ns);
+    }
+
+    @Override
+    public void stateElementRemoved(
+            ThrowingConsumer<DataOutputViewStreamWrapper, IOException> dataSerializer, Ns ns)
+            throws IOException {
+        log(REMOVE_ELEMENT, dataSerializer, ns);
+    }
+
+    protected void log(
+            StateChangeOperation op,
+            ThrowingConsumer<DataOutputViewStreamWrapper, IOException> dataWriter,
+            Ns ns)
+            throws IOException {
+        // todo: log metadata (FLINK-22808)
+        stateChangelogWriter.append(
+                keyContext.getCurrentKeyGroupIndex(), serialize(op, ns, dataWriter));
+    }
+
+    private byte[] serialize(
+            StateChangeOperation op,
+            Ns ns,
+            ThrowingConsumer<DataOutputViewStreamWrapper, IOException> dataWriter)
+            throws IOException {
+        return serializeRaw(

[jira] [Commented] (FLINK-22585) Add deprecated message when "-m yarn-cluster" is used

2021-06-06 Thread Rainie Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358346#comment-17358346
 ] 

Rainie Li commented on FLINK-22585:
---

Please review PR: https://github.com/apache/flink/pull/16086

> Add deprecated message when "-m yarn-cluster" is used
> -
>
> Key: FLINK-22585
> URL: https://issues.apache.org/jira/browse/FLINK-22585
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Yang Wang
>Assignee: Rainie Li
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.14.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The unified executor interface has been introduced for a long time, which 
> could be activated by "\--target 
> yarn-per-job/yarn-session/kubernetes-application". It is more descriptive and 
> clearer. We should try to deprecate "-m yarn-cluster" and suggest that our 
> users use the new CLI commands.
>  
> However, AFAIK, many companies are using some CLI commands to integrate with 
> their deployers. So we could not remove the "-m yarn-cluster" very soon. 
> Maybe we could do it in the release 2.0 since we could do some breaking 
> changes.
>  
> For now, I suggest adding the {{@Deprecated}} annotation and printing a WARN 
> log message when "-m yarn-cluster" is used. It is useful to let the users 
> know the long-term goals and migrate asap.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
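
The proposal in FLINK-22585 can be sketched as a small argument check. The class name, helper name, and warning text below are illustrative assumptions, not Flink's actual CLI code:

```java
public class YarnClusterDeprecation {

    // Hypothetical helper: detect the legacy "-m yarn-cluster" flags and
    // build the WARN message that the issue proposes to log.
    static String deprecationWarning(String[] args) {
        for (int i = 0; i < args.length - 1; i++) {
            if ("-m".equals(args[i]) && "yarn-cluster".equals(args[i + 1])) {
                return "The \"yarn-cluster\" mode is deprecated; please use"
                        + " \"--target yarn-per-job\" or \"--target yarn-session\" instead.";
            }
        }
        return null; // legacy mode not used, nothing to warn about
    }

    public static void main(String[] args) {
        System.out.println(
                deprecationWarning(new String[] {"run", "-m", "yarn-cluster", "job.jar"}));
    }
}
```

In a real CLI this message would go to a WARN logger rather than stdout, leaving behavior otherwise unchanged until the flag is removed.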


[jira] [Commented] (FLINK-22886) Thread leak in RocksDBStateUploader

2021-06-06 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358345#comment-17358345
 ] 

Yun Tang commented on FLINK-22886:
--

Glad to know there is a ready solution.
[~mayuehappy], please go ahead and create the PR.

> Thread leak in RocksDBStateUploader
> ---
>
> Key: FLINK-22886
> URL: https://issues.apache.org/jira/browse/FLINK-22886
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.11.3, 1.13.1, 1.12.4
>Reporter: Jiayi Liao
>Assignee: Yue Ma
>Priority: Major
>  Labels: critical
> Attachments: image-2021-06-06-13-46-34-604.png
>
>
> {{ExecutorService}} in {{RocksDBStateUploader}} is not shut down, which may 
> leak threads when tasks fail.
> BTW, we should name the thread group in {{ExecutorService}}; otherwise what 
> we see in the stack is a lot of threads named pool-m-thread-n, like this:
>  
> !image-2021-06-06-13-46-34-604.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
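
The two fixes suggested above (a recognizable thread-name prefix and an explicit shutdown) can be sketched as follows. The class and prefix names are assumptions for illustration, not the actual RocksDBStateUploader code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedUploaderPool {

    // Name threads with a recognizable prefix instead of the default
    // pool-m-thread-n, so leaks are easy to spot in a stack dump.
    static ThreadFactory namedFactory(String prefix) {
        AtomicInteger counter = new AtomicInteger();
        return r -> {
            Thread t = new Thread(r, prefix + counter.incrementAndGet());
            t.setDaemon(true);
            return t;
        };
    }

    static ExecutorService newUploadExecutor(int threads) {
        return Executors.newFixedThreadPool(threads, namedFactory("rocksdb-state-upload-"));
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = newUploadExecutor(2);
        System.out.println(pool.submit(() -> Thread.currentThread().getName()).get());
        // Shut the pool down explicitly so failed tasks do not leak threads.
        pool.shutdownNow();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(pool.isShutdown()); // true
    }
}
```

The shutdown call belongs on the backend's close/dispose path so that the pool is released even when a task fails.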


[GitHub] [flink] flinkbot commented on pull request #16086: [FLINK-22585] Add deprecated message when -m yarn-cluster is used

2021-06-06 Thread GitBox


flinkbot commented on pull request #16086:
URL: https://github.com/apache/flink/pull/16086#issuecomment-855562059


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b3e6d58f57c43ab2debb3cf461520bad209c45db (Mon Jun 07 
04:14:58 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-22886) Thread leak in RocksDBStateUploader

2021-06-06 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang reassigned FLINK-22886:


Assignee: Yue Ma

> Thread leak in RocksDBStateUploader
> ---
>
> Key: FLINK-22886
> URL: https://issues.apache.org/jira/browse/FLINK-22886
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.11.3, 1.13.1, 1.12.4
>Reporter: Jiayi Liao
>Assignee: Yue Ma
>Priority: Major
>  Labels: critical
> Attachments: image-2021-06-06-13-46-34-604.png
>
>
> {{ExecutorService}} in {{RocksDBStateUploader}} is not shut down, which may 
> leak threads when tasks fail.
> BTW, we should name the thread group in {{ExecutorService}}; otherwise what 
> we see in the stack is a lot of threads named pool-m-thread-n, like this:
>  
> !image-2021-06-06-13-46-34-604.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22585) Add deprecated message when "-m yarn-cluster" is used

2021-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-22585:
---
Labels: pull-request-available stale-assigned  (was: stale-assigned)

> Add deprecated message when "-m yarn-cluster" is used
> -
>
> Key: FLINK-22585
> URL: https://issues.apache.org/jira/browse/FLINK-22585
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Yang Wang
>Assignee: Rainie Li
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.14.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The unified executor interface has been introduced for a long time, which 
> could be activated by "\--target 
> yarn-per-job/yarn-session/kubernetes-application". It is more descriptive and 
> clearer. We should try to deprecate "-m yarn-cluster" and suggest that our 
> users use the new CLI commands.
>  
> However, AFAIK, many companies are using some CLI commands to integrate with 
> their deployers. So we could not remove the "-m yarn-cluster" very soon. 
> Maybe we could do it in the release 2.0 since we could do some breaking 
> changes.
>  
> For now, I suggest adding the {{@Deprecated}} annotation and printing a WARN 
> log message when "-m yarn-cluster" is used. It is useful to let the users 
> know the long-term goals and migrate asap.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lixmgl opened a new pull request #16086: [FLINK-22585] Add deprecated message when -m yarn-cluster is used

2021-06-06 Thread GitBox


lixmgl opened a new pull request #16086:
URL: https://github.com/apache/flink/pull/16086


   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dianfu commented on a change in pull request #16061: [FLINK-22855][docs-zh] Translate the 'Overview of Python API' page into Chinese.

2021-06-06 Thread GitBox


dianfu commented on a change in pull request #16061:
URL: https://github.com/apache/flink/pull/16061#discussion_r646249784



##
File path: docs/content.zh/docs/dev/python/overview.md
##
@@ -29,36 +29,33 @@ under the License.
 
 {{< img src="/fig/pyflink.svg" alt="PyFlink" class="offset" width="50%" >}}
 
-PyFlink is a Python API for Apache Flink that allows you to build scalable 
batch and streaming 
-workloads, such as real-time data processing pipelines, large-scale 
exploratory data analysis,
-Machine Learning (ML) pipelines and ETL processes.
-If you're already familiar with Python and libraries such as Pandas, then 
PyFlink makes it simpler
-to leverage the full capabilities of the Flink ecosystem. Depending on the 
level of abstraction you
-need, there are two different APIs that can be used in PyFlink:
+PyFlink 是 Apache Flink 的 Python 
API,你可以使用它构建可扩展的批处理和流工作负载,例如实时数据处理管道、大规模探索性数据分析、机器学习(ML)管道和 ETL 处理。

Review comment:
   ```suggestion
   PyFlink 是 Apache Flink 的 Python 
API,你可以使用它构建可扩展的批处理和流处理任务,例如实时数据处理管道、大规模探索性数据分析、机器学习(ML)管道和 ETL 处理。
   ```

##
File path: docs/content.zh/docs/dev/python/overview.md
##
@@ -29,36 +29,33 @@ under the License.
 
 {{< img src="/fig/pyflink.svg" alt="PyFlink" class="offset" width="50%" >}}
 
-PyFlink is a Python API for Apache Flink that allows you to build scalable 
batch and streaming 
-workloads, such as real-time data processing pipelines, large-scale 
exploratory data analysis,
-Machine Learning (ML) pipelines and ETL processes.
-If you're already familiar with Python and libraries such as Pandas, then 
PyFlink makes it simpler
-to leverage the full capabilities of the Flink ecosystem. Depending on the 
level of abstraction you
-need, there are two different APIs that can be used in PyFlink:
+PyFlink 是 Apache Flink 的 Python 
API,你可以使用它构建可扩展的批处理和流工作负载,例如实时数据处理管道、大规模探索性数据分析、机器学习(ML)管道和 ETL 处理。
+如果你已经熟悉 Python 和 Pandas 等库,那么 PyFlink 可以让你更轻松地利用 Flink 生态系统的全部功能。
+根据你需要的抽象级别,有两种不同的 API 可以在 PyFlink 中使用:
 
-* The **PyFlink Table API** allows you to write powerful relational queries in 
a way that is similar to using SQL or working with tabular data in Python.
-* At the same time, the **PyFlink DataStream API** gives you lower-level 
control over the core building blocks of Flink, [state]({{< ref 
"docs/concepts/stateful-stream-processing" >}}) and [time]({{< ref 
"docs/concepts/time" >}}), to build more complex stream processing use cases.
+* **PyFlink Table API** 允许使用类似于 SQL 或在 Python 中处理表格数据的方式编写强大的关系查询。
+* 与此同时,**PyFlink DataStream API** 提供对 Flink 的核心构建块 [state]({{< ref 
"docs/concepts/stateful-stream-processing" >}}) 和 [time]({{< ref 
"docs/concepts/time" >}}) 进行较低级别(粒度)的控制,以便构建更复杂的流处理用例。
 
 {{< columns >}}
 
-### Try PyFlink
+### 尝试 PyFlink
 
-If you’re interested in playing around with Flink, try one of our tutorials:
+如果你有兴趣使用 PyFlink,可以尝试以下任意教程:
 
-* [Intro to PyFlink DataStream API]({{< ref 
"docs/dev/python/datastream_tutorial" >}})
-* [Intro to PyFlink Table API]({{< ref "docs/dev/python/table_api_tutorial" 
>}})
+* [PyFlink DataStream API 介绍]({{< ref "docs/dev/python/datastream_tutorial" 
>}})
+* [PyFlink Table API 介绍]({{< ref "docs/dev/python/table_api_tutorial" >}})
 
 <--->
 
-### Explore PyFlink
+### 探究 PyFlink

Review comment:
   ```suggestion
   ### 深入 PyFlink
   ```

##
File path: docs/content.zh/docs/dev/python/overview.md
##
@@ -29,36 +29,33 @@ under the License.
 
 {{< img src="/fig/pyflink.svg" alt="PyFlink" class="offset" width="50%" >}}
 
-PyFlink is a Python API for Apache Flink that allows you to build scalable 
batch and streaming 
-workloads, such as real-time data processing pipelines, large-scale 
exploratory data analysis,
-Machine Learning (ML) pipelines and ETL processes.
-If you're already familiar with Python and libraries such as Pandas, then 
PyFlink makes it simpler
-to leverage the full capabilities of the Flink ecosystem. Depending on the 
level of abstraction you
-need, there are two different APIs that can be used in PyFlink:
+PyFlink 是 Apache Flink 的 Python 
API,你可以使用它构建可扩展的批处理和流工作负载,例如实时数据处理管道、大规模探索性数据分析、机器学习(ML)管道和 ETL 处理。
+如果你已经熟悉 Python 和 Pandas 等库,那么 PyFlink 可以让你更轻松地利用 Flink 生态系统的全部功能。
+根据你需要的抽象级别,有两种不同的 API 可以在 PyFlink 中使用:
 
-* The **PyFlink Table API** allows you to write powerful relational queries in 
a way that is similar to using SQL or working with tabular data in Python.
-* At the same time, the **PyFlink DataStream API** gives you lower-level 
control over the core building blocks of Flink, [state]({{< ref 
"docs/concepts/stateful-stream-processing" >}}) and [time]({{< ref 
"docs/concepts/time" >}}), to build more complex stream processing use cases.
+* **PyFlink Table API** 允许使用类似于 SQL 或在 Python 中处理表格数据的方式编写强大的关系查询。

Review comment:
   ```suggestion
   * **PyFlink Table API** 允许你使用类似于 SQL 或在 Python 中处理表格数据的方式编写强大的关系查询。
   ```

##
File path: docs/content.zh/docs/dev/python/overview.md
##
@@ -29,36 +29,33 @@ under the License.
 
 {{< 

[jira] [Commented] (FLINK-22886) Thread leak in RocksDBStateUploader

2021-06-06 Thread Jiayi Liao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358338#comment-17358338
 ] 

Jiayi Liao commented on FLINK-22886:


[~yunta] Hi Yun, [~mayuehappy] is fixing this issue in our internal version; you 
can assign it to [~mayuehappy].

> Thread leak in RocksDBStateUploader
> ---
>
> Key: FLINK-22886
> URL: https://issues.apache.org/jira/browse/FLINK-22886
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.11.3, 1.13.1, 1.12.4
>Reporter: Jiayi Liao
>Priority: Major
>  Labels: critical
> Attachments: image-2021-06-06-13-46-34-604.png
>
>
> {{ExecutorService}} in {{RocksDBStateUploader}} is not shut down, which may 
> leak threads when tasks fail.
> BTW, we should name the thread group in {{ExecutorService}}, otherwise what 
> we see in the stack is a lot of threads named pool-m-thread-n, like this:
>  
> !image-2021-06-06-13-46-34-604.png!
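For context, here is a minimal self-contained sketch (plain Java, not the actual Flink fix; the helper and the pool name are hypothetical) of the two remedies the report suggests: naming the pool's threads via a `ThreadFactory` so stack dumps are readable, and shutting the pool down so no threads leak.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedPoolSketch {

    // Hypothetical helper: a fixed pool whose threads carry a descriptive name,
    // so stack dumps show e.g. "rocksdb-state-uploader-1" instead of "pool-m-thread-n".
    static ExecutorService namedFixedPool(String prefix, int size) {
        AtomicInteger counter = new AtomicInteger();
        ThreadFactory factory = runnable -> {
            Thread t = new Thread(runnable, prefix + "-" + counter.incrementAndGet());
            t.setDaemon(true);
            return t;
        };
        return Executors.newFixedThreadPool(size, factory);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = namedFixedPool("rocksdb-state-uploader", 2);
        String threadName = pool.submit(() -> Thread.currentThread().getName()).get();
        System.out.println(threadName); // rocksdb-state-uploader-1

        // Shutting the pool down when the owner is closed is what prevents
        // the thread leak described above.
        pool.shutdownNow();
        System.out.println(pool.awaitTermination(5, TimeUnit.SECONDS)); // true
    }
}
```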



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22853) Flink SQL functions max/min/sum return duplicated records

2021-06-06 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-22853:

Component/s: Connectors / JDBC

> Flink SQL functions max/min/sum return duplicated records
> -
>
> Key: FLINK-22853
> URL: https://issues.apache.org/jira/browse/FLINK-22853
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / API
>Affects Versions: 1.12.1
>Reporter: Raypon Wang
>Priority: Major
> Attachments: image-2021-06-07-11-00-05-608.png, 
> image-2021-06-07-11-00-49-109.png, image-2021-06-07-11-01-02-389.png
>
>
> mysql data:
> id    offset
> 1      1
> 1      3
> 1      2
> FlinkSQL code (using flink-connector-jdbc_2.12:1.12.1):
>  
> object FlinkSqlOnJdbcForMysql {
>   def main(args: Array[String]): Unit = {
>     val settings =
>       EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build()
>     val tableEnvironment = TableEnvironment.create(settings)
>     tableEnvironment.executeSql("" +
>       "CREATE TABLE test (" +
>       " id BIGINT," +
>       " `offset` INT," +
>       " PRIMARY KEY (id) NOT ENFORCED" +
>       ") WITH (" +
>       " 'connector' = 'jdbc'," +
>       " 'driver' = 'com.mysql.cj.jdbc.Driver'," +
>       " 'url' = 'jdbc:mysql://127.0.0.1:3306/test?=Asia/Shanghai'," +
>       " 'username' = 'root'," +
>       " 'password' = 'Project.03'," +
>       " 'table-name' = 'test'," +
>       " 'scan.fetch-size' = '1000'," +
>       " 'scan.auto-commit' = 'true'" +
>       ")")
>     tableEnvironment.executeSql(
>       "select id,max(`offset`) from test group by id").print()
>   }
> }
>  
> result:
> +----+--------+
> | id | EXPR$1 |
> +----+--------+
> |  1 |      1 |
> |  1 |      3 |
> |  1 |      2 |
> +----+--------+
> The result of max/min/sum contains duplicated records,
> but avg/count/last_value/first_value works as expected.
>  
>  
>  
>  
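As a point of reference, the expected semantics of the query above — exactly one output row per distinct id — can be simulated over the sample rows with a few lines of plain Java (an illustration of GROUP BY/MAX semantics only, unrelated to the JDBC connector internals where the bug likely lives):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupByMaxSketch {
    public static void main(String[] args) {
        // The sample MySQL rows from the report, as (id, offset) pairs.
        List<int[]> rows = List.of(new int[]{1, 1}, new int[]{1, 3}, new int[]{1, 2});

        // `select id, max(`offset`) from test group by id` should collapse all
        // rows sharing an id into a single output row holding the maximum offset.
        Map<Integer, Integer> maxById = rows.stream()
                .collect(Collectors.toMap(r -> r[0], r -> r[1], Math::max));

        System.out.println(maxById); // {1=3}: exactly one row, not three
    }
}
```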





[jira] [Commented] (FLINK-22853) Flink SQL functions max/min/sum return duplicated records

2021-06-06 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358334#comment-17358334
 ] 

Jark Wu commented on FLINK-22853:
-

[~raypon.wang], thanks for the example. The cause of the problem may be related 
to the interaction with the JDBC batch source. I will look into it.

> Flink SQL functions max/min/sum return duplicated records
> -
>
> Key: FLINK-22853
> URL: https://issues.apache.org/jira/browse/FLINK-22853
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.1
>Reporter: Raypon Wang
>Priority: Major
> Attachments: image-2021-06-07-11-00-05-608.png, 
> image-2021-06-07-11-00-49-109.png, image-2021-06-07-11-01-02-389.png
>
>
> mysql data:
> id    offset
> 1      1
> 1      3
> 1      2
> FlinkSQL code (using flink-connector-jdbc_2.12:1.12.1):
>  
> object FlinkSqlOnJdbcForMysql {
>   def main(args: Array[String]): Unit = {
>     val settings =
>       EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build()
>     val tableEnvironment = TableEnvironment.create(settings)
>     tableEnvironment.executeSql("" +
>       "CREATE TABLE test (" +
>       " id BIGINT," +
>       " `offset` INT," +
>       " PRIMARY KEY (id) NOT ENFORCED" +
>       ") WITH (" +
>       " 'connector' = 'jdbc'," +
>       " 'driver' = 'com.mysql.cj.jdbc.Driver'," +
>       " 'url' = 'jdbc:mysql://127.0.0.1:3306/test?=Asia/Shanghai'," +
>       " 'username' = 'root'," +
>       " 'password' = 'Project.03'," +
>       " 'table-name' = 'test'," +
>       " 'scan.fetch-size' = '1000'," +
>       " 'scan.auto-commit' = 'true'" +
>       ")")
>     tableEnvironment.executeSql(
>       "select id,max(`offset`) from test group by id").print()
>   }
> }
>  
> result:
> +----+--------+
> | id | EXPR$1 |
> +----+--------+
> |  1 |      1 |
> |  1 |      3 |
> |  1 |      2 |
> +----+--------+
> The result of max/min/sum contains duplicated records,
> but avg/count/last_value/first_value works as expected.
>  
>  
>  
>  





[jira] [Assigned] (FLINK-22895) Table common document misspelled

2021-06-06 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-22895:
---

Assignee: sujun

> Table common document misspelled
> 
>
> Key: FLINK-22895
> URL: https://issues.apache.org/jira/browse/FLINK-22895
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.14.0
>Reporter: sujun
>Assignee: sujun
>Priority: Minor
> Attachments: image-2021-06-07-11-04-30-998.png
>
>
> The variable name is misspelled: {{setting}} should be changed to {{settings}}.
> The document url is: 
> [https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/common/#create-a-tableenvironment]
>  
> !image-2021-06-07-11-04-30-998.png!





[jira] [Assigned] (FLINK-22673) Add document about add jar related commands

2021-06-06 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-22673:
---

Assignee: Shengkai Fang

> Add document about add jar related commands
> ---
>
> Key: FLINK-22673
> URL: https://issues.apache.org/jira/browse/FLINK-22673
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / Client
>Reporter: Shengkai Fang
>Assignee: Shengkai Fang
>Priority: Major
> Fix For: 1.14.0
>
>
> Including {{ADD JAR}}, {{SHOW JAR}}, {{REMOVE JAR}}. 





[GitHub] [flink] wuchong commented on pull request #16013: [FLINK-21742][sql-client] Support REMOVE JAR statement in SQL Client

2021-06-06 Thread GitBox


wuchong commented on pull request #16013:
URL: https://github.com/apache/flink/pull/16013#issuecomment-81833


   Hi @SteNicholas, do you have time to update it?






[GitHub] [flink] flinkbot edited a comment on pull request #16060: [FLINK-22860][table sql-client] Supplement 'HELP' command prompt message for SQL-Cli.

2021-06-06 Thread GitBox


flinkbot edited a comment on pull request #16060:
URL: https://github.com/apache/flink/pull/16060#issuecomment-853560762


   
   ## CI report:
   
   * 4e5966fdfd351d5acb2ce04747d66b3f540cb12f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18686)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18694)
 
   * 648a118444caeafa957eac1b7f1767b002dcf9e1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18714)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #14861: [FLINK-21088][runtime][checkpoint] Pass the flag about whether a operator is fully finished on recovery

2021-06-06 Thread GitBox


flinkbot edited a comment on pull request #14861:
URL: https://github.com/apache/flink/pull/14861#issuecomment-773094467


   
   ## CI report:
   
   * 9e2a17b6c36cd4a6c54716e973cabff190791695 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14427)
 
   * b460330c657cf91a570f2f6235fd58e218656972 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18713)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-22895) Table common document misspelled

2021-06-06 Thread sujun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358327#comment-17358327
 ] 

sujun commented on FLINK-22895:
---

[~jark] If the community wants to fix this, you can assign it to me.

> Table common document misspelled
> 
>
> Key: FLINK-22895
> URL: https://issues.apache.org/jira/browse/FLINK-22895
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.14.0
>Reporter: sujun
>Priority: Minor
> Attachments: image-2021-06-07-11-04-30-998.png
>
>
> The variable name is misspelled: {{setting}} should be changed to {{settings}}.
> The document url is: 
> [https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/common/#create-a-tableenvironment]
>  
> !image-2021-06-07-11-04-30-998.png!





[jira] [Created] (FLINK-22895) Table common document misspelled

2021-06-06 Thread sujun (Jira)
sujun created FLINK-22895:
-

 Summary: Table common document misspelled
 Key: FLINK-22895
 URL: https://issues.apache.org/jira/browse/FLINK-22895
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.14.0
Reporter: sujun
 Attachments: image-2021-06-07-11-04-30-998.png

The variable name is misspelled: {{setting}} should be changed to {{settings}}.

The document url is: 
[https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/common/#create-a-tableenvironment]

 

!image-2021-06-07-11-04-30-998.png!





[jira] [Commented] (FLINK-22884) Select view columns fail when store metadata with hive

2021-06-06 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358326#comment-17358326
 ] 

Rui Li commented on FLINK-22884:


I think we don't have to differentiate whether a view is generic or not, as a 
view can reference both generic and non-generic tables.

> Select view columns fail when store metadata with hive
> --
>
> Key: FLINK-22884
> URL: https://issues.apache.org/jira/browse/FLINK-22884
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.1
>Reporter: ELLEX_SHEN
>Priority: Major
>
> I use Hive to store Flink metadata, and selecting a view's columns fails after 
> the view is created: the view is wrongly matched to a Hive table. I found a bug 
> in HiveCatalog.class where every view is unexpectedly marked as a Hive table by 
> default.
> After being stored in the Hive metastore, a view table has neither the 
> "is_generic" nor the "connector" properties.
> The bug is here:
> @VisibleForTesting
> public Table getHiveTable(ObjectPath tablePath) throws TableNotExistException {
>     try {
>         Table table =
>                 this.client.getTable(tablePath.getDatabaseName(), tablePath.getObjectName());
>         boolean isHiveTable;
>         if (table.getParameters().containsKey("is_generic")) {
>             isHiveTable =
>                     !Boolean.parseBoolean((String) table.getParameters().remove("is_generic"));
>         } else {
>             isHiveTable =
>                     !table.getParameters().containsKey("flink." + FactoryUtil.CONNECTOR.key())
>                             && !table.getParameters().containsKey("flink.connector.type");
>         }
>         if (isHiveTable) {
>             table.getParameters().put(FactoryUtil.CONNECTOR.key(), "hive");
>         }
>         return table;
>     } catch (NoSuchObjectException var4) {
>         throw new TableNotExistException(this.getName(), tablePath);
>     } catch (TException var5) {
>         throw new CatalogException(
>                 String.format("Failed to get table %s from Hive metastore", tablePath.getFullName()),
>                 var5);
>     }
> }
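A self-contained sketch of one possible shape of a fix, in line with the comment above (hypothetical stand-in signature, not the actual HiveCatalog change): check the metastore table type before the parameter-based heuristics, so views are never force-marked as Hive tables regardless of whether the tables they reference are generic.

```java
import java.util.HashMap;
import java.util.Map;

public class TableKindSketch {

    // Hypothetical stand-in for HiveCatalog's detection logic; `tableType` mirrors
    // the Hive metastore table type string (e.g. "MANAGED_TABLE", "VIRTUAL_VIEW").
    static boolean isHiveTable(String tableType, Map<String, String> parameters) {
        // Views reference other tables (generic or not), so they should not be
        // claimed as native Hive tables by the fallback heuristics below.
        if ("VIRTUAL_VIEW".equals(tableType)) {
            return false;
        }
        if (parameters.containsKey("is_generic")) {
            return !Boolean.parseBoolean(parameters.remove("is_generic"));
        }
        // Same fallback as the quoted code: absence of any Flink connector
        // property means the table is assumed to be a native Hive table.
        return !parameters.containsKey("flink.connector")
                && !parameters.containsKey("flink.connector.type");
    }

    public static void main(String[] args) {
        System.out.println(isHiveTable("VIRTUAL_VIEW", new HashMap<>()));  // false
        System.out.println(isHiveTable("MANAGED_TABLE", new HashMap<>())); // true
    }
}
```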





[jira] [Comment Edited] (FLINK-22853) Flink SQL functions max/min/sum return duplicated records

2021-06-06 Thread Raypon Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358325#comment-17358325
 ] 

Raypon Wang edited comment on FLINK-22853 at 6/7/21, 3:01 AM:
--

!image-2021-06-07-11-00-05-608.png!

!image-2021-06-07-11-00-49-109.png!

 

!image-2021-06-07-11-01-02-389.png!

 

 

 


was (Author: raypon.wang):
!image-2021-06-07-11-00-05-608.png!

> Flink SQL functions max/min/sum return duplicated records
> -
>
> Key: FLINK-22853
> URL: https://issues.apache.org/jira/browse/FLINK-22853
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.1
>Reporter: Raypon Wang
>Priority: Major
> Attachments: image-2021-06-07-11-00-05-608.png, 
> image-2021-06-07-11-00-49-109.png, image-2021-06-07-11-01-02-389.png
>
>
> mysql data:
> id    offset
> 1      1
> 1      3
> 1      2
> FlinkSQL code (using flink-connector-jdbc_2.12:1.12.1):
>  
> object FlinkSqlOnJdbcForMysql {
>   def main(args: Array[String]): Unit = {
>     val settings =
>       EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build()
>     val tableEnvironment = TableEnvironment.create(settings)
>     tableEnvironment.executeSql("" +
>       "CREATE TABLE test (" +
>       " id BIGINT," +
>       " `offset` INT," +
>       " PRIMARY KEY (id) NOT ENFORCED" +
>       ") WITH (" +
>       " 'connector' = 'jdbc'," +
>       " 'driver' = 'com.mysql.cj.jdbc.Driver'," +
>       " 'url' = 'jdbc:mysql://127.0.0.1:3306/test?=Asia/Shanghai'," +
>       " 'username' = 'root'," +
>       " 'password' = 'Project.03'," +
>       " 'table-name' = 'test'," +
>       " 'scan.fetch-size' = '1000'," +
>       " 'scan.auto-commit' = 'true'" +
>       ")")
>     tableEnvironment.executeSql(
>       "select id,max(`offset`) from test group by id").print()
>   }
> }
>  
> result:
> +----+--------+
> | id | EXPR$1 |
> +----+--------+
> |  1 |      1 |
> |  1 |      3 |
> |  1 |      2 |
> +----+--------+
> The result of max/min/sum contains duplicated records,
> but avg/count/last_value/first_value works as expected.
>  
>  
>  
>  





[jira] [Commented] (FLINK-22853) Flink SQL functions max/min/sum return duplicated records

2021-06-06 Thread Raypon Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358325#comment-17358325
 ] 

Raypon Wang commented on FLINK-22853:
-

!image-2021-06-07-11-00-05-608.png!

> Flink SQL functions max/min/sum return duplicated records
> -
>
> Key: FLINK-22853
> URL: https://issues.apache.org/jira/browse/FLINK-22853
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.1
>Reporter: Raypon Wang
>Priority: Major
> Attachments: image-2021-06-07-11-00-05-608.png
>
>
> mysql data:
> id    offset
> 1      1
> 1      3
> 1      2
> FlinkSQL code (using flink-connector-jdbc_2.12:1.12.1):
>  
> object FlinkSqlOnJdbcForMysql {
>   def main(args: Array[String]): Unit = {
>     val settings =
>       EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build()
>     val tableEnvironment = TableEnvironment.create(settings)
>     tableEnvironment.executeSql("" +
>       "CREATE TABLE test (" +
>       " id BIGINT," +
>       " `offset` INT," +
>       " PRIMARY KEY (id) NOT ENFORCED" +
>       ") WITH (" +
>       " 'connector' = 'jdbc'," +
>       " 'driver' = 'com.mysql.cj.jdbc.Driver'," +
>       " 'url' = 'jdbc:mysql://127.0.0.1:3306/test?=Asia/Shanghai'," +
>       " 'username' = 'root'," +
>       " 'password' = 'Project.03'," +
>       " 'table-name' = 'test'," +
>       " 'scan.fetch-size' = '1000'," +
>       " 'scan.auto-commit' = 'true'" +
>       ")")
>     tableEnvironment.executeSql(
>       "select id,max(`offset`) from test group by id").print()
>   }
> }
>  
> result:
> +----+--------+
> | id | EXPR$1 |
> +----+--------+
> |  1 |      1 |
> |  1 |      3 |
> |  1 |      2 |
> +----+--------+
> The result of max/min/sum contains duplicated records,
> but avg/count/last_value/first_value works as expected.
>  
>  
>  
>  





[jira] [Updated] (FLINK-22853) Flink SQL functions max/min/sum return duplicated records

2021-06-06 Thread Raypon Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raypon Wang updated FLINK-22853:

Attachment: image-2021-06-07-11-00-05-608.png

> Flink SQL functions max/min/sum return duplicated records
> -
>
> Key: FLINK-22853
> URL: https://issues.apache.org/jira/browse/FLINK-22853
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.1
>Reporter: Raypon Wang
>Priority: Major
> Attachments: image-2021-06-07-11-00-05-608.png
>
>
> mysql data:
> id    offset
> 1      1
> 1      3
> 1      2
> FlinkSQL code (using flink-connector-jdbc_2.12:1.12.1):
>  
> object FlinkSqlOnJdbcForMysql {
>   def main(args: Array[String]): Unit = {
>     val settings =
>       EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build()
>     val tableEnvironment = TableEnvironment.create(settings)
>     tableEnvironment.executeSql("" +
>       "CREATE TABLE test (" +
>       " id BIGINT," +
>       " `offset` INT," +
>       " PRIMARY KEY (id) NOT ENFORCED" +
>       ") WITH (" +
>       " 'connector' = 'jdbc'," +
>       " 'driver' = 'com.mysql.cj.jdbc.Driver'," +
>       " 'url' = 'jdbc:mysql://127.0.0.1:3306/test?=Asia/Shanghai'," +
>       " 'username' = 'root'," +
>       " 'password' = 'Project.03'," +
>       " 'table-name' = 'test'," +
>       " 'scan.fetch-size' = '1000'," +
>       " 'scan.auto-commit' = 'true'" +
>       ")")
>     tableEnvironment.executeSql(
>       "select id,max(`offset`) from test group by id").print()
>   }
> }
>  
> result:
> +----+--------+
> | id | EXPR$1 |
> +----+--------+
> |  1 |      1 |
> |  1 |      3 |
> |  1 |      2 |
> +----+--------+
> The result of max/min/sum contains duplicated records,
> but avg/count/last_value/first_value works as expected.
>  
>  
>  
>  





[jira] [Commented] (FLINK-22889) JdbcExactlyOnceSinkE2eTest hangs on azure

2021-06-06 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358324#comment-17358324
 ] 

Xintong Song commented on FLINK-22889:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18707=logs=c91190b6-40ae-57b2-5999-31b869b0a7c1=43529380-51b4-5e90-5af4-2dccec0ef402=19989

> JdbcExactlyOnceSinkE2eTest hangs on azure
> -
>
> Key: FLINK-22889
> URL: https://issues.apache.org/jira/browse/FLINK-22889
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18690=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=16658





[jira] [Commented] (FLINK-22886) Thread leak in RocksDBStateUploader

2021-06-06 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358323#comment-17358323
 ] 

Yun Tang commented on FLINK-22886:
--

Thanks for reporting this, would you like to fix this [~wind_ljy]?

> Thread leak in RocksDBStateUploader
> ---
>
> Key: FLINK-22886
> URL: https://issues.apache.org/jira/browse/FLINK-22886
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.11.3, 1.13.1, 1.12.4
>Reporter: Jiayi Liao
>Priority: Major
>  Labels: critical
> Attachments: image-2021-06-06-13-46-34-604.png
>
>
> {{ExecutorService}} in {{RocksDBStateUploader}} is not shut down, which may 
> leak threads when tasks fail.
> BTW, we should name the thread group in {{ExecutorService}}, otherwise what 
> we see in the stack is a lot of threads named pool-m-thread-n, like this:
>  
> !image-2021-06-06-13-46-34-604.png!





[jira] [Commented] (FLINK-22889) JdbcExactlyOnceSinkE2eTest hangs on azure

2021-06-06 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358322#comment-17358322
 ] 

Xintong Song commented on FLINK-22889:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18707=logs=e9af9cde-9a65-5281-a58e-2c8511d36983=b6c4efed-9c7d-55ea-03a9-9bd7d5b08e4c=13502

> JdbcExactlyOnceSinkE2eTest hangs on azure
> -
>
> Key: FLINK-22889
> URL: https://issues.apache.org/jira/browse/FLINK-22889
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18690=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=16658





[GitHub] [flink] flinkbot edited a comment on pull request #16060: [FLINK-22860][table sql-client] Supplement 'HELP' command prompt message for SQL-Cli.

2021-06-06 Thread GitBox


flinkbot edited a comment on pull request #16060:
URL: https://github.com/apache/flink/pull/16060#issuecomment-853560762


   
   ## CI report:
   
   * 4e5966fdfd351d5acb2ce04747d66b3f540cb12f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18686)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18694)
 
   * 648a118444caeafa957eac1b7f1767b002dcf9e1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-22889) JdbcExactlyOnceSinkE2eTest hangs on azure

2021-06-06 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358320#comment-17358320
 ] 

Xintong Song commented on FLINK-22889:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18707=logs=961f8f81-6b52-53df-09f6-7291a2e4af6a=60581941-0138-53c0-39fe-86d62be5f407=16599

> JdbcExactlyOnceSinkE2eTest hangs on azure
> -
>
> Key: FLINK-22889
> URL: https://issues.apache.org/jira/browse/FLINK-22889
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18690=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=16658





[jira] [Commented] (FLINK-22889) JdbcExactlyOnceSinkE2eTest hangs on azure

2021-06-06 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358321#comment-17358321
 ] 

Xintong Song commented on FLINK-22889:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18707=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=20829

> JdbcExactlyOnceSinkE2eTest hangs on azure
> -
>
> Key: FLINK-22889
> URL: https://issues.apache.org/jira/browse/FLINK-22889
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18690=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=16658





[jira] [Commented] (FLINK-22625) FileSinkMigrationITCase unstable

2021-06-06 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358319#comment-17358319
 ] 

Xintong Song commented on FLINK-22625:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18707=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=420bd9ec-164e-562e-8947-0dacde3cec91=21736

> FileSinkMigrationITCase unstable
> 
>
> Key: FLINK-22625
> URL: https://issues.apache.org/jira/browse/FLINK-22625
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17817=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=420bd9ec-164e-562e-8947-0dacde3cec91=22179
> {code}
> May 11 00:43:40 Caused by: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> triggering task Sink: Unnamed (1/3) of job 733a4777cca170f86724832642e2a8b1 
> has not being executed at the moment. Aborting checkpoint. Failure reason: 
> Not all required tasks are currently running.
> May 11 00:43:40   at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.checkTasksStarted(DefaultCheckpointPlanCalculator.java:152)
> May 11 00:43:40   at 
> org.apache.flink.runtime.checkpoint.DefaultCheckpointPlanCalculator.lambda$calculateCheckpointPlan$1(DefaultCheckpointPlanCalculator.java:114)
> May 11 00:43:40   at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
> May 11 00:43:40   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> May 11 00:43:40   at 
> scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> May 11 00:43:40   at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> May 11 00:43:40   at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> May 11 00:43:40   at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
> May 11 00:43:40   at 
> akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> May 11 00:43:40   at 
> akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> May 11 00:43:40   at akka.actor.ActorCell.invoke(ActorCell.scala:561)
> May 11 00:43:40   at 
> akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
> May 11 00:43:40   at akka.dispatch.Mailbox.run(Mailbox.scala:225)
> May 11 00:43:40   at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> May 11 00:43:40   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14861: [FLINK-21088][runtime][checkpoint] Pass the flag about whether a operator is fully finished on recovery

2021-06-06 Thread GitBox


flinkbot edited a comment on pull request #14861:
URL: https://github.com/apache/flink/pull/14861#issuecomment-773094467


   
   ## CI report:
   
   * 9e2a17b6c36cd4a6c54716e973cabff190791695 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14427)
 
   * b460330c657cf91a570f2f6235fd58e218656972 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-22884) Select view columns fail when store metadata with hive

2021-06-06 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358316#comment-17358316
 ] 

Rui Li commented on FLINK-22884:


Thanks [~ELLEX_SHEN] for reporting the issue. Let me know if you want to fix it.

> Select view columns fail when store metadata with hive
> --
>
> Key: FLINK-22884
> URL: https://issues.apache.org/jira/browse/FLINK-22884
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.1
>Reporter: ELLEX_SHEN
>Priority: Major
>
> I am using Hive for Flink metadata, so a view table is mismatched to a Hive 
> table after CREATE VIEW. I found a bug in HiveCatalog.class: all view tables 
> are unexpectedly marked as Hive tables by default.
> After being stored in the Hive metastore, a view table has neither the 
> "is_generic" nor the "connector" property.
> The bug is here:
> {code:java}
> @VisibleForTesting
> public Table getHiveTable(ObjectPath tablePath) throws TableNotExistException {
>     try {
>         Table table = this.client.getTable(tablePath.getDatabaseName(), tablePath.getObjectName());
>         boolean isHiveTable;
>         if (table.getParameters().containsKey("is_generic")) {
>             isHiveTable = !Boolean.parseBoolean((String) table.getParameters().remove("is_generic"));
>         } else {
>             isHiveTable = !table.getParameters().containsKey("flink." + FactoryUtil.CONNECTOR.key())
>                     && !table.getParameters().containsKey("flink.connector.type");
>         }
>         if (isHiveTable) {
>             table.getParameters().put(FactoryUtil.CONNECTOR.key(), "hive");
>         }
>         return table;
>     } catch (NoSuchObjectException e) {
>         throw new TableNotExistException(this.getName(), tablePath);
>     } catch (TException e) {
>         throw new CatalogException(
>                 String.format("Failed to get table %s from Hive metastore", tablePath.getFullName()), e);
>     }
> }
> {code}
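To illustrate the misclassification, here is a minimal standalone sketch of the decision logic described above. The class and method names are hypothetical (the real logic lives in `HiveCatalog#getHiveTable` and uses `FactoryUtil.CONNECTOR.key()`, which resolves to `"connector"`); this only reproduces the branch that makes a property-less view fall through to "Hive table".

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper, not Flink's actual class: a table is treated as a Hive
// table unless "is_generic" says otherwise or a Flink connector property is set.
public class HiveTableCheck {

    public static boolean isHiveTable(Map<String, String> params) {
        if (params.containsKey("is_generic")) {
            return !Boolean.parseBoolean(params.remove("is_generic"));
        }
        // A view stored without either property falls through to "true" here,
        // which is the unexpected default the reporter describes.
        return !params.containsKey("flink.connector")
                && !params.containsKey("flink.connector.type");
    }

    public static void main(String[] args) {
        // a view persisted with no Flink properties
        Map<String, String> viewParams = new HashMap<>();
        System.out.println(isHiveTable(viewParams)); // prints "true": misclassified as a Hive table
    }
}
```

A fix would need views to be persisted with (or recognized by) a distinguishing property so this default does not apply to them.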





[jira] [Commented] (FLINK-22891) FineGrainedSlotManagerDefaultResourceAllocationStrategyITCase fails on azure

2021-06-06 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358318#comment-17358318
 ] 

Yangze Guo commented on FLINK-22891:


I've run this test over 10,000 times in my local env. My gut feeling is that it 
was just scheduled to an overloaded machine. Let's see how frequently this 
issue occurs.

> FineGrainedSlotManagerDefaultResourceAllocationStrategyITCase fails on azure
> 
>
> Key: FLINK-22891
> URL: https://issues.apache.org/jira/browse/FLINK-22891
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18700=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=05b74a19-4ee4-5036-c46f-ada307df6cf0=8660
> {code}
> Jun 05 21:16:00 [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 6.24 s <<< FAILURE! - in 
> org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManagerDefaultResourceAllocationStrategyITCase
> Jun 05 21:16:00 [ERROR] 
> testResourceCanBeAllocatedForDifferentJobWithDeclarationBeforeSlotFree(org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManagerDefaultResourceAllocationStrategyITCase)
>   Time elapsed: 5.015 s  <<< ERROR!
> Jun 05 21:16:00 java.util.concurrent.TimeoutException
> Jun 05 21:16:00   at 
> java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
> Jun 05 21:16:00   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
> Jun 05 21:16:00   at 
> org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManagerTestBase.assertFutureCompleteAndReturn(FineGrainedSlotManagerTestBase.java:121)
> Jun 05 21:16:00   at 
> org.apache.flink.runtime.resourcemanager.slotmanager.AbstractFineGrainedSlotManagerITCase$4.lambda$new$4(AbstractFineGrainedSlotManagerITCase.java:374)
> Jun 05 21:16:00   at 
> org.apache.flink.runtime.resourcemanager.slotmanager.FineGrainedSlotManagerTestBase$Context.runTest(FineGrainedSlotManagerTestBase.java:212)
> Jun 05 21:16:00   at 
> org.apache.flink.runtime.resourcemanager.slotmanager.AbstractFineGrainedSlotManagerITCase$4.(AbstractFineGrainedSlotManagerITCase.java:310)
> Jun 05 21:16:00   at 
> org.apache.flink.runtime.resourcemanager.slotmanager.AbstractFineGrainedSlotManagerITCase.testResourceCanBeAllocatedForDifferentJobAfterFree(AbstractFineGrainedSlotManagerITCase.java:308)
> Jun 05 21:16:00   at 
> org.apache.flink.runtime.resourcemanager.slotmanager.AbstractFineGrainedSlotManagerITCase.testResourceCanBeAllocatedForDifferentJobWithDeclarationBeforeSlotFree(AbstractFineGrainedSlotManagerITCase.java:262)
> Jun 05 21:16:00   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 05 21:16:00   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 05 21:16:00   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 05 21:16:00   at java.lang.reflect.Method.invoke(Method.java:498)
> Jun 05 21:16:00   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jun 05 21:16:00   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jun 05 21:16:00   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jun 05 21:16:00   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jun 05 21:16:00   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> Jun 05 21:16:00   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> Jun 05 21:16:00   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Jun 05 21:16:00   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Jun 05 21:16:00   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Jun 05 21:16:00   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Jun 05 21:16:00   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Jun 05 21:16:00   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Jun 05 21:16:00   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Jun 05 21:16:00   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Jun 05 21:16:00   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Jun 05 21:16:00   at 
> 

[jira] [Commented] (FLINK-22889) JdbcExactlyOnceSinkE2eTest hangs on azure

2021-06-06 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358317#comment-17358317
 ] 

Xintong Song commented on FLINK-22889:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18707=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=19861

> JdbcExactlyOnceSinkE2eTest hangs on azure
> -
>
> Key: FLINK-22889
> URL: https://issues.apache.org/jira/browse/FLINK-22889
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18690=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51=16658





[jira] [Assigned] (FLINK-22894) Window Top-N should allow n=1

2021-06-06 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-22894:


Assignee: JING ZHANG

> Window Top-N should allow n=1
> -
>
> Key: FLINK-22894
> URL: https://issues.apache.org/jira/browse/FLINK-22894
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.13.1
>Reporter: David Anderson
>Assignee: JING ZHANG
>Priority: Major
>
> I tried to reimplement the Hourly Tips exercise from the DataStream training 
> using Flink SQL. The objective of this exercise is to find the one taxi 
> driver who earned the most in tips during each hour, and report that driver's 
> driverId and the sum of their tips. 
> This can be expressed as a window top-n query, where n=1, as in
> {code:sql}
> FROM (
>   SELECT *, ROW_NUMBER() OVER (PARTITION BY window_start, window_end ORDER BY sumOfTips DESC) as rownum
>   FROM (
>     SELECT driverId, window_start, window_end, sum(tip) as sumOfTips
>     FROM TABLE(
>       TUMBLE(TABLE fares, DESCRIPTOR(startTime), INTERVAL '1' HOUR))
>     GROUP BY driverId, window_start, window_end
>   )
> ) WHERE rownum = 1;
> {code}
>  
> This fails because the {{WindowRankOperatorBuilder}} insists on {{rankEnd > 1}}. 
> In other words, while it is possible to report the top 2 drivers, or the 
> driver in 2nd place, it is not possible to report only the top driver.
> This appears to be an off-by-one error in the range checking.
>  
>  
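The off-by-one described above can be sketched with a tiny range-check class. The names here are hypothetical (the actual precondition lives in `WindowRankOperatorBuilder`); the point is that a top-1 query requires the check to accept `rankEnd == 1`, i.e. `rankEnd >= 1` rather than `rankEnd > 1`.

```java
// Hypothetical sketch of the rank-range precondition, not Flink's actual code.
public final class RankRange {

    final long rankStart;
    final long rankEnd;

    RankRange(long rankStart, long rankEnd) {
        // The off-by-one variant "rankEnd > 1" would reject the top-1 case below.
        if (rankStart < 1 || rankEnd < rankStart) {
            throw new IllegalArgumentException(
                    "rank range must satisfy 1 <= rankStart <= rankEnd");
        }
        this.rankStart = rankStart;
        this.rankEnd = rankEnd;
    }

    public static void main(String[] args) {
        RankRange top1 = new RankRange(1, 1); // "WHERE rownum = 1" maps to this
        System.out.println(top1.rankEnd);     // prints 1
    }
}
```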





[jira] [Commented] (FLINK-22894) Window Top-N should allow n=1

2021-06-06 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358315#comment-17358315
 ] 

Jingsong Lee commented on FLINK-22894:
--

[~alpinegizmo] Thanks! [~qingru zhang] Assigned to you~

> Window Top-N should allow n=1
> -
>
> Key: FLINK-22894
> URL: https://issues.apache.org/jira/browse/FLINK-22894
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.13.1
>Reporter: David Anderson
>Priority: Major
>
> I tried to reimplement the Hourly Tips exercise from the DataStream training 
> using Flink SQL. The objective of this exercise is to find the one taxi 
> driver who earned the most in tips during each hour, and report that driver's 
> driverId and the sum of their tips. 
> This can be expressed as a window top-n query, where n=1, as in
> {code:sql}
> FROM (
>   SELECT *, ROW_NUMBER() OVER (PARTITION BY window_start, window_end ORDER BY sumOfTips DESC) as rownum
>   FROM (
>     SELECT driverId, window_start, window_end, sum(tip) as sumOfTips
>     FROM TABLE(
>       TUMBLE(TABLE fares, DESCRIPTOR(startTime), INTERVAL '1' HOUR))
>     GROUP BY driverId, window_start, window_end
>   )
> ) WHERE rownum = 1;
> {code}
>  
> This fails because the {{WindowRankOperatorBuilder}} insists on {{rankEnd > 1}}. 
> In other words, while it is possible to report the top 2 drivers, or the 
> driver in 2nd place, it is not possible to report only the top driver.
> This appears to be an off-by-one error in the range checking.
>  
>  





[jira] [Commented] (FLINK-22673) Add document about add jar related commands

2021-06-06 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358313#comment-17358313
 ] 

Shengkai Fang commented on FLINK-22673:
---

Please assign it to me, [~jark].

> Add document about add jar related commands
> ---
>
> Key: FLINK-22673
> URL: https://issues.apache.org/jira/browse/FLINK-22673
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / Client
>Reporter: Shengkai Fang
>Priority: Major
> Fix For: 1.14.0
>
>
> Including {{ADD JAR}}, {{SHOW JAR}}, {{REMOVE JAR}}. 





[GitHub] [flink] lirui-apache commented on pull request #15939: [FLINK-15390][Connectors/ORC][Connectors/Hive]List/Map/Struct types support for vectorized orc reader

2021-06-06 Thread GitBox


lirui-apache commented on pull request #15939:
URL: https://github.com/apache/flink/pull/15939#issuecomment-855537630


   > @lirui-apache @wangwei1025 Do we need to update the documentation and the Hive reader?
   
   Yes, we can have a separate PR for that. @wangwei1025 do you want to work on 
it?






[jira] [Commented] (FLINK-22890) Few tests fail in HiveTableSinkITCase

2021-06-06 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358310#comment-17358310
 ] 

Rui Li commented on FLINK-22890:


The root cause seems to be that {{testStreamingSinkWithTimestampLtzWatermark}} 
times out and fails to clean up the test DB. I'll look into that.

> Few tests fail in HiveTableSinkITCase
> -
>
> Key: FLINK-22890
> URL: https://issues.apache.org/jira/browse/FLINK-22890
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Ecosystem
>Affects Versions: 1.13.1
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18692=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=420bd9ec-164e-562e-8947-0dacde3cec91=23189
> {code}
> Jun 05 01:22:13 [ERROR] Errors: 
> Jun 05 01:22:13 [ERROR]   HiveTableSinkITCase.testBatchAppend:138 » 
> Validation Could not execute CREATE ...
> Jun 05 01:22:13 [ERROR]   
> HiveTableSinkITCase.testDefaultSerPartStreamingWrite:156->testStreamingWrite:494
>  » Validation
> Jun 05 01:22:13 [ERROR]   
> HiveTableSinkITCase.testHiveTableSinkWithParallelismInStreaming:100->testHiveTableSinkWithParallelismBase:108
>  » Validation
> Jun 05 01:22:13 [ERROR]   
> HiveTableSinkITCase.testPartStreamingMrWrite:179->testStreamingWrite:423 » 
> Validation
> Jun 05 01:22:13 [ERROR]   
> HiveTableSinkITCase.testStreamingSinkWithTimestampLtzWatermark:360->fetchRows:384
>  » TestTimedOut
> {code}





[jira] [Commented] (FLINK-22894) Window Top-N should allow n=1

2021-06-06 Thread JING ZHANG (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358309#comment-17358309
 ] 

JING ZHANG commented on FLINK-22894:


[~alpinegizmo] Thanks for reporting the bug. I would like to fix it soon.

> Window Top-N should allow n=1
> -
>
> Key: FLINK-22894
> URL: https://issues.apache.org/jira/browse/FLINK-22894
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.13.1
>Reporter: David Anderson
>Priority: Major
>
> I tried to reimplement the Hourly Tips exercise from the DataStream training 
> using Flink SQL. The objective of this exercise is to find the one taxi 
> driver who earned the most in tips during each hour, and report that driver's 
> driverId and the sum of their tips. 
> This can be expressed as a window top-n query, where n=1, as in
> {code:sql}
> FROM (
>   SELECT *, ROW_NUMBER() OVER (PARTITION BY window_start, window_end ORDER BY sumOfTips DESC) as rownum
>   FROM (
>     SELECT driverId, window_start, window_end, sum(tip) as sumOfTips
>     FROM TABLE(
>       TUMBLE(TABLE fares, DESCRIPTOR(startTime), INTERVAL '1' HOUR))
>     GROUP BY driverId, window_start, window_end
>   )
> ) WHERE rownum = 1;
> {code}
>  
> This fails because the {{WindowRankOperatorBuilder}} insists on {{rankEnd > 1}}. 
> In other words, while it is possible to report the top 2 drivers, or the 
> driver in 2nd place, it is not possible to report only the top driver.
> This appears to be an off-by-one error in the range checking.
>  
>  





[jira] [Commented] (FLINK-22702) KafkaSourceITCase.testRedundantParallelism failed

2021-06-06 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358308#comment-17358308
 ] 

Xintong Song commented on FLINK-22702:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18709=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=0db94045-2aa0-53fa-f444-0130d6933518=7893

> KafkaSourceITCase.testRedundantParallelism failed
> -
>
> Key: FLINK-22702
> URL: https://issues.apache.org/jira/browse/FLINK-22702
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.12.3
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18107=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=80a658d1-f7f6-5d93-2758-53ac19fd5b19=6847
> {code:java}
> Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
> exception
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:199)
>   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
>   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
>   at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:275)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:67)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:398)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:619)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:583)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: SplitFetcher thread 0 received 
> unexpected exception while polling the records
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:146)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:101)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   ... 1 more
> Caused by: java.lang.IllegalStateException: Consumer is not subscribed to any 
> topics or assigned any partitions
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1223)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.fetch(KafkaPartitionSplitReader.java:97)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:56)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:138)
>   ... 6 more
> {code}





[GitHub] [flink] JingsongLi commented on pull request #15939: [FLINK-15390][Connectors/ORC][Connectors/Hive]List/Map/Struct types support for vectorized orc reader

2021-06-06 Thread GitBox


JingsongLi commented on pull request #15939:
URL: https://github.com/apache/flink/pull/15939#issuecomment-855533615


   @lirui-apache @wangwei1025 Do we need to update the documentation and the Hive reader?






[jira] [Updated] (FLINK-22067) SavepointWindowReaderITCase.testApplyEvictorWindowStateReader

2021-06-06 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-22067:
-
Affects Version/s: 1.13.1

> SavepointWindowReaderITCase.testApplyEvictorWindowStateReader
> -
>
> Key: FLINK-22067
> URL: https://issues.apache.org/jira/browse/FLINK-22067
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Till Rohrmann
>Assignee: Roman Khachatryan
>Priority: Blocker
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
> Fix For: 1.14.0
>
> Attachments: isolated_logs_builD_9072.log
>
>
> The test case 
> {{SavepointWindowReaderITCase.testApplyEvictorWindowStateReader}} failed on 
> AZP with:
> {code}
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
>   at 
> org.apache.flink.state.api.utils.SavepointTestBase.takeSavepoint(SavepointTestBase.java:69)
>   ... 33 more
> Caused by: java.util.concurrent.TimeoutException: Invocation of public 
> default java.util.concurrent.CompletableFuture 
> org.apache.flink.runtime.webmonitor.RestfulGateway.triggerSavepoint(org.apache.flink.api.common.JobID,java.lang.String,boolean,org.apache.flink.api.common.time.Time)
>  timed out.
>   at com.sun.proxy.$Proxy32.triggerSavepoint(Unknown Source)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.lambda$triggerSavepoint$8(MiniCluster.java:716)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture.uniApplyStage(CompletableFuture.java:628)
>   at 
> java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:1996)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:751)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.triggerSavepoint(MiniCluster.java:714)
>   at 
> org.apache.flink.client.program.MiniClusterClient.triggerSavepoint(MiniClusterClient.java:101)
>   at 
> org.apache.flink.state.api.utils.SavepointTestBase.triggerSavepoint(SavepointTestBase.java:93)
>   at 
> org.apache.flink.state.api.utils.SavepointTestBase.lambda$takeSavepoint$0(SavepointTestBase.java:68)
>   at 
> java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:966)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1646)
>   at 
> java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1632)
>   at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>   at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> Caused by: akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/rpc/dispatcher_2#-390276455]] after [1 ms]. 
> Message of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A 
> typical reason for `AskTimeoutException` is that the recipient actor didn't 
> send a reply.
>   at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
>   at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
>   at 
> akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:648)
>   at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:205)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>   at 
> scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
>   at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=15809=logs=b2f046ab-ae17-5406-acdc-240be7e870e4=93e5ae06-d194-513d-ba8d-150ef6da1d7c=9197





[jira] [Reopened] (FLINK-22067) SavepointWindowReaderITCase.testApplyEvictorWindowStateReader

2021-06-06 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reopened FLINK-22067:
--

There's an instance with exactly the same error stack on the 1.13 branch.
Reopening the ticket and adding 1.13 back as an affected version.

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18709=logs=a8bc9173-2af6-5ba8-775c-12063b4f1d54=46a16c18-c679-5905-432b-9be5d8e27bc6=10183

> SavepointWindowReaderITCase.testApplyEvictorWindowStateReader
> -
>
> Key: FLINK-22067
> URL: https://issues.apache.org/jira/browse/FLINK-22067
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Affects Versions: 1.14.0
>Reporter: Till Rohrmann
>Assignee: Roman Khachatryan
>Priority: Blocker
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
> Fix For: 1.14.0
>
> Attachments: isolated_logs_builD_9072.log
>
>
> The test case 
> {{SavepointWindowReaderITCase.testApplyEvictorWindowStateReader}} failed on 
> AZP with:
> {code}
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
>   at 
> org.apache.flink.state.api.utils.SavepointTestBase.takeSavepoint(SavepointTestBase.java:69)
>   ... 33 more
> Caused by: java.util.concurrent.TimeoutException: Invocation of public 
> default java.util.concurrent.CompletableFuture 
> org.apache.flink.runtime.webmonitor.RestfulGateway.triggerSavepoint(org.apache.flink.api.common.JobID,java.lang.String,boolean,org.apache.flink.api.common.time.Time)
>  timed out.
>   at com.sun.proxy.$Proxy32.triggerSavepoint(Unknown Source)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.lambda$triggerSavepoint$8(MiniCluster.java:716)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture.uniApplyStage(CompletableFuture.java:628)
>   at 
> java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:1996)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.runDispatcherCommand(MiniCluster.java:751)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.triggerSavepoint(MiniCluster.java:714)
>   at 
> org.apache.flink.client.program.MiniClusterClient.triggerSavepoint(MiniClusterClient.java:101)
>   at 
> org.apache.flink.state.api.utils.SavepointTestBase.triggerSavepoint(SavepointTestBase.java:93)
>   at 
> org.apache.flink.state.api.utils.SavepointTestBase.lambda$takeSavepoint$0(SavepointTestBase.java:68)
>   at 
> java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:966)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1646)
>   at 
> java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1632)
>   at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>   at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> Caused by: akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/rpc/dispatcher_2#-390276455]] after [1 ms]. 
> Message of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A 
> typical reason for `AskTimeoutException` is that the recipient actor didn't 
> send a reply.
>   at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
>   at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
>   at 
> akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:648)
>   at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:205)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>   at 
> scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
>   at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
>   at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
>   at 

[jira] [Commented] (FLINK-22853) Flink SQL functions max/min/sum return duplicated records

2021-06-06 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358307#comment-17358307
 ] 

Jark Wu commented on FLINK-22853:
-

[~raypon.wang] could you check again? It seems the problem can't be reproduced.

> Flink SQL functions max/min/sum return duplicated records
> -
>
> Key: FLINK-22853
> URL: https://issues.apache.org/jira/browse/FLINK-22853
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.1
>Reporter: Raypon Wang
>Priority: Major
>
> MySQL data:
> id | offset
> ---+--------
>  1 |      1
>  1 |      3
>  1 |      2
> Flink SQL code (I used flink-connector-jdbc_2.12:1.12.1):
>
> import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}
>
> object FlinkSqlOnJdbcForMysql {
>   def main(args: Array[String]): Unit = {
>     val settings =
>       EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build()
>     val tableEnvironment = TableEnvironment.create(settings)
>     tableEnvironment.executeSql("" +
>       "CREATE TABLE test (" +
>       " id BIGINT," +
>       " `offset` INT," +
>       " PRIMARY KEY (id) NOT ENFORCED" +
>       ") WITH (" +
>       " 'connector' = 'jdbc'," +
>       " 'driver' = 'com.mysql.cj.jdbc.Driver'," +
>       " 'url' = 'jdbc:mysql://127.0.0.1:3306/test?=Asia/Shanghai'," +
>       " 'username' = 'root'," +
>       " 'password' = 'Project.03'," +
>       " 'table-name' = 'test'," +
>       " 'scan.fetch-size' = '1000'," +
>       " 'scan.auto-commit' = 'true'" +
>       ")")
>     tableEnvironment.executeSql(
>       "select id, max(`offset`) from test group by id").print()
>   }
> }
>  
> result:
> +----+--------+
> | id | EXPR$1 |
> +----+--------+
> |  1 |      1 |
> |  1 |      3 |
> |  1 |      2 |
> +----+--------+
> The result of max/min/sum contains duplicated records,
> but avg/count/last_value/first_value works as expected.
>  
>  
>  
>  
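For reference, the semantics the query expects, one output row per key holding the maximum value, can be sketched in plain Java. This is only an illustration of the expected GROUP BY result (class and method names are invented for the sketch), not Flink's execution path:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupByMaxSketch {

    /** Groups (id, offset) rows by id, keeping the maximum offset per key. */
    static Map<Integer, Integer> maxPerKey(int[][] rows) {
        return Arrays.stream(rows)
                .collect(Collectors.toMap(r -> r[0], r -> r[1], Math::max));
    }

    public static void main(String[] args) {
        // The (id, offset) rows from the report above.
        int[][] rows = {{1, 1}, {1, 3}, {1, 2}};
        // A correct GROUP BY yields exactly one row per key.
        System.out.println(maxPerKey(rows)); // prints {1=3}
    }
}
```

If max/min/sum return three rows for id = 1, the grouping key is effectively not being applied, which matches the duplication described in the report.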



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22885) Support 'SHOW COLUMNS'.

2021-06-06 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358305#comment-17358305
 ] 

Jark Wu commented on FLINK-22885:
-

Can "SHOW CREATE TABLE" meet your needs?

> Support 'SHOW COLUMNS'.
> ---
>
> Key: FLINK-22885
> URL: https://issues.apache.org/jira/browse/FLINK-22885
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Roc Marshal
>Priority: Major
>  Labels: SQL
>
> h1. Support 'SHOW COLUMNS'.
> SHOW COLUMNS ( FROM | IN ) <table_name> [LIKE <sql_like_pattern>]





[jira] [Commented] (FLINK-22457) KafkaSourceLegacyITCase.testMultipleSourcesOnePartition fails because of timeout

2021-06-06 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358304#comment-17358304
 ] 

Xintong Song commented on FLINK-22457:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18709=logs=72d4811f-9f0d-5fd0-014a-0bc26b72b642=c1d93a6a-ba91-515d-3196-2ee8019fbda7=6883

> KafkaSourceLegacyITCase.testMultipleSourcesOnePartition fails because of 
> timeout
> 
>
> Key: FLINK-22457
> URL: https://issues.apache.org/jira/browse/FLINK-22457
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17140=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=80a658d1-f7f6-5d93-2758-53ac19fd5b19=7045
> {code:java}
> Apr 24 23:47:33 [ERROR] Tests run: 21, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 174.335 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.KafkaSourceLegacyITCase
> Apr 24 23:47:33 [ERROR] 
> testMultipleSourcesOnePartition(org.apache.flink.connector.kafka.source.KafkaSourceLegacyITCase)
>   Time elapsed: 60.019 s  <<< ERROR!
> Apr 24 23:47:33 org.junit.runners.model.TestTimedOutException: test timed out 
> after 60000 milliseconds
> Apr 24 23:47:33   at sun.misc.Unsafe.park(Native Method)
> Apr 24 23:47:33   at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> Apr 24 23:47:33   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> Apr 24 23:47:33   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> Apr 24 23:47:33   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> Apr 24 23:47:33   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Apr 24 23:47:33   at 
> org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:49)
> Apr 24 23:47:33   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runMultipleSourcesOnePartitionExactlyOnceTest(KafkaConsumerTestBase.java:1112)
> Apr 24 23:47:33   at 
> org.apache.flink.connector.kafka.source.KafkaSourceLegacyITCase.testMultipleSourcesOnePartition(KafkaSourceLegacyITCase.java:87)
> Apr 24 23:47:33   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Apr 24 23:47:33   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 24 23:47:33   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 24 23:47:33   at java.lang.reflect.Method.invoke(Method.java:498)
> Apr 24 23:47:33   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Apr 24 23:47:33   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 24 23:47:33   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Apr 24 23:47:33   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Apr 24 23:47:33   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Apr 24 23:47:33   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> Apr 24 23:47:33   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Apr 24 23:47:33   at java.lang.Thread.run(Thread.java:748)
> {code}





[jira] [Closed] (FLINK-17707) Support configuring replica of Deployment based HA setups

2021-06-06 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-17707.

Fix Version/s: 1.14.0
   Resolution: Done

Merged via
- master (1.14): 5fe3f5bbc78df4e932113cfcedaf76567eff8810

> Support configuring replica of Deployment based HA setups
> -
>
> Key: FLINK-17707
> URL: https://issues.apache.org/jira/browse/FLINK-17707
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / Kubernetes, Runtime / Coordination
>Reporter: Canbin Zheng
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> At the moment, in the native K8s setups, we hard-code the replica count of the
> Deployment to 1. However, when users enable the ZooKeeper
> HighAvailabilityServices, they would also like to configure the replica count
> of the Deployment for faster failover.
> This ticket proposes to make the *replica* count of the Deployment configurable
> in the ZooKeeper-based HA setups.
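A configuration sketch of what this feature enables (the replica option name follows this ticket; ZooKeeper addresses and paths are illustrative):

```yaml
# flink-conf.yaml (sketch)
kubernetes.jobmanager.replicas: 2          # introduced by this ticket; default is 1
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-1:2181,zk-2:2181,zk-3:2181
high-availability.storageDir: hdfs:///flink/ha
```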



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xintongsong closed pull request #15286: [FLINK-17707][k8s] Support configuring replicas of JobManager deployment when HA enabled

2021-06-06 Thread GitBox


xintongsong closed pull request #15286:
URL: https://github.com/apache/flink/pull/15286


   






[jira] [Closed] (FLINK-22854) Translate 'Apache Flink Documentation' index page to Chinese

2021-06-06 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-22854.
---
Fix Version/s: 1.14.0
 Assignee: Roc Marshal
   Resolution: Fixed

Fixed in master: 163690a178c2cfdae6d08012c5f63f61bc2e29ac

> Translate 'Apache Flink Documentation' index page to Chinese
> 
>
> Key: FLINK-22854
> URL: https://issues.apache.org/jira/browse/FLINK-22854
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation
>Affects Versions: 1.14.0
>Reporter: Roc Marshal
>Assignee: Roc Marshal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> target page : https://ci.apache.org/projects/flink/flink-docs-release-1.13/zh/





[jira] [Updated] (FLINK-22854) Translate 'Apache Flink Documentation' index page to Chinese

2021-06-06 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-22854:

Component/s: Documentation

> Translate 'Apache Flink Documentation' index page to Chinese
> 
>
> Key: FLINK-22854
> URL: https://issues.apache.org/jira/browse/FLINK-22854
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.14.0
>Reporter: Roc Marshal
>Assignee: Roc Marshal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> target page : https://ci.apache.org/projects/flink/flink-docs-release-1.13/zh/





[GitHub] [flink] wuchong merged pull request #16062: [FLINK-22854][docs-zh] Translate 'Apache Flink Documentation' index page to Chinese.

2021-06-06 Thread GitBox


wuchong merged pull request #16062:
URL: https://github.com/apache/flink/pull/16062


   






[jira] [Commented] (FLINK-22884) Select view columns fail when store metadata with hive

2021-06-06 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358296#comment-17358296
 ] 

Jark Wu commented on FLINK-22884:
-

cc [~lirui]

> Select view columns fail when store metadata with hive
> --
>
> Key: FLINK-22884
> URL: https://issues.apache.org/jira/browse/FLINK-22884
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.1
>Reporter: ELLEX_SHEN
>Priority: Major
>
> I am using Hive for Flink metadata, so selecting from a view mismatches it to
> a Hive table after the view is created. I found a bug in HiveCatalog.class:
> all view tables are unexpectedly marked as Hive tables by default.
> After being stored in the Hive metastore, a view table carries neither the
> "is_generic" nor the "connector" property.
> The bug is here:
>  @VisibleForTesting
> public Table getHiveTable(ObjectPath tablePath) throws TableNotExistException {
>     try {
>         Table table =
>                 this.client.getTable(tablePath.getDatabaseName(), tablePath.getObjectName());
>         boolean isHiveTable;
>         if (table.getParameters().containsKey("is_generic")) {
>             isHiveTable =
>                     !Boolean.parseBoolean((String) table.getParameters().remove("is_generic"));
>         } else {
>             // A view stored without these properties also ends up here ...
>             isHiveTable =
>                     !table.getParameters().containsKey("flink." + FactoryUtil.CONNECTOR.key())
>                             && !table.getParameters().containsKey("flink.connector.type");
>         }
>         if (isHiveTable) {
>             // ... and is then wrongly marked as a Hive table.
>             table.getParameters().put(FactoryUtil.CONNECTOR.key(), "hive");
>         }
>         return table;
>     } catch (NoSuchObjectException var4) {
>         throw new TableNotExistException(this.getName(), tablePath);
>     } catch (TException var5) {
>         throw new CatalogException(
>                 String.format("Failed to get table %s from Hive metastore", tablePath.getFullName()),
>                 var5);
>     }
> }
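The branching above can be isolated into a small sketch (a plain parameter map stands in for the real Hive metastore client; "flink.connector" spells out "flink." + FactoryUtil.CONNECTOR.key(); the class name is invented):

```java
import java.util.HashMap;
import java.util.Map;

public class HiveTableMarkSketch {

    /** Mirrors the isHiveTable decision in the snippet for a bare parameter map. */
    static boolean isMarkedAsHiveTable(Map<String, String> params) {
        if (params.containsKey("is_generic")) {
            return !Boolean.parseBoolean(params.remove("is_generic"));
        }
        // A view whose properties were lost in the metastore has neither key,
        // so it falls through to here and is treated as a Hive table.
        return !params.containsKey("flink.connector")
                && !params.containsKey("flink.connector.type");
    }

    public static void main(String[] args) {
        Map<String, String> viewParams = new HashMap<>(); // stored view, no properties
        System.out.println(isMarkedAsHiveTable(viewParams)); // prints true
    }
}
```

A view written without "is_generic" or a connector property therefore comes back marked with 'connector' = 'hive', which is the mismatch described above.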





[jira] [Updated] (FLINK-22884) Select view columns fail when store metadata with hive

2021-06-06 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-22884:

Component/s: (was: Table SQL / API)
 Connectors / Hive

> Select view columns fail when store metadata with hive
> --
>
> Key: FLINK-22884
> URL: https://issues.apache.org/jira/browse/FLINK-22884
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.1
>Reporter: ELLEX_SHEN
>Priority: Major
>
> I am using Hive for Flink metadata, so selecting from a view mismatches it to
> a Hive table after the view is created. I found a bug in HiveCatalog.class:
> all view tables are unexpectedly marked as Hive tables by default.
> After being stored in the Hive metastore, a view table carries neither the
> "is_generic" nor the "connector" property.
> The bug is here:
>  @VisibleForTesting
> public Table getHiveTable(ObjectPath tablePath) throws TableNotExistException {
>     try {
>         Table table =
>                 this.client.getTable(tablePath.getDatabaseName(), tablePath.getObjectName());
>         boolean isHiveTable;
>         if (table.getParameters().containsKey("is_generic")) {
>             isHiveTable =
>                     !Boolean.parseBoolean((String) table.getParameters().remove("is_generic"));
>         } else {
>             // A view stored without these properties also ends up here ...
>             isHiveTable =
>                     !table.getParameters().containsKey("flink." + FactoryUtil.CONNECTOR.key())
>                             && !table.getParameters().containsKey("flink.connector.type");
>         }
>         if (isHiveTable) {
>             // ... and is then wrongly marked as a Hive table.
>             table.getParameters().put(FactoryUtil.CONNECTOR.key(), "hive");
>         }
>         return table;
>     } catch (NoSuchObjectException var4) {
>         throw new TableNotExistException(this.getName(), tablePath);
>     } catch (TException var5) {
>         throw new CatalogException(
>                 String.format("Failed to get table %s from Hive metastore", tablePath.getFullName()),
>                 var5);
>     }
> }





[jira] [Updated] (FLINK-13203) [proper fix] Deadlock occurs when requiring exclusive buffer for RemoteInputChannel

2021-06-06 Thread Yingjie Cao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yingjie Cao updated FLINK-13203:

Labels: auto-deprioritized-critical  (was: auto-deprioritized-critical 
stale-major)

> [proper fix] Deadlock occurs when requiring exclusive buffer for 
> RemoteInputChannel
> ---
>
> Key: FLINK-13203
> URL: https://issues.apache.org/jira/browse/FLINK-13203
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.9.0
>Reporter: Piotr Nowojski
>Priority: Major
>  Labels: auto-deprioritized-critical
>
> The issue occurs when requesting exclusive buffers with a timeout. Since the
> maximum and required buffer counts currently differ for local buffer pools,
> the local buffer pools of the upstream tasks may occupy all the buffers while
> the downstream tasks fail to acquire exclusive buffers and cannot make
> progress. For 1.9, https://issues.apache.org/jira/browse/FLINK-12852 avoided
> the deadlock by adding a timeout that fails over the current execution when
> the timeout expires and tells users in the exception message to increase the
> number of buffers.
> In the discussion under https://issues.apache.org/jira/browse/FLINK-12852,
> several proper solutions were discussed, but there is no consensus yet on how
> to fix it:
> 1. Only allocate the minimum per producer, which is one buffer per channel. 
> This would be needed to keep the requirement similar to what we have at the 
> moment, but it is much less than we recommend for the credit-based network 
> data exchange (2* channels + floating)
> 2a. Coordinate the deployment sink-to-source such that receivers always have 
> their buffers first. This will be complex to implement and coordinate and 
> break with many assumptions about tasks being independent (coordination wise) 
> on the TaskManagers. Giving that assumption up will be a pretty big step and 
> cause lots of complexity in the future.
> {quote}
> It will also increase deployment delays. Low deployment delays should be a 
> design goal in my opinion, as it will enable other features more easily, like 
> low-disruption upgrades, etc.
> {quote}
> 2b. Assign extra buffers only once all of the tasks are RUNNING. This is a 
> simplified version of 2a, without tracking the tasks sink-to-source.
> 3. Make buffers always revokable, by spilling.
> This is tricky to implement very efficiently, especially because there is the 
> logic that slices buffers for early sends for the low-latency streaming stuff
> the spilling request will come from an asynchronous call. That will probably 
> stay like that even with the mailbox, because the main thread will be 
> frequently blocked on buffer allocation when this request comes.
> 4. We allocate the recommended number for good throughput (2*numChannels + 
> floating) per consumer and per producer.
> No dynamic rebalancing any more. This would increase the number of required 
> network buffers in certain high-parallelism scenarios quite a bit with the 
> default config. Users can down-configure this by setting the per-channel 
> buffers lower. But it would break user setups and require them to adjust the 
> config when upgrading.
> 5. We make the network resource per slot and ask the scheduler to attach 
> information about how many producers and how many consumers will be in the 
> slot, worst case. We use that to pre-compute how many excess buffers the 
> producers may take.
> This will also break with some assumptions and lead us to the point that we 
> have to pre-compute network buffers in the same way as managed memory. Seeing 
> how much pain it is with the managed memory, this seems not so great.
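The buffer count in option 4 is simple arithmetic; the sketch below assumes the commonly cited defaults of 2 exclusive buffers per channel and 8 floating buffers per gate (the exact defaults depend on the Flink version and configuration):

```java
public class NetworkBufferSketch {

    /** Recommended buffers for one gate: exclusive-per-channel * channels + floating. */
    static int recommendedBuffers(int numChannels, int buffersPerChannel, int floatingBuffers) {
        return buffersPerChannel * numChannels + floatingBuffers;
    }

    public static void main(String[] args) {
        // A gate with 100 input channels under the assumed defaults: 2 * 100 + 8.
        System.out.println(recommendedBuffers(100, 2, 8)); // prints 208
    }
}
```

This is why allocating the recommended number per producer and per consumer (option 4) grows quickly at high parallelism: the requirement scales linearly with the channel count of every gate.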





[jira] [Commented] (FLINK-22702) KafkaSourceITCase.testRedundantParallelism failed

2021-06-06 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358293#comment-17358293
 ] 

Xintong Song commented on FLINK-22702:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18708=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=f266c805-9429-58ed-2f9e-482e7b82f58b=6586

> KafkaSourceITCase.testRedundantParallelism failed
> -
>
> Key: FLINK-22702
> URL: https://issues.apache.org/jira/browse/FLINK-22702
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.12.3
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18107=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=80a658d1-f7f6-5d93-2758-53ac19fd5b19=6847
> {code:java}
> Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
> exception
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:199)
>   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
>   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
>   at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:275)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:67)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:398)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:619)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:583)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: SplitFetcher thread 0 received 
> unexpected exception while polling the records
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:146)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:101)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   ... 1 more
> Caused by: java.lang.IllegalStateException: Consumer is not subscribed to any 
> topics or assigned any partitions
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1223)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.fetch(KafkaPartitionSplitReader.java:97)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:56)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:138)
>   ... 6 more
> {code}





[jira] [Updated] (FLINK-18634) FlinkKafkaProducerITCase.testRecoverCommittedTransaction failed with "Timeout expired after 60000milliseconds while awaiting InitProducerId"

2021-06-06 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-18634:
-
Labels: auto-unassigned test-stability  (was: auto-unassigned stale-major 
test-stability)

> FlinkKafkaProducerITCase.testRecoverCommittedTransaction failed with "Timeout 
> expired after 60000milliseconds while awaiting InitProducerId"
> 
>
> Key: FLINK-18634
> URL: https://issues.apache.org/jira/browse/FLINK-18634
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0, 1.13.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: auto-unassigned, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4590=logs=c5f0071e-1851-543e-9a45-9ac140befc32=684b1416-4c17-504e-d5ab-97ee44e08a20
> {code}
> 2020-07-17T11:43:47.9693015Z [ERROR] Tests run: 12, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 269.399 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 2020-07-17T11:43:47.9693862Z [ERROR] 
> testRecoverCommittedTransaction(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
>   Time elapsed: 60.679 s  <<< ERROR!
> 2020-07-17T11:43:47.9694737Z org.apache.kafka.common.errors.TimeoutException: 
> org.apache.kafka.common.errors.TimeoutException: Timeout expired after 
> 60000milliseconds while awaiting InitProducerId
> 2020-07-17T11:43:47.9695376Z Caused by: 
> org.apache.kafka.common.errors.TimeoutException: Timeout expired after 
> 60000milliseconds while awaiting InitProducerId
> {code}





[jira] [Commented] (FLINK-18634) FlinkKafkaProducerITCase.testRecoverCommittedTransaction failed with "Timeout expired after 60000milliseconds while awaiting InitProducerId"

2021-06-06 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358292#comment-17358292
 ] 

Xintong Song commented on FLINK-18634:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18671=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6943

> FlinkKafkaProducerITCase.testRecoverCommittedTransaction failed with "Timeout 
> expired after 60000milliseconds while awaiting InitProducerId"
> 
>
> Key: FLINK-18634
> URL: https://issues.apache.org/jira/browse/FLINK-18634
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0, 1.13.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: auto-unassigned, stale-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4590=logs=c5f0071e-1851-543e-9a45-9ac140befc32=684b1416-4c17-504e-d5ab-97ee44e08a20
> {code}
> 2020-07-17T11:43:47.9693015Z [ERROR] Tests run: 12, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 269.399 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 2020-07-17T11:43:47.9693862Z [ERROR] 
> testRecoverCommittedTransaction(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
>   Time elapsed: 60.679 s  <<< ERROR!
> 2020-07-17T11:43:47.9694737Z org.apache.kafka.common.errors.TimeoutException: 
> org.apache.kafka.common.errors.TimeoutException: Timeout expired after 
> 60000milliseconds while awaiting InitProducerId
> 2020-07-17T11:43:47.9695376Z Caused by: 
> org.apache.kafka.common.errors.TimeoutException: Timeout expired after 
> 60000milliseconds while awaiting InitProducerId
> {code}





[GitHub] [flink] leonardBang commented on a change in pull request #16060: [FLINK-22860][table sql-client] Supplement 'HELP' command prompt message for SQL-Cli.

2021-06-06 Thread GitBox


leonardBang commented on a change in pull request #16060:
URL: https://github.com/apache/flink/pull/16060#discussion_r646222862



##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliStrings.java
##
@@ -39,82 +39,175 @@ private CliStrings() {
 public static final AttributedString MESSAGE_HELP =
 new AttributedStringBuilder()
 .append("The following commands are available:\n\n")
+// About base commands.
+.append(formatCommand("HELP", "Prints the available 
commands."))
 .append(formatCommand("CLEAR", "Clears the current 
terminal."))
+.append(formatCommand("QUIT", "Quits the SQL CLI client."))
 .append(
 formatCommand(
-"CREATE TABLE",
-"Create table under current catalog and 
database."))
+"EXIT",
+"Exits the SQL CLI client, which is the 
same as 'QUIT'."))
+// About 'CREATE' statements commands.
+.append("\n")
 .append(
 formatCommand(
-"DROP TABLE",
-"Drop table with optional catalog and 
database. Syntax: 'DROP TABLE [IF EXISTS] ;'"))
+"CREATE CATALOG",
+"Creates a catalog. Syntax: 'CREATE 
CATALOG  WITH (=, ...);'."))
+.append(
+formatCommand(
+"CREATE DATABASE",
+"Creates a database. Syntax: 'CREATE 
DATABASE [IF NOT EXISTS] [.] [COMMENT ] WITH 
(=, ...);'."))
+.append(
+formatCommand(
+"CREATE TABLE",

Review comment:
   Also give the `CREATE TABLE` syntax?

##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliStrings.java
##
@@ -39,82 +39,175 @@ private CliStrings() {
 public static final AttributedString MESSAGE_HELP =
 new AttributedStringBuilder()
 .append("The following commands are available:\n\n")
+// About base commands.
+.append(formatCommand("HELP", "Prints the available 
commands."))
 .append(formatCommand("CLEAR", "Clears the current 
terminal."))
+.append(formatCommand("QUIT", "Quits the SQL CLI client."))
 .append(
 formatCommand(
-"CREATE TABLE",
-"Create table under current catalog and 
database."))
+"EXIT",
+"Exits the SQL CLI client, which is the 
same as 'QUIT'."))
+// About 'CREATE' statements commands.
+.append("\n")
 .append(
 formatCommand(
-"DROP TABLE",
-"Drop table with optional catalog and 
database. Syntax: 'DROP TABLE [IF EXISTS] ;'"))
+"CREATE CATALOG",
+"Creates a catalog. Syntax: 'CREATE 
CATALOG  WITH (=, ...);'."))
+.append(
+formatCommand(
+"CREATE DATABASE",
+"Creates a database. Syntax: 'CREATE 
DATABASE [IF NOT EXISTS] [.] [COMMENT ] WITH 
(=, ...);'."))
+.append(
+formatCommand(
+"CREATE TABLE",
+"Creates table under current catalog and 
database."))
 .append(
 formatCommand(
 "CREATE VIEW",
-"Creates a virtual table from a SQL query. 
Syntax: 'CREATE VIEW  AS ;'"))
+"Creates a virtual table from a SQL query. 
Syntax: 'CREATE VIEW  AS ;'."))
 .append(
 formatCommand(
-"DESCRIBE",
-"Describes the schema of a table with the 
given name."))
+"CREATE FUNCTION",
+"Creates a function. Syntax: 'CREATE 
[TEMPORARY|TEMPORARY SYSTEM] FUNCTION [IF NOT EXISTS] 
[.][ .] AS  [LANGUAGE 
JAVA|SCALA|PYTHON];'."))
+// About 'DROP' statements commands.
+.append("\n")
+   

[jira] [Commented] (FLINK-22869) SQLClientSchemaRegistryITCase timeouts on azure

2021-06-06 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358291#comment-17358291
 ] 

Xintong Song commented on FLINK-22869:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18671=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=27252

> SQLClientSchemaRegistryITCase timeouts on azure
> ---
>
> Key: FLINK-22869
> URL: https://issues.apache.org/jira/browse/FLINK-22869
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.12.4
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18652=logs=68a897ab-3047-5660-245a-cce8f83859f6=16ca2cca-2f63-5cce-12d2-d519b930a729=27324
> {code}
> Jun 03 23:51:30 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 227.425 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase
> Jun 03 23:51:30 [ERROR] 
> testReading(org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase)  
> Time elapsed: 194.931 s  <<< ERROR!
> Jun 03 23:51:30 org.junit.runners.model.TestTimedOutException: test timed out 
> after 12 milliseconds
> Jun 03 23:51:30   at java.lang.Object.wait(Native Method)
> Jun 03 23:51:30   at java.lang.Thread.join(Thread.java:1252)
> Jun 03 23:51:30   at java.lang.Thread.join(Thread.java:1326)
> Jun 03 23:51:30   at 
> org.apache.kafka.clients.admin.KafkaAdminClient.close(KafkaAdminClient.java:541)
> Jun 03 23:51:30   at 
> org.apache.kafka.clients.admin.Admin.close(Admin.java:96)
> Jun 03 23:51:30   at 
> org.apache.kafka.clients.admin.Admin.close(Admin.java:79)
> Jun 03 23:51:30   at 
> org.apache.flink.tests.util.kafka.KafkaContainerClient.createTopic(KafkaContainerClient.java:71)
> Jun 03 23:51:30   at 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testReading(SQLClientSchemaRegistryITCase.java:102)
> Jun 03 23:51:30   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jun 03 23:51:30   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jun 03 23:51:30   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jun 03 23:51:30   at java.lang.reflect.Method.invoke(Method.java:498)
> Jun 03 23:51:30   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Jun 03 23:51:30   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jun 03 23:51:30   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Jun 03 23:51:30   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jun 03 23:51:30   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Jun 03 23:51:30   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> Jun 03 23:51:30   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Jun 03 23:51:30   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22874) flink table partition trigger doesn't effect as expectation when sink into hive table

2021-06-06 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358290#comment-17358290
 ] 

luoyuxia commented on FLINK-22874:
--

[~SpongebobZ] Hi, have you enabled checkpointing? Flink will only close 
in-progress files during a checkpoint and commit them after the checkpoint 
completes.
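As a rough illustration of the point above (a simplified sketch, not Flink's actual implementation), the 'partition-time' commit trigger only considers a partition ready once the watermark has passed the partition's end time plus the configured commit delay, and the watermark-driven commit can only happen after checkpointing has finalized the in-progress files:

```java
// Simplified sketch (NOT Flink's internal code) of the 'partition-time'
// commit-trigger condition: a partition may be committed only once the
// current watermark has passed the partition's end time plus the
// configured commit delay. Without checkpointing, files stay in-progress
// and this point is never reached.
public class PartitionCommitSketch {

    /** Returns true when a partition is eligible for commit. */
    public static boolean readyToCommit(long watermarkMillis,
                                        long partitionEndMillis,
                                        long commitDelayMillis) {
        return watermarkMillis >= partitionEndMillis + commitDelayMillis;
    }

    public static void main(String[] args) {
        // Hypothetical partition covering up to the 1-hour mark, no delay.
        long partitionEnd = 3_600_000L;
        System.out.println(readyToCommit(1_800_000L, partitionEnd, 0L)); // false
        System.out.println(readyToCommit(3_600_000L, partitionEnd, 0L)); // true
    }
}
```

The method name and millisecond values here are illustrative only; the real trigger lives in Flink's filesystem/Hive sink and is configured via `sink.partition-commit.trigger` and `sink.partition-commit.delay`.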

> flink table partition trigger doesn't effect as expectation when sink into 
> hive table
> -
>
> Key: FLINK-22874
> URL: https://issues.apache.org/jira/browse/FLINK-22874
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.1
>Reporter: Spongebob
>Priority: Major
>
> I am trying to sink into a Hive partitioned table whose partition commit 
> trigger is declared as "partition-time", and I have assigned a watermark on 
> the dataStream. When I input some data into the dataStream, it does not 
> commit the Hive partition on time. Here's my code:
> {code:java}
> //ddl of hive table 
> create table test_table(username string)
> partitioned by (ts bigint)
> stored as orc
> TBLPROPERTIES (
>   'sink.partition-commit.trigger'='partition-time',
>   'sink.partition-commit.policy.kind'='metastore,success-file'
> );{code}
> {code:java}
> // flink application code
> val streamEnv = ...
> val dataStream:DataStream[(String, Long)] = ...
> // assign watermark and output watermark info in processFunction
> class MyProcessFunction extends ProcessFunction[(String, Long), (String, 
> Long, Long)] {
>   override def processElement(value: (String, Long), ctx: 
> ProcessFunction[(String, Long), (String, Long, Long)]#Context, out: 
> Collector[(String, Long, Long)]): Unit = {
> out.collect((value._1, value._2, ctx.timerService().currentWatermark()))
>   }
> }
> val resultStream = dataStream
> .assignTimestampsAndWatermarks(WatermarkStrategy.forBoundedOutOfOrderness(Duration.ZERO)
>   .withTimestampAssigner(new SerializableTimestampAssigner[(String, Long)] {
> override def extractTimestamp(element: (String, Long), recordTimestamp: 
> Long): Long = {
>   element._2 * 1000
> }
>   }))
> .process(new MyProcessFunction)
> //
> val streamTableEnv = buildStreamTableEnv(streamEnv, 
> EnvironmentSettings.newInstance().inStreamingMode().useBlinkPlanner().build())
> // convert dataStream into hive catalog table and sink into hive
> streamTableEnv.createTemporaryView("test_catalog_t", resultStream)
> val catalog = ...
> streamTableEnv.registerCatalog("hive", catalog)
> streamTableEnv.useCatalog("hive")
> streamTableEnv.executeSql("insert into test_table select _1,_2 from 
> default_catalog.default_database.test_catalog_t").print()
> // flink use the default parallelism 4
> // input data
> (a, 1)
> (b, 2)
> (c, 3)
> (d, 4)
> (a, 5)
>  ...
> // result
> there are many partition directories on HDFS, but they are all in-progress 
> files and are never committed to the Hive metastore.{code}
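One thing worth checking, given the default parallelism of 4 mentioned above: Flink combines watermarks across parallel inputs by taking the minimum, so a subtask that never receives data holds the operator watermark at its initial value, and the partition-time trigger never fires. A simplified sketch of that combination rule (not Flink's internal code):

```java
// Simplified sketch (NOT Flink's internal code): the downstream
// watermark is the minimum of the per-subtask watermarks, so one idle
// subtask (still at Long.MIN_VALUE) holds the whole pipeline back --
// a common reason partitions are never committed at parallelism > 1.
public class WatermarkCombineSketch {

    /** Combined watermark = minimum over all subtask watermarks. */
    public static long combined(long[] subtaskWatermarks) {
        long min = Long.MAX_VALUE;
        for (long w : subtaskWatermarks) {
            min = Math.min(min, w);
        }
        return min;
    }

    public static void main(String[] args) {
        long[] allActive = {5_000L, 4_000L, 6_000L, 4_500L};
        long[] oneIdle = {5_000L, 4_000L, Long.MIN_VALUE, 6_000L};
        System.out.println(combined(allActive)); // 4000
        System.out.println(combined(oneIdle));   // Long.MIN_VALUE
    }
}
```

The array values are made up for illustration; with few keys and parallelism 4, some subtasks of the job above may simply see no records.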



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] sunxiaoguang edited a comment on pull request #16057: [FLINK-22851][security] Add basic authentication support to job dashboard and REST

2021-06-06 Thread GitBox


sunxiaoguang edited a comment on pull request #16057:
URL: https://github.com/apache/flink/pull/16057#issuecomment-854321050


   > cc @mbalassi @gyfora
   > 
   > @sunxiaoguang we've seen you're struggling w/ tests for quite some time 
and we would like to help you out by contributing our implementation which 
works already in production. Hope this helps.
   > As I've seen you would like to add more weak authentications. My view on 
that is that it's not giving any business/return-on-investment advantage having 
more than one.
   > 
   > @tillrohrmann this is the basic (weak) authentication part and kerberos 
SPNEGO (strong) authentication has similar complexity but it's depending on 
this change. When these 2 are added, no further implementation is planned.
   > Just to give some reasoning why I've made the split: having horribly huge 
PRs would make the review extremely hard, which just wastes everybody's time.
   > 
   > Comments/suggestions are welcome as always.
   
   Well, I'm OK as long as our concern about security is covered. The reason I 
didn't add tests is that I wanted to hear some feedback about the PR from the 
community first, in case the implementation changed and the tests had to be 
changed as well. So there is no struggling, but I'd be glad to see this fixed 
somehow in the community, as long as the community reaches consensus; it 
definitely doesn't have to be my effort.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol merged pull request #16085: [hotfix][docs] Corrected artifactId element in example

2021-06-06 Thread GitBox


zentol merged pull request #16085:
URL: https://github.com/apache/flink/pull/16085


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #16085: [hotfix][docs] Corrected artifactId element in example

2021-06-06 Thread GitBox


flinkbot commented on pull request #16085:
URL: https://github.com/apache/flink/pull/16085#issuecomment-855478563


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b99b5690d2d8bc38efd5c0cc7005e797c83768f0 (Sun Jun 06 
23:13:15 UTC 2021)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] mans2singh opened a new pull request #16085: [hotfix][docs] Corrected artifactId element in example

2021-06-06 Thread GitBox


mans2singh opened a new pull request #16085:
URL: https://github.com/apache/flink/pull/16085


   
   ## What is the purpose of the change
   
   * Corrected artifactId element in pom 
   
   ## Brief change log
   
   * Corrected artifactId element in pom 
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-22826) flink sql1.13.1 causes data loss based on change log stream data join

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22826:
---

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as a 
Blocker, but it is unassigned and neither it nor its Sub-Tasks have been 
updated for 1 day. I have gone ahead and marked it "stale-blocker". If this 
ticket is a Blocker, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be 
deprioritized.


> flink sql1.13.1 causes data loss based on change log stream data join
> -
>
> Key: FLINK-22826
> URL: https://issues.apache.org/jira/browse/FLINK-22826
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.0, 1.13.1
>Reporter: 徐州州
>Priority: Blocker
>  Labels: stale-blocker
>
> {code:java}
> insert into dwd_order_detail
> select
>ord.Id,
>ord.Code,
>Status,
>  concat(cast(ord.Id as String),if(oed.Id is null,'oed_null',cast(oed.Id  
> as STRING)),DATE_FORMAT(LOCALTIMESTAMP,'yyyy-MM-dd'))  as uuids,
>  TO_DATE(DATE_FORMAT(LOCALTIMESTAMP,'yyyy-MM-dd')) as As_Of_Date
> from
> orders ord
> left join order_extend oed on  ord.Id=oed.OrderId and oed.IsDeleted=0 and 
> oed.CreateTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'yyyy-MM-dd') AS TIMESTAMP)
> where ( ord.OrderTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'yyyy-MM-dd') AS 
> TIMESTAMP)
> or ord.ReviewTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'yyyy-MM-dd') AS TIMESTAMP)
> or ord.RejectTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'yyyy-MM-dd') AS TIMESTAMP)
> ) and ord.IsDeleted=0;
> {code}
> My upsert-kafka table uses uuids as its PRIMARY KEY.
> This is the logic of my Kafka-based canal-json stream data join, written to 
> upsert-kafka tables. I confirm that version 1.12 also has this problem; I 
> just upgraded from 1.12 to 1.13.
> I looked up a user's order data; order number XJ0120210531004794 appears in 
> the canal-json original table as +U, which is normal.
> {code:java}
> | +U | XJ0120210531004794 |  50 |
> | +U | XJ0120210531004672 |  50 |
> {code}
> But when written to upsert-kafka via the join, the data consumed from 
> upsert-kafka is:
> {code:java}
> | +I | XJ0120210531004794 |  50 |
> | -U | XJ0120210531004794 |  50 |
> {code}
> The order has two records, and this row in the orders and order_extend 
> tables has not changed since it was created. The -U status caused my data 
> loss: the record was not computed and the final result was wrong.
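To make the reported symptom concrete, here is a hypothetical sketch (not Flink's or the connector's actual code) of how a downstream consumer materializes an upsert changelog: +I/+U write the row under its primary key, while -U/-D retract it. A -U that is never followed by a matching +U therefore leaves the key deleted, which matches the data loss described above:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of changelog materialization by an upsert
// consumer: +I and +U upsert the row under its key, -U and -D retract
// it. A trailing -U with no subsequent +U leaves the key absent.
public class ChangelogSketch {

    /** Applies {kind, key, value} changes in order and returns the final table. */
    public static Map<String, Integer> apply(String[][] changelog) {
        Map<String, Integer> table = new HashMap<>();
        for (String[] change : changelog) {
            String kind = change[0];
            String key = change[1];
            int value = Integer.parseInt(change[2]);
            if (kind.equals("+I") || kind.equals("+U")) {
                table.put(key, value);   // insert or update-after image
            } else {
                table.remove(key);       // -U / -D retract the row
            }
        }
        return table;
    }

    public static void main(String[] args) {
        // The sequence observed from upsert-kafka above: an insert
        // followed by a retraction with no new image -> the order vanishes.
        Map<String, Integer> result = apply(new String[][] {
                {"+I", "XJ0120210531004794", "50"},
                {"-U", "XJ0120210531004794", "50"},
        });
        System.out.println(result.containsKey("XJ0120210531004794")); // false
    }
}
```

The helper names here are made up for illustration; a correct join should emit +U (or +I) carrying the new row image after every -U for a surviving key.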



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22334) Fail to translate the hive-sql in STREAMING mode

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22334:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Fail to translate the hive-sql in STREAMING mode
> 
>
> Key: FLINK-22334
> URL: https://issues.apache.org/jira/browse/FLINK-22334
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.0
>Reporter: Shengkai Fang
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.14.0
>
>
> The failed statement 
> {code:java}
> // Some comments here
> insert into dest(y,x) select x,y from foo cluster by x
> {code}
> Exception stack:
> {code:java}
> org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are not 
> enough rules to produce a node with desired properties: convention=LOGICAL, 
> FlinkRelDistributionTraitDef=any, MiniBatchIntervalTraitDef=None: 0, 
> ModifyKindSetTraitDef=[NONE], UpdateKindTraitDef=[NONE].
> Missing conversion is LogicalDistribution[convention: NONE -> LOGICAL]
> There is 1 empty subset: rel#5176:RelSubset#43.LOGICAL.any.None: 
> 0.[NONE].[NONE], the relevant part of the original plan is as follows
> 5174:LogicalDistribution(collation=[[0 ASC-nulls-first]], dist=[[]])
>   5172:LogicalProject(subset=[rel#5173:RelSubset#42.NONE.any.None: 
> 0.[NONE].[NONE]], x=[$0])
> 5106:LogicalTableScan(subset=[rel#5171:RelSubset#41.NONE.any.None: 
> 0.[NONE].[NONE]], table=[[myhive, default, foo]])
> Root: rel#5176:RelSubset#43.LOGICAL.any.None: 0.[NONE].[NONE]
> Original rel:
> FlinkLogicalLegacySink(subset=[rel#4254:RelSubset#8.LOGICAL.any.None: 
> 0.[NONE].[NONE]], name=[collect], fields=[_o__c0]): rowcount = 1.0E8, 
> cumulative cost = {1.0E8 rows, 1.0E8 cpu, 0.0 io, 0.0 network, 0.0 memory}, 
> id = 4276
>   FlinkLogicalCalc(subset=[rel#4275:RelSubset#7.LOGICAL.any.None: 
> 0.[NONE].[NONE]], select=[CASE(IS NULL($f1), 0:BIGINT, $f1) AS _o__c0]): 
> rowcount = 1.0E8, cumulative cost = {1.0E8 rows, 1.0E8 cpu, 0.0 io, 0.0 
> network, 0.0 memory}, id = 4288
> FlinkLogicalJoin(subset=[rel#4272:RelSubset#6.LOGICAL.any.None: 
> 0.[NONE].[NONE]], condition=[=($0, $1)], joinType=[left]): rowcount = 1.0E8, 
> cumulative cost = {1.0E8 rows, 1.0856463237676364E8 cpu, 4.0856463237676364E8 
> io, 0.0 network, 0.0 memory}, id = 4271
>   
> FlinkLogicalTableSourceScan(subset=[rel#4270:RelSubset#1.LOGICAL.any.None: 
> 0.[NONE].[NONE]], table=[[myhive, default, bar, project=[i]]], fields=[i]): 
> rowcount = 1.0E8, cumulative cost = {1.0E8 rows, 1.0E8 cpu, 4.0E8 io, 0.0 
> network, 0.0 memory}, id = 4279
>   FlinkLogicalAggregate(subset=[rel#4268:RelSubset#5.LOGICAL.any.None: 
> 0.[NONE].[NONE]], group=[{1}], agg#0=[COUNT($0)]): rowcount = 
> 8564632.376763644, cumulative cost = {9.0E7 rows, 1.89E8 cpu, 7.2E8 io, 0.0 
> network, 0.0 memory}, id = 4286
> FlinkLogicalCalc(subset=[rel#4283:RelSubset#3.LOGICAL.any.None: 
> 0.[NONE].[NONE]], select=[x, y], where=[IS NOT NULL(y)]): rowcount = 9.0E7, 
> cumulative cost = {9.0E7 rows, 0.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id 
> = 4282
>   
> FlinkLogicalTableSourceScan(subset=[rel#4262:RelSubset#2.LOGICAL.any.None: 
> 0.[NONE].[NONE]], table=[[myhive, default, foo]], fields=[x, y]): rowcount = 
> 1.0E8, cumulative cost = {1.0E8 rows, 1.0E8 cpu, 8.0E8 io, 0.0 network, 0.0 
> memory}, id = 4261
> Sets:
> Set#41, type: RecordType(INTEGER x, INTEGER y)
>   rel#5171:RelSubset#41.NONE.any.None: 0.[NONE].[NONE], best=null
>   rel#5106:LogicalTableScan.NONE.any.None: 
> 0.[NONE].[NONE](table=[myhive, default, foo]), rowcount=1.0E8, cumulative 
> cost={inf}
>   rel#5179:RelSubset#41.LOGICAL.any.None: 0.[NONE].[NONE], best=rel#5178
>   rel#5178:FlinkLogicalTableSourceScan.LOGICAL.any.None: 
> 0.[NONE].[NONE](table=[myhive, default, foo],fields=x, y), rowcount=1.0E8, 
> cumulative cost={1.0E8 rows, 1.0E8 cpu, 8.0E8 io, 0.0 network, 0.0 memory}
> Set#42, type: RecordType(INTEGER x)
>   rel#5173:RelSubset#42.NONE.any.None: 0.[NONE].[NONE], best=null
>   rel#5172:LogicalProject.NONE.any.None: 
> 0.[NONE].[NONE](input=RelSubset#5171,inputs=0), rowcount=1.0E8, cumulative 
> cost={inf}
>   rel#5180:LogicalTableScan.NONE.any.None: 
> 0.[NONE].[NONE](table=[myhive, default, foo, project=[x]]), rowcount=1.0E8, 
> cumulative cost={inf}
>   rel#5182:LogicalCalc.NONE.any.None: 
> 0.[NONE].[NONE](input=RelSubset#5171,expr#0..1={inputs},0=$t0), 

[jira] [Updated] (FLINK-22285) YARNHighAvailabilityITCase hangs

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22285:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> YARNHighAvailabilityITCase hangs
> 
>
> Key: FLINK-22285
> URL: https://issues.apache.org/jira/browse/FLINK-22285
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Coordination
>Affects Versions: 1.11.3, 1.12.2
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16568=logs=8fd975ef-f478-511d-4997-6f15fe8a1fd3=ac0fa443-5d45-5a6b-3597-0310ecc1d2ab=31437



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13032) Allow Access to Per-Window State in mergeable window ProcessWindowFunction

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13032:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major, but it is unassigned and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added the "stale-major" label to the issue. 
If this ticket is a Major, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be 
deprioritized.


> Allow Access to Per-Window State in mergeable window  ProcessWindowFunction
> ---
>
> Key: FLINK-13032
> URL: https://issues.apache.org/jira/browse/FLINK-13032
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.8.0
>Reporter: jasine chen
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Access to per-window state is allowed in non-merging windows, but it is also 
> necessary to allow access to per-window state in mergeable windows.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22443) can not be execute an extreme long sql under batch mode

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22443:
---

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as a 
Blocker, but it is unassigned and neither it nor its Sub-Tasks have been 
updated for 1 day. I have gone ahead and marked it "stale-blocker". If this 
ticket is a Blocker, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be 
deprioritized.


> can not be execute an extreme long sql under batch mode
> ---
>
> Key: FLINK-22443
> URL: https://issues.apache.org/jira/browse/FLINK-22443
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.2
> Environment: execute command
>  
> {code:java}
> bin/sql-client.sh embedded -d conf/sql-client-batch.yaml 
> {code}
> content of conf/sql-client-batch.yaml
>  
> {code:java}
> catalogs:
> - name: bnpmphive
>   type: hive
>   hive-conf-dir: /home/gum/hive/conf
>   hive-version: 3.1.2
> execution:
>   planner: blink
>   type: batch
>   #type: streaming
>   result-mode: table
>   parallelism: 4
>   max-parallelism: 2000
>   current-catalog: bnpmphive
>   #current-database: snmpprobe
> #configuration:
> #  table.sql-dialect: hive
> modules:
>    - name: core
>      type: core
>    - name: myhive
>      type: hive
> deployment:
>   # general cluster communication timeout in ms
>   response-timeout: 5000
>   # (optional) address from cluster to gateway
>   gateway-address: ""
>   # (optional) port from cluster to gateway
>   gateway-port: 0
> {code}
>  
>Reporter: macdoor615
>Priority: Blocker
>  Labels: stale-blocker, stale-critical
> Attachments: flink-gum-taskexecutor-8-hb3-prod-hadoop-002.log.4.zip, 
> raw_p_restapi_hcd.csv.zip
>
>
> 1. Execute an extremely long SQL statement under batch mode
>  
> {code:java}
> select
> 'CD' product_name,
> r.code business_platform,
> 5 statisticperiod,
> cast('2021-03-24 00:00:00' as timestamp) coltime,
> cast(r1.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_2,
> cast(r2.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_7,
> cast(r3.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_5,
> cast(r4.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_6,
> cast(r5.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00029,
> cast(r6.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00028,
> cast(r7.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00015,
> cast(r8.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00014,
> cast(r9.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00011,
> cast(r10.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00010,
> cast(r11.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00013,
> cast(r12.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00012,
> cast(r13.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00027,
> cast(r14.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00026,
> cast(r15.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00046,
> cast(r16.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00047,
> cast(r17.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00049,
> cast(r18.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00048,
> cast(r19.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00024,
> cast(r20.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00025,
> cast(r21.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00022,
> cast(r22.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00023,
> cast(r23.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00054,
> cast(r24.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00055,
> cast(r25.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00033,
> cast(r26.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00032,
> cast(r27.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00053,
> cast(r28.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00052,
> cast(r29.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00051,
> cast(r30.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00050,
> cast(r31.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00043,
> cast(r32.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00042,
> cast(r33.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00017,
> cast(r34.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00016,
> cast(r35.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_3,
> cast(r36.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00045,
> cast(r37.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00044,
> cast(r38.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00038,
> cast(r39.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00039,
> cast(r40.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00037,
> 

[jira] [Updated] (FLINK-22416) UpsertKafkaTableITCase hangs when collecting results

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22416:
---

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as a 
Blocker, but it is unassigned and neither it nor its Sub-Tasks have been 
updated for 1 day. I have gone ahead and marked it "stale-blocker". If this 
ticket is a Blocker, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be 
deprioritized.


> UpsertKafkaTableITCase hangs when collecting results
> 
>
> Key: FLINK-22416
> URL: https://issues.apache.org/jira/browse/FLINK-22416
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Blocker
>  Labels: auto-deprioritized-critical, stale-blocker, 
> test-stability
> Fix For: 1.14.0
>
> Attachments: idea-test.png, threads_report.txt
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17037=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=7002
> {code}
> 2021-04-22T11:16:35.6812919Z Apr 22 11:16:35 [ERROR] 
> testSourceSinkWithKeyAndPartialValue[format = 
> csv](org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaTableITCase)
>   Time elapsed: 30.01 s  <<< ERROR!
> 2021-04-22T11:16:35.6814151Z Apr 22 11:16:35 
> org.junit.runners.model.TestTimedOutException: test timed out after 30 seconds
> 2021-04-22T11:16:35.6814781Z Apr 22 11:16:35  at 
> java.lang.Thread.sleep(Native Method)
> 2021-04-22T11:16:35.6815444Z Apr 22 11:16:35  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> 2021-04-22T11:16:35.6816250Z Apr 22 11:16:35  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> 2021-04-22T11:16:35.6817033Z Apr 22 11:16:35  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> 2021-04-22T11:16:35.6817719Z Apr 22 11:16:35  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> 2021-04-22T11:16:35.6818351Z Apr 22 11:16:35  at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> 2021-04-22T11:16:35.6818980Z Apr 22 11:16:35  at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestUtils.collectRows(KafkaTableTestUtils.java:52)
> 2021-04-22T11:16:35.6819978Z Apr 22 11:16:35  at 
> org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaTableITCase.testSourceSinkWithKeyAndPartialValue(UpsertKafkaTableITCase.java:147)
> 2021-04-22T11:16:35.6820803Z Apr 22 11:16:35  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-04-22T11:16:35.6821365Z Apr 22 11:16:35  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-04-22T11:16:35.6822072Z Apr 22 11:16:35  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-04-22T11:16:35.6822656Z Apr 22 11:16:35  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-04-22T11:16:35.6823124Z Apr 22 11:16:35  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-04-22T11:16:35.6823672Z Apr 22 11:16:35  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-04-22T11:16:35.6824202Z Apr 22 11:16:35  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-04-22T11:16:35.6824709Z Apr 22 11:16:35  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-04-22T11:16:35.6825230Z Apr 22 11:16:35  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-04-22T11:16:35.6825716Z Apr 22 11:16:35  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2021-04-22T11:16:35.6826204Z Apr 22 11:16:35  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-04-22T11:16:35.6826807Z Apr 22 11:16:35  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2021-04-22T11:16:35.6827378Z Apr 22 11:16:35  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2021-04-22T11:16:35.6827926Z Apr 22 11:16:35  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2021-04-22T11:16:35.6828331Z Apr 22 

[jira] [Updated] (FLINK-12992) All host(s) tried for query failed (tried com.datastax.driver.core.exceptions.TransportException: Error writing...)

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-12992:
---
Labels: auto-deprioritized-critical stale-major  (was: 
auto-deprioritized-critical)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major, but it is unassigned and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added the "stale-major" label to the issue. 
If this ticket is a Major, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be 
deprioritized.


> All host(s) tried for query failed (tried 
> com.datastax.driver.core.exceptions.TransportException: Error writing...)
> ---
>
> Key: FLINK-12992
> URL: https://issues.apache.org/jira/browse/FLINK-12992
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.7.2, 1.8.0
> Environment: 
>  org.apache.flink
>  flink-connector-cassandra_2.11
>  1.8.0
> 
>Reporter: yanxiaobin
>Priority: Major
>  Labels: auto-deprioritized-critical, stale-major
>
> We are using a Flink streaming application with the Cassandra connector, 
> which provides sinks that write data into an [Apache 
> Cassandra|https://cassandra.apache.org/] database. 
> At first we found the following exception: All host(s) tried for query failed 
> (tried com.datastax.driver.core.exceptions.TransportException: Error 
> writing...). This exception causes the streaming job to fail.
>  
> We have carefully checked that the Cassandra service and the network are both 
> normal. Finally, we referred to the source code of the DataStax Java Driver 
> that the connector depends on. We found that the real exception that caused 
> the problem is as follows:
> com.datastax.shaded.netty.handler.codec.EncoderException: 
> java.lang.IllegalAccessError: com/datastax/driver/core/Frame at 
> com.datastax.shaded.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:107)
>  at 
> com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:643)
>  at 
> com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:700)
>  at 
> com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:636)
>  at 
> com.datastax.shaded.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:112)
>  at 
> com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:643)
>  at 
> com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:700)
>  at 
> com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:636)
>  at 
> com.datastax.shaded.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:284)
>  at 
> com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:643)
>  at 
> com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:700)
>  at 
> com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:636)
>  at 
> com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:622)
>  at 
> com.datastax.shaded.netty.channel.DefaultChannelPipeline.write(DefaultChannelPipeline.java:939)
>  at 
> com.datastax.shaded.netty.channel.AbstractChannel.write(AbstractChannel.java:234)
>  at com.datastax.driver.core.Connection$Flusher.run(Connection.java:872) at 
> com.datastax.shaded.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
>  at 
> com.datastax.shaded.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) 
> at 
> com.datastax.shaded.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
>  at java.lang.Thread.run(Thread.java:748) Caused by: 
> java.lang.IllegalAccessError: com/datastax/driver/core/Frame at 
> com.datastax.shaded.netty.util.internal.__matchers__.com.datastax.driver.core.FrameMatcher.match(NoOpTypeParameterMatcher.java)
>  at 
> com.datastax.shaded.netty.handler.codec.MessageToMessageEncoder.acceptOutboundMessage(MessageToMessageEncoder.java:77)
>  at 
> com.datastax.shaded.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:84)
>  
> Based on this exception, we found relevant information 
> 

[jira] [Updated] (FLINK-12900) Refactor the class hierarchy for BinaryFormat

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-12900:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is still a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> Refactor the class hierarchy for BinaryFormat
> -
>
> Key: FLINK-12900
> URL: https://issues.apache.org/jira/browse/FLINK-12900
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Liya Fan
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are many classes in the class hierarchy of BinaryFormat. They share 
> the same memory format:
> header + nullable bits + fixed-length part + variable-length part
> Many operations can therefore be applied to a number of sub-classes. Currently, many 
> such operations are implemented in each sub-class, even though they provide 
> identical functionality. 
> This makes the code hard to understand and maintain.
> In this proposal, we refactor the class hierarchy and move common operations 
> into the base class, leaving only one implementation for each common 
> operation. 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13372) Timestamp conversion bug in non-blink Table/SQL runtime

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13372:
---
  Labels: auto-deprioritized-critical auto-unassigned  (was: 
auto-unassigned stale-critical)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any updates, 
so it is being deprioritized. If this ticket is actually Critical, please raise 
the priority and ask a committer to assign you the issue or revive the public 
discussion.


> Timestamp conversion bug in non-blink Table/SQL runtime
> ---
>
> Key: FLINK-13372
> URL: https://issues.apache.org/jira/browse/FLINK-13372
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.6.3, 1.6.4, 1.7.2, 1.8.0, 1.8.1, 1.9.0
>Reporter: Shuyi Chen
>Priority: Major
>  Labels: auto-deprioritized-critical, auto-unassigned
>
> Currently, in the non-blink Table/SQL runtime, Flink uses 
> SqlFunctions.internalToTimestamp(long v) from Calcite to convert event time 
> (in long) to java.sql.Timestamp.
> {code:java}
>  public static Timestamp internalToTimestamp(long v) { return new Timestamp(v 
> - (long)LOCAL_TZ.getOffset(v)); } {code}
> However, as discussed in the recent Calcite mailing list, 
> SqlFunctions.internalToTimestamp() assumes the input timestamp value is in 
> the current JVM’s default timezone (which is unusual), NOT milliseconds since 
> epoch. And SqlFunctions.internalToTimestamp() is used to convert timestamp 
> value in the current JVM’s default timezone to milliseconds since epoch, 
> which java.sql.Timestamp constructor takes. Therefore, the results will not 
> only be wrong, but will also change if the job runs on machines in different 
> timezones. (The only exception is if all your production machines use the UTC 
> timezone.)
> Here is an example, if the user input value is 0 (00:00:00 UTC on 1 January 
> 1970), and the table/SQL runtime runs in a machine in PST (UTC-8), the output 
> sql.Timestamp after SqlFunctions.internalToTimestamp() will become 2880 
> millisec since epoch (08:00:00 UTC on 1 January 1970); And with the same 
> input, if the table/SQL runtime runs again in a different machine in EST 
> (UTC-5), the output sql.Timestamp after SqlFunctions.internalToTimestamp() 
> will become 1800 millisec since epoch (05:00:00 UTC on 1 January 1970).
> Currently, there are unit tests for the Table/SQL API event time 
> input/output (e.g., GroupWindowITCase.testEventTimeTumblingWindow() and 
> SqlITCase.testDistinctAggWithMergeOnEventTimeSessionGroupWindow()). They all 
> pass now because we are comparing the string format of the time, which 
> ignores the timezone. If you step into the code, the actual java.sql.Timestamp 
> value is wrong and changes as the tests run in different timezones (e.g., one 
> can use -Duser.timezone=PST to change the current JVM’s default timezone)
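The timezone-dependent shift described above can be reproduced with a small, self-contained sketch. The method below mirrors the Calcite conversion for illustration only; the timezone is passed in explicitly rather than read from the JVM default, which is an assumption made here for testability:

```java
import java.sql.Timestamp;
import java.util.TimeZone;

public class InternalToTimestampDemo {

    // Mirrors Calcite's SqlFunctions.internalToTimestamp: it subtracts the
    // timezone offset from the epoch value, so the same input produces
    // different Timestamps on machines with different default timezones.
    static Timestamp internalToTimestamp(long v, TimeZone tz) {
        return new Timestamp(v - tz.getOffset(v));
    }

    public static void main(String[] args) {
        long input = 0L; // 00:00:00 UTC on 1 January 1970

        // PST (UTC-8): 0 - (-28800000) = 28800000 ms since epoch (08:00:00 UTC)
        Timestamp pst = internalToTimestamp(input, TimeZone.getTimeZone("America/Los_Angeles"));
        System.out.println(pst.getTime());

        // EST (UTC-5): 0 - (-18000000) = 18000000 ms since epoch (05:00:00 UTC)
        Timestamp est = internalToTimestamp(input, TimeZone.getTimeZone("America/New_York"));
        System.out.println(est.getTime());
    }
}
```

Running the same input through the same conversion thus yields two different epoch values depending only on the machine's timezone, which is exactly why string-based test comparisons hide the bug.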



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13079) Make window state queryable

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13079:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is still a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> Make window state queryable
> ---
>
> Key: FLINK-13079
> URL: https://issues.apache.org/jira/browse/FLINK-13079
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: vinoyang
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Since Flink makes keyed state queryable and provides the queryable state 
> client, we can make the window state (keyed state) queryable so that 
> users can get the latest progress of the window.
>  
> We have submitted a proposal about improving the queryable state. [1] If we 
> combine them, it will bring benefits for many use scenarios.
> [1]: 
> [http://mail-archives.apache.org/mod_mbox/flink-dev/201907.mbox/%3ctencent_35a56d6858408be2e2064...@qq.com%3E]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13050) Counting more checkpoint failure reason in CheckpointFailureManager

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13050:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is still a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> Counting more checkpoint failure reason in CheckpointFailureManager
> ---
>
> Key: FLINK-13050
> URL: https://issues.apache.org/jira/browse/FLINK-13050
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Reporter: vinoyang
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Currently, {{CheckpointFailureManager}} only counts a few failure reasons, 
> to stay compatible with {{setFailOnCheckpointingErrors}}. Since 
> {{setFailOnCheckpointingErrors}} has been deprecated in FLINK-11662, IMO we 
> can count more checkpoint failure reasons.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13150) defaultCatalogName and defaultDatabaseName in TableEnvImpl is not updated after they are updated in TableEnvironment

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13150:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is still a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> defaultCatalogName and defaultDatabaseName in TableEnvImpl is not updated 
> after they are updated in TableEnvironment
> 
>
> Key: FLINK-13150
> URL: https://issues.apache.org/jira/browse/FLINK-13150
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Jeff Zhang
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> defaultCatalogName and defaultDatabaseName in TableEnvImpl are initialized 
> when it is created and never changed, even when they are updated in 
> TableEnvironment.
> This will cause issues where we may register a table to the wrong catalog after 
> we change the defaultCatalogName and defaultDatabaseName. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13148) Expose WindowedStream.sideOutputLateData() from CoGroupedStreams

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13148:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is still a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> Expose WindowedStream.sideOutputLateData() from CoGroupedStreams
> 
>
> Key: FLINK-13148
> URL: https://issues.apache.org/jira/browse/FLINK-13148
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.9.0
>Reporter: Congxian Qiu
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> As FLINK-10050 supported {{allowedLateness}}, but we can not get the side 
> output containing the late data, this issue wants to fix that.
> For the implementation, I want to add an input parameter {{OutputTag}} to 
> {{WithWindow}} as follows
> {code:java}
> protected WithWindow(DataStream<T1> input1,
> DataStream<T2> input2,
> KeySelector<T1, KEY> keySelector1,
> KeySelector<T2, KEY> keySelector2,
> TypeInformation<KEY> keyType,
> WindowAssigner<? super TaggedUnion<T1, T2>, W> windowAssigner,
> Trigger<? super TaggedUnion<T1, T2>, ? super W> trigger,
> Evictor<? super TaggedUnion<T1, T2>, ? super W> evictor,
> Time allowedLateness,
> OutputTag<TaggedUnion<T1, T2>> outputTag) {
>   ...
> }
> {code}
>  and add a function sideOutputLateData(OutputTag outputTag) in 
> {{WithWindow}}
> {code:java}
> public WithWindow<T1, T2, KEY, W> 
> sideOutputLateData(OutputTag<TaggedUnion<T1, T2>> outputTag) {
>    ...
> }
> {code}
> {{WithWindow#apply}} will add the outputTag if it is not null
> {code:java}
> public <T> DataStream<T> apply(CoGroupFunction<T1, T2, T> function, 
> TypeInformation<T> resultType) {
> ...
> if (outputTag != null) {
> windowedStream.sideOutputLateData(outputTag);
> }
> ...
> }{code}
> The same will apply to {{JoinedStreams}} 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13172) JVM crash with dynamic netty-tcnative wrapper to openSSL on some OS

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13172:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is still a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> JVM crash with dynamic netty-tcnative wrapper to openSSL on some OS
> ---
>
> Key: FLINK-13172
> URL: https://issues.apache.org/jira/browse/FLINK-13172
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network, Tests
>Affects Versions: 1.9.0
>Reporter: Nico Kruber
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> The dynamically-linked wrapper library in 
> {{flink-shaded-netty-tcnative-dynamic}} may not work on all systems, 
> depending on how the system-provided openSSL library is built.
> As a result, when trying to run Flink with {{security.ssl.provider: OPENSSL}} 
> or just running a test based on {{SSLUtilsTest}} (which checks for openSSL 
> availability which is enough to trigger the error below), the JVM will crash, 
> e.g. with
> - on SUSE-based systems:
> {code}
> /usr/lib64/jvm/java-openjdk/bin/java: relocation error: 
> /tmp/liborg_apache_flink_shaded_netty4_netty_tcnative_linux_x86_644115489043239307863.so:
>  symbol TLSv1_2_server_method version OPENSSL_1.0.1 not defined in file 
> libssl.so.1.0.0 with link time reference
> {code}
> - on Arch Linux:
> {code}
> /usr/lib/jvm/default/bin/java: relocation error: 
> /tmp/liborg_apache_flink_shaded_netty4_netty_tcnative_linux_x86_648476498532937980008.so:
>  symbol SSLv3_method version OPENSSL_1.0.0 not defined in file 
> libssl.so.1.0.0 with link time reference
> {code}
> Possible solutions:
> # build your own OS-dependent dynamically-linked {{netty-tcnative}} library 
> and shade it in your own build of {{flink-shaded-netty-tcnative-dynamic}}, or
> # use {{flink-shaded-netty-tcnative-static}}:
> {code}
> git clone https://github.com/apache/flink-shaded.git
> cd flink-shaded
> mvn clean package -Pinclude-netty-tcnative-static -pl 
> flink-shaded-netty-tcnative-static
> {code}
> # get your OS-dependent build into netty-tcnative as a special branch similar 
> to what they currently do with Fedora-based systems



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13203) [proper fix] Deadlock occurs when requiring exclusive buffer for RemoteInputChannel

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13203:
---
Labels: auto-deprioritized-critical stale-major  (was: 
auto-deprioritized-critical)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is still a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> [proper fix] Deadlock occurs when requiring exclusive buffer for 
> RemoteInputChannel
> ---
>
> Key: FLINK-13203
> URL: https://issues.apache.org/jira/browse/FLINK-13203
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.9.0
>Reporter: Piotr Nowojski
>Priority: Major
>  Labels: auto-deprioritized-critical, stale-major
>
> The issue is during requesting exclusive buffers with a timeout. Since 
> currently the number of maximum buffers and the number of required buffers 
> are not the same for local buffer pools, there may be cases that the local 
> buffer pools of the upstream tasks occupy all the buffers while the 
> downstream tasks fail to acquire exclusive buffers to make progress. As for 
> 1.9 in https://issues.apache.org/jira/browse/FLINK-12852 deadlock was avoided 
> by adding a timeout to try to failover the current execution when the timeout 
> occurs and tips users to increase the number of buffers in the exception 
> message.
> In the discussion under https://issues.apache.org/jira/browse/FLINK-12852 
> numerous proper solutions were discussed, and as of now there is no 
> consensus on how to fix it:
> 1. Only allocate the minimum per producer, which is one buffer per channel. 
> This would be needed to keep the requirement similar to what we have at the 
> moment, but it is much less than we recommend for the credit-based network 
> data exchange (2* channels + floating)
> 2a. Coordinate the deployment sink-to-source such that receivers always have 
> their buffers first. This will be complex to implement and coordinate, and it 
> breaks many assumptions about tasks being independent (coordination-wise) 
> on the TaskManagers. Giving that assumption up will be a pretty big step and 
> cause lots of complexity in the future.
> {quote}
> It will also increase deployment delays. Low deployment delays should be a 
> design goal in my opinion, as it will enable other features more easily, like 
> low-disruption upgrades, etc.
> {quote}
> 2b. Assign extra buffers only once all of the tasks are RUNNING. This is a 
> simplified version of 2a, without tracking the tasks sink-to-source.
> 3. Make buffers always revokable, by spilling.
> This is tricky to implement very efficiently, especially because there is the 
> logic that slices buffers for early sends for the low-latency streaming case. 
> Also, the spilling request will come from an asynchronous call. That will probably 
> stay like that even with the mailbox, because the main thread will be 
> frequently blocked on buffer allocation when this request comes.
> 4. We allocate the recommended number for good throughput (2*numChannels + 
> floating) per consumer and per producer.
> No dynamic rebalancing any more. This would increase the number of required 
> network buffers in certain high-parallelism scenarios quite a bit with the 
> default config. Users can down-configure this by setting the per-channel 
> buffers lower. But it would break user setups and require them to adjust the 
> config when upgrading.
> 5. We make the network resource per slot and ask the scheduler to attach 
> information about how many producers and how many consumers will be in the 
> slot, worst case. We use that to pre-compute how many excess buffers the 
> producers may take.
> This will also break with some assumptions and lead us to the point that we 
> have to pre-compute network buffers in the same way as managed memory. Seeing 
> how much pain it is with the managed memory, this seems not so great.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13164) Add a listState#putAndGet benchmark for StateBenchmark

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13164:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is still a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> Add a listState#putAndGet benchmark for StateBenchmark
> --
>
> Key: FLINK-13164
> URL: https://issues.apache.org/jira/browse/FLINK-13164
> Project: Flink
>  Issue Type: Improvement
>  Components: Benchmarks
>Reporter: Congxian Qiu
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> As discussed in the 
> [thread|https://github.com/dataArtisans/flink-benchmarks/issues/19], we make 
> sure that there is only one SST file in all list state benchmarks, so they do 
> not cover the compaction scenario; this ticket wants to add a benchmark for 
> it.
> Note: the per-iteration result may not be stable for the newly introduced 
> benchmark, but we have to make sure that the final result is stable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13103) Graceful Shutdown Handling by UDFs

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13103:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is still a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> Graceful Shutdown Handling by UDFs
> --
>
> Key: FLINK-13103
> URL: https://issues.apache.org/jira/browse/FLINK-13103
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> This is an umbrella issue for 
> [FLIP-46|https://cwiki.apache.org/confluence/display/FLINK/FLIP-46%3A+Graceful+Shutdown+Handling+by+UDFs]
>  and it will be broken down into more fine-grained JIRAs as the discussion on 
> the FLIP evolves.
>  
> For more details on what the FLIP is about, please refer to the link above.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13479) Cassandra POJO Sink - Prepared Statement query does not have deterministic ordering of columns - causing prepared statement cache overflow

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13479:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is still a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> Cassandra POJO Sink - Prepared Statement query does not have deterministic 
> ordering of columns - causing prepared statement cache overflow
> --
>
> Key: FLINK-13479
> URL: https://issues.apache.org/jira/browse/FLINK-13479
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Cassandra
>Affects Versions: 1.7.2
>Reporter: Ronak Thakrar
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> While using the Cassandra POJO Sink as part of Flink jobs, the prepared statement 
> query string is generated automatically while inserting the data (using the 
> Mapper.saveQuery method). The Cassandra entity does not have deterministic 
> column ordering enforced, so every time a column's position changes, a new prepared 
> statement is generated and used. As a result, the prepared statement 
> query cache overflows, because the columns are in a random order every time the 
> insert statement query string is generated. 
> Following is the detailed explanation of what happens inside the DataStax 
> Java driver ([https://datastax-oss.atlassian.net/browse/JAVA-1587]):
> The current Mapper uses random ordering of columns when it creates prepared 
> queries. This is fine when only 1 java client is accessing a cluster (and 
> assuming the application developer does the correct thing by re-using a 
> Mapper), since each Mapper will reused prepared statement. However when you 
> have many java clients accessing a cluster, they will each create their own 
> permutations of column ordering, and can thrash the prepared statement cache 
> on the cluster.
> I propose that the Mapper uses a TreeMap instead of a HashMap when it builds 
> its set of AliasedMappedProperty - sorted by the column name 
> (col.mappedProperty.getMappedName()). This would create a deterministic 
> ordering of columns, and all java processes accessing the same cluster would 
> end up with the same prepared queries for the same entities.
> This issue is already fixed in the DataStax Java driver in version 3.3.1, 
> which is not used by the Flink Cassandra connector (it uses 3.0.0).
> I upgraded the driver version to 3.3.1 locally in the Flink Cassandra connector 
> and tested it; it stopped creating new prepared statements with different 
> column orderings for the same entity. I have the fix for this issue, 
> would like to contribute the change, and will raise a PR for it. 
> Flink Cassandra Connector Version: flink-connector-cassandra_2.11
> Flink Version: 1.7.1
> I am creating a PR for this, which can be merged accordingly and 
> re-released in a new minor or patch release as required.
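The proposed fix can be sketched independently of the driver (the class and method names below are hypothetical, not the actual DataStax mapper internals): sorting the entity's columns by name before building the INSERT makes the generated query string identical across JVMs, so every client reuses the same server-side prepared statement instead of thrashing the cache:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

public class DeterministicColumnOrder {

    // Builds an INSERT statement with the columns in sorted (deterministic)
    // order. A HashMap-backed iteration order would vary between JVM
    // instances; a TreeSet/TreeMap guarantees the same permutation everywhere.
    static String buildInsert(String table, Collection<String> columns) {
        SortedSet<String> sorted = new TreeSet<>(columns);
        return "INSERT INTO " + table + " (" + String.join(", ", sorted)
                + ") VALUES (" + String.join(", ", Collections.nCopies(sorted.size(), "?")) + ")";
    }

    public static void main(String[] args) {
        // Two permutations of the same entity's columns...
        List<String> a = Arrays.asList("id", "name", "ts");
        List<String> b = Arrays.asList("ts", "id", "name");
        // ...produce the same prepared-statement query string.
        System.out.println(buildInsert("events", a)); // INSERT INTO events (id, name, ts) VALUES (?, ?, ?)
        System.out.println(buildInsert("events", a).equals(buildInsert("events", b))); // true
    }
}
```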



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22411) Checkpoint failed caused by Mkdirs failed to create file, the path for Flink state.checkpoints.dir in docker-compose can not work from Flink Operations Playground

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22411:
---
Labels: pull-request-available stale-major  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is still a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> Checkpoint failed caused by Mkdirs failed to create file, the path for Flink 
> state.checkpoints.dir in docker-compose can not work from Flink Operations 
> Playground
> --
>
> Key: FLINK-22411
> URL: https://issues.apache.org/jira/browse/FLINK-22411
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.12.2
>Reporter: Serge
>Priority: Major
>  Labels: pull-request-available, stale-major
>
> docker-compose starts correctly, but after several minutes of work, when 
> Apache Flink has to create checkpoints, there is a problem with access to 
> the file system, and the next step in [Observing Failure & 
> Recovery|https://ci.apache.org/projects/flink/flink-docs-release-1.12/try-flink/flink-operations-playground.html#observing-failure–recovery]
>  can not be performed.
> Exception:
> {code:java}
> org.apache.flink.runtime.checkpoint.CheckpointException: Could not finalize 
> the pending checkpoint 104. Failure reason: Failure to finalize checkpoint.
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1216)
>  ~[flink-dist_2.11-1.12.1.jar:1.12.1]
> …..
> Caused by: org.apache.flink.util.SerializedThrowable: Mkdirs failed to create 
> file:/tmp/flink-checkpoints-directory/d73c2f87b0d7ea6748a1913ee4b50afe/chk-104
> at 
> org.apache.flink.core.fs.local.LocalFileSystem.create(LocalFileSystem.java:262)
>  ~[flink-dist_2.11-1.12.1.jar:1.12.1]
> {code}
> To make it work, add a step:
> Create the checkpoint and savepoint directories on the Docker host machine 
> (these volumes are mounted by the jobmanager and taskmanager, as specified in 
> docker-compose.yaml):
> {code:bash}
> mkdir -p /tmp/flink-checkpoints-directory
> mkdir -p /tmp/flink-savepoints-directory
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13470) Enhancements to Flink Table API for blink planner

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13470:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is indeed Major, please either assign yourself or give an 
update. Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


>  Enhancements to Flink Table API for blink planner
> --
>
> Key: FLINK-13470
> URL: https://issues.apache.org/jira/browse/FLINK-13470
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: sunjincheng
>Priority: Major
>  Labels: stale-major
>
> For 1.9 we have already finished the FLIP-29 planned function development 
> for the flink planner, but there are also some developments left for the 
> blink planner. [~lzljs3620320] and I discussed that it is better to create a 
> new umbrella JIRA for the 1.10 JIRAs list.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22450) SQL parser fails to parse subquery containing INTERSECT in a view

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22450:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is indeed Major, please either assign yourself or give an 
update. Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> SQL parser fails to parse subquery containing INTERSECT in a view
> -
>
> Key: FLINK-22450
> URL: https://issues.apache.org/jira/browse/FLINK-22450
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.13.0
>Reporter: Caizhi Weng
>Priority: Major
>  Labels: stale-major
>
> Add the following test case to {{TableEnvironmentITCase}} to reproduce this 
> bug.
> {code:scala}
> @Test
> def myTest(): Unit = {
>   tEnv.executeSql(
> """
>   |CREATE TABLE T1 (
>   |  a INT,
>   |  b BIGINT
>   |) WITH (
>   |  'connector'='values'
>   |)
>   |""".stripMargin)
>   tEnv.executeSql(
> """
>   |CREATE TABLE T2 (
>   |  c INT,
>   |  d BIGINT
>   |) WITH (
>   |  'connector'='values'
>   |)
>   |""".stripMargin)
>   tEnv.executeSql(
> """
>   |CREATE TABLE T3 (
>   |  c INT,
>   |  d BIGINT
>   |) WITH (
>   |  'connector'='values'
>   |)
>   |""".stripMargin)
>   tEnv.executeSql("CREATE VIEW myView AS SELECT * FROM T1, (SELECT * FROM T2 
> WHERE c > 0 INTERSECT SELECT * FROM T3 WHERE c > 0) WHERE a = c")
>   System.out.println(tEnv.explainSql("SELECT * FROM myView"))
> }
> {code}
> The exception stack is
> {code}
> org.apache.flink.table.api.SqlParserException: SQL parse failed. Encountered 
> ", SELECT" at line 2, column 10.
> Was expecting one of:
>  
> "AS" ...
> "EXCEPT" ...
> "EXTEND" ...
> "FETCH" ...
> "FOR" ...
> "GROUP" ...
> "HAVING" ...
> "INTERSECT" ...
> "LIMIT" ...
> "MATCH_RECOGNIZE" ...
> "OFFSET" ...
> "ORDER" ...
> "PIVOT" ...
> "MINUS" ...
> "TABLESAMPLE" ...
> "UNION" ...
> "WHERE" ...
> "WINDOW" ...
> "(" ...
>  ...
>  ...
>  ...
>  ...
>  ...
>  ...
> "/*+" ...
> "NATURAL" ...
> "JOIN" ...
> "INNER" ...
> "LEFT" ...
> "RIGHT" ...
> "FULL" ...
> "CROSS" ...
> ","  ...
> ","  ...
> ","  ...
> ","  ...
> ","  ...
> ","  ...
> "," "LATERAL" ...
> "," "(" ...
> "," "UNNEST" ...
> "," "TABLE" ...
> "OUTER" ...
> "." ...
> 
>   at 
> org.apache.flink.table.planner.parse.CalciteParser.parse(CalciteParser.java:56)
>   at 
> org.apache.flink.table.planner.utils.Expander.expanded(Expander.java:83)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertViewQuery(SqlToOperationConverter.java:849)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertCreateView(SqlToOperationConverter.java:819)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:248)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:99)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:723)
>   at 
> org.apache.flink.table.api.TableEnvironmentITCase.myTest(TableEnvironmentITCase.scala:116)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at 
> 
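> A possible workaround (an untested sketch, not part of the original report) 
> is to factor the INTERSECT subquery into a view of its own, so that view 
> expansion does not have to re-parse it inline:
> {code:sql}
> -- Hypothetical workaround: name the INTERSECT subquery as a separate view
> CREATE VIEW intersectView AS
>   SELECT * FROM T2 WHERE c > 0
>   INTERSECT
>   SELECT * FROM T3 WHERE c > 0;
> 
> CREATE VIEW myView AS
>   SELECT * FROM T1, intersectView WHERE a = c;
> {code}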

[jira] [Updated] (FLINK-22418) The currently activated tab in the same page isn't consistent

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22418:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is indeed Major, please either assign yourself or give an 
update. Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> The currently activated tab in the same page isn't consistent
> -
>
> Key: FLINK-22418
> URL: https://issues.apache.org/jira/browse/FLINK-22418
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.13.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: stale-major
> Fix For: 1.14.0
>
> Attachments: image-2021-04-23-10-41-06-577.png
>
>
> Currently, the activated tab isn't always consistent across the tab groups 
> on the same page after clicking "Java/Scala/Python" a couple of times:
>  !image-2021-04-23-10-41-06-577.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22175) some mistakes in common.md

2021-06-06 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22175:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, 
please raise the priority and ask a committer to assign you the issue or 
revive the public discussion.


> some mistakes in common.md 
> ---
>
> Key: FLINK-22175
> URL: https://issues.apache.org/jira/browse/FLINK-22175
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: wym_maozi
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
> Attachments: 2.png
>
>
> Some of the sample code brackets are incomplete.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

