[GitHub] [flink] viirya commented on pull request #17582: [FLINK-24674][kubernetes] Create corresponding resources for task manager Pods

2021-11-10 Thread GitBox


viirya commented on pull request #17582:
URL: https://github.com/apache/flink/pull/17582#issuecomment-966070024


   Thanks for reviewing, @gyfora.
   
   This issue happens when we put the Hadoop configuration into the docker image. 
In that case, the Flink job manager won't create a ConfigMap for the Hadoop 
configuration. But when the task manager Pods are about to be created, they 
expect a ConfigMap with the Hadoop configuration that doesn't exist, so the 
pods cannot be created.
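   
   To illustrate the failure mode, here is a minimal sketch using the fabric8 
Kubernetes client that the flink-kubernetes module is built on; the namespace 
and ConfigMap name are made up for this example:
   ```java
   import io.fabric8.kubernetes.api.model.ConfigMap;
   import io.fabric8.kubernetes.client.DefaultKubernetesClient;
   import io.fabric8.kubernetes.client.KubernetesClient;
   
   public class HadoopConfigMapCheck {
       public static void main(String[] args) {
           try (KubernetesClient client = new DefaultKubernetesClient()) {
               // The task manager pod spec mounts a volume backed by this
               // ConfigMap, but the job manager never created it because the
               // Hadoop configuration was baked into the docker image instead.
               ConfigMap hadoopConf =
                       client.configMaps()
                               .inNamespace("default") // illustrative namespace
                               .withName("hadoop-config-my-cluster") // illustrative name
                               .get();
               if (hadoopConf == null) {
                   // This is the situation the task manager Pods run into: the
                   // referenced ConfigMap does not exist, so pod creation fails.
                   System.out.println("ConfigMap missing; TM pods cannot be created.");
               }
           }
       }
   }
   ```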
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] ruanhang1993 commented on a change in pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules

2021-11-10 Thread GitBox


ruanhang1993 commented on a change in pull request #17556:
URL: https://github.com/apache/flink/pull/17556#discussion_r747273961



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/testutils/MiniClusterExtension.java
##
@@ -0,0 +1,215 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.testutils;
+
+import org.apache.flink.api.common.time.Deadline;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.CoreOptions;
+import org.apache.flink.configuration.JobManagerOptions;
+import org.apache.flink.configuration.MemorySize;
+import org.apache.flink.configuration.RestOptions;
+import org.apache.flink.configuration.TaskManagerOptions;
+import org.apache.flink.configuration.UnmodifiableConfiguration;
+import org.apache.flink.core.testutils.CustomExtension;
+import org.apache.flink.runtime.messages.Acknowledge;
+import org.apache.flink.runtime.minicluster.MiniCluster;
+import org.apache.flink.runtime.minicluster.MiniClusterConfiguration;
+import org.apache.flink.util.ExceptionUtils;
+import org.apache.flink.util.Preconditions;
+import org.apache.flink.util.concurrent.FutureUtils;
+
+import org.junit.jupiter.api.extension.ExtensionContext;
+import org.junit.rules.TemporaryFolder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.URI;
+import java.time.Duration;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+/** Extension which starts a {@link MiniCluster} for testing purposes. */
+public class MiniClusterExtension implements CustomExtension {

Review comment:
   OK, I will wrap a MiniClusterResource in MiniClusterExtension, since they 
share the same code.
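   
   A rough sketch of that wrapping, assuming the `CustomExtension` interface 
from this PR and that `MiniClusterResource`'s `before()`/`after()` lifecycle 
methods are accessible; the constructor and method names may differ in the 
final code:
   ```java
   import org.apache.flink.core.testutils.CustomExtension;
   import org.apache.flink.runtime.testutils.MiniClusterResource;
   import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
   
   import org.junit.jupiter.api.extension.ExtensionContext;
   
   /** Sketch: delegate to the existing JUnit 4 resource instead of copying its code. */
   public class MiniClusterExtension implements CustomExtension {
   
       private final MiniClusterResource miniClusterResource;
   
       public MiniClusterExtension(MiniClusterResourceConfiguration configuration) {
           this.miniClusterResource = new MiniClusterResource(configuration);
       }
   
       @Override
       public void before(ExtensionContext context) throws Exception {
           miniClusterResource.before(); // reuse the resource's setup logic
       }
   
       @Override
       public void after(ExtensionContext context) throws Exception {
           miniClusterResource.after(); // reuse the resource's teardown logic
       }
   }
   ```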




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] ruanhang1993 commented on a change in pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules

2021-11-10 Thread GitBox


ruanhang1993 commented on a change in pull request #17556:
URL: https://github.com/apache/flink/pull/17556#discussion_r747269792



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskCancelAsyncProducerConsumerITCase.java
##
@@ -58,7 +60,8 @@
 import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 
-public class TaskCancelAsyncProducerConsumerITCase extends TestLogger {
+@ExtendWith({TestLoggerExtension.class})

Review comment:
   Yes, but I want this PR to focus on changing the rules to extensions. I 
changed this test to show how to use `AllCallbackWrapper`, and I will open 
follow-up PRs to migrate the modules to JUnit 5 after this PR is merged.
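   
   For reference, a migrated registration looks roughly like this (a minimal 
sketch; it assumes `AllCallbackWrapper` sits next to `EachCallbackWrapper` in 
`org.apache.flink.core.testutils` and wraps a `CustomExtension`, and the class 
name is made up):
   ```java
   import org.apache.flink.core.testutils.AllCallbackWrapper;
   import org.apache.flink.core.testutils.CustomExtension;
   
   import org.junit.jupiter.api.Test;
   import org.junit.jupiter.api.extension.ExtensionContext;
   import org.junit.jupiter.api.extension.RegisterExtension;
   
   public class ExampleITCase {
   
       // Replaces a JUnit 4 @ClassRule: the field must be static, otherwise
       // the wrapped beforeAll/afterAll callbacks are silently skipped.
       @RegisterExtension
       static AllCallbackWrapper clusterWrapper =
               new AllCallbackWrapper(
                       new CustomExtension() {
                           @Override
                           public void before(ExtensionContext context) {
                               // start shared test infrastructure once for the class
                           }
   
                           @Override
                           public void after(ExtensionContext context) {
                               // tear it down after the last test method
                           }
                       });
   
       @Test
       void testSomething() {
           // test body elided
       }
   }
   ```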




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fapaul commented on a change in pull request #17696: [FLINK-24765][kafka] Bump Kafka version to 2.8

2021-11-10 Thread GitBox


fapaul commented on a change in pull request #17696:
URL: https://github.com/apache/flink/pull/17696#discussion_r747269811



##
File path: 
flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducerITCase.java
##
@@ -53,7 +53,12 @@
 import static org.junit.Assert.assertThat;
 import static org.junit.Assert.fail;
 
-/** IT cases for the {@link FlinkKafkaProducer}. */
+/**
+ * IT cases for the {@link FlinkKafkaProducer}.
+ *
+ * Do not run this class in the same junit execution with other tests in your 
IDE. This may lead to leaking threads.

Review comment:
   > If we see more tests than that, we could also introduce an annotation 
later.
   
   Good idea; hopefully this kind of case does not become more common.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] ruanhang1993 commented on a change in pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules

2021-11-10 Thread GitBox


ruanhang1993 commented on a change in pull request #17556:
URL: https://github.com/apache/flink/pull/17556#discussion_r747269792



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskCancelAsyncProducerConsumerITCase.java
##
@@ -58,7 +60,8 @@
 import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 
-public class TaskCancelAsyncProducerConsumerITCase extends TestLogger {
+@ExtendWith({TestLoggerExtension.class})

Review comment:
   Yes, but I want this PR to focus on changing the rules to extensions. I 
changed this test to show how to use `AllCallbackWrapper`, and I will open 
follow-up PRs to migrate the modules to JUnit 5.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fapaul commented on a change in pull request #17696: [FLINK-24765][kafka] Bump Kafka version to 2.8

2021-11-10 Thread GitBox


fapaul commented on a change in pull request #17696:
URL: https://github.com/apache/flink/pull/17696#discussion_r747269536



##
File path: 
flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducerITCase.java
##
@@ -53,7 +53,12 @@
 import static org.junit.Assert.assertThat;
 import static org.junit.Assert.fail;
 
-/** IT cases for the {@link FlinkKafkaProducer}. */
+/**
+ * IT cases for the {@link FlinkKafkaProducer}.
+ *
+ * Do not run this class in the same junit execution with other tests in your 
IDE. This may lead to leaking threads.

Review comment:
   > Is this an issue because tests may be run in parallel in the same JVM 
of the IDE and thus the leak detection looks at the threads of another test?
   
   Are tests within the same module run in parallel by default? Perhaps there 
is also another test within the module leaking threads that is executed before 
`FlinkKafkaProducerITCase` ...




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17761: [FLINK-24862][Connectors / Hive]Fix user-defined hive udaf/udtf cannot be used normally in hive dialect

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17761:
URL: https://github.com/apache/flink/pull/17761#issuecomment-965943227


   
   ## CI report:
   
   * 34ba01994ce91f93f6cf224f004e21ba4a1a505a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26340)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17653: FLINK SQL checkpoint not taking effect

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17653:
URL: https://github.com/apache/flink/pull/17653#issuecomment-958686553


   
   ## CI report:
   
   * 3ef637d1e3861d1de18e54317df91f56439f843c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26333)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17699: [FLINK-20370][table] Fix wrong results when sink primary key is not the same with query result's changelog upsert key

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17699:
URL: https://github.com/apache/flink/pull/17699#issuecomment-961963701


   
   ## CI report:
   
   * a3f2c4fea8e78e65319a33687b35f4934bf0850a Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26339)
 
   * 9ae0a0979b711c1d8be611faca74a84364dbc07e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26343)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] ruanhang1993 commented on a change in pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules

2021-11-10 Thread GitBox


ruanhang1993 commented on a change in pull request #17556:
URL: https://github.com/apache/flink/pull/17556#discussion_r747261202



##
File path: 
flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/core/testutils/EachCallbackWrapper.java
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.core.testutils;
+
+import org.junit.jupiter.api.extension.AfterEachCallback;
+import org.junit.jupiter.api.extension.BeforeEachCallback;
+import org.junit.jupiter.api.extension.ExtensionContext;
+
+/** An extension that wraps logic for {@link BeforeEachCallback} and {@link 
AfterEachCallback}. */
+public class EachCallbackWrapper implements BeforeEachCallback, 
AfterEachCallback {
+private final CustomExtension customExtension;
+

Review comment:
   That is reasonable; I will add the getter to `EachCallbackWrapper` and 
`AllCallbackWrapper`.
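   
   A minimal sketch of what that could look like; the class skeleton comes 
from the diff above, while the getter name and the delegating callback bodies 
are assumptions:
   ```java
   import org.junit.jupiter.api.extension.AfterEachCallback;
   import org.junit.jupiter.api.extension.BeforeEachCallback;
   import org.junit.jupiter.api.extension.ExtensionContext;
   
   /** Sketch: EachCallbackWrapper with a getter for the wrapped extension. */
   public class EachCallbackWrapper implements BeforeEachCallback, AfterEachCallback {
   
       private final CustomExtension customExtension;
   
       public EachCallbackWrapper(CustomExtension customExtension) {
           this.customExtension = customExtension;
       }
   
       /** Exposes the wrapped extension so tests can interact with it directly. */
       public CustomExtension getCustomExtension() {
           return customExtension;
       }
   
       @Override
       public void beforeEach(ExtensionContext context) throws Exception {
           customExtension.before(context); // run the wrapped setup per test
       }
   
       @Override
       public void afterEach(ExtensionContext context) throws Exception {
           customExtension.after(context); // run the wrapped teardown per test
       }
   }
   ```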




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] AHeise commented on pull request #17401: [FLINK-24409][connectors] Fix metrics errors with topics names with periods

2021-11-10 Thread GitBox


AHeise commented on pull request #17401:
URL: https://github.com/apache/flink/pull/17401#issuecomment-966057682


   Does it make sense to re-assign the task to someone else?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] AHeise commented on pull request #17592: [hotfix][flink-yarn] Fix typo in YarnConfigOptions.java

2021-11-10 Thread GitBox


AHeise commented on pull request #17592:
URL: https://github.com/apache/flink/pull/17592#issuecomment-966057207


   Hey, sorry for not giving more details; I was assuming @MartijnVisser would 
pick that up. 
   Please refer to 
https://github.com/apache/flink/blob/master/docs/README.md#generate-configuration-tables
 for more information. We need to make sure that the docs are in sync with the 
code. You can squash everything into one commit then.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17692: [FLINK-24489][CEP] The size of entryCache & eventsBufferCache in the SharedBuffer should be defined with a threshold to limit the num

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17692:
URL: https://github.com/apache/flink/pull/17692#issuecomment-961748638


   
   ## CI report:
   
   * a392282483107bb6b985d9ff724d65d602a86a73 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26337)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas edited a comment on pull request #17691: [FLINK-24586][table] SQL functions should return STRING instead of VARCHAR(2000)

2021-11-10 Thread GitBox


SteNicholas edited a comment on pull request #17691:
URL: https://github.com/apache/flink/pull/17691#issuecomment-965890064


   @slinkydeveloper @Airblader , the failed tests have nothing to do with this 
change. Please check this again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] AHeise commented on a change in pull request #17692: [FLINK-24489][CEP] The size of entryCache & eventsBufferCache in the SharedBuffer should be defined with a threshold to limit the

2021-11-10 Thread GitBox


AHeise commented on a change in pull request #17692:
URL: https://github.com/apache/flink/pull/17692#discussion_r747261841



##
File path: 
flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/configuration/CEPCacheOptions.java
##
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cep.configuration;
+
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.util.TimeUtils;
+
+import java.time.Duration;
+
+/** CEP Cache Options. */
+public class CEPCacheOptions {
+
+private CEPCacheOptions() {}
+
+public static final ConfigOption 
CEP_SHARED_BUFFER_EVENT_CACHE_SLOTS =
+
ConfigOptions.key("pipeline.global-job-parameters.cep.sharedbuffer.event-cache-slots")

Review comment:
   The cache options seem to be described in overly technical terms. How is an 
end-user supposed to understand what's going on? 
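   
   For example, a user-facing description could be attached to the option 
itself; this is a sketch only, and the type, default value, and wording below 
are illustrative rather than what this PR defines:
   ```java
   import org.apache.flink.configuration.ConfigOption;
   import org.apache.flink.configuration.ConfigOptions;
   
   public class CEPCacheOptionsSketch {
       // Illustrative only: the value type and default are assumptions.
       public static final ConfigOption<Integer> SHARED_BUFFER_EVENT_CACHE_SLOTS =
               ConfigOptions.key(
                               "pipeline.global-job-parameters.cep.sharedbuffer.event-cache-slots")
                       .intType()
                       .defaultValue(1024)
                       .withDescription(
                               "Maximum number of events kept in the CEP shared buffer's"
                                       + " in-memory cache before entries are spilled to state."
                                       + " Lower values reduce heap usage; higher values reduce"
                                       + " state accesses.");
   }
   ```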




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17761: [FLINK-24862][Connectors / Hive]Fix user-defined hive udaf/udtf cannot be used normally in hive dialect

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17761:
URL: https://github.com/apache/flink/pull/17761#issuecomment-965943227


   
   ## CI report:
   
   * e0d50b32fe1ca368ceb34d5a21c70125bb8f5213 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26338)
 
   * 34ba01994ce91f93f6cf224f004e21ba4a1a505a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26340)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17698: [FLINK-24689][runtime-web] Add log's last modify time in log list view

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17698:
URL: https://github.com/apache/flink/pull/17698#issuecomment-961944542


   
   ## CI report:
   
   * ceb159afd8d17cd261ac9065d5cbe9baf745dd39 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26332)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas edited a comment on pull request #17691: [FLINK-24586][table] SQL functions should return STRING instead of VARCHAR(2000)

2021-11-10 Thread GitBox


SteNicholas edited a comment on pull request #17691:
URL: https://github.com/apache/flink/pull/17691#issuecomment-965890064


   @slinkydeveloper , the failed tests have nothing to do with this change. 
Please check this again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] ruanhang1993 commented on a change in pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules

2021-11-10 Thread GitBox


ruanhang1993 commented on a change in pull request #17556:
URL: https://github.com/apache/flink/pull/17556#discussion_r747261202



##
File path: 
flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/core/testutils/EachCallbackWrapper.java
##
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.core.testutils;
+
+import org.junit.jupiter.api.extension.AfterEachCallback;
+import org.junit.jupiter.api.extension.BeforeEachCallback;
+import org.junit.jupiter.api.extension.ExtensionContext;
+
+/** An extension that wraps logic for {@link BeforeEachCallback} and {@link 
AfterEachCallback}. */
+public class EachCallbackWrapper implements BeforeEachCallback, 
AfterEachCallback {
+private final CustomExtension customExtension;
+

Review comment:
   That is reasonable; I will add the get method to `EachCallbackWrapper` 
and `AllCallbackWrapper`.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17745: [FLINK-24854][tests] StateHandleSerializationTest unit test error

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17745:
URL: https://github.com/apache/flink/pull/17745#issuecomment-964922388


   
   ## CI report:
   
   * 070e6a2ee8f928e2c7b7bfbce71d4229a0eca947 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26336)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] ruanhang1993 commented on a change in pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules

2021-11-10 Thread GitBox


ruanhang1993 commented on a change in pull request #17556:
URL: https://github.com/apache/flink/pull/17556#discussion_r747258434



##
File path: 
flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/util/LogLevelExtension.java
##
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.util;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.core.LoggerContext;
+import org.apache.logging.log4j.core.config.Configuration;
+import org.apache.logging.log4j.core.config.LoggerConfig;
+import org.apache.logging.slf4j.Log4jLogger;
+import org.junit.jupiter.api.extension.AfterAllCallback;
+import org.junit.jupiter.api.extension.BeforeAllCallback;
+import org.junit.jupiter.api.extension.ExtensionContext;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.slf4j.event.Level;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * An extension that sets the log level for specific class/package loggers for 
a test. Logging
+ * configuration will only be extended when logging is enabled at all (so root 
logger is not OFF).
+ */
+public class LogLevelExtension implements BeforeAllCallback, AfterAllCallback {

Review comment:
   `LoggerAuditingExtension` aims to replace `TestLoggerResource`, and 
`LogLevelExtension` aims to replace `LogLevelRule`. Yeah, we could first check 
whether `TestLoggerResource` and `LogLevelRule` can be merged. 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] AHeise commented on pull request #17548: [FLINK-24383][streaming] Remove the deprecated SlidingTimeWindows, TumblingTimeWindows, BaseAlignedWindowAssigner.

2021-11-10 Thread GitBox


AHeise commented on pull request #17548:
URL: https://github.com/apache/flink/pull/17548#issuecomment-966051807


   I'll check if I can find a good reviewer. Not sure why this PR has not popped 
up on my radar before.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17699: [FLINK-20370][table] Fix wrong results when sink primary key is not the same with query result's changelog upsert key

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17699:
URL: https://github.com/apache/flink/pull/17699#issuecomment-961963701


   
   ## CI report:
   
   * 1550e7e6200bfcac3cf20f3e930d77da1a5874e4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26290)
 
   * a3f2c4fea8e78e65319a33687b35f4934bf0850a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26339)
 
   * 9ae0a0979b711c1d8be611faca74a84364dbc07e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26343)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17699: [FLINK-20370][table] Fix wrong results when sink primary key is not the same with query result's changelog upsert key

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17699:
URL: https://github.com/apache/flink/pull/17699#issuecomment-961963701


   
   ## CI report:
   
   * 1550e7e6200bfcac3cf20f3e930d77da1a5874e4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26290)
 
   * a3f2c4fea8e78e65319a33687b35f4934bf0850a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26339)
 
   * 9ae0a0979b711c1d8be611faca74a84364dbc07e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] AHeise commented on a change in pull request #17696: [FLINK-24765][kafka] Bump Kafka version to 2.8

2021-11-10 Thread GitBox


AHeise commented on a change in pull request #17696:
URL: https://github.com/apache/flink/pull/17696#discussion_r747256716



##
File path: 
flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducerITCase.java
##
@@ -53,7 +53,12 @@
 import static org.junit.Assert.assertThat;
 import static org.junit.Assert.fail;
 
-/** IT cases for the {@link FlinkKafkaProducer}. */
+/**
+ * IT cases for the {@link FlinkKafkaProducer}.
+ *
+ * Do not run this class in the same junit execution with other tests in your 
IDE. This may lead to leaking threads.

Review comment:
   If we see more tests than that, we could also introduce an annotation 
later.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] AHeise commented on a change in pull request #17696: [FLINK-24765][kafka] Bump Kafka version to 2.8

2021-11-10 Thread GitBox


AHeise commented on a change in pull request #17696:
URL: https://github.com/apache/flink/pull/17696#discussion_r747256424



##
File path: 
flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducerITCase.java
##
@@ -53,7 +53,12 @@
 import static org.junit.Assert.assertThat;
 import static org.junit.Assert.fail;
 
-/** IT cases for the {@link FlinkKafkaProducer}. */
+/**
+ * IT cases for the {@link FlinkKafkaProducer}.
+ *
+ * Do not run this class in the same junit execution with other tests in your 
IDE. This may lead to leaking threads.

Review comment:
   Is this an issue because tests may be run in parallel in the same JVM of 
the IDE and thus the leak detection looks at the threads of another test?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] AHeise commented on a change in pull request #17696: [FLINK-24765][kafka] Bump Kafka version to 2.8

2021-11-10 Thread GitBox


AHeise commented on a change in pull request #17696:
URL: https://github.com/apache/flink/pull/17696#discussion_r747255907



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/FlinkKafkaInternalProducer.java
##
@@ -154,20 +161,6 @@ public void close() {
 "Close without timeout is now allowed because it can leave 
lingering Kafka threads.");
 }
 
-@Override
-public void close(long timeout, TimeUnit unit) {
-synchronized (producerClosingLock) {
-kafkaProducer.close(timeout, unit);
-if (LOG.isDebugEnabled()) {
-LOG.debug(
-"Closed internal KafkaProducer {}. Stacktrace: {}",
-System.identityHashCode(this),
-
Joiner.on("\n").join(Thread.currentThread().getStackTrace()));
-}
-closed = true;
-}
-}
-

Review comment:
   Ah sorry, I looked at the wrong commit.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (FLINK-24741) Deprecate FileRecordFormat, use StreamFormat instead

2021-11-10 Thread Arvid Heise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arvid Heise resolved FLINK-24741.
-
Resolution: Fixed

Merged into master as 1dac395967e5870833d67c6bf1103ba874fce601.

> Deprecate FileRecordFormat, use StreamFormat instead
> 
>
> Key: FLINK-24741
> URL: https://issues.apache.org/jira/browse/FLINK-24741
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>  Labels: feature, pull-request-available
> Fix For: 1.15.0
>
>
> Issue: The FileRecordFormat and StreamFormat have too much in common. This 
> makes users confused.
> Suggestion: The FileRecordFormat interface, currently marked as PublicEvolving, 
> should be deprecated. The StreamFormat should be extended and recommended 
> instead. All relevant usages should be refactored and users informed appropriately.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] AHeise merged pull request #17656: [FLINK-24741]Deprecate FileRecordFormat

2021-11-10 Thread GitBox


AHeise merged pull request #17656:
URL: https://github.com/apache/flink/pull/17656


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] ruanhang1993 commented on a change in pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules

2021-11-10 Thread GitBox


ruanhang1993 commented on a change in pull request #17556:
URL: https://github.com/apache/flink/pull/17556#discussion_r747252860



##
File path: 
flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/testutils/junit/SharedObjectsExtension.java
##
@@ -0,0 +1,178 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.testutils.junit;
+
+import org.junit.jupiter.api.extension.AfterEachCallback;
+import org.junit.jupiter.api.extension.BeforeEachCallback;
+import org.junit.jupiter.api.extension.ExtensionContext;
+
+import java.util.Map;
+import java.util.Objects;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicInteger;
+
+/**
+ * This rule allows objects to be used both in the main test case as well as 
in UDFs by using
+ * serializable {@link SharedReference}s. Usage:
+ *
+ * 
+ * {@literal@RegisterExtension}

Review comment:
   I think the test owner has to have some knowledge of JUnit 5 and be careful; 
there is no signal that lets us find incorrect usages like this, because the 
misuse will not make tests fail.
   
   I wrote a test about it: I misuse the `AllCallbackWrapper` in a non-static 
field, and the `EachCallbackWrapper` in a static field.
   ```java
   import org.apache.flink.core.testutils.AllCallbackWrapper;
   import org.apache.flink.core.testutils.CustomExtension;
   import org.apache.flink.core.testutils.EachCallbackWrapper;
   
   import org.junit.jupiter.api.AfterAll;
   import org.junit.jupiter.api.AfterEach;
   import org.junit.jupiter.api.BeforeAll;
   import org.junit.jupiter.api.BeforeEach;
   import org.junit.jupiter.api.Test;
   import org.junit.jupiter.api.extension.ExtensionContext;
   import org.junit.jupiter.api.extension.RegisterExtension;
   
   public class WrapperMisuseTest {
       // Misuse: AllCallbackWrapper on a non-static field; its
       // beforeAll/afterAll callbacks will never run.
       @RegisterExtension
       AllCallbackWrapper allCallbackWrapper =
               new AllCallbackWrapper(new TestExtension("all"));
   
       // Misuse: EachCallbackWrapper on a static field; JUnit 5 still applies
       // its beforeEach/afterEach callbacks.
       @RegisterExtension
       static EachCallbackWrapper eachCallbackWrapper =
               new EachCallbackWrapper(new TestExtension("each"));
   
       @BeforeAll
       public static void ba() {
           System.out.println("[method] before all");
       }
   
       @BeforeEach
       public void be() {
           System.out.println("[method] before each");
       }
   
       @AfterAll
       public static void aa() {
           System.out.println("[method] after all");
       }
   
       @AfterEach
       public void ae() {
           System.out.println("[method] after each");
       }
   
       @Test
       public void test() {
           System.out.println("[self] code");
       }
   
       static class TestExtension implements CustomExtension {
           private final String id;
   
           public TestExtension(String id) {
               this.id = id;
           }
   
           @Override
           public void before(ExtensionContext context) throws Exception {
               System.out.println("[do before]:" + id);
           }
   
           @Override
           public void after(ExtensionContext context) throws Exception {
               System.out.println("[do after]:" + id);
           }
       }
   }
   ```
   The result is as follows.
   ```
   [method] before all
   [do before]:each
   [method] before each
   [self] code
   [method] after each
   [do after]:each
   [method] after all
   ```
   This misuse will not make tests fail. Registering `AllCallbackWrapper` on a 
non-static field causes its `beforeAll`/`afterAll` callbacks not to execute, 
while static fields are not restricted in this way. The result is the same as 
[the JUnit 5 user guide on this 
usage](https://junit.org/junit5/docs/current/user-guide/#extensions-registration-programmatic-static-fields).
   
   So when migrating, we should map `@Rule` to a non-static field and 
`@ClassRule` to a static field to make sure everything goes well, as in the 
sketch below.
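   
   Concretely, the corrected registration for the example above would be (the 
class name is made up; `TestExtension` is the one defined in the test above):
   ```java
   import org.apache.flink.core.testutils.AllCallbackWrapper;
   import org.apache.flink.core.testutils.EachCallbackWrapper;
   
   import org.junit.jupiter.api.extension.RegisterExtension;
   
   public class WrapperCorrectUsageTest {
   
       // Was a JUnit 4 @ClassRule: register on a static field so that
       // beforeAll/afterAll run once per class.
       @RegisterExtension
       static AllCallbackWrapper allCallbackWrapper =
               new AllCallbackWrapper(new WrapperMisuseTest.TestExtension("all"));
   
       // Was a JUnit 4 @Rule: register on a non-static (instance) field so
       // that beforeEach/afterEach run around every test method.
       @RegisterExtension
       EachCallbackWrapper eachCallbackWrapper =
               new EachCallbackWrapper(new WrapperMisuseTest.TestExtension("each"));
   }
   ```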
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] yunfengzhou-hub opened a new pull request #21: [FLINK-24817] Add Naive Bayes implementation

2021-11-10 Thread GitBox


yunfengzhou-hub opened a new pull request #21:
URL: https://github.com/apache/flink-ml/pull/21


   ## What is the purpose of the change
   This PR adds an implementation of the Naive Bayes algorithm to Flink ML. The 
algorithm is implemented with reference to the implementations in Alibaba Alink 
and Apache Spark, but it uses Flink ML's framework and the Flink DataStream API.
   
   This PR also adds classes that convert Strings to indexed numbers. Naive 
Bayes needs this so that it only has to deal with numeric input data.
   
   ## Brief change log
   This PR adds the public classes NaiveBayes and MultiStringIndexer. Users can 
use NaiveBayes for training and inference with the algorithm of the same name, 
and MultiStringIndexer to convert strings into indices; see the usage sketch 
below.
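   
   For illustration, end-to-end usage could look like the following sketch. The 
setter names (`setFeaturesCol`, `setLabelCol`, `setPredictionCol`) and the 
Table-based `fit`/`transform` signatures follow the general Flink ML 
Estimator/Model style but are assumptions, not an excerpt from this PR; imports 
for NaiveBayes and NaiveBayesModel are omitted since their packages are defined 
by this PR:
   ```java
   import org.apache.flink.table.api.Table;
   
   public class NaiveBayesUsageSketch {
   
       /**
        * Trains on {@code trainData} (assumed to have "features" and "label"
        * columns) and returns predictions for {@code predictData}.
        */
       public static Table trainAndPredict(Table trainData, Table predictData) {
           // Hypothetical API: configure the estimator, fit a model, transform.
           NaiveBayes naiveBayes =
                   new NaiveBayes()
                           .setFeaturesCol("features")
                           .setLabelCol("label")
                           .setPredictionCol("prediction");
           NaiveBayesModel model = naiveBayes.fit(trainData);
           // transform() is assumed to return an array with one output table.
           return model.transform(predictData)[0];
       }
   }
   ```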
   
   ## Verifying this change
   The changes are tested by unit tests in MultiStringIndexerTest and 
NaiveBayesTest.
   
   ## Does this pull request potentially affect one of the following parts:
   Dependencies (does it add or upgrade a dependency): (yes)
   The public API, i.e., is any changed class annotated with @Public(Evolving): 
(yes)
   
   ## Documentation
   Does this pull request introduce a new feature? (yes)
   If yes, how is the feature documented? (Java doc)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24817) Support Naive Bayes algorithm in Flink ML

2021-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24817:
---
Labels: pull-request-available  (was: )

> Support Naive Bayes algorithm in Flink ML
> -
>
> Key: FLINK-24817
> URL: https://issues.apache.org/jira/browse/FLINK-24817
> Project: Flink
>  Issue Type: New Feature
>  Components: Library / Machine Learning
>Reporter: Yunfeng Zhou
>Priority: Major
>  Labels: pull-request-available
>
> This ticket aims to add the Naive Bayes algorithm to Flink ML. The algorithm 
> will use the latest Flink ML API proposed in FLIP-173 through FLIP-176. 
>  
> Github PR link: https://github.com/apache/flink-ml/pull/21



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-ml] yunfengzhou-hub closed pull request #21: [FLINK-24817] Add Naive Bayes implementation

2021-11-10 Thread GitBox


yunfengzhou-hub closed pull request #21:
URL: https://github.com/apache/flink-ml/pull/21


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal edited a comment on pull request #17463: [Hotfix] Fix typos.

2021-11-10 Thread GitBox


RocMarshal edited a comment on pull request #17463:
URL: https://github.com/apache/flink/pull/17463#issuecomment-944947844


   Hi @klion26 @alpinegizmo @twalthr, here is a minor typo fix. Would you like 
to help me check it? Thank you very much.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal edited a comment on pull request #17548: [FLINK-24383][streaming] Remove the deprecated SlidingTimeWindows, TumblingTimeWindows, BaseAlignedWindowAssigner.

2021-11-10 Thread GitBox


RocMarshal edited a comment on pull request #17548:
URL: https://github.com/apache/flink/pull/17548#issuecomment-950549753


   Hi @twalthr @xccui @klion26 @AHeise, could you help me check this PR? Thank 
you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal edited a comment on pull request #17692: [FLINK-24489][CEP] The size of entryCache & eventsBufferCache in the SharedBuffer should be defined with a threshold to limit the n

2021-11-10 Thread GitBox


RocMarshal edited a comment on pull request #17692:
URL: https://github.com/apache/flink/pull/17692#issuecomment-963105801


   @Airblader @AHeise @klion26 Could you help me review this PR? Thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] (FLINK-24851) KafkaSourceBuilder: auto.offset.reset is ignored

2021-11-10 Thread liwei li (Jira)


[ https://issues.apache.org/jira/browse/FLINK-24851 ]


liwei li deleted comment on FLINK-24851:
--

was (Author: liliwei):
Also, what was the background for artificially overwriting this value? Or is 
it just a bug?

> KafkaSourceBuilder: auto.offset.reset is ignored
> 
>
> Key: FLINK-24851
> URL: https://issues.apache.org/jira/browse/FLINK-24851
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Arseniy Tashoyan
>Assignee: liwei li
>Priority: Major
>
> Creating KafkaSource like this:
> {code:scala}
> val props = new Properties()
> props.put("bootstrap.servers", "localhost:9092")
> props.put("group.id", "group1")
> props.put("auto.offset.reset", "latest")
> val kafkaSource = KafkaSource.builder[String]()
>   .setProperties(props)
>   .build()
> {code}
> The value actually used for _"auto.offset.reset"_ is *"earliest"* instead of 
> the configured *"latest"*.
> This occurs because _"auto.offset.reset"_ gets overridden by 
> _startingOffsetsInitializer.getAutoOffsetResetStrategy().name().toLowerCase()_.
>  The default value for _startingOffsetsInitializer_ is _"earliest"_.
> This behavior is misleading.
> This behavior imposes an inconvenience on configuring the Kafka connector. We 
> cannot use the Kafka setting _"auto.offset.reset"_ as-is. Instead we must 
> extract this particular setting from other settings and propagate to 
> _KafkaSourceBuilder.setStartingOffsets()_:
> {code:scala}
> val kafkaSource = KafkaSource.builder[String]()
>   .setProperties(props)
>   .setStartingOffsets(
> OffsetsInitializer.committedOffsets(
>   OffsetResetStrategy.valueOf(
> props.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG)
>   .asInstanceOf[String]
>   .toUpperCase(Locale.ROOT)
>   )
> )
>   )
>   .build()
> {code}
> The expected behavior is to use the value of _"auto.offset.reset"_ provided 
> by _KafkaSourceBuilder.setProperties()_ - unless overridden via 
> _KafkaSourceBuilder.setStartingOffsets()_.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24851) KafkaSourceBuilder: auto.offset.reset is ignored

2021-11-10 Thread liwei li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17442107#comment-17442107
 ] 

liwei li commented on FLINK-24851:
--

Also, what was the background for artificially overwriting this value? Or is 
it just a bug?

> KafkaSourceBuilder: auto.offset.reset is ignored
> 
>
> Key: FLINK-24851
> URL: https://issues.apache.org/jira/browse/FLINK-24851
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Arseniy Tashoyan
>Assignee: liwei li
>Priority: Major
>
> Creating KafkaSource like this:
> {code:scala}
> val props = new Properties()
> props.put("bootstrap.servers", "localhost:9092")
> props.put("group.id", "group1")
> props.put("auto.offset.reset", "latest")
> val kafkaSource = KafkaSource.builder[String]()
>   .setProperties(props)
>   .build()
> {code}
> The value actually used for _"auto.offset.reset"_ is *"earliest"* instead of 
> the configured *"latest"*.
> This occurs because _"auto.offset.reset"_ gets overridden by 
> _startingOffsetsInitializer.getAutoOffsetResetStrategy().name().toLowerCase()_.
>  The default value for _startingOffsetsInitializer_ is _"earliest"_.
> This behavior is misleading.
> This behavior imposes an inconvenience on configuring the Kafka connector. We 
> cannot use the Kafka setting _"auto.offset.reset"_ as-is. Instead we must 
> extract this particular setting from other settings and propagate to 
> _KafkaSourceBuilder.setStartingOffsets()_:
> {code:scala}
> val kafkaSource = KafkaSource.builder[String]()
>   .setProperties(props)
>   .setStartingOffsets(
> OffsetsInitializer.committedOffsets(
>   OffsetResetStrategy.valueOf(
> props.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG)
>   .asInstanceOf[String]
>   .toUpperCase(Locale.ROOT)
>   )
> )
>   )
>   .build()
> {code}
> The expected behavior is to use the value of _"auto.offset.reset"_ provided 
> by _KafkaSourceBuilder.setProperties()_ - unless overridden via 
> _KafkaSourceBuilder.setStartingOffsets()_.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] (FLINK-24851) KafkaSourceBuilder: auto.offset.reset is ignored

2021-11-10 Thread liwei li (Jira)


[ https://issues.apache.org/jira/browse/FLINK-24851 ]


liwei li deleted comment on FLINK-24851:
--

was (Author: liliwei):
Also, can I know the background for doing this? Why override this value 
artificially? Or is it just a bug?

> KafkaSourceBuilder: auto.offset.reset is ignored
> 
>
> Key: FLINK-24851
> URL: https://issues.apache.org/jira/browse/FLINK-24851
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Arseniy Tashoyan
>Assignee: liwei li
>Priority: Major
>
> Creating KafkaSource like this:
> {code:scala}
> val props = new Properties()
> props.put("bootstrap.servers", "localhost:9092")
> props.put("group.id", "group1")
> props.put("auto.offset.reset", "latest")
> val kafkaSource = KafkaSource.builder[String]()
>   .setProperties(props)
>   .build()
> {code}
> The value actually used for _"auto.offset.reset"_ is *"earliest"* instead of 
> the configured *"latest"*.
> This occurs because _"auto.offset.reset"_ gets overridden by 
> _startingOffsetsInitializer.getAutoOffsetResetStrategy().name().toLowerCase()_.
>  The default value for _startingOffsetsInitializer_ is _"earliest"_.
> This behavior is misleading.
> This behavior imposes an inconvenience on configuring the Kafka connector. We 
> cannot use the Kafka setting _"auto.offset.reset"_ as-is. Instead we must 
> extract this particular setting from other settings and propagate to 
> _KafkaSourceBuilder.setStartingOffsets()_:
> {code:scala}
> val kafkaSource = KafkaSource.builder[String]()
>   .setProperties(props)
>   .setStartingOffsets(
> OffsetsInitializer.committedOffsets(
>   OffsetResetStrategy.valueOf(
> props.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG)
>   .asInstanceOf[String]
>   .toUpperCase(Locale.ROOT)
>   )
> )
>   )
>   .build()
> {code}
> The expected behavior is to use the value of _"auto.offset.reset"_ provided 
> by _KafkaSourceBuilder.setProperties()_ - unless overridden via 
> _KafkaSourceBuilder.setStartingOffsets()_.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24851) KafkaSourceBuilder: auto.offset.reset is ignored

2021-11-10 Thread liwei li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17442106#comment-17442106
 ] 

liwei li commented on FLINK-24851:
--

Also, can I know the background for doing this? Why override this value 
artificially? Or is it just a bug?

> KafkaSourceBuilder: auto.offset.reset is ignored
> 
>
> Key: FLINK-24851
> URL: https://issues.apache.org/jira/browse/FLINK-24851
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Arseniy Tashoyan
>Assignee: liwei li
>Priority: Major
>
> Creating KafkaSource like this:
> {code:scala}
> val props = new Properties()
> props.put("bootstrap.servers", "localhost:9092")
> props.put("group.id", "group1")
> props.put("auto.offset.reset", "latest")
> val kafkaSource = KafkaSource.builder[String]()
>   .setProperties(props)
>   .build()
> {code}
> The value actually used for _"auto.offset.reset"_ is *"earliest"* instead of 
> the configured *"latest"*.
> This occurs because _"auto.offset.reset"_ gets overridden by 
> _startingOffsetsInitializer.getAutoOffsetResetStrategy().name().toLowerCase()_.
>  The default value for _startingOffsetsInitializer_ is _"earliest"_.
> This behavior is misleading.
> This behavior imposes an inconvenience on configuring the Kafka connector. We 
> cannot use the Kafka setting _"auto.offset.reset"_ as-is. Instead we must 
> extract this particular setting from other settings and propagate to 
> _KafkaSourceBuilder.setStartingOffsets()_:
> {code:scala}
> val kafkaSource = KafkaSource.builder[String]()
>   .setProperties(props)
>   .setStartingOffsets(
> OffsetsInitializer.committedOffsets(
>   OffsetResetStrategy.valueOf(
> props.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG)
>   .asInstanceOf[String]
>   .toUpperCase(Locale.ROOT)
>   )
> )
>   )
>   .build()
> {code}
> The expected behavior is to use the value of _"auto.offset.reset"_ provided 
> by _KafkaSourceBuilder.setProperties()_ - unless overridden via 
> _KafkaSourceBuilder.setStartingOffsets()_.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24871) Flink SQL hive reports IndexOutOfBoundsException when using trim in where clause

2021-11-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-24871:

Component/s: Connectors / Hive

> Flink SQL hive reports IndexOutOfBoundsException when using trim in where 
> clause
> 
>
> Key: FLINK-24871
> URL: https://issues.apache.org/jira/browse/FLINK-24871
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Reporter: Liu
>Priority: Major
>
> The problem can be reproduced as follows:
> In the class HiveDialectITCase, define the test testTrimError
>  
> {code:java}
> @Test
> public void testTrimError() {
> tableEnv.executeSql("create table src (x int,y string)");
> tableEnv.executeSql("select * from src where trim(y) != ''");
> } {code}
> Executing it will throw the following exception.
>  
> {panel}
> java.lang.IndexOutOfBoundsException: index (2) must be less than size (1)
>     at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:1345)
>     at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:1327)
>     at 
> com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:43)
>     at 
> org.apache.calcite.rex.RexCallBinding.getOperandType(RexCallBinding.java:136)
>     at 
> org.apache.calcite.sql.type.OrdinalReturnTypeInference.inferReturnType(OrdinalReturnTypeInference.java:40)
>     at 
> org.apache.calcite.sql.type.SqlTypeTransformCascade.inferReturnType(SqlTypeTransformCascade.java:56)
>     at 
> org.apache.calcite.sql.type.SqlTypeTransformCascade.inferReturnType(SqlTypeTransformCascade.java:56)
>     at 
> org.apache.calcite.sql.SqlOperator.inferReturnType(SqlOperator.java:482)
>     at org.apache.calcite.rex.RexBuilder.deriveReturnType(RexBuilder.java:283)
>     at org.apache.calcite.rex.RexBuilder.makeCall(RexBuilder.java:257)
>     at 
> org.apache.flink.table.planner.delegation.hive.SqlFunctionConverter.visitCall(SqlFunctionConverter.java:107)
>     at 
> org.apache.flink.table.planner.delegation.hive.SqlFunctionConverter.visitCall(SqlFunctionConverter.java:56)
>     at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
>     at org.apache.calcite.rex.RexShuttle.visitList(RexShuttle.java:158)
>     at 
> org.apache.flink.table.planner.delegation.hive.SqlFunctionConverter.visitCall(SqlFunctionConverter.java:107)
>     at 
> org.apache.flink.table.planner.delegation.hive.SqlFunctionConverter.visitCall(SqlFunctionConverter.java:56)
>     at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
>     at 
> org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genFilterRelNode(HiveParserCalcitePlanner.java:914)
>     at 
> org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genFilterRelNode(HiveParserCalcitePlanner.java:1082)
>     at 
> org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genFilterLogicalPlan(HiveParserCalcitePlanner.java:1099)
>     at 
> org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genLogicalPlan(HiveParserCalcitePlanner.java:2736)
>     at 
> org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.logicalPlan(HiveParserCalcitePlanner.java:284)
>     at 
> org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genLogicalPlan(HiveParserCalcitePlanner.java:272)
>     at 
> org.apache.flink.table.planner.delegation.hive.HiveParser.analyzeSql(HiveParser.java:290)
>     at 
> org.apache.flink.table.planner.delegation.hive.HiveParser.processCmd(HiveParser.java:238)
>     at 
> org.apache.flink.table.planner.delegation.hive.HiveParser.parse(HiveParser.java:208)
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:735)
>     at 
> org.apache.flink.connectors.hive.HiveDialectITCase.testTrimError(HiveDialectITCase.java:366)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 

[GitHub] [flink] ruanhang1993 commented on a change in pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules

2021-11-10 Thread GitBox


ruanhang1993 commented on a change in pull request #17556:
URL: https://github.com/apache/flink/pull/17556#discussion_r747224765



##
File path: 
flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/testutils/junit/SharedObjectsExtension.java
##
@@ -0,0 +1,178 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.testutils.junit;
+
+import org.junit.jupiter.api.extension.AfterEachCallback;
+import org.junit.jupiter.api.extension.BeforeEachCallback;
+import org.junit.jupiter.api.extension.ExtensionContext;
+
+import java.util.Map;
+import java.util.Objects;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicInteger;
+
+/**
+ * This extension allows objects to be used both in the main test case as well as in UDFs by using
+ * serializable {@link SharedReference}s. Usage:
+ *
+ * <pre>
+ * {@literal @RegisterExtension}
+ * public final SharedObjectsExtension sharedObjects = SharedObjectsExtension.create();
+ *
+ * {@literal @Test}
+ * public void test() throws Exception {
+ *     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+ *     {@literal SharedReference<ConcurrentLinkedQueue<Long>> listRef = sharedObjects.add(new ConcurrentLinkedQueue<>());}
+ *     int n = 1;
+ *     env.setParallelism(100);
+ *     env.fromSequence(0, n).map(i -> listRef.get().add(i));
+ *     env.execute();
+ *     assertEquals(n + 1, listRef.get().size());
+ *     assertEquals(
+ *             LongStream.rangeClosed(0, n).boxed().collect(Collectors.toList()),
+ *             listRef.get().stream().sorted().collect(Collectors.toList()));
+ * }
+ * </pre>
+ *
+ * <p>The main idea is that shared objects are bound to the scope of a test case instead of a
+ * class. That allows us to:
+ *
+ * <ul>
+ *   <li>Avoid all kinds of static fields in test classes that only exist since all fields in UDFs
+ *       need to be serializable.
+ *   <li>Hopefully make it easier to reason about the test setup.
+ *   <li>Facilitate sharing more test building blocks across test classes.
+ *   <li>Fully allow tests to be rerun/run in parallel without worrying about static fields.
+ * </ul>
+ *
+ * <p>Note that since the shared objects are accessed through multiple threads, they need to be
+ * thread-safe or accessed in a thread-safe manner.
+ */
+public class SharedObjectsExtension implements BeforeEachCallback, AfterEachCallback {

Review comment:
   OK, I will add the annotation.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] ruanhang1993 commented on a change in pull request #17556: [FLINK-24627][tests] add some Junit5 extensions to replace the existed Junit4 rules

2021-11-10 Thread GitBox


ruanhang1993 commented on a change in pull request #17556:
URL: https://github.com/apache/flink/pull/17556#discussion_r747224152



##
File path: 
flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/util/TestLoggerExtension.java
##
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.util;
+
+import org.junit.jupiter.api.extension.BeforeEachCallback;
+import org.junit.jupiter.api.extension.ExtensionContext;
+import org.junit.jupiter.api.extension.TestWatcher;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.PrintWriter;
+import java.io.StringWriter;
+
+/** A JUnit-5-style test logger. */
+public class TestLoggerExtension implements TestWatcher, BeforeEachCallback {

Review comment:
We don't need `nameProvider` in JUnit 5 test cases. If a test needs 
information about itself, we can make use of `TestInfo`. For example, change 
the method signature from `test()` to `test(TestInfo info)`, and the 
parameter `info` will contain information about the test.
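
A minimal sketch of that usage (the class name here is just a placeholder):

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInfo;

class ExampleTest {

    @Test
    void test(TestInfo info) {
        // JUnit 5 resolves the TestInfo parameter automatically.
        String name = info.getDisplayName();
        // info.getTestMethod() and info.getTestClass() are also available.
    }
}
```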




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] lindong28 commented on a change in pull request #24: Flink 24557

2021-11-10 Thread GitBox


lindong28 commented on a change in pull request #24:
URL: https://github.com/apache/flink-ml/pull/24#discussion_r747189344



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/algo/batch/knn/KnnTrainBatchOp.java
##
@@ -0,0 +1,230 @@
+package org.apache.flink.ml.algo.batch.knn;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.functions.RichMapPartitionFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.ml.algo.batch.knn.distance.BaseFastDistance;
+import org.apache.flink.ml.algo.batch.knn.distance.BaseFastDistanceData;
+import org.apache.flink.ml.algo.batch.knn.distance.FastDistanceMatrixData;
+import org.apache.flink.ml.algo.batch.knn.distance.FastDistanceSparseData;
+import org.apache.flink.ml.algo.batch.knn.distance.FastDistanceVectorData;
+import org.apache.flink.ml.common.BatchOperator;
+import org.apache.flink.ml.common.MapPartitionFunctionWrapper;
+import org.apache.flink.ml.common.linalg.DenseVector;
+import org.apache.flink.ml.common.linalg.VectorUtil;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.StringParam;
+import org.apache.flink.ml.params.knn.HasKnnDistanceType;
+import org.apache.flink.ml.params.knn.KnnTrainParams;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.table.catalog.ResolvedSchema;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+import org.apache.flink.util.Preconditions;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static 
org.apache.flink.ml.algo.batch.knn.distance.BaseFastDistanceData.pGson;
+
+/**
+ * KNN is to classify unlabeled observations by assigning them to the class of 
the most similar
+ * labeled examples. Note that though there is no ``training process`` in KNN, 
we create a ``fake
+ * one`` to use in pipeline model. In this operator, we do some preparation to 
speed up the
+ * inference process.
+ */
+public final class KnnTrainBatchOp extends BatchOperator<KnnTrainBatchOp>

Review comment:
   Could you help explain why we need to have both `KnnClassifier` and 
`KnnTrainBatchOp`? Would it be simpler to merge them into one class?
   
   

##
File path: 
flink-ml-lib/src/test/java/org/apache/flink/ml/algo/batch/knn/KnnBatchOpTest.java
##
@@ -0,0 +1,206 @@
+package org.apache.flink.ml.algo.batch.knn;
+
+import org.apache.flink.api.common.RuntimeExecutionMode;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.RestOptions;
+import org.apache.flink.iteration.config.IterationOptions;
+import org.apache.flink.ml.api.core.Pipeline;
+import org.apache.flink.ml.api.core.Stage;
+import org.apache.flink.ml.common.BatchOperator;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import 
org.apache.flink.streaming.api.environment.ExecutionCheckpointingOptions;
+import org.apache.flink.streaming.api.functions.sink.SinkFunction;
+import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Preconditions;
+
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+public class KnnBatchOpTest {
+    private BatchOperator getSourceOp(List<Row> rows) {
+        DataStream<Row> dataStream =
+                MLEnvironmentFactory.getDefault()
+                        .getStreamExecutionEnvironment()
+                        .fromCollection(
+                                rows,
+                                new RowTypeInfo(
+                                        new TypeInformation[] {
+                                            Types.INT, Types.STRING, Types.DOUBLE
+                                        },
+                                        new String[] {"re", "vec", "label"}));
+
+        Table out =
+                MLEnvironmentFactory.getDefault()
+                        .getStreamTableEnvironment()
+                        .fromDataStream(dataStream);
+        return new TableSourceBatchOp(out);
+    }
+
+    private Table 

[GitHub] [flink] flinkbot edited a comment on pull request #17674: [FLINK-24755][doc]Add guidance to solve the package sun.misc does not exist

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17674:
URL: https://github.com/apache/flink/pull/17674#issuecomment-960623713


   
   ## CI report:
   
   * 092e28e32428901c5eab1ff5c82c22ca67378ea4 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26289)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17674: [FLINK-24755][doc]Add guidance to solve the package sun.misc does not exist

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17674:
URL: https://github.com/apache/flink/pull/17674#issuecomment-960623713


   
   ## CI report:
   
   * 092e28e32428901c5eab1ff5c82c22ca67378ea4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26289)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Aitozi commented on pull request #17674: [FLINK-24755][doc]Add guidance to solve the package sun.misc does not exist

2021-11-10 Thread GitBox


Aitozi commented on pull request #17674:
URL: https://github.com/apache/flink/pull/17674#issuecomment-966001781


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17229: [FLINK-23381][state] Back-pressure on reaching state change to upload limit

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17229:
URL: https://github.com/apache/flink/pull/17229#issuecomment-916519510


   
   ## CI report:
   
   * bafd996049bf6f1a6b05c3b3b2828a480ec4b773 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26323)
 
   * 71e47b3f47d258d23dd620175a112a8b5aa73671 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26341)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17229: [FLINK-23381][state] Back-pressure on reaching state change to upload limit

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17229:
URL: https://github.com/apache/flink/pull/17229#issuecomment-916519510


   
   ## CI report:
   
   * bafd996049bf6f1a6b05c3b3b2828a480ec4b773 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26323)
 
   * 71e47b3f47d258d23dd620175a112a8b5aa73671 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24758) filesystem sink: partition.time-extractor.kind support "yyyyMMdd"

2021-11-10 Thread liwei li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17442082#comment-17442082
 ] 

liwei li commented on FLINK-24758:
--

That makes sense. I will modify this pr.

> filesystem sink: partition.time-extractor.kind support "yyyyMMdd"
> -
>
> Key: FLINK-24758
> URL: https://issues.apache.org/jira/browse/FLINK-24758
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Jingsong Lee
>Assignee: liwei li
>Priority: Major
>  Labels: pull-request-available
>
> Now, only supports yyyy-mm-dd hh:mm:ss, we can add a new time-extractor kind 
> to support yyyyMMdd in a single partition field.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] hililiwei removed a comment on pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: partition.time-extractor.kind support "yyyyMMdd"

2021-11-10 Thread GitBox


hililiwei removed a comment on pull request #17749:
URL: https://github.com/apache/flink/pull/17749#issuecomment-965991143


   > Actually, I do not think it is a good way to add a 'basicDate', since 
'yyyy-MM-dd HH:mm:ss' is the default. A better way is to have another 
parameter such as 'date-pattern', whose default value is 'yyyy-MM-dd 
HH:mm:ss', and let users define it themselves.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] hililiwei commented on pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: partition.time-extractor.kind support "yyyyMMdd"

2021-11-10 Thread GitBox


hililiwei commented on pull request #17749:
URL: https://github.com/apache/flink/pull/17749#issuecomment-965991143


   > Actually, I do not think it is a good way to add a 'basicDate', since 
'yyyy-MM-dd HH:mm:ss' is the default. A better way is to have another 
parameter such as 'date-pattern', whose default value is 'yyyy-MM-dd 
HH:mm:ss', and let users define it themselves.
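
For illustration, a custom extractor covering the 'yyyyMMdd' case might look like this (a minimal sketch; it assumes the existing `partition.time-extractor.kind=custom` plus `partition.time-extractor.class` options and the 1.14 package location of `PartitionTimeExtractor`):

```java
import org.apache.flink.table.filesystem.PartitionTimeExtractor;

import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.List;

/** Parses a single partition column holding a 'yyyyMMdd' value, e.g. dt=20211110. */
public class BasicDateTimeExtractor implements PartitionTimeExtractor {

    private static final DateTimeFormatter FORMAT = DateTimeFormatter.ofPattern("yyyyMMdd");

    @Override
    public LocalDateTime extract(List<String> partitionKeys, List<String> partitionValues) {
        // Assumes exactly one partition column containing the basic-date string.
        return LocalDate.parse(partitionValues.get(0), FORMAT).atStartOfDay();
    }
}
```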
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] hililiwei closed pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: partition.time-extractor.kind support "yyyyMMdd"

2021-11-10 Thread GitBox


hililiwei closed pull request #17749:
URL: https://github.com/apache/flink/pull/17749


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24477) Add MongoDB sink

2021-11-10 Thread Lai Dai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17442081#comment-17442081
 ] 

Lai Dai commented on FLINK-24477:
-

Hi, could anyone tell me when a PR for this feature will be opened?

> Add MongoDB sink
> 
>
> Key: FLINK-24477
> URL: https://issues.apache.org/jira/browse/FLINK-24477
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Nir Tsruya
>Assignee: Nir Tsruya
>Priority: Minor
>
> h2. Motivation
> *User stories:*
> As a Flink user, I’d like to use MongoDB as sink for my data pipeline.
> *Scope:*
>  * Implement an asynchronous sink for MongoDB inheriting the AsyncSinkBase 
> class. The implementation can for now reside in its own module in 
> flink-connectors.
>  * Implement an asynchornous sink writer for MongoDB by extending the 
> AsyncSinkWriter. The implemented Sink Writer will be used by the Sink class 
> that will be created as part of this story.
>  * Java / code-level docs.
>  * End to end testing



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] hililiwei commented on pull request #17754: [FLINK-24861][connector][jdbc] Fix false cache lookup for empty data

2021-11-10 Thread GitBox


hililiwei commented on pull request #17754:
URL: https://github.com/apache/flink/pull/17754#issuecomment-965982508


   Looks good.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24758) filesystem sink: partition.time-extractor.kind support "yyyyMMdd"

2021-11-10 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17442077#comment-17442077
 ] 

Jingsong Lee commented on FLINK-24758:
--

[~nobleyd]  +1

> filesystem sink: partition.time-extractor.kind support "yyyyMMdd"
> -
>
> Key: FLINK-24758
> URL: https://issues.apache.org/jira/browse/FLINK-24758
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Jingsong Lee
>Assignee: liwei li
>Priority: Major
>  Labels: pull-request-available
>
> Now, only supports yyyy-mm-dd hh:mm:ss, we can add a new time-extractor kind 
> to support yyyyMMdd in a single partition field.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] RocMarshal commented on a change in pull request #17613: [FLINK-24536][Table SQL/Planner] flink sql support bang equal '!='

2021-11-10 Thread GitBox


RocMarshal commented on a change in pull request #17613:
URL: https://github.com/apache/flink/pull/17613#discussion_r747186368



##
File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/BangEqualITCase.scala
##
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.runtime.stream.sql
+
+import org.apache.flink.streaming.api.scala._
+import org.apache.flink.table.api.bridge.scala._
+import org.apache.flink.table.data.RowData
+import org.apache.flink.table.planner.runtime.utils.{StreamingTestBase, 
TestingAppendRowDataSink}
+import org.apache.flink.table.runtime.typeutils.InternalTypeInfo
+import org.apache.flink.table.types.logical.{IntType, VarCharType}
+
+import org.junit.Assert._
+import org.junit.Test
+
+class BangEqualITCase extends StreamingTestBase {

Review comment:
   Could you also check this case in batch mode?

##
File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/BangEqualITCase.scala
##
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.runtime.stream.sql
+
+import org.apache.flink.streaming.api.scala._
+import org.apache.flink.table.api.bridge.scala._
+import org.apache.flink.table.data.RowData
+import org.apache.flink.table.planner.runtime.utils.{StreamingTestBase, 
TestingAppendRowDataSink}
+import org.apache.flink.table.runtime.typeutils.InternalTypeInfo
+import org.apache.flink.table.types.logical.{IntType, VarCharType}
+
+import org.junit.Assert._
+import org.junit.Test
+
+class BangEqualITCase extends StreamingTestBase {
+
+  @Test
+  def testBangEqual(): Unit = {
+
+val sqlQueryBangEqual =
+  """
+|SELECT * FROM 
+| (VALUES (1, 'Bob'), (1, 'Alice'), (2, 'Lily')) T(a, b)
+|WHERE a != 2
+|
+|""".stripMargin
+
+val sqlQueryUnEqual =
+  """
+|SELECT * FROM 
+| (VALUES (1, 'Bob'), (1, 'Alice'), (2, 'Lily')) T(a, b)
+|WHERE a <> 2
+|
+|""".stripMargin
+
+val outputType = InternalTypeInfo.ofFields(
+  new IntType(),
+  new VarCharType(5))
+
+val result1 = tEnv.sqlQuery(sqlQueryBangEqual).toAppendStream[RowData]
+val sink1 = new TestingAppendRowDataSink(outputType)
+result1.addSink(sink1).setParallelism(1)
+
+val result2 = tEnv.sqlQuery(sqlQueryUnEqual).toAppendStream[RowData]
+val sink2 = new TestingAppendRowDataSink(outputType)
+result2.addSink(sink2).setParallelism(1)
+
+env.execute()
+
+val expected = List("+I(1,Alice)", "+I(1,Bob)")

Review comment:
   It would be better if you could describe the table columns via 
TableFactoryHarness and express the expected table results as Rows instead of 
strings, as mentioned in this 
[pr](https://github.com/apache/flink/pull/17352#discussion_r715626383) by 
@Airblader.
   

##
File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/BangEqualITCase.scala
##
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ 

[GitHub] [flink] flinkbot edited a comment on pull request #17387: [FLINK-24864][metrics] Release TaskManagerJobMetricGroup with the last slot rather than task

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17387:
URL: https://github.com/apache/flink/pull/17387#issuecomment-930245327


   
   ## CI report:
   
   * 44c4ce9fcbdd66f25dcb82d52224e98064cea4ad Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26330)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] lindong28 commented on a change in pull request #29: [FLINK-24838] Add BaseAlgoImpl class to support link() and linkFrom()

2021-11-10 Thread GitBox


lindong28 commented on a change in pull request #29:
URL: https://github.com/apache/flink-ml/pull/29#discussion_r747186614



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/operator/BaseAlgoImpl.java
##
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.operator;
+
+import org.apache.flink.ml.api.core.AlgoOperator;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.WithParams;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.table.api.Table;
+
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * Base class for algorithm operators.
+ *
+ * <p>Base class for the algorithm operators. It hosts the parameters and output tables of an
+ * algorithm operator. Each BaseAlgoImpl may have one or more output tables. One of the output
+ * tables is the primary output table, which can be obtained by calling {@link #getOutput}. The
+ * other output tables are side output tables that can be obtained by calling {@link
+ * #getSideOutputs()}.
+ *
+ * <p>The input of a BaseAlgoImpl is defined in the subclasses of the BaseAlgoImpl.
+ *
+ * @param <T> The class type of the {@link BaseAlgoImpl} implementation itself
+ */
+public abstract class BaseAlgoImpl<T extends BaseAlgoImpl<T>>
+        implements AlgoOperator<T>, WithParams<T> {
+
+    /** Params for algorithms. */
+    private Map<Param<?>, Object> params;
+
+    /** The table held by the operator. */
+    private transient Table output = null;
+
+    /** The side outputs of the operator, similar to a stream's side outputs. */
+    private transient Table[] sideOutputs = null;
+
+    /** Constructs the operator with the initial Params. */
+    protected BaseAlgoImpl(Map<Param<?>, Object> params) {
+        this.params = new HashMap<>();
+        if (null != params) {
+            for (Map.Entry<Param<?>, Object> entry : params.entrySet()) {
+                this.params.put(entry.getKey(), entry.getValue());
+            }
+        }
+        ParamUtils.initializeMapWithDefaultValues(this.params, this);
+    }
+
+    @Override
+    public Map<Param<?>, Object> getParamMap() {
+        return this.params;
+    }
+
+    /** Returns the table held by the operator. */
+    public Table getOutput() {
+        return this.output;
+    }
+
+    /** Returns the side outputs. */
+    public Table[] getSideOutputs() {
+        return this.sideOutputs;
+    }
+
+    /**
+     * Sets the table held by the operator.
+     *
+     * @param output the output table.
+     */
+    protected void setOutput(Table output) {
+        this.output = output;
+    }
+
+    /**
+     * Sets the side outputs.
+     *
+     * @param sideOutputs the side outputs to set on the operator.
+     */
+    protected void setSideOutputs(Table[] sideOutputs) {

Review comment:
   Thanks for the explanation :)
   
   I agree there are cases where differentiating between the main output and side 
outputs could make the caller code simpler. On the other hand, there are also 
cases where it could make the caller code more complex -- otherwise we would 
already differentiate between the main output and side outputs in FLIP-173, 
right?
   
   Here is my reasoning. The pros/cons of these two design options apply to both 
FLIP-173 and this PR, so it is not clear why this PR uses side outputs while 
FLIP-173 does not. And in general, it is probably preferable to use the same API 
flavor throughout a given project.
   
   What do you think?
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17761: [FLINK-24862][Connectors / Hive]Fix user-defined hive udaf/udtf cannot be used normally in hive dialect

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17761:
URL: https://github.com/apache/flink/pull/17761#issuecomment-965943227


   
   ## CI report:
   
   * e0d50b32fe1ca368ceb34d5a21c70125bb8f5213 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26338)
 
   * 34ba01994ce91f93f6cf224f004e21ba4a1a505a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26340)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17761: [FLINK-24862][Connectors / Hive]Fix user-defined hive udaf/udtf cannot be used normally in hive dialect

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17761:
URL: https://github.com/apache/flink/pull/17761#issuecomment-965943227


   
   ## CI report:
   
   * e0d50b32fe1ca368ceb34d5a21c70125bb8f5213 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26338)
 
   * 34ba01994ce91f93f6cf224f004e21ba4a1a505a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24855) Source Coordinator Thread already exists. There should never be more than one thread driving the actions of a Source Coordinator.

2021-11-10 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-24855:
---
Priority: Critical  (was: Blocker)

> Source Coordinator Thread already exists. There should never be more than one 
> thread driving the actions of a Source Coordinator.
> -
>
> Key: FLINK-24855
> URL: https://issues.apache.org/jira/browse/FLINK-24855
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, Runtime / Coordination
>Affects Versions: 1.13.3
> Environment: flink 1.13.3
> flink-cdc 2.1
>Reporter: WangMinChao
>Priority: Critical
>
>  
> When I am synchronizing large tables, I have the following problem:
> 2021-11-09 20:33:04,222 INFO 
> com.ververica.cdc.connectors.mysql.source.enumerator.MySqlSourceEnumerator [] 
> - Assign split MySqlSnapshotSplit\{tableId=db.table, splitId='db.table:383', 
> splitKeyType=[`id` BIGINT NOT NULL], splitStart=[9798290], 
> splitEnd=[9823873], highWatermark=null} to subtask 1
> 2021-11-09 20:33:04,248 INFO 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Triggering 
> checkpoint 101 (type=CHECKPOINT) @ 1636461183945 for job 
> 3cee105643cfee78b80cd0a41143b5c1.
> 2021-11-09 20:33:10,734 ERROR 
> org.apache.flink.runtime.util.FatalExitExceptionHandler [] - FATAL: Thread 
> 'SourceCoordinator-Source: mysqlcdc-source -> Sink: kafka-sink' produced an 
> uncaught exception. Stopping the process...
> java.lang.Error: Source Coordinator Thread already exists. There should never 
> be more than one thread driving the actions of a Source Coordinator. Existing 
> Thread: Thread[SourceCoordinator-Source: mysqlcdc-source -> Sink: 
> kafka-sink,5,main]
> at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorProvider$CoordinatorExecutorThreadFactory.newThread(SourceCoordinatorProvider.java:119)
>  [flink-dist_2.12-1.13.3.jar:1.13.3]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.<init>(ThreadPoolExecutor.java:619)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:932)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1025)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_191]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17761: [FLINK-24862][Connectors / Hive]Fix user-defined hive udaf/udtf cannot be used normally in hive dialect

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17761:
URL: https://github.com/apache/flink/pull/17761#issuecomment-965943227


   
   ## CI report:
   
   * e0d50b32fe1ca368ceb34d5a21c70125bb8f5213 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26338)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17424: [FLINK-24455][tests]FallbackAkkaRpcSystemLoader should check for mave…

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17424:
URL: https://github.com/apache/flink/pull/17424#issuecomment-938516882


   
   ## CI report:
   
   * 4bec22eb5e57d5b892a24b9253f0531df65bcf07 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26293)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24855) Source Coordinator Thread already exists. There should never be more than one thread driving the actions of a Source Coordinator.

2021-11-10 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17442072#comment-17442072
 ] 

Leonard Xu commented on FLINK-24855:


Thanks [~chesnay] for looking into this. The related issue should be 
FLINK-22545 :D

> Source Coordinator Thread already exists. There should never be more than one 
> thread driving the actions of a Source Coordinator.
> -
>
> Key: FLINK-24855
> URL: https://issues.apache.org/jira/browse/FLINK-24855
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, Runtime / Coordination
>Affects Versions: 1.13.3
> Environment: flink 1.13.3
> flink-cdc 2.1
>Reporter: WangMinChao
>Priority: Blocker
>
>  
> When I am synchronizing large tables, I have the following problem:
> 2021-11-09 20:33:04,222 INFO 
> com.ververica.cdc.connectors.mysql.source.enumerator.MySqlSourceEnumerator [] 
> - Assign split MySqlSnapshotSplit\{tableId=db.table, splitId='db.table:383', 
> splitKeyType=[`id` BIGINT NOT NULL], splitStart=[9798290], 
> splitEnd=[9823873], highWatermark=null} to subtask 1
> 2021-11-09 20:33:04,248 INFO 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Triggering 
> checkpoint 101 (type=CHECKPOINT) @ 1636461183945 for job 
> 3cee105643cfee78b80cd0a41143b5c1.
> 2021-11-09 20:33:10,734 ERROR 
> org.apache.flink.runtime.util.FatalExitExceptionHandler [] - FATAL: Thread 
> 'SourceCoordinator-Source: mysqlcdc-source -> Sink: kafka-sink' produced an 
> uncaught exception. Stopping the process...
> java.lang.Error: Source Coordinator Thread already exists. There should never 
> be more than one thread driving the actions of a Source Coordinator. Existing 
> Thread: Thread[SourceCoordinator-Source: mysqlcdc-source -> Sink: 
> kafka-sink,5,main]
> at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorProvider$CoordinatorExecutorThreadFactory.newThread(SourceCoordinatorProvider.java:119)
>  [flink-dist_2.12-1.13.3.jar:1.13.3]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.<init>(ThreadPoolExecutor.java:619)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:932)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1025)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_191]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] Myasuka commented on a change in pull request #17755: [FLINK-24858][core] Prevent version mismatches in TypeSerializers

2021-11-10 Thread GitBox


Myasuka commented on a change in pull request #17755:
URL: https://github.com/apache/flink/pull/17755#discussion_r747181767



##
File path: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/RowSerializer.java
##
@@ -540,14 +540,26 @@ protected int getCurrentOuterSnapshotVersion() {
 
 @Override
 protected void readOuterSnapshot(
-int readOuterSnapshotVersion, DataInputView in, ClassLoader 
userCodeClassLoader) {
-readVersion = readOuterSnapshotVersion;
+int readOuterSnapshotVersion, DataInputView in, ClassLoader 
userCodeClassLoader)
+throws IOException {
+if (readOuterSnapshotVersion <= LAST_VERSION_WITHOUT_ROW_KIND) {
+supportsRowKind = false;
+} else if (readOuterSnapshotVersion == 
LAST_VERSION_WITHOUT_ROW_KIND + 1) {

Review comment:
   I think adding another named constant instead of 
`LAST_VERSION_WITHOUT_ROW_KIND + 1` would make this easier to understand.
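
Something along these lines, perhaps (the constant name is just a suggestion):

```java
/** First snapshot version that wrote row kinds but did not yet persist the flag itself. */
private static final int FIRST_VERSION_WITH_ROW_KIND = LAST_VERSION_WITHOUT_ROW_KIND + 1;
```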

##
File path: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/RowSerializer.java
##
@@ -540,14 +540,26 @@ protected int getCurrentOuterSnapshotVersion() {
 
 @Override
 protected void readOuterSnapshot(
-int readOuterSnapshotVersion, DataInputView in, ClassLoader 
userCodeClassLoader) {
-readVersion = readOuterSnapshotVersion;
+int readOuterSnapshotVersion, DataInputView in, ClassLoader 
userCodeClassLoader)
+throws IOException {
+if (readOuterSnapshotVersion <= LAST_VERSION_WITHOUT_ROW_KIND) {
+supportsRowKind = false;
+} else if (readOuterSnapshotVersion == 
LAST_VERSION_WITHOUT_ROW_KIND + 1) {
+supportsRowKind = true;
+} else {
+supportsRowKind = in.readBoolean();
+}
+}
+
+@Override
+protected void writeOuterSnapshot(DataOutputView out) throws 
IOException {
+out.writeBoolean(supportsRowKind);
 }
 
 @Override
 protected OuterSchemaCompatibility resolveOuterSchemaCompatibility(
 RowSerializer newSerializer) {
-if (readVersion <= LAST_VERSION_WITHOUT_ROW_KIND) {
+if (supportsRowKind == newSerializer.legacyModeEnabled) {

Review comment:
   I think we should add a description of the relationship between 
`supportsRowKind` and `legacyModeEnabled`.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17760: [FLINK-24830][examples] Update DataStream WordCount example

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17760:
URL: https://github.com/apache/flink/pull/17760#issuecomment-965756145


   
   ## CI report:
   
   * 3633f6917d2939616c6ec07fa55c908b54a0e7d0 UNKNOWN
   * ebc7e2e66dfa470df1780945c625c32876e3dffc Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26331)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24586) SQL functions should return STRING instead of VARCHAR(2000)

2021-11-10 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17442071#comment-17442071
 ] 

Nicholas Jiang commented on FLINK-24586:


[~airblader], could you please assign this ticket to me? I have pushed the PR.

> SQL functions should return STRING instead of VARCHAR(2000)
> ---
>
> Key: FLINK-24586
> URL: https://issues.apache.org/jira/browse/FLINK-24586
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Ingo Bürk
>Priority: Major
>  Labels: pull-request-available
>
> There are some SQL functions which currently return VARCHAR(2000). With more 
> strict CAST behavior from FLINK-24413, this could become an issue.
> The following functions return VARCHAR(2000) and should be changed to return 
> STRING instead:
> * JSON_VALUE
> * JSON_QUERY
> * JSON_OBJECT
> * JSON_ARRAY
> There are also some more functions which should be evaluated:
> * CHR
> * REVERSE
> * SPLIT_INDEX
> * PARSE_URL
> * FROM_UNIXTIME
> * DECODE



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17699: [FLINK-20370][table] Fix wrong results when sink primary key is not the same with query result's changelog upsert key

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17699:
URL: https://github.com/apache/flink/pull/17699#issuecomment-961963701


   
   ## CI report:
   
   * 1550e7e6200bfcac3cf20f3e930d77da1a5874e4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26290)
 
   * a3f2c4fea8e78e65319a33687b35f4934bf0850a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26339)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17761: [FLINK-24862][Connectors / Hive]Fix user-defined hive udaf/udtf cannot be used normally in hive dialect

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17761:
URL: https://github.com/apache/flink/pull/17761#issuecomment-965943227


   
   ## CI report:
   
   * e0d50b32fe1ca368ceb34d5a21c70125bb8f5213 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26338)
 
   * 34ba01994ce91f93f6cf224f004e21ba4a1a505a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17699: [FLINK-20370][table] Fix wrong results when sink primary key is not the same with query result's changelog upsert key

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17699:
URL: https://github.com/apache/flink/pull/17699#issuecomment-961963701


   
   ## CI report:
   
   * 1550e7e6200bfcac3cf20f3e930d77da1a5874e4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26290)
 
   * a3f2c4fea8e78e65319a33687b35f4934bf0850a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17761: [FLINK-24862][Connectors / Hive]Fix user-defined hive udaf/udtf cannot be used normally in hive dialect

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17761:
URL: https://github.com/apache/flink/pull/17761#issuecomment-965943227


   
   ## CI report:
   
   * e0d50b32fe1ca368ceb34d5a21c70125bb8f5213 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26338)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-24802) Improve cast ROW to STRING

2021-11-10 Thread Shen Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440850#comment-17440850
 ] 

Shen Zhu edited comment on FLINK-24802 at 11/11/21, 3:37 AM:
-

Hey Timo ([~twalthr]),

I was checking the unit tests, and it seems casting from ROW to STRING is not 
supported either implicitly or explicitly: 
[https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/LogicalTypeCastsTest.java#L159]

Do you mean adding an explicit conversion for this?

Thanks,
Shen


was (Author: shenzhu0127):
Hey Timo,

I was checking the unit tests, and it seems casting from ROW to STRING is not 
supported either implicitly or explicitly: 
[https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/LogicalTypeCastsTest.java#L159]

Do you mean adding an explicit conversion for this?

Thanks,
Shen

> Improve cast ROW to STRING
> --
>
> Key: FLINK-24802
> URL: https://issues.apache.org/jira/browse/FLINK-24802
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Timo Walther
>Priority: Major
>
> When casting ROW to string, we should have a space after the comma to be 
> consistent with ARRAY, MAP, etc.
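
As a plain-Java illustration of the formatting convention at stake (this is not 
Flink code; the literal values are hypothetical):

{code:java}
public class RowCastFormatDemo {
    public static void main(String[] args) {
        // ARRAY and MAP casts already join their elements with ", ".
        String arrayStyle = "[" + String.join(", ", "1", "2") + "]";   // "[1, 2]"
        // Casting ROW currently renders without the space after the comma ...
        String rowToday = "(" + String.join(",", "1", "abc") + ")";    // "(1,abc)"
        // ... and the proposal is to make it consistent with ARRAY/MAP:
        String rowTarget = "(" + String.join(", ", "1", "abc") + ")";  // "(1, abc)"
        System.out.println(arrayStyle + " " + rowToday + " -> " + rowTarget);
    }
}
{code}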



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-24871) Flink SQL hive reports IndexOutOfBoundsException when using trim in where clause

2021-11-10 Thread Liu (Jira)
Liu created FLINK-24871:
---

 Summary: Flink SQL hive reports IndexOutOfBoundsException when 
using trim in where clause
 Key: FLINK-24871
 URL: https://issues.apache.org/jira/browse/FLINK-24871
 Project: Flink
  Issue Type: Improvement
Reporter: Liu


The problem can be reproduced as follows.

In the class HiveDialectITCase, define the test testTrimError:

{code:java}
@Test
public void testTrimError() {
    tableEnv.executeSql("create table src (x int,y string)");
    tableEnv.executeSql("select * from src where trim(y) != ''");
}
{code}
Executing it will throw the following exception.

 
{panel}
java.lang.IndexOutOfBoundsException: index (2) must be less than size (1)

    at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:1345)
    at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:1327)
    at 
com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:43)
    at 
org.apache.calcite.rex.RexCallBinding.getOperandType(RexCallBinding.java:136)
    at 
org.apache.calcite.sql.type.OrdinalReturnTypeInference.inferReturnType(OrdinalReturnTypeInference.java:40)
    at 
org.apache.calcite.sql.type.SqlTypeTransformCascade.inferReturnType(SqlTypeTransformCascade.java:56)
    at 
org.apache.calcite.sql.type.SqlTypeTransformCascade.inferReturnType(SqlTypeTransformCascade.java:56)
    at org.apache.calcite.sql.SqlOperator.inferReturnType(SqlOperator.java:482)
    at org.apache.calcite.rex.RexBuilder.deriveReturnType(RexBuilder.java:283)
    at org.apache.calcite.rex.RexBuilder.makeCall(RexBuilder.java:257)
    at 
org.apache.flink.table.planner.delegation.hive.SqlFunctionConverter.visitCall(SqlFunctionConverter.java:107)
    at 
org.apache.flink.table.planner.delegation.hive.SqlFunctionConverter.visitCall(SqlFunctionConverter.java:56)
    at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
    at org.apache.calcite.rex.RexShuttle.visitList(RexShuttle.java:158)
    at 
org.apache.flink.table.planner.delegation.hive.SqlFunctionConverter.visitCall(SqlFunctionConverter.java:107)
    at 
org.apache.flink.table.planner.delegation.hive.SqlFunctionConverter.visitCall(SqlFunctionConverter.java:56)
    at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genFilterRelNode(HiveParserCalcitePlanner.java:914)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genFilterRelNode(HiveParserCalcitePlanner.java:1082)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genFilterLogicalPlan(HiveParserCalcitePlanner.java:1099)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genLogicalPlan(HiveParserCalcitePlanner.java:2736)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.logicalPlan(HiveParserCalcitePlanner.java:284)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genLogicalPlan(HiveParserCalcitePlanner.java:272)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParser.analyzeSql(HiveParser.java:290)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParser.processCmd(HiveParser.java:238)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParser.parse(HiveParser.java:208)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:735)
    at 
org.apache.flink.connectors.hive.HiveDialectITCase.testTrimError(HiveDialectITCase.java:366)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at 




[GitHub] [flink] flinkbot edited a comment on pull request #17685: [FLINK-24631][Kubernetes]Use minimal selector to select jobManager and taskManager pod

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17685:
URL: https://github.com/apache/flink/pull/17685#issuecomment-961148655


   
   ## CI report:
   
   * 7690d30646bfa745d7328d28ade0acd5a1fbdacc Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26307)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24870) Cannot cast "java.util.Date" to "java.time.Instant"

2021-11-10 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17442056#comment-17442056
 ] 

Jark Wu commented on FLINK-24870:
-

[~wangbaohua], could you share the SQL/code as well? Otherwise, it's hard 
for us to reproduce the problem. 
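
For reference, a hedged guess at the shape of code that can trigger this janino 
error, inferred only from the StructuredObjectConverter frames in the trace and 
not confirmed by the reporter: a POJO field declared as java.util.Date being 
mapped to a TIMESTAMP_LTZ column, whose internal conversion class is 
java.time.Instant.

{code:java}
// Hypothetical reproducer shape; the class and field names are assumptions.
public class Event {
    // java.util.Date is not a supported conversion class for TIMESTAMP_LTZ;
    // the generated converter would then contain a cast from Date to Instant,
    // which janino rejects: "Cannot cast java.util.Date to java.time.Instant".
    public java.util.Date ts;
}
{code}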

> Cannot cast "java.util.Date" to "java.time.Instant"
> ---
>
> Key: FLINK-24870
> URL: https://issues.apache.org/jira/browse/FLINK-24870
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.13.1
>Reporter: wangbaohua
>Priority: Blocker
>
>         at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:582)
>         at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>         at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.executeRestore(StreamTask.java:562)
>         at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:647)
>         at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:537)
>         at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:759)
>         at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>         at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:76)
>         at 
> org.apache.flink.table.data.conversion.StructuredObjectConverter.open(StructuredObjectConverter.java:80)
>         ... 11 more
> Caused by: 
> org.apache.flink.shaded.guava18.com.google.common.util.concurrent.UncheckedExecutionException:
>  org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>         at 
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
>         at 
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache.get(LocalCache.java:3937)
>         at 
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4739)
>         at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:74)
>         ... 12 more
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
> cannot be compiled. This is a bug. Please file an issue.
>         at 
> org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:89)
>         at 
> org.apache.flink.table.runtime.generated.CompileUtils.lambda$compile$1(CompileUtils.java:74)
>         at 
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4742)
>         at 
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527)
>         at 
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2319)
>         at 
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2282)
>         at 
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2197)
>         ... 15 more
> Caused by: org.codehaus.commons.compiler.CompileException: Line 120, Column 
> 101: Cannot cast "java.util.Date" to "java.time.Instant"
>         at 
> org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:12211)
>         at 
> org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5051)
>         at org.codehaus.janino.UnitCompiler.access$8600(UnitCompiler.java:215)
>         at 
> org.codehaus.janino.UnitCompiler$16.visitCast(UnitCompiler.java:4418)
>         at 
> org.codehaus.janino.UnitCompiler$16.visitCast(UnitCompiler.java:4396)
>         at org.codehaus.janino.Java$Cast.accept(Java.java:4898)
>         at org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:4396)
>         at 
> org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5057)
>         at org.codehaus.janino.UnitCompiler.access$8100(UnitCompiler.java:215)
>         at 
> org.codehaus.janino.UnitCompiler$16$1.visitParenthesizedExpression(UnitCompiler.java:4409)
>         at 
> org.codehaus.janino.UnitCompiler$16$1.visitParenthesizedExpression(UnitCompiler.java:4400)
>         at 
> org.codehaus.janino.Java$ParenthesizedExpression.accept(Java.java:4924)
>         at 
> org.codehaus.janino.UnitCompiler$16.visitLvalue(UnitCompiler.java:4400)
>         at 
> org.codehaus.janino.UnitCompiler$16.visitLvalue(UnitCompiler.java:4396)

[jira] [Created] (FLINK-24870) Cannot cast "java.util.Date" to "java.time.Instant"

2021-11-10 Thread wangbaohua (Jira)
wangbaohua created FLINK-24870:
--

 Summary: Cannot cast "java.util.Date" to "java.time.Instant"
 Key: FLINK-24870
 URL: https://issues.apache.org/jira/browse/FLINK-24870
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.13.1
Reporter: wangbaohua


        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:582)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.executeRestore(StreamTask.java:562)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:647)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:537)
        at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:759)
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.util.FlinkRuntimeException: 
org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
compiled. This is a bug. Please file an issue.
        at 
org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:76)
        at 
org.apache.flink.table.data.conversion.StructuredObjectConverter.open(StructuredObjectConverter.java:80)
        ... 11 more
Caused by: 
org.apache.flink.shaded.guava18.com.google.common.util.concurrent.UncheckedExecutionException:
 org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
compiled. This is a bug. Please file an issue.
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache.get(LocalCache.java:3937)
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4739)
        at 
org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:74)
        ... 12 more
Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
cannot be compiled. This is a bug. Please file an issue.
        at 
org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:89)
        at 
org.apache.flink.table.runtime.generated.CompileUtils.lambda$compile$1(CompileUtils.java:74)
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4742)
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527)
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2319)
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2282)
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2197)
        ... 15 more
Caused by: org.codehaus.commons.compiler.CompileException: Line 120, Column 
101: Cannot cast "java.util.Date" to "java.time.Instant"
        at 
org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:12211)
        at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5051)
        at org.codehaus.janino.UnitCompiler.access$8600(UnitCompiler.java:215)
        at org.codehaus.janino.UnitCompiler$16.visitCast(UnitCompiler.java:4418)
        at org.codehaus.janino.UnitCompiler$16.visitCast(UnitCompiler.java:4396)
        at org.codehaus.janino.Java$Cast.accept(Java.java:4898)
        at org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:4396)
        at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5057)
        at org.codehaus.janino.UnitCompiler.access$8100(UnitCompiler.java:215)
        at 
org.codehaus.janino.UnitCompiler$16$1.visitParenthesizedExpression(UnitCompiler.java:4409)
        at 
org.codehaus.janino.UnitCompiler$16$1.visitParenthesizedExpression(UnitCompiler.java:4400)
        at 
org.codehaus.janino.Java$ParenthesizedExpression.accept(Java.java:4924)
        at 
org.codehaus.janino.UnitCompiler$16.visitLvalue(UnitCompiler.java:4400)
        at 
org.codehaus.janino.UnitCompiler$16.visitLvalue(UnitCompiler.java:4396)
        at org.codehaus.janino.Java$Lvalue.accept(Java.java:4148)
        at org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:4396)
        at 
org.codehaus.janino.UnitCompiler.compileGetValue(UnitCompiler.java:5662)
        at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5182)
        at org.codehaus.janino.UnitCompiler.access$9100(UnitCompiler.java:215)
        at 
org.codehaus.janino.UnitCompiler$16.visitMethodInvocation(UnitCompiler.java:4423)
        at 

[jira] [Comment Edited] (FLINK-22826) flink sql1.13.1 causes data loss based on change log stream data join

2021-11-10 Thread yaoboxu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17442045#comment-17442045
 ] 

yaoboxu edited comment on FLINK-22826 at 11/11/21, 2:57 AM:


Flink 1.13.3 still has this problem. When I upgraded my Flink to 1.13.3, data 
was also deleted. Some tables are joined with left join operations, with no 
window operation; one day later, my result table was missing 50 thousand records.


was (Author: JIRAUSER280018):
When I upgraded my Flink to 1.13.3, data was also deleted, because some tables 
are joined via flink-cdc. The problem is still not solved.

> flink sql1.13.1 causes data loss based on change log stream data join
> -
>
> Key: FLINK-22826
> URL: https://issues.apache.org/jira/browse/FLINK-22826
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.0, 1.13.1
>Reporter: 徐州州
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-blocker
> Fix For: 1.15.0
>
>
> {code:java}
> insert into dwd_order_detail
> select
>ord.Id,
>ord.Code,
>Status
>  concat(cast(ord.Id as String),if(oed.Id is null,'oed_null',cast(oed.Id  
> as STRING)),DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd'))  as uuids,
>  TO_DATE(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd')) as As_Of_Date
> from
> orders ord
> left join order_extend oed on  ord.Id=oed.OrderId and oed.IsDeleted=0 and 
> oed.CreateTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd') AS TIMESTAMP)
> where ( ord.OrderTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd') AS 
> TIMESTAMP)
> or ord.ReviewTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd') AS TIMESTAMP)
> or ord.RejectTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd') AS TIMESTAMP)
> ) and ord.IsDeleted=0;
> {code}
> My upsert-kafka table uses uuids as its PRIMARY KEY.
> This is the logic of my Kafka-based canal-json stream data join, written to 
> upsert-kafka tables. I confirm that version 1.12 also has this problem; I just 
> upgraded from 1.12 to 1.13.
> I looked up a user's order data; order number XJ0120210531004794 appears in 
> the canal-json original table as +U, which is normal.
> {code:java}
> | +U | XJ0120210531004794 |  50 |
> | +U | XJ0120210531004672 |  50 |
> {code}
> But written to upsert-kafka via the join, the data consumed from upsert-kafka 
> is:
> {code:java}
> | +I | XJ0120210531004794 |  50 |
> | -U | XJ0120210531004794 |  50 |
> {code}
> The order has these two records. The corresponding rows in the orders and 
> order_extend tables have not changed since they were created; the -U status 
> caused my data to be dropped rather than computed, and the final result was 
> wrong.
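
For readers less familiar with changelog semantics, a minimal sketch (using 
Flink's Row/RowKind API; the values mirror the records quoted above) of why a 
trailing -U acts as a deletion in an upsert sink:

{code:java}
import org.apache.flink.types.Row;
import org.apache.flink.types.RowKind;

public class UpsertTombstoneDemo {
    public static void main(String[] args) {
        // The join emits an insert followed by an update-before for the same key.
        Row insert = Row.ofKind(RowKind.INSERT, "XJ0120210531004794", 50);
        Row retract = Row.ofKind(RowKind.UPDATE_BEFORE, "XJ0120210531004794", 50);
        // An upsert sink keyed on `uuids` turns UPDATE_BEFORE/DELETE into a
        // tombstone; without a matching UPDATE_AFTER the key ends up deleted.
        System.out.println(insert + " then " + retract + " -> key removed downstream");
    }
}
{code}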



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24862) The user-defined hive udaf/udtf cannot be used normally in hive dialect

2021-11-10 Thread xiangqiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17442054#comment-17442054
 ] 

xiangqiao commented on FLINK-24862:
---

Thank you [~jark]. Removing the "temporary" keyword creates a global function, 
which will be written to the Hive Metastore; that cannot meet our needs.

I have solved this problem and it works normally now. Can you review it for me? 
I'm not sure whether it will cause other problems. Thanks.

https://github.com/apache/flink/pull/17761

> The user-defined hive udaf/udtf cannot be used normally in hive dialect
> ---
>
> Key: FLINK-24862
> URL: https://issues.apache.org/jira/browse/FLINK-24862
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.0, 1.14.0
>Reporter: xiangqiao
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-11-10-20-55-11-988.png, 
> image-2021-11-10-21-04-32-660.png
>
>
> Here are two problems:
> 1. First problem: I added a unit test in HiveDialectITCase to reproduce this 
> issue:
> {code:java}
> @Test
> public void testTemporaryFunctionUDAF() throws Exception {
> // create temp function
> tableEnv.executeSql(
> String.format(
> "create temporary function temp_count as '%s'",
> GenericUDAFCount.class.getName()));
> String[] functions = tableEnv.listUserDefinedFunctions();
> assertArrayEquals(new String[] {"temp_count"}, functions);
> // call the function
> tableEnv.executeSql("create table src(x int)");
> tableEnv.executeSql("insert into src values (1),(-1)").await();
> assertEquals(
> "[+I[2]]",
> queryResult(tableEnv.sqlQuery("select temp_count(x) from 
> src")).toString());
> // switch DB and the temp function can still be used
> tableEnv.executeSql("create database db1");
> tableEnv.useDatabase("db1");
> assertEquals(
> "[+I[2]]",
> queryResult(tableEnv.sqlQuery("select temp_count(x) from 
> `default`.src"))
> .toString());
> // drop the function
> tableEnv.executeSql("drop temporary function temp_count");
> functions = tableEnv.listUserDefinedFunctions();
> assertEquals(0, functions.length);
> tableEnv.executeSql("drop temporary function if exists foo");
> } {code}
> !image-2021-11-10-20-55-11-988.png!
> 2. When I solved the first problem, I met the second one, and added a unit 
> test in HiveDialectITCase to reproduce this issue as well:
> This concerns the compatibility of Hive UDTFs. Refer to this 
> issue: https://issues.apache.org/jira/browse/HIVE-5737
> {code:java}
> @Test
> public void testTemporaryFunctionUDTFInitializeWithStructObjectInspector() 
> throws Exception {
> // create temp function
> tableEnv.executeSql(
> String.format(
> "create temporary function temp_split as '%s'",
> 
> HiveGenericUDTFTest.TestSplitUDTFInitializeWithStructObjectInspector.class
> .getName()));
> String[] functions = tableEnv.listUserDefinedFunctions();
> assertArrayEquals(new String[] {"temp_split"}, functions);
> // call the function
> tableEnv.executeSql("create table src(x string)");
> tableEnv.executeSql("insert into src values ('a,b,c')").await();
> assertEquals(
> "[+I[a], +I[b], +I[c]]",
> queryResult(tableEnv.sqlQuery("select temp_split(x) from 
> src")).toString());
> // switch DB and the temp function can still be used
> tableEnv.executeSql("create database db1");
> tableEnv.useDatabase("db1");
> assertEquals(
> "[+I[a], +I[b], +I[c]]",
> queryResult(tableEnv.sqlQuery("select temp_split(x) from 
> `default`.src"))
> .toString());
> // drop the function
> tableEnv.executeSql("drop temporary function temp_split");
> functions = tableEnv.listUserDefinedFunctions();
> assertEquals(0, functions.length);
> tableEnv.executeSql("drop temporary function if exists foo");
> } {code}
> !image-2021-11-10-21-04-32-660.png!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] (FLINK-22826) flink sql1.13.1 causes data loss based on change log stream data join

2021-11-10 Thread yaoboxu (Jira)


[ https://issues.apache.org/jira/browse/FLINK-22826 ]


yaoboxu deleted comment on FLINK-22826:
-

was (Author: JIRAUSER280018):
Flink 1.13.3 still has this problem.

> flink sql1.13.1 causes data loss based on change log stream data join
> -
>
> Key: FLINK-22826
> URL: https://issues.apache.org/jira/browse/FLINK-22826
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.0, 1.13.1
>Reporter: 徐州州
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-blocker
> Fix For: 1.15.0
>
>
> {code:java}
> insert into dwd_order_detail
> select
>ord.Id,
>ord.Code,
>Status
>  concat(cast(ord.Id as String),if(oed.Id is null,'oed_null',cast(oed.Id  
> as STRING)),DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd'))  as uuids,
>  TO_DATE(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd')) as As_Of_Date
> from
> orders ord
> left join order_extend oed on  ord.Id=oed.OrderId and oed.IsDeleted=0 and 
> oed.CreateTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd') AS TIMESTAMP)
> where ( ord.OrderTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd') AS 
> TIMESTAMP)
> or ord.ReviewTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd') AS TIMESTAMP)
> or ord.RejectTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd') AS TIMESTAMP)
> ) and ord.IsDeleted=0;
> {code}
> My upsert-kafka table uses uuids as its PRIMARY KEY.
> This is the logic of my Kafka-based canal-json stream data join, written to 
> upsert-kafka tables. I confirm that version 1.12 also has this problem; I just 
> upgraded from 1.12 to 1.13.
> I looked up a user's order data; order number XJ0120210531004794 appears in 
> the canal-json original table as +U, which is normal.
> {code:java}
> | +U | XJ0120210531004794 |  50 |
> | +U | XJ0120210531004672 |  50 |
> {code}
> But written to upsert-kafka via the join, the data consumed from upsert-kafka 
> is:
> {code:java}
> | +I | XJ0120210531004794 |  50 |
> | -U | XJ0120210531004794 |  50 |
> {code}
> The order has these two records. The corresponding rows in the orders and 
> order_extend tables have not changed since they were created; the -U status 
> caused my data to be dropped rather than computed, and the final result was 
> wrong.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot commented on pull request #17761: [FLINK-24862][Connectors / Hive]Fix user-defined hive udaf/udtf cannot be used normally in hive dialect

2021-11-10 Thread GitBox


flinkbot commented on pull request #17761:
URL: https://github.com/apache/flink/pull/17761#issuecomment-965943621


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e0d50b32fe1ca368ceb34d5a21c70125bb8f5213 (Thu Nov 11 
02:53:46 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-24862).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17761: [FLINK-24862][Connectors / Hive]Fix user-defined hive udaf/udtf cannot be used normally in hive dialect

2021-11-10 Thread GitBox


flinkbot commented on pull request #17761:
URL: https://github.com/apache/flink/pull/17761#issuecomment-965943227


   
   ## CI report:
   
   * e0d50b32fe1ca368ceb34d5a21c70125bb8f5213 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24862) The user-defined hive udaf/udtf cannot be used normally in hive dialect

2021-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24862:
---
Labels: pull-request-available  (was: )

> The user-defined hive udaf/udtf cannot be used normally in hive dialect
> ---
>
> Key: FLINK-24862
> URL: https://issues.apache.org/jira/browse/FLINK-24862
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.0, 1.14.0
>Reporter: xiangqiao
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-11-10-20-55-11-988.png, 
> image-2021-11-10-21-04-32-660.png
>
>
> Here are two problems:
> 1. First problem: I added a unit test in HiveDialectITCase to reproduce this 
> issue:
> {code:java}
> @Test
> public void testTemporaryFunctionUDAF() throws Exception {
> // create temp function
> tableEnv.executeSql(
> String.format(
> "create temporary function temp_count as '%s'",
> GenericUDAFCount.class.getName()));
> String[] functions = tableEnv.listUserDefinedFunctions();
> assertArrayEquals(new String[] {"temp_count"}, functions);
> // call the function
> tableEnv.executeSql("create table src(x int)");
> tableEnv.executeSql("insert into src values (1),(-1)").await();
> assertEquals(
> "[+I[2]]",
> queryResult(tableEnv.sqlQuery("select temp_count(x) from 
> src")).toString());
> // switch DB and the temp function can still be used
> tableEnv.executeSql("create database db1");
> tableEnv.useDatabase("db1");
> assertEquals(
> "[+I[2]]",
> queryResult(tableEnv.sqlQuery("select temp_count(x) from 
> `default`.src"))
> .toString());
> // drop the function
> tableEnv.executeSql("drop temporary function temp_count");
> functions = tableEnv.listUserDefinedFunctions();
> assertEquals(0, functions.length);
> tableEnv.executeSql("drop temporary function if exists foo");
> } {code}
> !image-2021-11-10-20-55-11-988.png!
> 2. When I solved the first problem, I met the second one, and added a unit 
> test in HiveDialectITCase to reproduce this issue as well:
> This concerns the compatibility of Hive UDTFs. Refer to this 
> issue: https://issues.apache.org/jira/browse/HIVE-5737
> {code:java}
> @Test
> public void testTemporaryFunctionUDTFInitializeWithStructObjectInspector() 
> throws Exception {
> // create temp function
> tableEnv.executeSql(
> String.format(
> "create temporary function temp_split as '%s'",
> 
> HiveGenericUDTFTest.TestSplitUDTFInitializeWithStructObjectInspector.class
> .getName()));
> String[] functions = tableEnv.listUserDefinedFunctions();
> assertArrayEquals(new String[] {"temp_split"}, functions);
> // call the function
> tableEnv.executeSql("create table src(x string)");
> tableEnv.executeSql("insert into src values ('a,b,c')").await();
> assertEquals(
> "[+I[a], +I[b], +I[c]]",
> queryResult(tableEnv.sqlQuery("select temp_split(x) from 
> src")).toString());
> // switch DB and the temp function can still be used
> tableEnv.executeSql("create database db1");
> tableEnv.useDatabase("db1");
> assertEquals(
> "[+I[a], +I[b], +I[c]]",
> queryResult(tableEnv.sqlQuery("select temp_split(x) from 
> `default`.src"))
> .toString());
> // drop the function
> tableEnv.executeSql("drop temporary function temp_split");
> functions = tableEnv.listUserDefinedFunctions();
> assertEquals(0, functions.length);
> tableEnv.executeSql("drop temporary function if exists foo");
> } {code}
> !image-2021-11-10-21-04-32-660.png!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] xiangqiao123 opened a new pull request #17761: [FLINK-24862][Connectors / Hive]Fix user-defined hive udaf/udtf cannot be used normally in hive dialect

2021-11-10 Thread GitBox


xiangqiao123 opened a new pull request #17761:
URL: https://github.com/apache/flink/pull/17761


   ## What is the purpose of the change
   
   *Fix the problem that user-defined Hive UDAFs/UDTFs cannot be used normally in 
the Hive dialect.*
   
   ## Brief change log
   
 - *The FunctionCatalog#validateAndPrepareFunction method now skips validation 
for TableFunctionDefinition*
 - *Make HiveGenericUDTF compatible with newer UDTF implementations (see the 
sketch below). Refer to this issue: https://issues.apache.org/jira/browse/HIVE-5737*
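
A minimal sketch of the UDTF API difference from HIVE-5737 that the second change 
addresses (the class PassThroughUdtf and its pass-through behavior are 
hypothetical, for illustration only):

{code:java}
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;

public class PassThroughUdtf extends GenericUDTF {

    // Newer style (HIVE-5737): initialize with a StructObjectInspector. Older
    // UDTFs override the deprecated initialize(ObjectInspector[]) instead, so a
    // generic wrapper such as HiveGenericUDTF has to support both entry points.
    @Override
    public StructObjectInspector initialize(StructObjectInspector argOIs)
            throws UDFArgumentException {
        return argOIs; // hypothetical: output rows mirror the input row shape
    }

    @Override
    public void process(Object[] args) throws HiveException {
        forward(args); // emit each input row unchanged
    }

    @Override
    public void close() throws HiveException {}
}
{code}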
   
   
   ## Verifying this change
   
   
   This change added tests and can be verified as follows:
 - *Added unit test HiveDialectITCase#testTemporaryFunctionUDAF for creating a 
temporary UDAF function*
 - *Added unit test 
HiveDialectITCase#testTemporaryFunctionUDTFInitializeWithObjectInspector for 
creating a temporary UDTF function that is initialized with ObjectInspector*
 - *Added unit test 
HiveDialectITCase#testTemporaryFunctionUDTFInitializeWithStructObjectInspector 
for creating a temporary UDTF function that is initialized with 
StructObjectInspector*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-22541) add json format filter params

2021-11-10 Thread liwei li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17442052#comment-17442052
 ] 

liwei li commented on FLINK-22541:
--

What is the conclusion of this issue? Should we support this feature? 

In my humble opinion, I'm inclined to vote for this feature because it 
simplifies table statements and makes it easier to get the nested data you want.

cc [~lirui] [~jark] 

 

> add json format filter params 
> --
>
> Key: FLINK-22541
> URL: https://issues.apache.org/jira/browse/FLINK-22541
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Affects Versions: 1.11.0, 1.12.0
>Reporter: sandy du
>Priority: Minor
> Attachments: image-2021-05-06-13-37-49-160.png, 
> image-2021-05-06-13-38-37-284.png
>
>
> In my case, one Kafka topic stores data for multiple tables, for example:
>  
> \{"id":"121","source":"users","content":{"name":"test01","age":20,"addr":"addr1"}}
>  
> \{"id":"122","source":"users","content":{"name":"test02","age":23,"addr":"addr2"}}
>  
> \{"id":"124","source":"users","content":{"name":"test03","age":34,"addr":"addr3"}}
>  
> \{"id":"124","source":"order","content":{"orderId":"11","price":34,"addr":"addr1231"}}
>  
> \{"id":"125","source":"order","content":{"orderId":"12","price":34,"addr":"addr1232"}}
>  
> \{"id":"126","source":"order","content":{"orderId":"13","price":34,"addr":"addr1233"}}
>   
>  I just want to consume data from table order; the Flink SQL DDL looks like this:
>  CREATE TABLE order (
>  orderId STRING,
>  age INT,
>  addr STRING
>  )
>  with (
>  'connector'='kafka',
>  'topic'='kafkatopic',
>  'properties.bootstrap.servers'='localhost:9092',
>  'properties.group.id'='testGroup',
>  'scan.startup.mode'='earliest-offset',
>  'format'='json',
>  'path-fliter'='$[?(@.source=="order")]',
>  'path-data'='$.content'
>  );
>   
>  path-fliter and path-data can use  JsonPath 
> ([https://github.com/json-path/JsonPath])
>   
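
A hedged sketch of the proposed semantics using the JsonPath library the ticket 
links. The option names 'path-fliter' and 'path-data' are quoted from the 
proposal as-is; wrapping the single record in a JSON array so the filter 
expression applies is an assumption.

{code:java}
import com.jayway.jsonpath.JsonPath;

import java.util.List;

public class JsonPathFilterDemo {
    public static void main(String[] args) {
        String line = "{\"id\":\"124\",\"source\":\"order\","
                + "\"content\":{\"orderId\":\"11\",\"price\":34,\"addr\":\"addr1231\"}}";
        // 'path-fliter' = '$[?(@.source=="order")]' : keep only `order` records.
        List<Object> matches =
                JsonPath.read("[" + line + "]", "$[?(@.source=='order')]");
        if (!matches.isEmpty()) {
            // 'path-data' = '$.content' : deserialize only the nested payload.
            Object content = JsonPath.read(line, "$.content");
            System.out.println(content); // {orderId=11, price=34, addr=addr1231}
        }
    }
}
{code}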



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17424: [FLINK-24455][tests]FallbackAkkaRpcSystemLoader should check for mave…

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17424:
URL: https://github.com/apache/flink/pull/17424#issuecomment-938516882


   
   ## CI report:
   
   * 4bec22eb5e57d5b892a24b9253f0531df65bcf07 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26293)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Aitozi commented on pull request #17424: [FLINK-24455][tests]FallbackAkkaRpcSystemLoader should check for mave…

2021-11-10 Thread GitBox


Aitozi commented on pull request #17424:
URL: https://github.com/apache/flink/pull/17424#issuecomment-965937395


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17692: [FLINK-24489][CEP] The size of entryCache & eventsBufferCache in the SharedBuffer should be defined with a threshold to limit the num

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17692:
URL: https://github.com/apache/flink/pull/17692#issuecomment-961748638


   
   ## CI report:
   
   * 59391b06ac7eb512b298dfcaa49847d20bb32c7e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26254)
 
   * a392282483107bb6b985d9ff724d65d602a86a73 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26337)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17685: [FLINK-24631][Kubernetes]Use minimal selector to select jobManager and taskManager pod

2021-11-10 Thread GitBox


flinkbot edited a comment on pull request #17685:
URL: https://github.com/apache/flink/pull/17685#issuecomment-961148655


   
   ## CI report:
   
   * 7690d30646bfa745d7328d28ade0acd5a1fbdacc Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26307)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-22826) flink sql1.13.1 causes data loss based on change log stream data join

2021-11-10 Thread yaoboxu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17442047#comment-17442047
 ] 

yaoboxu commented on FLINK-22826:
-

Flink 1.13.3 still has this problem.

> flink sql1.13.1 causes data loss based on change log stream data join
> -
>
> Key: FLINK-22826
> URL: https://issues.apache.org/jira/browse/FLINK-22826
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.0, 1.13.1
>Reporter: 徐州州
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-blocker
> Fix For: 1.15.0
>
>
> {code:java}
> insert into dwd_order_detail
> select
>ord.Id,
>ord.Code,
>Status
>  concat(cast(ord.Id as String),if(oed.Id is null,'oed_null',cast(oed.Id  
> as STRING)),DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd'))  as uuids,
>  TO_DATE(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd')) as As_Of_Date
> from
> orders ord
> left join order_extend oed on  ord.Id=oed.OrderId and oed.IsDeleted=0 and 
> oed.CreateTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd') AS TIMESTAMP)
> where ( ord.OrderTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd') AS 
> TIMESTAMP)
> or ord.ReviewTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd') AS TIMESTAMP)
> or ord.RejectTime>CAST(DATE_FORMAT(LOCALTIMESTAMP,'-MM-dd') AS TIMESTAMP)
> ) and ord.IsDeleted=0;
> {code}
> My upsert-kafka table uses uuids as its PRIMARY KEY.
> This is the logic of my Kafka-based canal-json stream data join, written to 
> upsert-kafka tables. I confirm that version 1.12 also has this problem; I just 
> upgraded from 1.12 to 1.13.
> I looked up a user's order data; order number XJ0120210531004794 appears in 
> the canal-json original table as +U, which is normal.
> {code:java}
> | +U | XJ0120210531004794 |  50 |
> | +U | XJ0120210531004672 |  50 |
> {code}
> But written to upsert-kafka via the join, the data consumed from upsert-kafka 
> is:
> {code:java}
> | +I | XJ0120210531004794 |  50 |
> | -U | XJ0120210531004794 |  50 |
> {code}
> The order has these two records. The corresponding rows in the orders and 
> order_extend tables have not changed since they were created; the -U status 
> caused my data to be dropped rather than computed, and the final result was 
> wrong.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

