[jira] [Updated] (FLINK-17896) HiveCatalog can not work with new table factory because of is_generic

2020-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17896:
---
Labels: pull-request-available  (was: )

> HiveCatalog can not work with new table factory because of is_generic
> -
>
> Key: FLINK-17896
> URL: https://issues.apache.org/jira/browse/FLINK-17896
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / API
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.ValidationException: Unsupported options found for connector 'print'.
>
> Unsupported options:
>
> is_generic
>
> Supported options:
>
> connector
> print-identifier
> property-version
> standard-error
> {code}
> This is because HiveCatalog puts the is_generic property into generic tables, but 
> the new factory does not remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-17896) HiveCatalog can not work with new table factory because of is_generic

2020-05-24 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-17896:


Assignee: Jingsong Lee

> HiveCatalog can not work with new table factory because of is_generic
> -
>
> Key: FLINK-17896
> URL: https://issues.apache.org/jira/browse/FLINK-17896
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / API
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.11.0
>
>
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.ValidationException: Unsupported options found for connector 'print'.
>
> Unsupported options:
>
> is_generic
>
> Supported options:
>
> connector
> print-identifier
> property-version
> standard-error
> {code}
> This is because HiveCatalog puts the is_generic property into generic tables, but 
> the new factory does not remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi opened a new pull request #12316: [FLINK-17896][hive] HiveCatalog can not work with new table factory because of is_generic

2020-05-24 Thread GitBox


JingsongLi opened a new pull request #12316:
URL: https://github.com/apache/flink/pull/12316


   
   ## What is the purpose of the change
   
   ```
   [ERROR] Could not execute SQL statement. Reason:
   org.apache.flink.table.api.ValidationException: Unsupported options found for connector 'print'.
   
   Unsupported options:
   
   is_generic
   
   Supported options:
   
   connector
   print-identifier
   property-version
   standard-error
   ```
   
   This is because HiveCatalog puts the is_generic property into generic tables, but 
the new factory does not remove it.
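   
   A minimal sketch of the idea (illustrative only, not the exact implementation in this PR; the class and method names here are assumptions): copy the catalog table's options and drop the catalog-internal flag before handing the options to the connector factory.
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   class IsGenericStripper {
       // Catalog-internal marker written by HiveCatalog; connector factories
       // such as 'print' do not declare it, so option validation fails
       // if it leaks through to them.
       static final String IS_GENERIC = "is_generic";
   
       // Returns a copy of the table options without the internal flag.
       static Map<String, String> stripInternalFlag(Map<String, String> options) {
           Map<String, String> cleaned = new HashMap<>(options);
           cleaned.remove(IS_GENERIC);
           return cleaned;
       }
   }
   ```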
   
   ## Brief change log
   
   - Implement `HiveDynamicTableFactory` to remove the `is_generic` flag before 
creating the table source & sink
   
   ## Verifying this change
   
   `HiveCatalogITCase.testNewTableFactory`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] aljoscha commented on pull request #11972: [FLINK-17058] Adding ProcessingTimeoutTrigger of nested triggers.

2020-05-24 Thread GitBox


aljoscha commented on pull request #11972:
URL: https://github.com/apache/flink/pull/11972#issuecomment-633411543


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #12256: [FLINK-17018][runtime] DefaultExecutionSlotAllocator allocates slots in bulks

2020-05-24 Thread GitBox


zhuzhurk commented on a change in pull request #12256:
URL: https://github.com/apache/flink/pull/12256#discussion_r429754093



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/DefaultExecutionSlotAllocator.java
##
@@ -67,23 +76,211 @@
 
private final SlotProviderStrategy slotProviderStrategy;
 
+   private final SlotOwner slotOwner;
+
private final InputsLocationsRetriever inputsLocationsRetriever;
 
+   // temporary hack. can be removed along with the individual slot 
allocation code path
+   // once bulk slot allocation is fully functional
+   private final boolean enableBulkSlotAllocation;

Review comment:
   Good suggestion! Will give it a try.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-shaded] zentol commented on pull request #83: [FLINK-17221] Remove Jersey dependencies from yarn-common exclusions

2020-05-24 Thread GitBox


zentol commented on pull request #83:
URL: https://github.com/apache/flink-shaded/pull/83#issuecomment-633410154


   The associated JIRA has been closed.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-shaded] zentol closed pull request #83: [FLINK-17221] Remove Jersey dependencies from yarn-common exclusions

2020-05-24 Thread GitBox


zentol closed pull request #83:
URL: https://github.com/apache/flink-shaded/pull/83


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-16499) Flink shaded hadoop could not work when Yarn timeline service is enabled

2020-05-24 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-16499.

Fix Version/s: (was: shaded-11.0)
   Resolution: Won't Fix

The hadoop modules have been dropped from flink-shaded in FLINK-17685.

> Flink shaded hadoop could not work when Yarn timeline service is enabled
> 
>
> Key: FLINK-16499
> URL: https://issues.apache.org/jira/browse/FLINK-16499
> Project: Flink
>  Issue Type: Bug
>  Components: BuildSystem / Shaded
>Affects Versions: shaded-10.0
>Reporter: Yang Wang
>Priority: Major
>
> When the Yarn timeline service is enabled (via 
> {{yarn.timeline-service.enabled=true}} in yarn-site.xml), flink-shaded-hadoop 
> fails to submit a Flink job to the Yarn cluster. The following exception 
> will be thrown.
>  
> The root cause is that {{jersey-core-xx.jar}} is not bundled into 
> {{flink-shaded-hadoop-xx.jar}}.
>  
> {code:java}
> 2020-03-09 03:35:34,396 ERROR org.apache.flink.client.cli.CliFrontend [] - Fatal error while running command line interface.
> java.lang.NoClassDefFoundError: javax/ws/rs/ext/MessageBodyReader
> 	at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.8.0_242]
> 	at java.lang.ClassLoader.defineClass(ClassLoader.java:757) ~[?:1.8.0_242]
> 	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader.defineClass(URLClassLoader.java:468) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader.access$100(URLClassLoader.java:74) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader$1.run(URLClassLoader.java:369) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader$1.run(URLClassLoader.java:363) ~[?:1.8.0_242]
> 	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader.findClass(URLClassLoader.java:362) ~[?:1.8.0_242]
> 	at java.lang.ClassLoader.loadClass(ClassLoader.java:419) ~[?:1.8.0_242]
> 	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) ~[?:1.8.0_242]
> 	at java.lang.ClassLoader.loadClass(ClassLoader.java:352) ~[?:1.8.0_242]
> 	at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.8.0_242]
> 	at java.lang.ClassLoader.defineClass(ClassLoader.java:757) ~[?:1.8.0_242]
> 	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader.defineClass(URLClassLoader.java:468) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader.access$100(URLClassLoader.java:74) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader$1.run(URLClassLoader.java:369) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader$1.run(URLClassLoader.java:363) ~[?:1.8.0_242]
> 	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader.findClass(URLClassLoader.java:362) ~[?:1.8.0_242]
> 	at java.lang.ClassLoader.loadClass(ClassLoader.java:419) ~[?:1.8.0_242]
> 	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) ~[?:1.8.0_242]
> 	at java.lang.ClassLoader.loadClass(ClassLoader.java:352) ~[?:1.8.0_242]
> 	at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.8.0_242]
> 	at java.lang.ClassLoader.defineClass(ClassLoader.java:757) ~[?:1.8.0_242]
> 	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader.defineClass(URLClassLoader.java:468) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader.access$100(URLClassLoader.java:74) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader$1.run(URLClassLoader.java:369) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader$1.run(URLClassLoader.java:363) ~[?:1.8.0_242]
> 	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_242]
> 	at java.net.URLClassLoader.findClass(URLClassLoader.java:362) ~[?:1.8.0_242]
> 	at java.lang.ClassLoader.loadClass(ClassLoader.java:419) ~[?:1.8.0_242]
> 	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) ~[?:1.8.0_242]
> 	at java.lang.ClassLoader.loadClass(ClassLoader.java:352) ~[?:1.8.0_242]
> 	at org.apache.hadoop.yarn.util.timeline.TimelineUtils.<clinit>(TimelineUtils.java:50) ~[flink-shaded-hadoop-2-uber-2.8.3-7.0.jar:2.8.3-7.0]
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:179) ~[flink-shaded-hadoop-2-uber-2.8.3-7.0.jar:2.8.3-7.0]
> 	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) ~[flink-shaded-hadoop-2-uber-2.8.3-7.0.jar:2.8.3-7.0]
> 	at org.apache.flink.yarn.YarnClusterClientFactory.getClusterDescriptor(YarnClusterClientFactory.java:71) ~[flink-dist_2.11-1.10.0-vvr-0.1-SNAPSHOT.jar:1.10.0-vvr-0.1-SNAPSHOT] at 

[GitHub] [flink] flinkbot edited a comment on pull request #12309: [FLINK-17889][hive] Hive can not work with filesystem connector

2020-05-24 Thread GitBox


flinkbot edited a comment on pull request #12309:
URL: https://github.com/apache/flink/pull/12309#issuecomment-633345845


   
   ## CI report:
   
   * c0f98fc48c889443462b8a1056e50ce8082d0314 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2095)
 
   * f6aec0584d0e5ba97e7b5c4707d93d97465b4704 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2112)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12278: [FLINK-17019][runtime] Fulfill slot requests in request order

2020-05-24 Thread GitBox


flinkbot edited a comment on pull request #12278:
URL: https://github.com/apache/flink/pull/12278#issuecomment-632060550


   
   ## CI report:
   
   * 30dbfc08b3a4147d18dc00aebb2e31773f1fdbc1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1997)
 
   * a3a4256ccc6300fcac27d7625461c078a1510604 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-24 Thread GitBox


flinkbot edited a comment on pull request #12260:
URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314


   
   ## CI report:
   
   * a3a71c5920058f4bbbdc5f11022b8b7736b291cf UNKNOWN
   * 081a62a66ef0a1b687b10e6b41ed8066b0c7992d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2101)
 
   * cb99a301a3d1625909fbf94e8543e3a7b0726685 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2111)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12315: [FLINK-17917][yarn] Ignore the external resource with a value of 0 in…

2020-05-24 Thread GitBox


flinkbot commented on pull request #12315:
URL: https://github.com/apache/flink/pull/12315#issuecomment-633409833


   
   ## CI report:
   
   * 9108fb04b2b8cc0eebc941f7e179fcffed296138 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12314: [FLINK-17756][table-api-java] Drop table/view shouldn't take effect o…

2020-05-24 Thread GitBox


flinkbot edited a comment on pull request #12314:
URL: https://github.com/apache/flink/pull/12314#issuecomment-633403476


   
   ## CI report:
   
   * b7a68f0f07ab9bb2abc3182d72c0dc80ed59dda1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2113)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12028: [FLINK-17553][table]fix plan error when constant exists in group window key

2020-05-24 Thread GitBox


flinkbot edited a comment on pull request #12028:
URL: https://github.com/apache/flink/pull/12028#issuecomment-625656822


   
   ## CI report:
   
   * e51960bfa9933fc5515fde87896dbee100ef41f8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=796)
 
   * 5ffb28916f4d997ff9294f2df87389c79a7d1951 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #12256: [FLINK-17018][runtime] DefaultExecutionSlotAllocator allocates slots in bulks

2020-05-24 Thread GitBox


zhuzhurk commented on a change in pull request #12256:
URL: https://github.com/apache/flink/pull/12256#discussion_r429758351



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/PhysicalSlotRequestBulk.java
##
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobmaster.slotpool;
+
+import org.apache.flink.runtime.clusterframework.types.AllocationID;
+import org.apache.flink.runtime.jobmaster.SlotRequestId;
+
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+/**
+ * Represents a bulk of physical slot requests.
+ */
+public class PhysicalSlotRequestBulk {
+
+   final Map pendingRequests;

Review comment:
   ok.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #12256: [FLINK-17018][runtime] DefaultExecutionSlotAllocator allocates slots in bulks

2020-05-24 Thread GitBox


zhuzhurk commented on a change in pull request #12256:
URL: https://github.com/apache/flink/pull/12256#discussion_r429758258



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/PhysicalSlotRequest.java
##
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobmaster.slotpool;
+
+import org.apache.flink.runtime.clusterframework.types.SlotProfile;
+import org.apache.flink.runtime.jobmaster.SlotRequestId;
+
+/**
+ * Represents a request for a physical slot.
+ */
+public class PhysicalSlotRequest {
+
+   private SlotRequestId slotRequestId;
+
+   private SlotProfile slotProfile;
+
+   private boolean slotWillBeOccupiedIndefinitely;
+
+   public PhysicalSlotRequest(
+   final SlotRequestId slotRequestId,
+   final SlotProfile slotProfile,
+   final boolean slotWillBeOccupiedIndefinitely) {
+
+   this.slotRequestId = slotRequestId;
+   this.slotProfile = slotProfile;
+   this.slotWillBeOccupiedIndefinitely = 
slotWillBeOccupiedIndefinitely;
+   }
+
+   public SlotRequestId getSlotRequestId() {
+   return slotRequestId;
+   }
+
+   public SlotProfile getSlotProfile() {
+   return slotProfile;
+   }
+
+   boolean willSlotBeOccupiedIndefinitely() {
+   return slotWillBeOccupiedIndefinitely;
+   }
+
+   /**
+* Result of a {@link PhysicalSlotRequest}.
+*/
+   public static class Result {
+
+   private SlotRequestId slotRequestId;

Review comment:
   ok.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #12256: [FLINK-17018][runtime] DefaultExecutionSlotAllocator allocates slots in bulks

2020-05-24 Thread GitBox


zhuzhurk commented on a change in pull request #12256:
URL: https://github.com/apache/flink/pull/12256#discussion_r429758061



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/PhysicalSlotRequest.java
##
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobmaster.slotpool;
+
+import org.apache.flink.runtime.clusterframework.types.SlotProfile;
+import org.apache.flink.runtime.jobmaster.SlotRequestId;
+
+/**
+ * Represents a request for a physical slot.
+ */
+public class PhysicalSlotRequest {
+
+   private SlotRequestId slotRequestId;

Review comment:
   ok.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #12254: [FLINK-17802][kafka] Set offset commit only if group id is configured for new Kafka Table source

2020-05-24 Thread GitBox


wuchong commented on pull request #12254:
URL: https://github.com/apache/flink/pull/12254#issuecomment-633405996


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #12252: [FLINK-17802][kafka] Set offset commit only if group id is configured for new Kafka Table source

2020-05-24 Thread GitBox


wuchong commented on pull request #12252:
URL: https://github.com/apache/flink/pull/12252#issuecomment-633405699


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #12256: [FLINK-17018][runtime] DefaultExecutionSlotAllocator allocates slots in bulks

2020-05-24 Thread GitBox


zhuzhurk commented on a change in pull request #12256:
URL: https://github.com/apache/flink/pull/12256#discussion_r429755150



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/PhysicalSlotRequestBulkTracker.java
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.jobmaster.slotpool;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.util.clock.Clock;
+
+import java.util.IdentityHashMap;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Tracks physical slot request bulks.
+ */
+class PhysicalSlotRequestBulkTracker {
+
+   private final Clock clock;
+
+   /** Timestamps indicate when bulks become unfulfillable. */
+   private final Map<PhysicalSlotRequestBulk, Long> bulkUnfulfillableTimestamps;
+
+   PhysicalSlotRequestBulkTracker(final Clock clock) {
+   this.clock = clock;
+   this.bulkUnfulfillableTimestamps = new IdentityHashMap<>();
+   }
+
+   void track(final PhysicalSlotRequestBulk bulk) {
+   // a bulk is initially unfulfillable
+   bulkUnfulfillableTimestamps.put(bulk, 
clock.relativeTimeMillis());
+   }
+
+   void untrack(final PhysicalSlotRequestBulk bulk) {
+   bulkUnfulfillableTimestamps.remove(bulk);
+   }
+
+   boolean isTracked(final PhysicalSlotRequestBulk bulk) {
+   return bulkUnfulfillableTimestamps.containsKey(bulk);
+   }
+
+   void markBulkFulfillable(final PhysicalSlotRequestBulk bulk) {
+   checkState(isTracked(bulk));
+
+   bulkUnfulfillableTimestamps.put(bulk, Long.MAX_VALUE);
+   }
+
+   void markBulkUnfulfillable(final PhysicalSlotRequestBulk bulk, final 
long currentTimestamp) {

Review comment:
   This is to reduce the invocations of `System.nanoTime()` and to ensure that the 
timestamp is consistent within `SchedulerImpl#checkPhysicalSlotRequestBulkTimeout(...)`.
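   
   A hedged sketch of the call-site pattern this enables (the tracker and bulk types are the ones in the diff above; the surrounding method is illustrative, not the actual scheduler code):
   
   ```java
   import java.util.Collection;
   
   class BulkTimeoutCheckSketch {
       // Read the clock once per check cycle and reuse the value, so every
       // bulk is compared against the same timestamp instead of a fresh
       // System.nanoTime() reading per bulk.
       static void markAllUnfulfillable(
               PhysicalSlotRequestBulkTracker tracker,
               Collection<PhysicalSlotRequestBulk> bulks,
               long currentTimestamp) {
           for (PhysicalSlotRequestBulk bulk : bulks) {
               tracker.markBulkUnfulfillable(bulk, currentTimestamp);
           }
       }
   }
   ```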





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rmetzger commented on pull request #12301: [FLINK-16572] [pubsub,e2e] Add check to see if adding a timeout to th…

2020-05-24 Thread GitBox


rmetzger commented on pull request #12301:
URL: https://github.com/apache/flink/pull/12301#issuecomment-633404724


   Yes, that's my fault.
   The GitHub "merge" button hat an error and then produced this merge commit 
when I clicked "try again" :(
   
   Merge commits are actually disabled for this repository. I guess this is a 
Bug in GitHub.
   
   
![image](https://user-images.githubusercontent.com/89049/82785487-f275a000-9e62-11ea-83c7-f9139446d8e8.png)
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] carp84 commented on pull request #12078: [FLINK-17610][state] Align the behavior of result of internal map state to return empty iterator

2020-05-24 Thread GitBox


carp84 commented on pull request #12078:
URL: https://github.com/apache/flink/pull/12078#issuecomment-633404190


   Please also rebase on the latest code base to resolve conflicts. Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] carp84 commented on a change in pull request #12078: [FLINK-17610][state] Align the behavior of result of internal map state to return empty iterator

2020-05-24 Thread GitBox


carp84 commented on a change in pull request #12078:
URL: https://github.com/apache/flink/pull/12078#discussion_r429752787



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/TtlStateTestBase.java
##
@@ -208,7 +209,7 @@ public void testExactExpirationOnWrite() throws Exception {
 
timeProvider.time = 300;
assertEquals(EXPIRED_UNAVAIL, ctx().emptyValue, ctx().get());
-   assertEquals("Original state should be cleared on access", 
ctx().emptyValue, ctx().getOriginal());
+   assertTrue(ctx().isOriginalEmptyValue());

Review comment:
   ```suggestion
assertTrue("Original state should be cleared on access", 
ctx().isOriginalEmptyValue());
   ```

##
File path: 
flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/ttl/RocksDBTtlStateTestBase.java
##
@@ -161,11 +162,11 @@ private void testCompactFilter(boolean takeSnapshot, 
boolean rescaleAfterRestore
 
setTimeAndCompact(stateDesc, 170L);
sbetc.setCurrentKey("k1");
-   assertEquals("Expired original state should be unavailable", 
ctx().emptyValue, ctx().getOriginal());
+   assertTrue(ctx().isOriginalEmptyValue());

Review comment:
   ```suggestion
assertTrue("Expired original state should be unavailable", 
ctx().isOriginalEmptyValue());
   ```

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/TtlStateTestBase.java
##
@@ -222,7 +223,7 @@ public void testRelaxedExpirationOnWrite() throws Exception 
{
 
timeProvider.time = 120;
assertEquals(EXPIRED_AVAIL, ctx().getUpdateEmpty, ctx().get());
-   assertEquals("Original state should be cleared on access", 
ctx().emptyValue, ctx().getOriginal());
+   assertTrue(ctx().isOriginalEmptyValue());

Review comment:
   ```suggestion
assertTrue("Original state should be cleared on access", 
ctx().isOriginalEmptyValue());
   ```

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/TtlStateTestBase.java
##
@@ -247,7 +248,7 @@ public void testExactExpirationOnRead() throws Exception {
 
timeProvider.time = 250;
assertEquals(EXPIRED_UNAVAIL, ctx().emptyValue, ctx().get());
-   assertEquals("Original state should be cleared on access", 
ctx().emptyValue, ctx().getOriginal());
+   assertTrue(ctx().isOriginalEmptyValue());

Review comment:
   ```suggestion
assertTrue("Original state should be cleared on access", 
ctx().isOriginalEmptyValue());
   ```

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/TtlStateTestBase.java
##
@@ -509,7 +510,7 @@ public void testIncrementalCleanup() throws Exception {
private void checkExpiredKeys(int startKey, int endKey) throws 
Exception {
for (int i = startKey; i < endKey; i++) {
sbetc.setCurrentKey(Integer.toString(i));
-   assertEquals("Original state should be cleared", 
ctx().emptyValue, ctx().getOriginal());
+   assertTrue(ctx().isOriginalEmptyValue());

Review comment:
   ```suggestion
assertTrue("Original state should be cleared", 
ctx().isOriginalEmptyValue());
   ```

##
File path: 
flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/ttl/RocksDBTtlStateTestBase.java
##
@@ -161,11 +162,11 @@ private void testCompactFilter(boolean takeSnapshot, 
boolean rescaleAfterRestore
 
setTimeAndCompact(stateDesc, 170L);
sbetc.setCurrentKey("k1");
-   assertEquals("Expired original state should be unavailable", 
ctx().emptyValue, ctx().getOriginal());
+   assertTrue(ctx().isOriginalEmptyValue());
assertEquals(EXPIRED_UNAVAIL, ctx().emptyValue, ctx().get());
 
sbetc.setCurrentKey("k2");
-   assertEquals("Expired original state should be unavailable", 
ctx().emptyValue, ctx().getOriginal());
+   assertTrue(ctx().isOriginalEmptyValue());

Review comment:
   ```suggestion
assertTrue("Expired original state should be unavailable", 
ctx().isOriginalEmptyValue());
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #12256: [FLINK-17018][runtime] DefaultExecutionSlotAllocator allocates slots in bulks

2020-05-24 Thread GitBox


zhuzhurk commented on a change in pull request #12256:
URL: https://github.com/apache/flink/pull/12256#discussion_r429754093



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/DefaultExecutionSlotAllocator.java
##
@@ -67,23 +76,211 @@
 
private final SlotProviderStrategy slotProviderStrategy;
 
+   private final SlotOwner slotOwner;
+
private final InputsLocationsRetriever inputsLocationsRetriever;
 
+   // temporary hack. can be removed along with the individual slot 
allocation code path
+   // once bulk slot allocation is fully functional
+   private final boolean enableBulkSlotAllocation;

Review comment:
   Will give it a try.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12313: [FLINK-17005][docs] Translate the CREATE TABLE ... LIKE syntax documentation to Chinese

2020-05-24 Thread GitBox


flinkbot edited a comment on pull request #12313:
URL: https://github.com/apache/flink/pull/12313#issuecomment-633389597


   
   ## CI report:
   
   * c225fa8b2a774a8579501e136004a9d64db89244 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2108)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12314: [FLINK-17756][table-api-java] Drop table/view shouldn't take effect o…

2020-05-24 Thread GitBox


flinkbot commented on pull request #12314:
URL: https://github.com/apache/flink/pull/12314#issuecomment-633403476


   
   ## CI report:
   
   * b7a68f0f07ab9bb2abc3182d72c0dc80ed59dda1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12315: [FLINK-17917][yarn] Ignore the external resource with a value of 0 in…

2020-05-24 Thread GitBox


flinkbot commented on pull request #12315:
URL: https://github.com/apache/flink/pull/12315#issuecomment-633403452


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9108fb04b2b8cc0eebc941f7e179fcffed296138 (Mon May 25 
06:33:47 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-17917).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #12256: [FLINK-17018][runtime] DefaultExecutionSlotAllocator allocates slots in bulks

2020-05-24 Thread GitBox


zhuzhurk commented on a change in pull request #12256:
URL: https://github.com/apache/flink/pull/12256#discussion_r429753738



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/SingleLogicalSlot.java
##
@@ -168,6 +185,11 @@ public void release(Throwable cause) {
releaseFuture.complete(null);
}
 
+   @Override
+   public boolean willOccupySlotIndefinitely() {

Review comment:
   It is intentional.
   SingleLogicalSlot has two roles: one is the logical slot, and the other is the 
physical slot payload.
   It will occupy a physical slot indefinitely iff it will itself be occupied 
indefinitely by a task.
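   
   A hedged sketch of the method body implied by the diff above (the field name is an assumption):
   
   ```java
   @Override
   public boolean willOccupySlotIndefinitely() {
       // As a physical slot payload, the logical slot occupies its physical
       // slot indefinitely exactly when the task occupying it does.
       return willBeOccupiedIndefinitely;
   }
   ```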





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12309: [FLINK-17889][hive] Hive can not work with filesystem connector

2020-05-24 Thread GitBox


flinkbot edited a comment on pull request #12309:
URL: https://github.com/apache/flink/pull/12309#issuecomment-633345845


   
   ## CI report:
   
   * c0f98fc48c889443462b8a1056e50ce8082d0314 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2095)
 
   * f6aec0584d0e5ba97e7b5c4707d93d97465b4704 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17406) Add documentation about dynamic table options

2020-05-24 Thread Danny Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Chen updated FLINK-17406:
---
Summary: Add documentation about dynamic table options  (was: add 
documentation about dynamic table options)

> Add documentation about dynamic table options
> -
>
> Key: FLINK-17406
> URL: https://issues.apache.org/jira/browse/FLINK-17406
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Kurt Young
>Assignee: Danny Chen
>Priority: Major
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on a change in pull request #12289: [FLINK-17874][Connectors/HBase]Handling the NPE for hbase-connector

2020-05-24 Thread GitBox


wuchong commented on a change in pull request #12289:
URL: https://github.com/apache/flink/pull/12289#discussion_r429753647



##
File path: 
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/util/HBaseTypeUtils.java
##
@@ -81,13 +81,16 @@ public static Object deserializeToObject(byte[] value, int 
typeIdx, Charset stri
 * Serialize the Java Object to byte array with the given type.
 */
public static byte[] serializeFromObject(Object value, int typeIdx, 
Charset stringCharset) {
+   if (value == null){
+   return EMPTY_BYTES;

Review comment:
   Yes, I think we had better handle it in this PR, because the empty bytes 
(for types other than bytes and string) are introduced in this PR. 
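   
   A minimal self-contained sketch of the null handling under discussion (the real `serializeFromObject` also dispatches on `typeIdx`; the fallback branch here is purely illustrative):
   
   ```java
   import java.nio.charset.Charset;
   
   class HBaseNullSerializationSketch {
       private static final byte[] EMPTY_BYTES = new byte[0];
   
       // SQL NULL is written as an empty byte array instead of causing an NPE.
       static byte[] serializeFromObject(Object value, Charset stringCharset) {
           if (value == null) {
               return EMPTY_BYTES;
           }
           return value.toString().getBytes(stringCharset); // simplified fallback
       }
   }
   ```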





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-24 Thread GitBox


flinkbot edited a comment on pull request #12260:
URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314


   
   ## CI report:
   
   * a3a71c5920058f4bbbdc5f11022b8b7736b291cf UNKNOWN
   * 081a62a66ef0a1b687b10e6b41ed8066b0c7992d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2101)
 
   * cb99a301a3d1625909fbf94e8543e3a7b0726685 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #12275: [FLINK-16021][table-common] DescriptorProperties.putTableSchema does …

2020-05-24 Thread GitBox


wuchong commented on a change in pull request #12275:
URL: https://github.com/apache/flink/pull/12275#discussion_r429746785



##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/descriptors/DescriptorProperties.java
##
@@ -241,6 +244,19 @@ public void putTableSchema(String key, TableSchema schema) 
{
Arrays.asList(WATERMARK_ROWTIME, 
WATERMARK_STRATEGY_EXPR, WATERMARK_STRATEGY_DATA_TYPE),
watermarkValues);
}
+
+   if (schema.getPrimaryKey().isPresent()) {
+   final UniqueConstraint uniqueConstraint = 
schema.getPrimaryKey().get();
+   final List<List<String>> uniqueConstraintValues = new ArrayList<>();
+   uniqueConstraintValues.add(Arrays.asList(
+   uniqueConstraint.getName(),
+   uniqueConstraint.getType().name(),
+   String.join(",", uniqueConstraint.getColumns())));
+   putIndexedFixedProperties(
+   key + '.' + CONSTRAINT_UNIQUE,
+   Arrays.asList(NAME, TYPE, 
CONSTRAINT_UNIQUE_COLUMNS),
+   uniqueConstraintValues);
+   }

Review comment:
   Because we only support primary key now, I think we can have dedicated 
primary key properties, so that we don't need to handle the index. For example:
   
   ```java
   public static final String PRIMARY_KEY_NAME = "primary-key.name";
   public static final String PRIMARY_KEY_COLUMNS = "primary-key.columns";
   
   schema.getPrimaryKey().ifPresent(pk -> {
   putString(key + "." + PRIMARY_KEY_NAME, pk.getName());
   putString(key + "." + PRIMARY_KEY_COLUMNS, String.join(",", 
pk.getColumns()));
   });
   ```
   
   This is also helpful for users who write yaml: 
   
   ```
   tables:
     - name: TableNumber1
       type: source-table
       schema:
         primary-key:
           name: constraint1
           columns: f1, f2
   ```

##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/descriptors/DescriptorProperties.java
##
@@ -610,7 +626,9 @@ public DataType getDataType(String key) {
public Optional getOptionalTableSchema(String key) {
// filter for number of fields
final int fieldCount = properties.keySet().stream()
-   .filter((k) -> k.startsWith(key) && k.endsWith('.' + 
TABLE_SCHEMA_NAME))
+   .filter((k) -> k.startsWith(key)
+   // "key." is the prefix.
+   && 
SCHEMA_COLUMN_NAME_SUFFIX.matcher(k.substring(key.length() + 1)).matches())

Review comment:
   We can just exclude the primary key, then we don't need the regex 
matching. 
   
   ```
   .filter((k) -> k.startsWith(key) && !k.startsWith(key + "." + PRIMARY_KEY) 
&& k.endsWith('.' + NAME))
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17917) ResourceInformationReflector#getExternalResources should ignore the external resource with a value of 0

2020-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17917:
---
Labels: pull-request-available  (was: )

> ResourceInformationReflector#getExternalResources should ignore the external 
> resource with a value of 0
> ---
>
> Key: FLINK-17917
> URL: https://issues.apache.org/jira/browse/FLINK-17917
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Yangze Guo
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> *Background*: In FLINK-17390, we leverage 
> {{WorkerSpecContainerResourceAdapter.InternalContainerResource}} to handle 
> container matching logic. In FLINK-17407, we introduce external resources in 
> {{WorkerSpecContainerResourceAdapter.InternalContainerResource}}.
>  On containers returned by Yarn, we try to get the corresponding worker specs 
> by:
>  - Convert the container to {{InternalContainerResource}}
>  - Get the WorkerResourceSpec from {{containerResourceToWorkerSpecs}} map.
> *Problem*: Container mismatch could happen in the below scenario:
>  - Flink does not allocate any external resources, the {{externalResources}} 
> of {{InternalContainerResource}} is an empty map.
>  - The returned container contains all the resources (with a value of 0) 
> defined in Yarn's {{resource-types.xml}}. The {{externalResources}} of 
> {{InternalContainerResource}} has one or more entries with a value of 0.
>  - These two {{InternalContainerResource}} do not match.
> To solve this problem, we could ignore all the external resources with a 
> value of 0 in "ResourceInformationReflector#getExternalResources".
> cc [~trohrmann] Could you assign this to me.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] KarmaGYZ opened a new pull request #12315: [FLINK-17917][yarn] Ignore the external resource with a value of 0 in…

2020-05-24 Thread GitBox


KarmaGYZ opened a new pull request #12315:
URL: https://github.com/apache/flink/pull/12315


   … ResourceInformationReflector#getExternalResources
   
   ## What is the purpose of the change
   
   ResourceInformationReflector#getExternalResources should ignore the external 
resource with a value of 0.
   
   ## Brief change log
   
   ResourceInformationReflector#getExternalResources should ignore the external 
resource with a value of 0.
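   
   A hedged sketch of the intended filtering (class, method, and variable names are assumptions, not the exact change):
   
   ```java
   import java.util.Map;
   import java.util.stream.Collectors;
   
   class ExternalResourceFilterSketch {
       // Drop zero-valued entries so that a container reporting every resource
       // type from resource-types.xml with a value of 0 still matches a worker
       // spec that requested no external resources.
       static Map<String, Long> ignoreZeroValues(Map<String, Long> externalResources) {
           return externalResources.entrySet().stream()
                   .filter(entry -> entry.getValue() != 0L)
                   .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
       }
   }
   ```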
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   - 
`ResourceInformationReflectorTest#testGetResourceInformationIgnoreResourceWithZeroValue`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? no



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17917) ResourceInformationReflector#getExternalResources should ignore the external resource with a value of 0

2020-05-24 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo updated FLINK-17917:
---
Description: 
*Background*: In FLINK-17390, we leverage 
{{WorkerSpecContainerResourceAdapter.InternalContainerResource}} to handle 
container matching logic. In FLINK-17407, we introduce external resources in 
{{WorkerSpecContainerResourceAdapter.InternalContainerResource}}.
 On containers returned by Yarn, we try to get the corresponding worker specs 
by:
 - Convert the container to {{InternalContainerResource}}
 - Get the WorkerResourceSpec from {{containerResourceToWorkerSpecs}} map.

*Problem*: Container mismatch could happen in the below scenario:
 - Flink does not allocate any external resources, the {{externalResources}} of 
{{InternalContainerResource}} is an empty map.
 - The returned container contains all the resources (with a value of 0) 
defined in Yarn's {{resource-types.xml}}. The {{externalResources}} of 
{{InternalContainerResource}} has one or more entries with a value of 0.
 - These two {{InternalContainerResource}} do not match.

To solve this problem, we could ignore all the external resources with a value 
of 0 in "ResourceInformationReflector#getExternalResources".

cc [~trohrmann] Could you assign this to me.

  was:
*Background*: In FLINK-17390, we leverage 
{{WorkerSpecContainerResourceAdapter.InternalContainerResource}} to handle 
container matching logic. In FLINK-17407, we introduce external resources in 
{{WorkerSpecContainerResourceAdapter.InternalContainerResource}}.
On containers returned by Yarn, we try to get the corresponding worker specs by:
- Convert the container to {{InternalContainerResource}}
- Get the WorkerResourceSpec from {{containerResourceToWorkerSpecs}} map.

Container mismatch could happen in the below scenario:
- Flink does not allocate any external resources, the {{externalResources}} of 
{{InternalContainerResource}} is an empty map.
- The returned container contains all the resources (with a value of 0) defined 
in Yarn's {{resource-types.xml}}. The {{externalResources}} of 
{{InternalContainerResource}} has one or more entries with a value of 0.
- These two {{InternalContainerResource}} do not match.

To solve this problem, we could ignore all the external resources with a value 
of 0 in "ResourceInformationReflector#getExternalResources".

cc [~trohrmann] Could you assign this to me.


> ResourceInformationReflector#getExternalResources should ignore the external 
> resource with a value of 0
> ---
>
> Key: FLINK-17917
> URL: https://issues.apache.org/jira/browse/FLINK-17917
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Yangze Guo
>Priority: Blocker
> Fix For: 1.11.0
>
>
> *Background*: In FLINK-17390, we leverage 
> {{WorkerSpecContainerResourceAdapter.InternalContainerResource}} to handle 
> container matching logic. In FLINK-17407, we introduce external resources in 
> {{WorkerSpecContainerResourceAdapter.InternalContainerResource}}.
>  On containers returned by Yarn, we try to get the corresponding worker specs 
> by:
>  - Convert the container to {{InternalContainerResource}}
>  - Get the WorkerResourceSpec from {{containerResourceToWorkerSpecs}} map.
> *Problem*: Container mismatch could happen in the below scenario:
>  - Flink does not allocate any external resources, the {{externalResources}} 
> of {{InternalContainerResource}} is an empty map.
>  - The returned container contains all the resources (with a value of 0) 
> defined in Yarn's {{resource-types.xml}}. The {{externalResources}} of 
> {{InternalContainerResource}} has one or more entries with a value of 0.
>  - These two {{InternalContainerResource}} do not match.
> To solve this problem, we could ignore all the external resources with a 
> value of 0 in "ResourceInformationReflector#getExternalResources".
> cc [~trohrmann] Could you assign this to me.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-17542) Unify slot request timeout handling for streaming and batch tasks

2020-05-24 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-17542.
---
Resolution: Abandoned

Superseded by FLINK-17018.

> Unify slot request timeout handling for streaming and batch tasks
> -
>
> Key: FLINK-17542
> URL: https://issues.apache.org/jira/browse/FLINK-17542
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> There are 2 different slot request timeout handling mechanisms for batch and 
> streaming tasks.
> For streaming tasks, the slot request will fail if it is not fulfilled within 
> slotRequestTimeout.
> For batch tasks, the slot request will be checked periodically to see whether 
> it is fulfillable, and only fails if it has been unfulfillable for a certain 
> period (slotRequestTimeout).
> With slots marked with whether they will be occupied indefinitely, we can 
> unify this handling. See 
> [FLIP-119|https://cwiki.apache.org/confluence/display/FLINK/FLIP-119+Pipelined+Region+Scheduling#FLIP-119PipelinedRegionScheduling-ExtendedSlotProviderInterface]
>  for more details.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-17017) Implement Bulk Slot Allocation

2020-05-24 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-17017.
---
Resolution: Abandoned

Superseded by FLINK-17018.

> Implement Bulk Slot Allocation
> --
>
> Key: FLINK-17017
> URL: https://issues.apache.org/jira/browse/FLINK-17017
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> SlotProvider should support bulk slot allocation so that we can check whether 
> the resource requirements of a whole bulk of slot requests can be fulfilled at 
> the same time.
> The SlotProvider interface should be extended with a bulk slot allocation 
> method which accepts a bulk of slot requests as one of the parameters.
> {code:java}
> CompletableFuture<Collection<LogicalSlot>> allocateSlots(
> 	Collection<LogicalSlotRequest> slotRequests,
> 	Time allocationTimeout);
>  
> class LogicalSlotRequest {
> 	SlotRequestId slotRequestId;
> 	ScheduledUnit scheduledUnit;
> 	SlotProfile slotProfile;
> 	boolean slotWillBeOccupiedIndefinitely;
> }
> {code}
> For more details see [FLIP-119#Bulk Slot 
> Allocation|https://cwiki.apache.org/confluence/display/FLINK/FLIP-119+Pipelined+Region+Scheduling#FLIP-119PipelinedRegionScheduling-BulkSlotAllocation]
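
For context, a hypothetical caller of the interface quoted above might look as 
follows; {{regionVertices}}, {{deployRegion}} and the per-vertex accessors are 
illustrative stand-ins, not Flink APIs:

{code:java}
// Sketch: request slots for a whole pipelined region at once, so the region
// is deployed only if every request in the bulk can be fulfilled.
Collection<LogicalSlotRequest> bulk = regionVertices.stream()
	.map(vertex -> new LogicalSlotRequest(
		new SlotRequestId(),
		vertex.getScheduledUnit(),
		vertex.getSlotProfile(),
		vertex.willOccupySlotIndefinitely()))
	.collect(java.util.stream.Collectors.toList());

slotProvider.allocateSlots(bulk, allocationTimeout)
	.whenComplete((slots, failure) -> {
		if (failure != null) {
			// All-or-nothing: cancel the remaining requests and retry or fail the region.
		} else {
			deployRegion(slots);
		}
	});
{code}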



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] KarmaGYZ commented on pull request #12301: [FLINK-16572] [pubsub,e2e] Add check to see if adding a timeout to th…

2020-05-24 Thread GitBox


KarmaGYZ commented on pull request #12301:
URL: https://github.com/apache/flink/pull/12301#issuecomment-633401619


   It seems something went wrong with the commit message in master branch:
   `Merge pull request #12301 from Xeli/FLINK-16572-logs`



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17916) Provide API to separate KafkaShuffle's Producer and Consumer to different jobs

2020-05-24 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-17916:
-
Summary: Provide API to separate KafkaShuffle's Producer and Consumer to 
different jobs  (was: Separate KafkaShuffle's Producer and Consumer to 
different jobs)

> Provide API to separate KafkaShuffle's Producer and Consumer to different jobs
> --
>
> Key: FLINK-17916
> URL: https://issues.apache.org/jira/browse/FLINK-17916
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Connectors / Kafka
>Affects Versions: 1.11.0
>Reporter: Yuan Mei
>Priority: Major
> Fix For: 1.11.0
>
>
> Follow up of FLINK-15670
> *Separate sink (producer) and source (consumer) to different jobs*
>  * In the same job, a sink and a source are recovered independently according 
> to regional failover. However, they share the same checkpoint coordinator and 
> correspondingly, share the same global checkpoint snapshot.
>  * That is to say, if the consumer fails, the producer cannot commit written data 
> because of the two-phase commit set-up (the producer needs a checkpoint-complete 
> signal to complete the second stage)
>  * The same applies to the producer
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16572) CheckPubSubEmulatorTest is flaky on Azure

2020-05-24 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115740#comment-17115740
 ] 

Robert Metzger commented on FLINK-16572:


Thanks a lot. Merged as part of 
https://github.com/apache/flink/commit/50253c6b89e3c92cac23edda6556770a63643c90

> CheckPubSubEmulatorTest is flaky on Azure
> -
>
> Key: FLINK-16572
> URL: https://issues.apache.org/jira/browse/FLINK-16572
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub, Tests
>Affects Versions: 1.11.0
>Reporter: Aljoscha Krettek
>Assignee: Richard Deurwaarder
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Log: 
> https://dev.azure.com/aljoschakrettek/Flink/_build/results?buildId=56&view=logs&j=1f3ed471-1849-5d3c-a34c-19792af4ad16&t=ce095137-3e3b-5f73-4b79-c42d3d5f8283&l=7842



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-17894) RowGenerator in datagen connector should be serializable

2020-05-24 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-17894.

Resolution: Fixed

master: edfb7c4d7a004fe60c8cd34ebca88d2f7cc5f212

release-1.11: 3f73694e9c005e62d661ed785c01bdd7060c1485

> RowGenerator in datagen connector should be serializable
> 
>
> Key: FLINK-17894
> URL: https://issues.apache.org/jira/browse/FLINK-17894
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17916) Separate KafkaShuffle's Producer and Consumer to different jobs

2020-05-24 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-17916:
-
Description: 
Follow up of FLINK-15670

*Separate sink (producer) and source (consumer) to different jobs*
 * In the same job, a sink and a source are recovered independently according 
to regional failover. However, they share the same checkpoint coordinator and 
correspondingly, share the same global checkpoint snapshot.
 * That is to say, if the consumer fails, the producer cannot commit written data 
because of the two-phase commit set-up (the producer needs a checkpoint-complete 
signal to complete the second stage)
 * The same applies to the producer (see the sketch below)
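
A rough sketch of the intended split, assuming the {{FlinkKafkaShuffle}} helper 
introduced in FLINK-15670; the {{writeKeyBy}}/{{readKeyBy}} signatures below are 
illustrative, and {{MySource}}/{{MyBusinessLogic}}/{{MySink}} are hypothetical 
user functions:

{code:java}
// Job 1 (producer): persist the keyed stream into the shuffle topic.
StreamExecutionEnvironment producerEnv = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Tuple2<Integer, Long>> input = producerEnv.addSource(new MySource());
FlinkKafkaShuffle.writeKeyBy(input, "shuffle-topic", kafkaProperties, 0); // keyed on field 0
producerEnv.execute("shuffle-producer");

// Job 2 (consumer): read the topic back as an already partitioned stream.
// As a separate job it has its own checkpoint coordinator, so a consumer-side
// failure no longer blocks the producer's two-phase commits.
StreamExecutionEnvironment consumerEnv = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Tuple2<Integer, Long>> shuffled = FlinkKafkaShuffle.readKeyBy(
	"shuffle-topic", consumerEnv,
	TypeInformation.of(new TypeHint<Tuple2<Integer, Long>>() {}),
	kafkaProperties, 0);
shuffled.map(new MyBusinessLogic()).addSink(new MySink());
consumerEnv.execute("shuffle-consumer");
{code}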

 

  was:
Follow up of FLINK-15670

Separate sink (producer) and source (consumer) to different jobs. 
 * In the same job, a sink and a source are recovered independently according 
to regional failover. However, they share the same checkpoint coordinator and 
correspondingly, share the same global checkpoint snapshot
 * That is to say, if the consumer fails, the producer cannot commit written data 
because of the two-phase commit set-up (the producer needs a checkpoint-complete 
signal to complete the second stage)


> Separate KafkaShuffle's Producer and Consumer to different jobs
> ---
>
> Key: FLINK-17916
> URL: https://issues.apache.org/jira/browse/FLINK-17916
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Connectors / Kafka
>Affects Versions: 1.11.0
>Reporter: Yuan Mei
>Priority: Major
> Fix For: 1.11.0
>
>
> Follow up of FLINK-15670
> *Separate sink (producer) and source (consumer) to different jobs*
>  * In the same job, a sink and a source are recovered independently according 
> to regional failover. However, they share the same checkpoint coordinator and 
> correspondingly, share the same global checkpoint snapshot.
>  * That is to say, if the consumer fails, the producer cannot commit written data 
> because of the two-phase commit set-up (the producer needs a checkpoint-complete 
> signal to complete the second stage)
>  * The same applies to the producer
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] rmetzger merged pull request #12301: [FLINK-16572] [pubsub,e2e] Add check to see if adding a timeout to th…

2020-05-24 Thread GitBox


rmetzger merged pull request #12301:
URL: https://github.com/apache/flink/pull/12301


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi merged pull request #12310: [FLINK-17894][table] RowGenerator in datagen connector should be serializable

2020-05-24 Thread GitBox


JingsongLi merged pull request #12310:
URL: https://github.com/apache/flink/pull/12310


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rmetzger commented on pull request #12301: [FLINK-16572] [pubsub,e2e] Add check to see if adding a timeout to th…

2020-05-24 Thread GitBox


rmetzger commented on pull request #12301:
URL: https://github.com/apache/flink/pull/12301#issuecomment-633399279


   Thanks a lot! Merging.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17911) K8s e2e: error: timed out waiting for the condition on deployments/flink-native-k8s-session-1

2020-05-24 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115737#comment-17115737
 ] 

Yang Wang commented on FLINK-17911:
---

Totally agree with you. Let's keep this ticket open.

> K8s e2e: error: timed out waiting for the condition on 
> deployments/flink-native-k8s-session-1
> -
>
> Key: FLINK-17911
> URL: https://issues.apache.org/jira/browse/FLINK-17911
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2062&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=94459a52-42b6-5bfc-5d74-690b5d3c6de8
> {code}
> error: timed out waiting for the condition on 
> deployments/flink-native-k8s-session-1
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on pull request #12310: [FLINK-17894][table] RowGenerator in datagen connector should be serializable

2020-05-24 Thread GitBox


JingsongLi commented on pull request #12310:
URL: https://github.com/apache/flink/pull/12310#issuecomment-633399177


   Thanks @zjuwangg for the review, merging...



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17916) Separate KafkaShuffle read/write to different environments

2020-05-24 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-17916:
-
Description: 
Follow up of FLINK-15670

Separate sink (producer) and source (consumer) to different jobs. 
 * In the same job, a sink and a source are recovered independently according 
to regional failover. However, they share the same checkpoint coordinator and 
correspondingly, share the same global checkpoint snapshot
 * That is to say, if the consumer fails, the producer cannot commit written data 
because of the two-phase commit set-up (the producer needs a checkpoint-complete 
signal to complete the second stage)

> Separate KafkaShuffle read/write to different environments
> --
>
> Key: FLINK-17916
> URL: https://issues.apache.org/jira/browse/FLINK-17916
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Connectors / Kafka
>Affects Versions: 1.11.0
>Reporter: Yuan Mei
>Priority: Major
> Fix For: 1.11.0
>
>
> Follow up of FLINK-15670
> Separate sink (producer) and source (consumer) to different jobs. 
>  * In the same job, a sink and a source are recovered independently according 
> to regional failover. However, they share the same checkpoint coordinator and 
> correspondingly, share the same global checkpoint snapshot
>  * That is to say, if the consumer fails, the producer cannot commit written data 
> because of the two-phase commit set-up (the producer needs a checkpoint-complete 
> signal to complete the second stage)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17916) Separate KafkaShuffle's Producer and Consumer to different jobs

2020-05-24 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-17916:
-
Summary: Separate KafkaShuffle's Producer and Consumer to different jobs  
(was: Separate KafkaShuffle read/write to different environments)

> Separate KafkaShuffle's Producer and Consumer to different jobs
> ---
>
> Key: FLINK-17916
> URL: https://issues.apache.org/jira/browse/FLINK-17916
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Connectors / Kafka
>Affects Versions: 1.11.0
>Reporter: Yuan Mei
>Priority: Major
> Fix For: 1.11.0
>
>
> Follow up of FLINK-15670
> Separate sink (producer) and source (consumer) to different jobs. 
>  * In the same job, a sink and a source are recovered independently according 
> to regional failover. However, they share the same checkpoint coordinator and 
> correspondingly, share the same global checkpoint snapshot
>  * That is to say, if the consumer fails, the producer cannot commit written data 
> because of the two-phase commit set-up (the producer needs a checkpoint-complete 
> signal to complete the second stage)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17911) K8s e2e: error: timed out waiting for the condition on deployments/flink-native-k8s-session-1

2020-05-24 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115736#comment-17115736
 ] 

Robert Metzger commented on FLINK-17911:


Thanks for looking into the issue!

I would propose to keep the ticket open for some time to see how frequently 
this issue happens. Most likely, it is very rare and we should not invest time 
in it now.

> K8s e2e: error: timed out waiting for the condition on 
> deployments/flink-native-k8s-session-1
> -
>
> Key: FLINK-17911
> URL: https://issues.apache.org/jira/browse/FLINK-17911
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2062&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=94459a52-42b6-5bfc-5d74-690b5d3c6de8
> {code}
> error: timed out waiting for the condition on 
> deployments/flink-native-k8s-session-1
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zjuwangg commented on pull request #12310: [FLINK-17894][table] RowGenerator in datagen connector should be serializable

2020-05-24 Thread GitBox


zjuwangg commented on pull request #12310:
URL: https://github.com/apache/flink/pull/12310#issuecomment-633397428


   LGTM. Thanks for your fix.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17911) K8s e2e: error: timed out waiting for the condition on deployments/flink-native-k8s-session-1

2020-05-24 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115735#comment-17115735
 ] 

Yang Wang commented on FLINK-17911:
---

[~rmetzger] Thanks for creating this issue. The root cause is that we failed to 
build the Docker image.
{code:java}
+ wget -nv -O /usr/local/bin/gosu.asc 
https://github.com/tianon/gosu/releases/download/1.11/gosu-amd64.asc
https://github.com/tianon/gosu/releases/download/1.11/gosu-amd64.asc:
2020-05-22 21:00:29 ERROR 500: Internal Server Error.

{code}

> K8s e2e: error: timed out waiting for the condition on 
> deployments/flink-native-k8s-session-1
> -
>
> Key: FLINK-17911
> URL: https://issues.apache.org/jira/browse/FLINK-17911
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2062&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=94459a52-42b6-5bfc-5d74-690b5d3c6de8
> {code}
> error: timed out waiting for the condition on 
> deployments/flink-native-k8s-session-1
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17896) HiveCatalog can work with new table factory because of is_generic

2020-05-24 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-17896:
-
Description: 
{code:java}
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Unsupported options found for 
connector 'print'.Unsupported options:is_genericSupported options:connector
print-identifier
property-version
standard-error
{code}
This is because HiveCatalog puts the is_generic property into generic tables, but 
the new factory will not remove it.
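
For illustration, one possible fix is to strip the internal flag on the catalog 
side before the options reach the new factory's validation; a minimal sketch, 
assuming the {{CatalogConfig.IS_GENERIC}} constant and the 
{{getProperties()}}/{{copy()}} round trip (treat the exact entry point as an 
assumption):

{code:java}
// Sketch: remove the HiveCatalog-internal bookkeeping flag so it never
// reaches the new factory's strict option validation.
Map<String, String> properties = new HashMap<>(catalogTable.getProperties());
properties.remove(CatalogConfig.IS_GENERIC); // "is_generic" is not a connector option
CatalogTable cleanedTable = catalogTable.copy(properties);
// ... then delegate to the regular factory discovery with cleanedTable.
{code}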

> HiveCatalog can work with new table factory because of is_generic
> -
>
> Key: FLINK-17896
> URL: https://issues.apache.org/jira/browse/FLINK-17896
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / API
>Reporter: Jingsong Lee
>Priority: Blocker
> Fix For: 1.11.0
>
>
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.ValidationException: Unsupported options found for 
> connector 'print'.Unsupported options:is_genericSupported options:connector
> print-identifier
> property-version
> standard-error
> {code}
> This is because HiveCatalog puts the is_generic property into generic tables, but 
> the new factory will not remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17916) Separate KafkaShuffle read/write to different environments

2020-05-24 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-17916:
-
Fix Version/s: (was: 1.12.0)
   1.11.0

> Separate KafkaShuffle read/write to different environments
> --
>
> Key: FLINK-17916
> URL: https://issues.apache.org/jira/browse/FLINK-17916
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Connectors / Kafka
>Affects Versions: 1.11.0
>Reporter: Yuan Mei
>Priority: Major
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17917) ResourceInformationReflector#getExternalResources should ignore the external resource with a value of 0

2020-05-24 Thread Yangze Guo (Jira)
Yangze Guo created FLINK-17917:
--

 Summary: ResourceInformationReflector#getExternalResources should 
ignore the external resource with a value of 0
 Key: FLINK-17917
 URL: https://issues.apache.org/jira/browse/FLINK-17917
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.11.0
Reporter: Yangze Guo
 Fix For: 1.11.0


*Background*: In FLINK-17390, we leverage 
{{WorkerSpecContainerResourceAdapter.InternalContainerResource}} to handle 
container matching logic. In FLINK-17407, we introduce external resources in 
{{WorkerSpecContainerResourceAdapter.InternalContainerResource}}.
For containers returned by Yarn, we try to get the corresponding worker specs as follows:
- Convert the container to {{InternalContainerResource}}
- Get the WorkerResourceSpec from the {{containerResourceToWorkerSpecs}} map.

Container mismatch could happen in the following scenario:
- Flink does not allocate any external resources, so the {{externalResources}} of 
{{InternalContainerResource}} is an empty map.
- The returned container contains all the resources (with a value of 0) defined 
in Yarn's {{resource-types.xml}}. The {{externalResources}} of 
{{InternalContainerResource}} has one or more entries with a value of 0.
- These two {{InternalContainerResource}} do not match.

To solve this problem, we could ignore all the external resources with a value 
of 0 in {{ResourceInformationReflector#getExternalResources}}.

cc [~trohrmann] Could you assign this to me?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17915) TransitiveClosureITCase>JavaProgramTestBase.testJobWithObjectReuse:113 Error while calling the test program: Could not retrieve JobResult

2020-05-24 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17915:
--

 Summary: 
TransitiveClosureITCase>JavaProgramTestBase.testJobWithObjectReuse:113 Error 
while calling the test program: Could not retrieve JobResult
 Key: FLINK-17915
 URL: https://issues.apache.org/jira/browse/FLINK-17915
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination, Tests
Affects Versions: 1.11.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2096&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=45cc9205-bdb7-5b54-63cd-89fdc0983323

{code}
2020-05-25T03:26:08.8891832Z [INFO] Tests run: 3, Failures: 0, Errors: 0, 
Skipped: 0, Time elapsed: 4.287 s - in 
org.apache.flink.test.example.java.WordCountSimplePOJOITCase
2020-05-25T03:26:09.6452511Z Could not retrieve JobResult.
2020-05-25T03:26:09.6454291Z 
org.apache.flink.runtime.client.JobExecutionException: Could not retrieve 
JobResult.
2020-05-25T03:26:09.6482505Zat 
org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:673)
2020-05-25T03:26:09.6483671Zat 
org.apache.flink.test.util.TestEnvironment.execute(TestEnvironment.java:115)
2020-05-25T03:26:09.6625490Zat 
org.apache.flink.examples.java.graph.TransitiveClosureNaive.main(TransitiveClosureNaive.java:120)
2020-05-25T03:26:09.6752644Zat 
org.apache.flink.test.example.java.TransitiveClosureITCase.testProgram(TransitiveClosureITCase.java:51)
2020-05-25T03:26:09.6754368Zat 
org.apache.flink.test.util.JavaProgramTestBase.testJobWithObjectReuse(JavaProgramTestBase.java:107)
2020-05-25T03:26:09.6756679Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-05-25T03:26:09.6757511Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-05-25T03:26:09.6759607Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-25T03:26:09.6760692Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-05-25T03:26:09.6761519Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-05-25T03:26:09.6762382Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-05-25T03:26:09.6763246Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-05-25T03:26:09.6778288Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-05-25T03:26:09.6779479Zat 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
2020-05-25T03:26:09.6780187Zat 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-05-25T03:26:09.6780851Zat 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2020-05-25T03:26:09.6781843Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2020-05-25T03:26:09.6782583Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2020-05-25T03:26:09.6783485Zat 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-05-25T03:26:09.6784670Zat 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-05-25T03:26:09.6785320Zat 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-05-25T03:26:09.6786034Zat 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-05-25T03:26:09.6786670Zat 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-05-25T03:26:09.6787550Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2020-05-25T03:26:09.6788233Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2020-05-25T03:26:09.6789050Zat 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-05-25T03:26:09.6789698Zat 
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-05-25T03:26:09.6790701Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
2020-05-25T03:26:09.6791797Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
2020-05-25T03:26:09.6792592Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
2020-05-25T03:26:09.6793535Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
2020-05-25T03:26:09.6794429Zat 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
2020-05-25T03:26:09.6795279Zat 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
2020-05-25T03:26:09.6796109Zat 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
2020-05-25T03:26:09.6797133Zat 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
2020-05-25T03:26:09.6798317Z Caused 

[jira] [Commented] (FLINK-13009) YARNSessionCapacitySchedulerITCase#testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots th

2020-05-24 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115728#comment-17115728
 ] 

Yang Wang commented on FLINK-13009:
---

This is a duplicate of FLINK-15534. Since it is a bug in Hadoop 2.8.3 and a new 
bugfix version has not been released, we cannot do anything for now.

[~klion26] If you agree, I will close this ticket.

> YARNSessionCapacitySchedulerITCase#testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots
>  throws NPE on Travis
> -
>
> Key: FLINK-13009
> URL: https://issues.apache.org/jira/browse/FLINK-13009
> Project: Flink
>  Issue Type: Test
>  Components: Deployment / YARN, Tests
>Affects Versions: 1.8.0
>Reporter: Congxian Qiu(klion26)
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> The test 
> {{YARNSessionCapacitySchedulerITCase#testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots}}
>  throws NPE on Travis.
> NPE throws from RMAppAttemptMetrics.java#128, and the following is the code 
> from hadoop-2.8.3[1]
> {code:java}
> // Only add in the running containers if this is the active attempt.
> 128   RMAppAttempt currentAttempt = rmContext.getRMApps()
> 129   .get(attemptId.getApplicationId()).getCurrentAppAttempt();
> {code}
>  
> log [https://api.travis-ci.org/v3/job/550689578/log.txt]
> [1] 
> [https://github.com/apache/hadoop/blob/b3fe56402d908019d99af1f1f4fc65cb1d1436a2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java#L128]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17916) Separate KafkaShuffle read/write to different environments

2020-05-24 Thread Yuan Mei (Jira)
Yuan Mei created FLINK-17916:


 Summary: Separate KafkaShuffle read/write to different environments
 Key: FLINK-17916
 URL: https://issues.apache.org/jira/browse/FLINK-17916
 Project: Flink
  Issue Type: Improvement
  Components: API / DataStream, Connectors / Kafka
Affects Versions: 1.11.0
Reporter: Yuan Mei
 Fix For: 1.12.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on a change in pull request #12314: [FLINK-17756][table-api-java] Drop table/view shouldn't take effect o…

2020-05-24 Thread GitBox


JingsongLi commented on a change in pull request #12314:
URL: https://github.com/apache/flink/pull/12314#discussion_r429745539



##
File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
##
@@ -682,20 +682,60 @@ public void alterTable(CatalogBaseTable table, ObjectIdentifier objectIdentifier
	 * Drops a table in a given fully qualified path.
	 *
	 * @param objectIdentifier The fully qualified path of the table to drop.
-	 * @param ignoreIfNotExists If false exception will be thrown if the table or database or catalog to be altered
+	 * @param ignoreIfNotExists If false exception will be thrown if the table to drop
	 *                          does not exist.
	 */
	public void dropTable(ObjectIdentifier objectIdentifier, boolean ignoreIfNotExists) {
-		if (temporaryTables.containsKey(objectIdentifier)) {
-			throw new ValidationException(String.format(
-				"Temporary table with identifier '%s' exists. Drop it first before removing the permanent table.",
-				objectIdentifier));
+		dropTableInternal(
+			objectIdentifier,
+			table -> table instanceof CatalogTable,
+			ignoreIfNotExists);
+	}
+
+	/**
+	 * Drops a view in a given fully qualified path.
+	 *
+	 * @param objectIdentifier The fully qualified path of the view to drop.
+	 * @param ignoreIfNotExists If false exception will be thrown if the view to drop
+	 *                          does not exist.
+	 */
+	public void dropView(ObjectIdentifier objectIdentifier, boolean ignoreIfNotExists) {
+		dropTableInternal(
+			objectIdentifier,
+			table -> table instanceof CatalogView,
+			ignoreIfNotExists);
+	}
+
+	private void dropTableInternal(
+			ObjectIdentifier objectIdentifier,
+			Predicate<CatalogBaseTable> filter,
+			boolean ignoreIfNotExists) {
+		final Optional<TableLookupResult> resultOpt = getTable(objectIdentifier);

Review comment:
Why not just `getPermanentTable`? You can keep the previous 
`temporaryTables.containsKey(objectIdentifier)` check.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-17914) HistoryServerTest.testCleanExpiredJob:158->runArchiveExpirationTest:214 expected:<2> but was:<0>

2020-05-24 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17914:
--

 Summary: 
HistoryServerTest.testCleanExpiredJob:158->runArchiveExpirationTest:214 
expected:<2> but was:<0>
 Key: FLINK-17914
 URL: https://issues.apache.org/jira/browse/FLINK-17914
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Web Frontend
Affects Versions: 1.12.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2047&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=4ed44b66-cdd6-5dcf-5f6a-88b07dda665d

{code}
[ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.697 s 
<<< FAILURE! - in org.apache.flink.runtime.webmonitor.history.HistoryServerTest
[ERROR] testCleanExpiredJob[Flink version less than 1.4: 
false](org.apache.flink.runtime.webmonitor.history.HistoryServerTest)  Time 
elapsed: 0.483 s  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.flink.runtime.webmonitor.history.HistoryServerTest.runArchiveExpirationTest(HistoryServerTest.java:214)
at 
org.apache.flink.runtime.webmonitor.history.HistoryServerTest.testCleanExpiredJob(HistoryServerTest.java:158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Myasuka commented on pull request #12282: [FLINK-17865][checkpoint] Increase default size of 'state.backend.fs.memory-threshold'

2020-05-24 Thread GitBox


Myasuka commented on pull request #12282:
URL: https://github.com/apache/flink/pull/12282#issuecomment-633393969







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on a change in pull request #12314: [FLINK-17756][table-api-java] Drop table/view shouldn't take effect o…

2020-05-24 Thread GitBox


JingsongLi commented on a change in pull request #12314:
URL: https://github.com/apache/flink/pull/12314#discussion_r429745539



##
File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
##
@@ -682,20 +682,60 @@ public void alterTable(CatalogBaseTable table, ObjectIdentifier objectIdentifier
	 * Drops a table in a given fully qualified path.
	 *
	 * @param objectIdentifier The fully qualified path of the table to drop.
-	 * @param ignoreIfNotExists If false exception will be thrown if the table or database or catalog to be altered
+	 * @param ignoreIfNotExists If false exception will be thrown if the table to drop
	 *                          does not exist.
	 */
	public void dropTable(ObjectIdentifier objectIdentifier, boolean ignoreIfNotExists) {
-		if (temporaryTables.containsKey(objectIdentifier)) {
-			throw new ValidationException(String.format(
-				"Temporary table with identifier '%s' exists. Drop it first before removing the permanent table.",
-				objectIdentifier));
+		dropTableInternal(
+			objectIdentifier,
+			table -> table instanceof CatalogTable,
+			ignoreIfNotExists);
+	}
+
+	/**
+	 * Drops a view in a given fully qualified path.
+	 *
+	 * @param objectIdentifier The fully qualified path of the view to drop.
+	 * @param ignoreIfNotExists If false exception will be thrown if the view to drop
+	 *                          does not exist.
+	 */
+	public void dropView(ObjectIdentifier objectIdentifier, boolean ignoreIfNotExists) {
+		dropTableInternal(
+			objectIdentifier,
+			table -> table instanceof CatalogView,
+			ignoreIfNotExists);
+	}
+
+	private void dropTableInternal(
+			ObjectIdentifier objectIdentifier,
+			Predicate<CatalogBaseTable> filter,
+			boolean ignoreIfNotExists) {
+		final Optional<TableLookupResult> resultOpt = getTable(objectIdentifier);

Review comment:
   Why not just `getPermanentTable`?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on a change in pull request #12314: [FLINK-17756][table-api-java] Drop table/view shouldn't take effect o…

2020-05-24 Thread GitBox


JingsongLi commented on a change in pull request #12314:
URL: https://github.com/apache/flink/pull/12314#discussion_r429745290



##
File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java
##
@@ -682,20 +682,60 @@ public void alterTable(CatalogBaseTable table, ObjectIdentifier objectIdentifier
	 * Drops a table in a given fully qualified path.
	 *
	 * @param objectIdentifier The fully qualified path of the table to drop.
-	 * @param ignoreIfNotExists If false exception will be thrown if the table or database or catalog to be altered
+	 * @param ignoreIfNotExists If false exception will be thrown if the table to drop
	 *                          does not exist.
	 */
	public void dropTable(ObjectIdentifier objectIdentifier, boolean ignoreIfNotExists) {
-		if (temporaryTables.containsKey(objectIdentifier)) {
-			throw new ValidationException(String.format(
-				"Temporary table with identifier '%s' exists. Drop it first before removing the permanent table.",
-				objectIdentifier));
+		dropTableInternal(
+			objectIdentifier,
+			table -> table instanceof CatalogTable,
+			ignoreIfNotExists);
+	}
+
+	/**
+	 * Drops a view in a given fully qualified path.
+	 *
+	 * @param objectIdentifier The fully qualified path of the view to drop.
+	 * @param ignoreIfNotExists If false exception will be thrown if the view to drop
+	 *                          does not exist.
+	 */
+	public void dropView(ObjectIdentifier objectIdentifier, boolean ignoreIfNotExists) {
+		dropTableInternal(
+			objectIdentifier,
+			table -> table instanceof CatalogView,
+			ignoreIfNotExists);
+	}
+
+	private void dropTableInternal(
+			ObjectIdentifier objectIdentifier,
+			Predicate<CatalogBaseTable> filter,
+			boolean ignoreIfNotExists) {
+		final Optional<TableLookupResult> resultOpt = getTable(objectIdentifier);
+		if (resultOpt.isPresent()) {
+			final TableLookupResult result = resultOpt.get();
+			if (filter.test(result.getTable())) {
+				if (result.isTemporary()) {
+					// Same name temporary table or view exists.
+					throw new ValidationException(String.format(
+						"Temporary table or view with identifier '%s' exists. "
+							+ "Drop it first before removing the permanent table or view.",
+						objectIdentifier));
+				}
+			} else if (!ignoreIfNotExists) {
+				// To drop a table but the object identifier represents a view (or vice versa).
+				throw new ValidationException(String.format(
+					"Table or view with identifier '%s' does not exist.",
+					objectIdentifier.asSummaryString()));
+			} else {
+				// Table or view does not exist with ignoreIfNotExists true, do nothing.
+				return;
+			}
 		}
 		execute(

Review comment:
   Can you move this `execute` into `if (filter.test(result.getTable()))`?
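
   For illustration, the suggested restructuring could look like the sketch 
below (names and messages reused from the diff above; the `execute` arguments 
are elided):

   ```java
   // Sketch: call execute(...) only when the identifier resolved to an object
   // of the requested kind, so that dropping a view as a table (or vice versa)
   // never reaches the catalog.
   if (resultOpt.isPresent()) {
       final TableLookupResult result = resultOpt.get();
       if (filter.test(result.getTable())) {
           if (result.isTemporary()) {
               throw new ValidationException(String.format(
                   "Temporary table or view with identifier '%s' exists. "
                       + "Drop it first before removing the permanent table or view.",
                   objectIdentifier));
           }
           execute( /* drop the permanent table or view here */ );
       } else if (!ignoreIfNotExists) {
           throw new ValidationException(String.format(
               "Table or view with identifier '%s' does not exist.",
               objectIdentifier.asSummaryString()));
       }
   } else if (!ignoreIfNotExists) {
       throw new ValidationException(String.format(
           "Table or view with identifier '%s' does not exist.",
           objectIdentifier.asSummaryString()));
   }
   ```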





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] cyq89051127 commented on a change in pull request #12289: [FLINK-17874][Connectors/HBase]Handling the NPE for hbase-connector

2020-05-24 Thread GitBox


cyq89051127 commented on a change in pull request #12289:
URL: https://github.com/apache/flink/pull/12289#discussion_r429744678



##
File path: 
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/util/HBaseTypeUtils.java
##
@@ -81,13 +81,16 @@ public static Object deserializeToObject(byte[] value, int typeIdx, Charset stri
	 * Serialize the Java Object to byte array with the given type.
	 */
	public static byte[] serializeFromObject(Object value, int typeIdx, Charset stringCharset) {
+		if (value == null) {
+			return EMPTY_BYTES;

Review comment:
   Yeah, I get it. If there's an empty value sent to deserializeToObject, 
it's not an NPE.
   
   Do you think we should handle that exception in this PR? 
If not, I could create another issue to track the exception in 
deserializeToObject. 
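
   For what it's worth, a self-contained toy sketch of the symmetric null 
handling (not the actual HBaseTypeUtils code; whether an empty cell should map 
back to SQL NULL or to a type default is exactly the open question):

   ```java
   import java.nio.ByteBuffer;
   import java.nio.charset.Charset;

   public class NullSafeHBaseSerde {
       static final byte[] EMPTY_BYTES = new byte[0];

       // Null-safe serialize: SQL NULL becomes an empty cell instead of an NPE.
       public static byte[] serialize(Object value, Charset charset) {
           if (value == null) {
               return EMPTY_BYTES;
           }
           if (value instanceof Long) {
               return ByteBuffer.allocate(8).putLong((Long) value).array();
           }
           return value.toString().getBytes(charset);
       }

       // Symmetric deserialize: an empty cell maps back to null (one possible
       // policy; returning a type default would be the alternative).
       public static Object deserialize(byte[] bytes, boolean isLong, Charset charset) {
           if (bytes == null || bytes.length == 0) {
               return null;
           }
           return isLong ? (Object) ByteBuffer.wrap(bytes).getLong() : new String(bytes, charset);
       }
   }
   ```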





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rmetzger commented on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-24 Thread GitBox


rmetzger commented on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-633392055


   Looks like this PR introduced a test instability? 
https://issues.apache.org/jira/browse/FLINK-17912



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] cyq89051127 commented on a change in pull request #12289: [FLINK-17874][Connectors/HBase]Handling the NPE for hbase-connector

2020-05-24 Thread GitBox


cyq89051127 commented on a change in pull request #12289:
URL: https://github.com/apache/flink/pull/12289#discussion_r429731685



##
File path: 
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/util/HBaseTypeUtils.java
##
@@ -81,13 +81,16 @@ public static Object deserializeToObject(byte[] value, int typeIdx, Charset stri
	 * Serialize the Java Object to byte array with the given type.
	 */
	public static byte[] serializeFromObject(Object value, int typeIdx, Charset stringCharset) {
+		if (value == null) {
+			return EMPTY_BYTES;

Review comment:
   @wuchong, hi, I checked the code; when deserializeToObject is called, there 
could be the same problem. But when I dug deeper, it seems OK. 
   
   There are two places where `deserializeToObject` is called.
   One (I think it's OK) is like the following, in 
`HBaseReadWriteHelper.parseToRow(Result result, Object rowKey)`:
   ```java
   if (value != null) {
       familyRow.setField(q, HBaseTypeUtils.deserializeToObject(value, typeIdx, charset));
   } else {
       familyRow.setField(q, null);
   }
   ```
   
   The other one also looks good (in `HBaseReadWriteHelper.parseToRow(Result 
result)`): 
   
   ```java 
   public Row parseToRow(Result result) {
       if (rowKeyIndex == -1) {
           return parseToRow(result, null);
       } else {
           Object rowkey = HBaseTypeUtils.deserializeToObject(result.getRow(), rowKeyType, charset);
           return parseToRow(result, rowkey);
       }
   }
   ```
   
   I can't think of a scenario where result.getRow could return a null value, 
because this code is used to parse the value returned from HBase by a scan 
command. 
   
   But as you see, if there's an empty value sent to `deserializeToObject`,  
it's not an NPE. 
Do you think we should handle that exception in this PR?  If not, I could 
create another issue to track the exception in `deserializeToObject`.
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-17912) KafkaShuffleITCase.testAssignedToPartitionEventTime: "Watermark should always increase"

2020-05-24 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17912:
--

 Summary: KafkaShuffleITCase.testAssignedToPartitionEventTime: 
"Watermark should always increase"
 Key: FLINK-17912
 URL: https://issues.apache.org/jira/browse/FLINK-17912
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Tests
Affects Versions: 1.11.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2062&view=logs&j=1fc6e7bf-633c-5081-c32a-9dea24b05730&t=0d9ad4c1-5629-5ffc-10dc-113ca91e23c5

{code}
2020-05-22T21:16:24.7188044Z 
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
2020-05-22T21:16:24.7188796Zat 
org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
2020-05-22T21:16:24.7189596Zat 
org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:677)
2020-05-22T21:16:24.7190352Zat 
org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:81)
2020-05-22T21:16:24.7191261Zat 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1673)
2020-05-22T21:16:24.7191824Zat 
org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:35)
2020-05-22T21:16:24.7192325Zat 
org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testAssignedToPartition(KafkaShuffleITCase.java:296)
2020-05-22T21:16:24.7192962Zat 
org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testAssignedToPartitionEventTime(KafkaShuffleITCase.java:126)
2020-05-22T21:16:24.7193436Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-05-22T21:16:24.7193999Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-05-22T21:16:24.7194720Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-22T21:16:24.7195226Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-05-22T21:16:24.7195864Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-05-22T21:16:24.7196574Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-05-22T21:16:24.7197511Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-05-22T21:16:24.7198020Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-05-22T21:16:24.7198494Zat 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2020-05-22T21:16:24.7199128Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
2020-05-22T21:16:24.7199689Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
2020-05-22T21:16:24.7200308Zat 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
2020-05-22T21:16:24.7200645Zat java.lang.Thread.run(Thread.java:748)
2020-05-22T21:16:24.7201029Z Caused by: org.apache.flink.runtime.JobException: 
Recovery is suppressed by NoRestartBackoffTimeStrategy
2020-05-22T21:16:24.7201643Zat 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
2020-05-22T21:16:24.7202275Zat 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
2020-05-22T21:16:24.7202863Zat 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
2020-05-22T21:16:24.7203525Zat 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:185)
2020-05-22T21:16:24.7204072Zat 
org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:179)
2020-05-22T21:16:24.7204618Zat 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:503)
2020-05-22T21:16:24.7205255Zat 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:386)
2020-05-22T21:16:24.7205716Zat 
sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
2020-05-22T21:16:24.7206191Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-22T21:16:24.7206585Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-05-22T21:16:24.7207261Zat 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)
2020-05-22T21:16:24.7207736Zat 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:199)
2020-05-22T21:16:24.7208234Zat 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
2020-05-22T21:

[jira] [Updated] (FLINK-17909) Make the GenericInMemoryCatalog to hold the serialized meta data to uncover more potential bugs

2020-05-24 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-17909:

Description: 
Currently, the builtin {{GenericInMemoryCatalog}} hold the meta objects in 
HashMap. However, this lead to many bugs when users switch to some persisted 
catalogs, e.g. Hive Metastore. For example, FLINK-17189, FLINK-17868, 
FLINK-16021. 

That is because the builtin {{GenericInMemoryCatalog}} doesn't cover the 
important path of serialization and deserialization of meta data. We missed 
some important meta information (PK, time attributes) when serialization and 
deserialization which lead to bugs. 

So I propose to hold the serialized meta data in {{GenericInMemoryCatalog}} to 
cover the serialization and deserializtion path. The serialized meta data may 
be in the {{Map}} properties format. 

We may lose some performance here, but {{GenericInMemoryCatalog}} is mostly 
used in demo/experiment/testing, so I think it's fine. 
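
For illustration, a minimal sketch of the idea, assuming a 
{{toProperties()}}/{{fromProperties()}} round trip (the exact helper names here 
are assumptions):

{code:java}
// Sketch: store only the serialized Map<String, String> form, so that every
// read goes through the same (de)serialization path persisted catalogs use.
private final Map<ObjectPath, Map<String, String>> tables = new HashMap<>();

public void createTable(ObjectPath path, CatalogTable table) {
	tables.put(path, table.toProperties()); // keep nothing but the property map
}

public CatalogBaseTable getTable(ObjectPath path) {
	// Reconstructing from properties surfaces any meta information
	// (primary key, time attributes, ...) dropped by serialization.
	return CatalogTableImpl.fromProperties(tables.get(path));
}
{code}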






  was:
Currently, the builtin {{GenericInMemoryCatalog}} holds the meta objects in a 
HashMap. However, this leads to many bugs when users switch to persisted 
catalogs, e.g. Hive Metastore. For example, FLINK-17189, FLINK-17868, 
FLINK-16021. 

That is because the builtin {{GenericInMemoryCatalog}} doesn't cover the 
important path of serialization and deserialization of meta data. We miss 
some important meta information (PK, time attributes) during serialization and 
deserialization, which leads to bugs. 

So I propose to hold the serialized meta data in {{GenericInMemoryCatalog}} to 
cover the serialization and deserialization path. 

We may lose some performance here, but {{GenericInMemoryCatalog}} is mostly 
used in demo/experiment/testing, so I think it's fine. 







> Make the GenericInMemoryCatalog to hold the serialized meta data to uncover 
> more potential bugs
> ---
>
> Key: FLINK-17909
> URL: https://issues.apache.org/jira/browse/FLINK-17909
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently, the builtin {{GenericInMemoryCatalog}} holds the meta objects in a 
> HashMap. However, this leads to many bugs when users switch to persisted 
> catalogs, e.g. Hive Metastore. For example, FLINK-17189, FLINK-17868, 
> FLINK-16021. 
> That is because the builtin {{GenericInMemoryCatalog}} doesn't cover the 
> important path of serialization and deserialization of meta data. We miss 
> some important meta information (PK, time attributes) during serialization and 
> deserialization, which leads to bugs. 
> So I propose to hold the serialized meta data in {{GenericInMemoryCatalog}} 
> to cover the serialization and deserialization path. The serialized meta data 
> may be in the {{Map<String, String>}} properties format. 
> We may lose some performance here, but {{GenericInMemoryCatalog}} is mostly 
> used in demo/experiment/testing, so I think it's fine. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17909) Make the GenericInMemoryCatalog to hold the serialized meta data to uncover more potential bugs

2020-05-24 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115719#comment-17115719
 ] 

Jark Wu commented on FLINK-17909:
-

What do you think about this [~twalthr], [~phoenixjiangnan]? 

> Make the GenericInMemoryCatalog to hold the serialized meta data to uncover 
> more potential bugs
> ---
>
> Key: FLINK-17909
> URL: https://issues.apache.org/jira/browse/FLINK-17909
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently, the builtin {{GenericInMemoryCatalog}} holds the meta objects in a 
> HashMap. However, this leads to many bugs when users switch to persisted 
> catalogs, e.g. Hive Metastore. For example, FLINK-17189, FLINK-17868, 
> FLINK-16021. 
> That is because the builtin {{GenericInMemoryCatalog}} doesn't cover the 
> important path of serialization and deserialization of meta data. We miss 
> some important meta information (PK, time attributes) during serialization and 
> deserialization, which leads to bugs. 
> So I propose to hold the serialized meta data in {{GenericInMemoryCatalog}} 
> to cover the serialization and deserialization path. 
> We may lose some performance here, but {{GenericInMemoryCatalog}} is mostly 
> used in demo/experiment/testing, so I think it's fine. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12307: [FLINK-15621] Remove deprecated option and method to disable TTL compaction filter

2020-05-24 Thread GitBox


flinkbot edited a comment on pull request #12307:
URL: https://github.com/apache/flink/pull/12307#issuecomment-64512


   
   ## CI report:
   
   * a353b26e154ac85af664bbac5a1cfdd95286ab8b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2099)
 
   * 1414be06762d3238c4138d9fab4daf26f661f58d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2104)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12308: [FLINK-17721][filesystems][test] Fix test instability for AbstractHadoopFileSystemITTest.

2020-05-24 Thread GitBox


flinkbot edited a comment on pull request #12308:
URL: https://github.com/apache/flink/pull/12308#issuecomment-633345808


   
   ## CI report:
   
   * da01a91ea8eccf9bdc1dee6e32b1f5a9b06b0cc4 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2094)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12312: [FLINK-15507][state backends] Activate local recovery by default

2020-05-24 Thread GitBox


flinkbot edited a comment on pull request #12312:
URL: https://github.com/apache/flink/pull/12312#issuecomment-633384542


   
   ## CI report:
   
   * 0593cfda3ac278603f23e95f2ac07ec1cd20f3a7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2105)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12313: [FLINK-17005][docs] Translate the CREATE TABLE ... LIKE syntax documentation to Chinese

2020-05-24 Thread GitBox


flinkbot commented on pull request #12313:
URL: https://github.com/apache/flink/pull/12313#issuecomment-633389597


   
   ## CI report:
   
   * c225fa8b2a774a8579501e136004a9d64db89244 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-17911) K8s e2e: error: timed out waiting for the condition on deployments/flink-native-k8s-session-1

2020-05-24 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17911:
--

 Summary: K8s e2e: error: timed out waiting for the condition on 
deployments/flink-native-k8s-session-1
 Key: FLINK-17911
 URL: https://issues.apache.org/jira/browse/FLINK-17911
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / Kubernetes, Tests
Affects Versions: 1.11.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2062&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=94459a52-42b6-5bfc-5d74-690b5d3c6de8

{code}
error: timed out waiting for the condition on 
deployments/flink-native-k8s-session-1
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] cyq89051127 commented on a change in pull request #12289: [FLINK-17874][Connectors/HBase]Handling the NPE for hbase-connector

2020-05-24 Thread GitBox


cyq89051127 commented on a change in pull request #12289:
URL: https://github.com/apache/flink/pull/12289#discussion_r429731685



##
File path: 
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/util/HBaseTypeUtils.java
##
@@ -81,13 +81,16 @@ public static Object deserializeToObject(byte[] value, int 
typeIdx, Charset stri
 * Serialize the Java Object to byte array with the given type.
 */
public static byte[] serializeFromObject(Object value, int typeIdx, 
Charset stringCharset) {
+   if (value == null){
+   return EMPTY_BYTES;

Review comment:
   @wuchong, hi, I checked the code; when `deserializeToObject` is called, there 
could be the same problem. But when I dug deeper, it seems OK.
   
   There are two places where `deserializeToObject` is called.
   One (I think it's OK) is the following, in 
`HBaseReadWriteHelper.parseToRow(Result result, Object rowKey)`:
   ```java
   if (value != null) {
       familyRow.setField(q, HBaseTypeUtils.deserializeToObject(value, typeIdx, charset));
   } else {
       familyRow.setField(q, null);
   }
   ```
   
   The other one also looks good (in `HBaseReadWriteHelper.parseToRow(Result 
result)`):
   
   ```java
   public Row parseToRow(Result result) {
       if (rowKeyIndex == -1) {
           return parseToRow(result, null);
       } else {
           Object rowkey = HBaseTypeUtils.deserializeToObject(result.getRow(), rowKeyType, charset);
           return parseToRow(result, rowkey);
       }
   }
   ```
   
   I can't think of a scenario where `result.getRow()` could return a null value, 
because this code parses the value returned from HBase by a scan command.
   
   But as you see, if an empty value is sent to `deserializeToObject`, isn't that 
an NPE? Do you think we should handle that case in this PR?
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17911) K8s e2e: error: timed out waiting for the condition on deployments/flink-native-k8s-session-1

2020-05-24 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-17911:
---
Issue Type: Bug  (was: Improvement)

> K8s e2e: error: timed out waiting for the condition on 
> deployments/flink-native-k8s-session-1
> -
>
> Key: FLINK-17911
> URL: https://issues.apache.org/jira/browse/FLINK-17911
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2062&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=94459a52-42b6-5bfc-5d74-690b5d3c6de8
> {code}
> error: timed out waiting for the condition on 
> deployments/flink-native-k8s-session-1
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on a change in pull request #12289: [FLINK-17874][Connectors/HBase]Handling the NPE for hbase-connector

2020-05-24 Thread GitBox


wuchong commented on a change in pull request #12289:
URL: https://github.com/apache/flink/pull/12289#discussion_r429740951



##
File path: 
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/util/HBaseTypeUtils.java
##
@@ -81,13 +81,16 @@ public static Object deserializeToObject(byte[] value, int 
typeIdx, Charset stri
 * Serialize the Java Object to byte array with the given type.
 */
public static byte[] serializeFromObject(Object value, int typeIdx, 
Charset stringCharset) {
+   if (value == null){
+   return EMPTY_BYTES;

Review comment:
   The rowkey from HBase will never be null or empty bytes, but the 
qualifiers will be empty bytes, because that's what we write in 
`serializeFromObject`.

##
File path: 
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/util/HBaseTypeUtils.java
##
@@ -81,13 +81,16 @@ public static Object deserializeToObject(byte[] value, int 
typeIdx, Charset stri
 * Serialize the Java Object to byte array with the given type.
 */
public static byte[] serializeFromObject(Object value, int typeIdx, 
Charset stringCharset) {
+   if (value == null){
+   return EMPTY_BYTES;

Review comment:
   The rowkey from HBase will never be null or empty bytes, but the 
qualifiers might be empty bytes, because that's what we write in 
`serializeFromObject`.
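
   For readers following this thread, a minimal, self-contained sketch of the 
guard under discussion, restricted to string-typed cells; the real 
`HBaseTypeUtils` switches on `typeIdx`, so the class and method bodies below 
are illustrative, not the actual Flink source:

   ```java
   import java.nio.charset.Charset;
   import java.nio.charset.StandardCharsets;

   // Illustrative stand-in for HBaseTypeUtils, limited to string cells.
   public final class NullSafeSerdeSketch {
       private static final byte[] EMPTY_BYTES = new byte[0];

       // A null field serializes to empty bytes instead of throwing an NPE.
       static byte[] serializeFromObject(Object value, Charset charset) {
           if (value == null) {
               return EMPTY_BYTES;
           }
           return value.toString().getBytes(charset);
       }

       // Empty bytes deserialize back to null, keeping the round trip
       // symmetric; per the discussion above this matters for qualifiers,
       // not rowkeys.
       static Object deserializeToObject(byte[] value, Charset charset) {
           if (value == null || value.length == 0) {
               return null;
           }
           return new String(value, charset);
       }

       public static void main(String[] args) {
           Charset cs = StandardCharsets.UTF_8;
           byte[] bytes = serializeFromObject(null, cs); // 0-length array, no NPE
           System.out.println(bytes.length + " / " + deserializeToObject(bytes, cs));
       }
   }
   ```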





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-16947) ArtifactResolutionException: Could not transfer artifact. Entry [...] has not been leased from this pool

2020-05-24 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115714#comment-17115714
 ] 

Robert Metzger commented on FLINK-16947:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2061&view=logs&j=8fd975ef-f478-511d-4997-6f15fe8a1fd3&t=6f8201e9-1579-595a-9d2b-7158b26b4c57

> ArtifactResolutionException: Could not transfer artifact.  Entry [...] has 
> not been leased from this pool
> -
>
> Key: FLINK-16947
> URL: https://issues.apache.org/jira/browse/FLINK-16947
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Piotr Nowojski
>Assignee: Robert Metzger
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6982&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> Build of flink-metrics-availability-test failed with:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (end-to-end-tests) 
> on project flink-metrics-availability-test: Unable to generate classpath: 
> org.apache.maven.artifact.resolver.ArtifactResolutionException: Could not 
> transfer artifact org.apache.maven.surefire:surefire-grouper:jar:2.22.1 
> from/to google-maven-central 
> (https://maven-central-eu.storage-download.googleapis.com/maven2/): Entry 
> [id:13][route:{s}->https://maven-central-eu.storage-download.googleapis.com:443][state:null]
>  has not been leased from this pool
> [ERROR] org.apache.maven.surefire:surefire-grouper:jar:2.22.1
> [ERROR] 
> [ERROR] from the specified remote repositories:
> [ERROR] google-maven-central 
> (https://maven-central-eu.storage-download.googleapis.com/maven2/, 
> releases=true, snapshots=false),
> [ERROR] apache.snapshots (https://repository.apache.org/snapshots, 
> releases=false, snapshots=true)
> [ERROR] Path to dependency:
> [ERROR] 1) dummy:dummy:jar:1.0
> [ERROR] 2) org.apache.maven.surefire:surefire-junit47:jar:2.22.1
> [ERROR] 3) org.apache.maven.surefire:common-junit48:jar:2.22.1
> [ERROR] 4) org.apache.maven.surefire:surefire-grouper:jar:2.22.1
> [ERROR] -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> [ERROR] 
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :flink-metrics-availability-test
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12314: [FLINK-17756][table-api-java] Drop table/view shouldn't take effect o…

2020-05-24 Thread GitBox


flinkbot commented on pull request #12314:
URL: https://github.com/apache/flink/pull/12314#issuecomment-633387995


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b7a68f0f07ab9bb2abc3182d72c0dc80ed59dda1 (Mon May 25 
05:45:02 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17756) Drop table/view shouldn't take effect on each other

2020-05-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17756:
---
Labels: pull-request-available  (was: )

> Drop table/view shouldn't take effect on each other
> ---
>
> Key: FLINK-17756
> URL: https://issues.apache.org/jira/browse/FLINK-17756
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Kurt Young
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Currently "DROP VIEW" can successfully drop a table, and "DROP TABLE" can 
> successfully a view. We should disable this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] danny0405 opened a new pull request #12314: [FLINK-17756][table-api-java] Drop table/view shouldn't take effect o…

2020-05-24 Thread GitBox


danny0405 opened a new pull request #12314:
URL: https://github.com/apache/flink/pull/12314


   …n each other
   
   Throws an exception when we drop a table whose object identifier represents a 
view (and vice versa).
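
   As a rough, self-contained sketch of the intended check (hypothetical names, 
not the actual CatalogManager code): a drop statement looks up the object's 
kind first and refuses a mismatch.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   public class DropCheckSketch {
       enum Kind { TABLE, VIEW }

       static final Map<String, Kind> catalog = new HashMap<>();

       // DROP TABLE passes expected=TABLE, DROP VIEW passes expected=VIEW;
       // a mismatch is rejected instead of silently removing the object.
       static void drop(String name, Kind expected) {
           Kind actual = catalog.get(name);
           if (actual == null) {
               throw new IllegalArgumentException("Object not found: " + name);
           }
           if (actual != expected) {
               throw new IllegalArgumentException(
                   "DROP " + expected + " cannot drop a " + actual + ": " + name);
           }
           catalog.remove(name);
       }

       public static void main(String[] args) {
           catalog.put("t1", Kind.TABLE);
           catalog.put("v1", Kind.VIEW);
           drop("t1", Kind.TABLE); // ok
           try {
               drop("v1", Kind.TABLE); // refused: DROP TABLE on a view
           } catch (IllegalArgumentException e) {
               System.out.println(e.getMessage());
           }
       }
   }
   ```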
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17892) Dynamic option may not be a part of the table digest

2020-05-24 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115712#comment-17115712
 ] 

Jark Wu commented on FLINK-17892:
-

I see. Thanks for the explanation. The default start position is 
"GROUP_OFFSETS", which uses the committed offset of this group id. If the 
source is used by multiple pipelines without changing the default start position, 
the reading offset will be non-deterministic.

I'm not sure why we use "GROUP_OFFSETS" as the default; is it the most common 
case? I know it is the default behavior of {{FlinkKafkaConsumer}}, but I think 
Table/SQL is a little different: a registered Kafka source may be used in 
different jobs, and users may forget to change the default behavior. Shall we 
change it to "LATEST_OFFSET"? That is also the default behavior of 
{{kafka-console-consumer.sh}}.

What do you think [~twalthr], [~aljoscha]?
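
To make the concern concrete, a hedged sketch of pinning the start position 
per table, so that two statements reading the same registered source do not 
depend on a shared committed group offset. The option keys and the 
`executeSql` call are assumptions based on the 1.11-style Kafka connector, not 
verified against a specific release:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ExplicitStartupSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().inStreamingMode().build());

        // Declaring scan.startup.mode explicitly removes the dependence on
        // whatever offset the consumer group happens to have committed.
        tEnv.executeSql(
            "CREATE TABLE KAFKA (f0 STRING) WITH (" +
            "  'connector' = 'kafka'," +
            "  'topic' = 'xx'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'properties.group.id' = 'xxx'," +
            "  'format' = 'json'," +
            "  'scan.startup.mode' = 'latest-offset'" +
            ")");
    }
}
{code}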

> Dynamic option may not be a part of the table digest
> 
>
> Key: FLINK-17892
> URL: https://issues.apache.org/jira/browse/FLINK-17892
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Major
> Attachments: image-2020-05-25-10-47-18-829.png, 
> image-2020-05-25-10-50-53-033.png
>
>
> For now, table properties are not part of the table digest, but dynamic options 
> are included.
> This will lead to an error when the plan is reused.
> If I define a Kafka table:
> {code:java}
> CREATE TABLE KAFKA (
> ……
> ) with (
> topic = 'xx',
> groupid = 'xxx'
> ……
> )
> Insert into sinktable select * from KAFKA;
> Insert into sinktable1 select * from KAFKA;{code}
> The KAFKA source will be reused according to the SQL above.
> But if I add different table hints to the DML, like:
> {code:java}
> Insert into sinktable select * from KAFKA /*+ OPTIONS('k1' = 'v1')*/;
> Insert into sinktable1 select * from KAFKA /*+ OPTIONS('k2' = 'v2')*/;
> {code}
> There will be two Kafka table sources using the same group id to consume the 
> same topic.
> So I think dynamic options should not be part of the table digest.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-17651) DecomposeGroupingSetsRule generates wrong plan when there exist distinct agg and simple agg with same filter

2020-05-24 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-17651.

Resolution: Fixed

release-1.11: 197f3a3c481a3dcdc0fa1661d2606fbae2fe3ef9

master: 9ccdf061f45e635c7e205bbacd9a2972a765dcde

> DecomposeGroupingSetsRule generates wrong plan when there exist distinct agg 
> and simple agg with same filter
> 
>
> Key: FLINK-17651
> URL: https://issues.apache.org/jira/browse/FLINK-17651
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.1
>Reporter: Shuo Cheng
>Assignee: Shuo Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Consider adding the following test case to 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateITCaseBase. As 
> you can see, the actual result is wrong.
>  
> {code:java}
> @Test
> def testSimpleAndDistinctAggWithCommonFilter(): Unit = {
>   val sql =
> """
>   |SELECT
>   |   h,
>   |   COUNT(1) FILTER(WHERE d > 1),
>   |   COUNT(1) FILTER(WHERE d < 2),
>   |   COUNT(DISTINCT e) FILTER(WHERE d > 1)
>   |FROM Table5
>   |GROUP BY h
>   |""".stripMargin
>   checkResult(
> sql,
> Seq(
>   row(1,4,1,4),
>   row(2,7,0,7),
>   row(3,3,0,3)
> )
>   )
> }
> Results:
>  == Correct Result ==   == Actual Result ==
>  1,4,1,4                1,0,1,4
>  2,7,0,7                2,0,0,7
>  3,3,0,3                3,0,0,3
> {code}
> The problem lies in `DecomposeGroupingSetsRule`, which omits the filter arg of 
> the aggregate call during part of its processing.
>  
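
For readers less familiar with FILTER clauses, a hedged plain-Java rendering 
of the semantics the test expects: each aggregate's filter applies 
independently, so the two plain counts and the distinct count must not be 
merged. The sample rows below are invented for illustration; the real test 
uses Table5.

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class FilterAggSketch {
    static class Row {
        final int h; final long d; final long e;
        Row(int h, long d, long e) { this.h = h; this.d = d; this.e = e; }
    }

    public static void main(String[] args) {
        List<Row> rows = Arrays.asList(
            new Row(1, 2, 10), new Row(1, 1, 11), new Row(1, 3, 10));

        Map<Integer, List<Row>> byH =
            rows.stream().collect(Collectors.groupingBy(r -> r.h));

        // h, COUNT(1) FILTER(d > 1), COUNT(1) FILTER(d < 2),
        // COUNT(DISTINCT e) FILTER(d > 1) -- each filter is independent.
        byH.forEach((h, group) -> {
            long cntGt1 = group.stream().filter(r -> r.d > 1).count();
            long cntLt2 = group.stream().filter(r -> r.d < 2).count();
            long distinctEGt1 = group.stream().filter(r -> r.d > 1)
                .map(r -> r.e).distinct().count();
            System.out.println(h + "," + cntGt1 + "," + cntLt2 + "," + distinctEGt1);
        });
    }
}
{code}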



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-17651) DecomposeGroupingSetsRule generates wrong plan when there exist distinct agg and simple agg with same filter

2020-05-24 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-17651:


Assignee: Shuo Cheng

> DecomposeGroupingSetsRule generates wrong plan when there exist distinct agg 
> and simple agg with same filter
> 
>
> Key: FLINK-17651
> URL: https://issues.apache.org/jira/browse/FLINK-17651
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.1
>Reporter: Shuo Cheng
>Assignee: Shuo Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Consider adding the following test case to 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateITCaseBase. As 
> you can see, the actual result is wrong.
>  
> {code:java}
> @Test
> def testSimpleAndDistinctAggWithCommonFilter(): Unit = {
>   val sql =
> """
>   |SELECT
>   |   h,
>   |   COUNT(1) FILTER(WHERE d > 1),
>   |   COUNT(1) FILTER(WHERE d < 2),
>   |   COUNT(DISTINCT e) FILTER(WHERE d > 1)
>   |FROM Table5
>   |GROUP BY h
>   |""".stripMargin
>   checkResult(
> sql,
> Seq(
>   row(1,4,1,4),
>   row(2,7,0,7),
>   row(3,3,0,3)
> )
>   )
> }
> Results:
>  == Correct Result ==   == Actual Result ==
>  1,4,1,4                1,0,1,4
>  2,7,0,7                2,0,0,7
>  3,3,0,3                3,0,0,3
> {code}
> The problem lies in `DecomposeGroupingSetsRule`, which omits the filter arg of 
> the aggregate call during part of its processing.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17274) Maven: Premature end of Content-Length delimited message body

2020-05-24 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115700#comment-17115700
 ] 

Robert Metzger commented on FLINK-17274:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2074&view=logs&j=baf26b34-3c6a-54e8-f93f-cf269b32f802&t=d6363642-ea4a-5c73-7edb-c00d4548b58e

> Maven: Premature end of Content-Length delimited message body
> -
>
> Key: FLINK-17274
> URL: https://issues.apache.org/jira/browse/FLINK-17274
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
> Fix For: 1.12.0
>
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7786&view=logs&j=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5&t=54421a62-0c80-5aad-3319-094ff69180bb
> {code}
> [ERROR] Failed to execute goal on project 
> flink-connector-elasticsearch7_2.11: Could not resolve dependencies for 
> project 
> org.apache.flink:flink-connector-elasticsearch7_2.11:jar:1.11-SNAPSHOT: Could 
> not transfer artifact org.apache.lucene:lucene-sandbox:jar:8.3.0 from/to 
> alicloud-mvn-mirror 
> (http://mavenmirror.alicloud.dak8s.net:/repository/maven-central/): GET 
> request of: org/apache/lucene/lucene-sandbox/8.3.0/lucene-sandbox-8.3.0.jar 
> from alicloud-mvn-mirror failed: Premature end of Content-Length delimited 
> message body (expected: 289920; received: 239832 -> [Help 1]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17274) Maven: Premature end of Content-Length delimited message body

2020-05-24 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115699#comment-17115699
 ] 

Robert Metzger commented on FLINK-17274:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2074&view=logs&j=af0c3dd6-ccea-53d1-d352-344c568905e4&t=051eae9e-dba7-57eb-a6b4-dc83cb8281a8

> Maven: Premature end of Content-Length delimited message body
> -
>
> Key: FLINK-17274
> URL: https://issues.apache.org/jira/browse/FLINK-17274
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
> Fix For: 1.12.0
>
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7786&view=logs&j=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5&t=54421a62-0c80-5aad-3319-094ff69180bb
> {code}
> [ERROR] Failed to execute goal on project 
> flink-connector-elasticsearch7_2.11: Could not resolve dependencies for 
> project 
> org.apache.flink:flink-connector-elasticsearch7_2.11:jar:1.11-SNAPSHOT: Could 
> not transfer artifact org.apache.lucene:lucene-sandbox:jar:8.3.0 from/to 
> alicloud-mvn-mirror 
> (http://mavenmirror.alicloud.dak8s.net:/repository/maven-central/): GET 
> request of: org/apache/lucene/lucene-sandbox/8.3.0/lucene-sandbox-8.3.0.jar 
> from alicloud-mvn-mirror failed: Premature end of Content-Length delimited 
> message body (expected: 289920; received: 239832 -> [Help 1]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17274) Maven: Premature end of Content-Length delimited message body

2020-05-24 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115696#comment-17115696
 ] 

Robert Metzger commented on FLINK-17274:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2084&view=logs&j=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89&t=e4f347ab-2a29-5d7c-3685-b0fcd2b6b051

> Maven: Premature end of Content-Length delimited message body
> -
>
> Key: FLINK-17274
> URL: https://issues.apache.org/jira/browse/FLINK-17274
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
> Fix For: 1.12.0
>
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7786&view=logs&j=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5&t=54421a62-0c80-5aad-3319-094ff69180bb
> {code}
> [ERROR] Failed to execute goal on project 
> flink-connector-elasticsearch7_2.11: Could not resolve dependencies for 
> project 
> org.apache.flink:flink-connector-elasticsearch7_2.11:jar:1.11-SNAPSHOT: Could 
> not transfer artifact org.apache.lucene:lucene-sandbox:jar:8.3.0 from/to 
> alicloud-mvn-mirror 
> (http://mavenmirror.alicloud.dak8s.net:/repository/maven-central/): GET 
> request of: org/apache/lucene/lucene-sandbox/8.3.0/lucene-sandbox-8.3.0.jar 
> from alicloud-mvn-mirror failed: Premature end of Content-Length delimited 
> message body (expected: 289920; received: 239832 -> [Help 1]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17274) Maven: Premature end of Content-Length delimited message body

2020-05-24 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115697#comment-17115697
 ] 

Robert Metzger commented on FLINK-17274:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2084&view=logs&j=8fd975ef-f478-511d-4997-6f15fe8a1fd3&t=6f8201e9-1579-595a-9d2b-7158b26b4c57

> Maven: Premature end of Content-Length delimited message body
> -
>
> Key: FLINK-17274
> URL: https://issues.apache.org/jira/browse/FLINK-17274
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
> Fix For: 1.12.0
>
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7786&view=logs&j=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5&t=54421a62-0c80-5aad-3319-094ff69180bb
> {code}
> [ERROR] Failed to execute goal on project 
> flink-connector-elasticsearch7_2.11: Could not resolve dependencies for 
> project 
> org.apache.flink:flink-connector-elasticsearch7_2.11:jar:1.11-SNAPSHOT: Could 
> not transfer artifact org.apache.lucene:lucene-sandbox:jar:8.3.0 from/to 
> alicloud-mvn-mirror 
> (http://mavenmirror.alicloud.dak8s.net:/repository/maven-central/): GET 
> request of: org/apache/lucene/lucene-sandbox/8.3.0/lucene-sandbox-8.3.0.jar 
> from alicloud-mvn-mirror failed: Premature end of Content-Length delimited 
> message body (expected: 289920; received: 239832 -> [Help 1]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi merged pull request #12208: [FLINK-17651][table-planner-blink] DecomposeGroupingSetsRule generat…

2020-05-24 Thread GitBox


JingsongLi merged pull request #12208:
URL: https://github.com/apache/flink/pull/12208


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17274) Maven: Premature end of Content-Length delimited message body

2020-05-24 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115695#comment-17115695
 ] 

Robert Metzger commented on FLINK-17274:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2084&view=logs&j=961f8f81-6b52-53df-09f6-7291a2e4af6a&t=2f99feaa-7a9b-5916-4c1c-5e61f395079e

> Maven: Premature end of Content-Length delimited message body
> -
>
> Key: FLINK-17274
> URL: https://issues.apache.org/jira/browse/FLINK-17274
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
> Fix For: 1.12.0
>
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7786&view=logs&j=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5&t=54421a62-0c80-5aad-3319-094ff69180bb
> {code}
> [ERROR] Failed to execute goal on project 
> flink-connector-elasticsearch7_2.11: Could not resolve dependencies for 
> project 
> org.apache.flink:flink-connector-elasticsearch7_2.11:jar:1.11-SNAPSHOT: Could 
> not transfer artifact org.apache.lucene:lucene-sandbox:jar:8.3.0 from/to 
> alicloud-mvn-mirror 
> (http://mavenmirror.alicloud.dak8s.net:/repository/maven-central/): GET 
> request of: org/apache/lucene/lucene-sandbox/8.3.0/lucene-sandbox-8.3.0.jar 
> from alicloud-mvn-mirror failed: Premature end of Content-Length delimited 
> message body (expected: 289920; received: 239832 -> [Help 1]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17510) StreamingKafkaITCase. testKafka timeouts on downloading Kafka

2020-05-24 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115694#comment-17115694
 ] 

Robert Metzger commented on FLINK-17510:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2084&view=logs&j=68a897ab-3047-5660-245a-cce8f83859f6&t=375367d9-d72e-5c21-3be0-b45149130f6b

> StreamingKafkaITCase. testKafka timeouts on downloading Kafka
> -
>
> Key: FLINK-17510
> URL: https://issues.apache.org/jira/browse/FLINK-17510
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Connectors / Kafka, Tests
>Reporter: Robert Metzger
>Priority: Major
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=585&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}
> 2020-05-05T00:06:49.7268716Z [INFO] 
> ---
> 2020-05-05T00:06:49.7268938Z [INFO]  T E S T S
> 2020-05-05T00:06:49.7269282Z [INFO] 
> ---
> 2020-05-05T00:06:50.5336315Z [INFO] Running 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-05-05T00:11:26.8603439Z [ERROR] Tests run: 3, Failures: 0, Errors: 2, 
> Skipped: 0, Time elapsed: 276.323 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-05-05T00:11:26.8604882Z [ERROR] testKafka[1: 
> kafka-version:0.11.0.2](org.apache.flink.tests.util.kafka.StreamingKafkaITCase)
>   Time elapsed: 120.024 s  <<< ERROR!
> 2020-05-05T00:11:26.8605942Z java.io.IOException: Process ([wget, -q, -P, 
> /tmp/junit2815750531595874769/downloads/1290570732, 
> https://archive.apache.org/dist/kafka/0.11.0.2/kafka_2.11-0.11.0.2.tgz]) 
> exceeded timeout (12) or number of retries (3).
> 2020-05-05T00:11:26.8606732Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry(AutoClosableProcess.java:132)
> 2020-05-05T00:11:26.8607321Z  at 
> org.apache.flink.tests.util.cache.AbstractDownloadCache.getOrDownload(AbstractDownloadCache.java:127)
> 2020-05-05T00:11:26.8607826Z  at 
> org.apache.flink.tests.util.cache.LolCache.getOrDownload(LolCache.java:31)
> 2020-05-05T00:11:26.8608343Z  at 
> org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.setupKafkaDist(LocalStandaloneKafkaResource.java:98)
> 2020-05-05T00:11:26.8608892Z  at 
> org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.before(LocalStandaloneKafkaResource.java:92)
> 2020-05-05T00:11:26.8609602Z  at 
> org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:46)
> 2020-05-05T00:11:26.8610026Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-05-05T00:11:26.8610553Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-05-05T00:11:26.8610958Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-05-05T00:11:26.8611388Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-05-05T00:11:26.8612214Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-05-05T00:11:26.8612706Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-05T00:11:26.8613109Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-05T00:11:26.8613551Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-05T00:11:26.8614019Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-05T00:11:26.8614442Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-05T00:11:26.8614869Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-05-05T00:11:26.8615251Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-05-05T00:11:26.8615654Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-05-05T00:11:26.8616060Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-05T00:11:26.8616465Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-05T00:11:26.8616893Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-05T00:11:26.8617893Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-05T00:11:26.8618490Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-05T00:11:26.8619056Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-05-05T00:11:26.8619589Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-05-05T00:11:26.8620073Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-05-05T00:11:26.8620745Z  at 
> org.junit.runners.Pa

[GitHub] [flink] yangyichao-mango commented on a change in pull request #12237: [FLINK-17290] [chinese-translation, Documentation / Training] Transla…

2020-05-24 Thread GitBox


yangyichao-mango commented on a change in pull request #12237:
URL: https://github.com/apache/flink/pull/12237#discussion_r429011221



##
File path: docs/training/streaming_analytics.zh.md
##
@@ -27,125 +27,101 @@ under the License.
 * This will be replaced by the TOC
 {:toc}
 
-## Event Time and Watermarks
+## 事件时间和水印
 
-### Introduction
+### 简介
 
-Flink explicitly supports three different notions of time:
+Flink 明确的支持以下三种事件时间:
 
-* _event time:_ the time when an event occurred, as recorded by the device 
producing (or storing) the event
+* _事件时间:_ 事件产生的时间,记录的是设备生产(或者存储)事件的时间
 
-* _ingestion time:_ a timestamp recorded by Flink at the moment it ingests the 
event
+* _摄取时间:_ Flink 提取事件时记录的时间戳
 
-* _processing time:_ the time when a specific operator in your pipeline is 
processing the event
+* _处理时间:_ Flink 中通过特定的操作处理事件的时间
 
-For reproducible results, e.g., when computing the maximum price a stock 
reached during the first
-hour of trading on a given day, you should use event time. In this way the 
result won't depend on
-when the calculation is performed. This kind of real-time application is 
sometimes performed using
-processing time, but then the results are determined by the events that happen 
to be processed
-during that hour, rather than the events that occurred then. Computing 
analytics based on processing
-time causes inconsistencies, and makes it difficult to re-analyze historic 
data or test new
-implementations.
+为了获得可重现的结果,例如在计算过去的特定一天里第一个小时股票的最高价格时,我们应该使用事件时间。这样的话,无论
+什么时间去计算都不会影响输出结果。然而有些人,在实时计算应用时使用处理时间,这样的话,输出结果就会被处理时间点所决
+定,而不是事件的生成时间。基于处理时间会导致多次计算的结果不一致,也可能会导致重新分析历史数据和测试变得异常困难。
 
-### Working with Event Time
+### 使用事件时间
 
-By default, Flink will use processing time. To change this, you can set the 
Time Characteristic:
+Flink 在默认情况下使用处理时间。也可以通过如下配置来告诉 Flink 选择哪种事件时间:
 
 {% highlight java %}
 final StreamExecutionEnvironment env =
 StreamExecutionEnvironment.getExecutionEnvironment();
 env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
 {% endhighlight %}
 
-If you want to use event time, you will also need to supply a Timestamp 
Extractor and Watermark
-Generator that Flink will use to track the progress of event time. This will 
be covered in the
-section below on [Working with Watermarks]({% link
-training/streaming_analytics.zh.md %}#working-with-watermarks), but first we 
should explain what
-watermarks are.
+如果想要使用事件时间,则需要额外给 Flink 提供一个时间戳的提取器和水印,Flink 将使用它们来跟踪事件时间的进度。这
+将在选节[使用水印]({% linktutorials/streaming_analytics.md %}#使用水印)中介绍,但是首先我们需要解释一下
+水印是什么。
 
-### Watermarks
+### 水印
 
-Let's work through a simple example that will show why watermarks are needed, 
and how they work.
+让我们通过一个简单的示例来演示,该示例将说明为什么需要水印及其工作方式。
 
-In this example you have a stream of timestamped events that arrive somewhat 
out of order, as shown
-below. The numbers shown are timestamps that indicate when these events 
actually occurred. The first
-event to arrive happened at time 4, and it is followed by an event that 
happened earlier, at time 2,
-and so on:
+在此示例中,我们将看到带有混乱时间戳的事件流,如下所示。显示的数字表达的是这些事件实际发生时间的时间戳。到达的
+第一个事件发生在时间4,随后发生的事件发生在更早的时间2,依此类推:
 
 
 ··· 23 19 22 24 21 14 17 13 12 15 9 11 7 2 4 →
 
 
-Now imagine that you are trying create a stream sorter. This is meant to be an 
application that
-processes each event from a stream as it arrives, and emits a new stream 
containing the same events,
-but ordered by their timestamps.
+假设我们要对数据流排序,我们想要达到的目的是:应用程序应该在数据流里的事件到达时就处理每个事件,并发出包含相同
+事件但按其时间戳排序的新流。
 
-Some observations:
+让我们重新审视这些数据:
 
-(1) The first element your stream sorter sees is the 4, but you can't just 
immediately release it as
-the first element of the sorted stream. It may have arrived out of order, and 
an earlier event might
-yet arrive. In fact, you have the benefit of some god-like knowledge of this 
stream's future, and
-you can see that your stream sorter should wait at least until the 2 arrives 
before producing any
-results.
+(1) 我们的排序器第一个看到的数据是4,但是我们不能立即将其作为已排序流的第一个元素释放。因为我们并不能确定它是
+有序的,并且较早的事件有可能并未到达。事实上,如果站在上帝视角,我们知道,必须要等到2到来时,排序器才可以有事件输出。
 
-*Some buffering, and some delay, is necessary.*
+*需要一些缓冲,需要一些时间,但这都是值得的*
 
-(2) If you do this wrong, you could end up waiting forever. First the sorter 
saw an event from time
-4, and then an event from time 2. Will an event with a timestamp less than 2 
ever arrive? Maybe.
-Maybe not. You could wait forever and never see a 1.
+(2) 接下来的这一步,如果我们选择的是固执的等待,我们永远不会有结果。首先,我们从时间4看到了一个事件,然后从时
+间2看到了一个事件。可是,时间戳小于2的事件接下来会不会到来呢?可能会,也可能不会。再次站在上帝视角,我们知道,我
+们永远不会看到1。
 
-*Eventually you have to be courageous and emit the 2 as the start of the 
sorted stream.*
+*最终,我们必须勇于承担责任,并发出指令,把2作为已排序的事件流的开始*
 
-(3) What you need then is some sort of policy that defines when, for any given 
timestamped event, to
-stop waiting for the arrival of earlier events.
+(3)然后,我们需要一种策略,该策略定义:对于任何给定时间戳的事件,Flink何时停止等待较早事件的到来。
 
-*This is precisely what watermarks do* — they define when to stop waiting for 
earlier events.
+*这正是水印的作用* — 它们定义何时停止等待较早的事件。
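
As a companion to the Java snippet in the diff above, a hedged sketch of 
supplying the timestamp extractor and watermark generator the text refers to, 
using the pre-1.11 DataStream API; `SensorEvent` and its fields are invented 
for illustration:

{% highlight java %}
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

public class EventTimeSketch {
    public static class SensorEvent {
        public long timestamp; // event creation time, in epoch millis
        public double value;
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStream<SensorEvent> events = env.fromElements(new SensorEvent());

        // Watermark = max timestamp seen so far minus 10 seconds: the stream
        // stops waiting for events more than 10 seconds out of order.
        DataStream<SensorEvent> withTimestamps = events.assignTimestampsAndWatermarks(
            new BoundedOutOfOrdernessTimestampExtractor<SensorEvent>(Time.seconds(10)) {
                @Override
                public long extractTimestamp(SensorEvent e) {
                    return e.timestamp;
                }
            });

        withTimestamps.print();
        env.execute("event-time sketch");
    }
}
{% endhighlight %}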

[jira] [Reopened] (FLINK-15661) JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure failed because of Could not find Flink job

2020-05-24 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reopened FLINK-15661:


I observed another failure of this test: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2085&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=45cc9205-bdb7-5b54-63cd-89fdc0983323

{code}
2020-05-24T20:47:19.2825741Z [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 31.901 s <<< FAILURE! - in 
org.apache.flink.test.recovery.JobManagerHAProcessFailureRecoveryITCase
2020-05-24T20:47:19.2826917Z [ERROR] testDispatcherProcessFailure[ExecutionMode 
PIPELINED](org.apache.flink.test.recovery.JobManagerHAProcessFailureRecoveryITCase)
  Time elapsed: 15.971 s  <<< ERROR!
2020-05-24T20:47:19.2827780Z java.util.concurrent.ExecutionException: 
org.apache.flink.runtime.rpc.exceptions.RpcConnectionException: Could not 
connect to rpc endpoint under address 
akka.tcp://flink@127.0.0.1:45907/user/rpc/dispatcher_1.
2020-05-24T20:47:19.2828444Zat 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
2020-05-24T20:47:19.2829276Zat 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
2020-05-24T20:47:19.2829840Zat 
org.apache.flink.test.recovery.JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure(JobManagerHAProcessFailureRecoveryITCase.java:296)
2020-05-24T20:47:19.2830366Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-05-24T20:47:19.2830750Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-05-24T20:47:19.2831190Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-24T20:47:19.2831592Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-05-24T20:47:19.2832038Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-05-24T20:47:19.2832502Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-05-24T20:47:19.2832958Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-05-24T20:47:19.2833405Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-05-24T20:47:19.2833899Zat 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2020-05-24T20:47:19.2834319Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2020-05-24T20:47:19.2834693Zat 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
2020-05-24T20:47:19.2835056Zat 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-05-24T20:47:19.2835402Zat 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2020-05-24T20:47:19.2835814Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2020-05-24T20:47:19.2836404Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2020-05-24T20:47:19.2836824Zat 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-05-24T20:47:19.2837200Zat 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-05-24T20:47:19.2847121Zat 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-05-24T20:47:19.2847541Zat 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-05-24T20:47:19.2847920Zat 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-05-24T20:47:19.2848299Zat 
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-05-24T20:47:19.2848709Zat 
org.junit.runners.Suite.runChild(Suite.java:128)
2020-05-24T20:47:19.2849046Zat 
org.junit.runners.Suite.runChild(Suite.java:27)
2020-05-24T20:47:19.2849399Zat 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-05-24T20:47:19.2849766Zat 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-05-24T20:47:19.2850156Zat 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-05-24T20:47:19.2850531Zat 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-05-24T20:47:19.2850920Zat 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-05-24T20:47:19.2851334Zat 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2020-05-24T20:47:19.2851773Zat 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
2020-05-24T20:47:19.2852179Zat 
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-05-24T20:47:19.2852576Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
2020-05-24T20:47:19.2853042Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
2020-05-24T20:47:19.2853508Zat 
org.apache.maven.surefire.junit4.

[GitHub] [flink] flinkbot commented on pull request #12312: [FLINK-15507][state backends] Activate local recovery by default

2020-05-24 Thread GitBox


flinkbot commented on pull request #12312:
URL: https://github.com/apache/flink/pull/12312#issuecomment-633384542


   
   ## CI report:
   
   * 0593cfda3ac278603f23e95f2ac07ec1cd20f3a7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12313: [FLINK-17005][docs] Translate the CREATE TABLE ... LIKE syntax documentation to Chinese

2020-05-24 Thread GitBox


flinkbot commented on pull request #12313:
URL: https://github.com/apache/flink/pull/12313#issuecomment-633384424


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c225fa8b2a774a8579501e136004a9d64db89244 (Mon May 25 
05:32:10 UTC 2020)
   
 ✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12307: [FLINK-15621] Remove deprecated option and method to disable TTL compaction filter

2020-05-24 Thread GitBox


flinkbot edited a comment on pull request #12307:
URL: https://github.com/apache/flink/pull/12307#issuecomment-64512


   
   ## CI report:
   
   * a353b26e154ac85af664bbac5a1cfdd95286ab8b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2099)
 
   * 1414be06762d3238c4138d9fab4daf26f661f58d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17510) StreamingKafkaITCase. testKafka timeouts on downloading Kafka

2020-05-24 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115690#comment-17115690
 ] 

Robert Metzger commented on FLINK-17510:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2085&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5

> StreamingKafkaITCase. testKafka timeouts on downloading Kafka
> -
>
> Key: FLINK-17510
> URL: https://issues.apache.org/jira/browse/FLINK-17510
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Connectors / Kafka, Tests
>Reporter: Robert Metzger
>Priority: Major
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=585&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}
> 2020-05-05T00:06:49.7268716Z [INFO] 
> ---
> 2020-05-05T00:06:49.7268938Z [INFO]  T E S T S
> 2020-05-05T00:06:49.7269282Z [INFO] 
> ---
> 2020-05-05T00:06:50.5336315Z [INFO] Running 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-05-05T00:11:26.8603439Z [ERROR] Tests run: 3, Failures: 0, Errors: 2, 
> Skipped: 0, Time elapsed: 276.323 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-05-05T00:11:26.8604882Z [ERROR] testKafka[1: 
> kafka-version:0.11.0.2](org.apache.flink.tests.util.kafka.StreamingKafkaITCase)
>   Time elapsed: 120.024 s  <<< ERROR!
> 2020-05-05T00:11:26.8605942Z java.io.IOException: Process ([wget, -q, -P, 
> /tmp/junit2815750531595874769/downloads/1290570732, 
> https://archive.apache.org/dist/kafka/0.11.0.2/kafka_2.11-0.11.0.2.tgz]) 
> exceeded timeout (12) or number of retries (3).
> 2020-05-05T00:11:26.8606732Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry(AutoClosableProcess.java:132)
> 2020-05-05T00:11:26.8607321Z  at 
> org.apache.flink.tests.util.cache.AbstractDownloadCache.getOrDownload(AbstractDownloadCache.java:127)
> 2020-05-05T00:11:26.8607826Z  at 
> org.apache.flink.tests.util.cache.LolCache.getOrDownload(LolCache.java:31)
> 2020-05-05T00:11:26.8608343Z  at 
> org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.setupKafkaDist(LocalStandaloneKafkaResource.java:98)
> 2020-05-05T00:11:26.8608892Z  at 
> org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.before(LocalStandaloneKafkaResource.java:92)
> 2020-05-05T00:11:26.8609602Z  at 
> org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:46)
> 2020-05-05T00:11:26.8610026Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-05-05T00:11:26.8610553Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-05-05T00:11:26.8610958Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-05-05T00:11:26.8611388Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-05-05T00:11:26.8612214Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-05-05T00:11:26.8612706Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-05T00:11:26.8613109Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-05T00:11:26.8613551Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-05T00:11:26.8614019Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-05T00:11:26.8614442Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-05T00:11:26.8614869Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-05-05T00:11:26.8615251Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-05-05T00:11:26.8615654Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-05-05T00:11:26.8616060Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-05T00:11:26.8616465Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-05T00:11:26.8616893Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-05T00:11:26.8617893Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-05T00:11:26.8618490Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-05T00:11:26.8619056Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-05-05T00:11:26.8619589Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-05-05T00:11:26.8620073Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-05-05T00:11:26.8620745Z  at 
> org.junit.runners.Pa

[jira] [Closed] (FLINK-17892) Dynamic option may not be a part of the table digest

2020-05-24 Thread hailong wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hailong wang closed FLINK-17892.

Resolution: Not A Problem

> Dynamic option may not be a part of the table digest
> 
>
> Key: FLINK-17892
> URL: https://issues.apache.org/jira/browse/FLINK-17892
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Major
> Attachments: image-2020-05-25-10-47-18-829.png, 
> image-2020-05-25-10-50-53-033.png
>
>
> For now, table properties are not part of the table digest, but dynamic options 
> are included.
> This will lead to an error when the plan is reused.
> If I define a Kafka table:
> {code:java}
> CREATE TABLE KAFKA (
> ……
> ) with (
> topic = 'xx',
> groupid = 'xxx'
> ……
> )
> Insert into sinktable select * from KAFKA;
> Insert into sinktable1 select * from KAFKA;{code}
> The KAFKA source will be reused according to the SQL above.
> But if I add different table hints to the DML, like:
> {code:java}
> Insert into sinktable select * from KAFKA /*+ OPTIONS('k1' = 'v1')*/;
> Insert into sinktable1 select * from KAFKA /*+ OPTIONS('k2' = 'v2')*/;
> {code}
> There will be two Kafka table sources using the same group id to consume the 
> same topic.
> So I think dynamic options should not be part of the table digest.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17892) Dynamic option may not be a part of the table digest

2020-05-24 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17115689#comment-17115689
 ] 

hailong wang commented on FLINK-17892:
--

Thanks [~lzljs3620320] [~jark], I'll close this issue.

> Dynamic option may not be a part of the table digest
> 
>
> Key: FLINK-17892
> URL: https://issues.apache.org/jira/browse/FLINK-17892
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Major
> Attachments: image-2020-05-25-10-47-18-829.png, 
> image-2020-05-25-10-50-53-033.png
>
>
> For now, table properties are not part of the table digest, but dynamic options 
> are included.
> This will lead to an error when the plan is reused.
> If I define a Kafka table:
> {code:java}
> CREATE TABLE KAFKA (
> ……
> ) with (
> topic = 'xx',
> groupid = 'xxx'
> ……
> )
> Insert into sinktable select * from KAFKA;
> Insert into sinktable1 select * from KAFKA;{code}
> The KAFKA source will be reused according to the SQL above.
> But if I add different table hints to the DML, like:
> {code:java}
> Insert into sinktable select * from KAFKA /*+ OPTIONS('k1' = 'v1')*/;
> Insert into sinktable1 select * from KAFKA /*+ OPTIONS('k2' = 'v2')*/;
> {code}
> There will be two Kafka table sources using the same group id to consume the 
> same topic.
> So I think dynamic options should not be part of the table digest.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

