[jira] [Created] (FLINK-16112) KafkaDeserializationSchema#isEndOfStream is ill-defined

2020-02-16 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-16112:
---

 Summary: KafkaDeserializationSchema#isEndOfStream is ill-defined
 Key: FLINK-16112
 URL: https://issues.apache.org/jira/browse/FLINK-16112
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Reporter: Tzu-Li (Gordon) Tai


Motivated by this email thread: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/KafkaFetcher-closed-before-end-of-stream-is-received-for-all-partitions-td32827.html

In general, the {{isEndOfStream}} method is ill-defined. A possible 
improvement is to redefine the semantics and the method signature to:

{code}
boolean isEndOfPartition(T nextElement, String topic, int partition);
{code}

so that it indicates the end of a Kafka partition. With this, the Kafka consumer 
is able to properly define end-of-subtask as the point when all of its assigned 
partitions have signaled EOF.
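For illustration, a deserialization schema could then end each partition 
independently, e.g. on a sentinel record (a hypothetical sketch; 
{{isEndOfPartition}} is only proposed above, and the sentinel convention is 
made up here):

{code}
// Hypothetical sketch against the proposed signature (with T = String here).
// Neither the method nor the sentinel convention exists in the API today.
@Override
public boolean isEndOfPartition(String nextElement, String topic, int partition) {
	// End this partition once its sentinel record is seen; the consumer
	// would finish the subtask only after all assigned partitions signaled EOF.
	return "END-OF-PARTITION".equals(nextElement);
}
{code}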



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache commented on issue #11093: [FLINK-16055][hive] Avoid catalog functions when listing Hive built-i…

2020-02-16 Thread GitBox
lirui-apache commented on issue #11093: [FLINK-16055][hive] Avoid catalog 
functions when listing Hive built-i…
URL: https://github.com/apache/flink/pull/11093#issuecomment-586856518
 
 
   @bowenli86 PR updated. Please have a look, thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11106: [FLINK-15708] Add MigrationVersion.v1_10

2020-02-16 Thread GitBox
flinkbot commented on issue #11106: [FLINK-15708] Add MigrationVersion.v1_10
URL: https://github.com/apache/flink/pull/11106#issuecomment-586854778
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit a01a223e6dbf5c94c41630445828a25883913699 (Mon Feb 17 
07:35:13 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-15708).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16111) Kubernetes deployment does not respect "taskmanager.cpu.cores".

2020-02-16 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038127#comment-17038127
 ] 

Xintong Song commented on FLINK-16111:
--

cc [~fly_in_gis] [~azagrebin]

> Kubernetes deployment does not respect "taskmanager.cpu.cores".
> ---
>
> Key: FLINK-16111
> URL: https://issues.apache.org/jira/browse/FLINK-16111
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0
>Reporter: Xintong Song
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> The Kubernetes deployment uses `kubernetes.taskmanager.cpu` for configuring 
> TM cpu cores, and will fall back to number-of-slots if not specified.
> FLINK-14188 introduced a common option `taskmanager.cpu.cores` (ATM not 
> exposed to users and for internal usage only). The common logic decides 
> the TM cpu cores following the fallback order of "common option -> 
> K8s/Yarn/Mesos specific option -> numberOfSlot".
> The above fallback rules are not respected by the Kubernetes deployment.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] yanghua opened a new pull request #11106: [FLINK-15708] Add MigrationVersion.v1_10

2020-02-16 Thread GitBox
yanghua opened a new pull request #11106: [FLINK-15708] Add 
MigrationVersion.v1_10
URL: https://github.com/apache/flink/pull/11106
 
 
   
   
   ## What is the purpose of the change
   
   *This pull request adds MigrationVersion.v1_10*
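   
   A minimal sketch of the change, assuming `MigrationVersion` is an enum keyed 
by a version string as its existing constants suggest (the exact enum layout is 
an assumption here):
   
   ```java
   // Hedged sketch: the surrounding constants and constructor are assumed to
   // mirror the existing MigrationVersion enum, e.g. v1_9("1.9").
   public enum MigrationVersion {
       // ... existing versions ...
       v1_9("1.9"),
       v1_10("1.10"); // the constant added by this PR

       private final String versionStr;

       MigrationVersion(String versionStr) {
           this.versionStr = versionStr;
       }

       @Override
       public String toString() {
           return versionStr;
       }
   }
   ```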
   
   
   ## Brief change log
   
 - *Add MigrationVersion.v1_10*
   
   
   ## Verifying this change
   
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented**)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-16111) Kubernetes deployment does not respect "taskmanager.cpu.cores".

2020-02-16 Thread Xintong Song (Jira)
Xintong Song created FLINK-16111:


 Summary: Kubernetes deployment does not respect 
"taskmanager.cpu.cores".
 Key: FLINK-16111
 URL: https://issues.apache.org/jira/browse/FLINK-16111
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Kubernetes
Affects Versions: 1.10.0
Reporter: Xintong Song
 Fix For: 1.10.1, 1.11.0


The Kubernetes deployment uses `kubernetes.taskmanager.cpu` for configuring TM 
cpu cores, and will fall back to number-of-slots if not specified.

FLINK-14188 introduced a common option `taskmanager.cpu.cores` (ATM not exposed 
to users and for internal usage only). The common logic decides the TM cpu 
cores following the fallback order of "common option -> K8s/Yarn/Mesos specific 
option -> numberOfSlot".

The above fallback rules are not respected by the Kubernetes deployment.
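For illustration, the intended resolution order could look like the sketch 
below (the two option keys are the ones named above; the helper class itself is 
hypothetical and not Flink's actual resolution code):

{code}
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.configuration.Configuration;

// Hypothetical helper illustrating the fallback order only.
public final class TaskManagerCpuCores {

	private static final ConfigOption<Double> COMMON_CPU_CORES =
		ConfigOptions.key("taskmanager.cpu.cores").doubleType().noDefaultValue();

	private static final ConfigOption<Double> K8S_CPU =
		ConfigOptions.key("kubernetes.taskmanager.cpu").doubleType().noDefaultValue();

	static double resolve(Configuration conf, int numberOfSlots) {
		// 1. the common option introduced by FLINK-14188
		if (conf.contains(COMMON_CPU_CORES)) {
			return conf.getDouble(COMMON_CPU_CORES);
		}
		// 2. the K8s/Yarn/Mesos specific option
		if (conf.contains(K8S_CPU)) {
			return conf.getDouble(K8S_CPU);
		}
		// 3. fall back to the number of slots
		return numberOfSlots;
	}
}
{code}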



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15708) Add MigrationVersion.v1_10

2020-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15708:
---
Labels: pull-request-available  (was: )

> Add MigrationVersion.v1_10
> --
>
> Key: FLINK-15708
> URL: https://issues.apache.org/jira/browse/FLINK-15708
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.11.0
>Reporter: vinoyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Add MigrationVersion.v1_10



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache commented on a change in pull request #11093: [FLINK-16055][hive] Avoid catalog functions when listing Hive built-i…

2020-02-16 Thread GitBox
lirui-apache commented on a change in pull request #11093: [FLINK-16055][hive] 
Avoid catalog functions when listing Hive built-i…
URL: https://github.com/apache/flink/pull/11093#discussion_r380019248
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/client/HiveShimV120.java
 ##
 @@ -148,6 +151,9 @@ public CatalogColumnStatisticsDataDate 
toFlinkDateColStats(ColumnStatisticsData
 
@Override
public Optional<FunctionInfo> getBuiltInFunctionInfo(String name) {
+   if (isCatalogFunctionName(name)) {
+   return Optional.empty();
+   }
Optional<FunctionInfo> functionInfo = getFunctionInfo(name);
 
if (functionInfo.isPresent() && 
isBuiltInFunctionInfo(functionInfo.get())) {
 
 Review comment:
   Since this is a public API, I'd rather not assume the name has already been 
filtered. I'll do some refactoring to avoid duplicated code.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] shuai-xu commented on a change in pull request #11068: [FLINK-15964] [cep] fix getting event of previous stage in notFollowedBy may throw exception bug

2020-02-16 Thread GitBox
shuai-xu commented on a change in pull request #11068: [FLINK-15964] [cep] fix 
getting event of previous stage in notFollowedBy may throw exception bug
URL: https://github.com/apache/flink/pull/11068#discussion_r380015480
 
 

 ##
 File path: 
flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/nfa/NFAITCase.java
 ##
 @@ -2844,4 +2845,35 @@ public void testSharedBufferClearing() throws Exception 
{
Mockito.verify(accessor, 
Mockito.times(1)).advanceTime(2);
}
}
+
+   /**
+* Tests that the value of the previous stage can be accessed directly in 
+* notFollowedBy.
+*
 
 Review comment:
   OK


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] shuai-xu commented on a change in pull request #11068: [FLINK-15964] [cep] fix getting event of previous stage in notFollowedBy may throw exception bug

2020-02-16 Thread GitBox
shuai-xu commented on a change in pull request #11068: [FLINK-15964] [cep] fix 
getting event of previous stage in notFollowedBy may throw exception bug
URL: https://github.com/apache/flink/pull/11068#discussion_r380015457
 
 

 ##
 File path: 
flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/nfa/NFAITCase.java
 ##
 @@ -2844,4 +2845,35 @@ public void testSharedBufferClearing() throws Exception 
{
Mockito.verify(accessor, 
Mockito.times(1)).advanceTime(2);
}
}
+
+   /**
+* Tests that the value of the previous stage can be accessed directly in 
+* notFollowedBy.
+*
+* @see <a href="https://issues.apache.org/jira/browse/FLINK-15964">FLINK-15964</a>
+* @throws Exception
+*/
+   @Test
+   public void testAccessPreviousStageInNotFollowedBy() throws Exception {
 
 Review comment:
   OK


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-16 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r380004457
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/batch/sql/TableFactoryTest.scala
 ##
 @@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.batch.sql
+
+import org.apache.flink.table.catalog.GenericInMemoryCatalog
+import org.apache.flink.table.factories.TableFactory
+import org.apache.flink.table.planner.plan.utils.TestContextTableFactory
+import org.apache.flink.table.planner.utils.TableTestBase
+
+import org.junit.{Assert, Test}
+
+import java.util.Optional
+
+class TableFactoryTest extends TableTestBase {
 
 Review comment:
   We can merge this test and the streaming test into one using a `Parameterized` 
test.
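   
   For reference, a minimal JUnit 4 `Parameterized` skeleton (in Java for 
brevity; the tests here are Scala, and the class and parameter names below are 
made up):
   
   ```java
   import org.junit.Test;
   import org.junit.runner.RunWith;
   import org.junit.runners.Parameterized;

   import java.util.Arrays;
   import java.util.Collection;

   // Hypothetical skeleton: one test class covering both batch and streaming.
   @RunWith(Parameterized.class)
   public class TableFactoryTest {

       @Parameterized.Parameters(name = "isBatch={0}")
       public static Collection<Boolean> parameters() {
           return Arrays.asList(true, false);
       }

       @Parameterized.Parameter
       public boolean isBatch;

       @Test
       public void testTableFactory() {
           // build a batch or streaming table environment depending on isBatch
       }
   }
   ```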


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-16 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r38717
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/TableSinkFactory.java
 ##
 @@ -49,8 +53,46 @@
 * @param table {@link CatalogTable} instance.
 * @return the configured table sink.
 */
+   @Deprecated
 
 Review comment:
   Please add a @deprecated Javadoc tag to explain the reason for deprecation 
and which method is recommended instead. 
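   
   For example, something along these lines (the wording and the signature of 
the deprecated method are an assumption based on this PR):
   
   ```java
   /**
    * Creates and configures a {@link TableSink} based on the given
    * {@link CatalogTable} instance.
    *
    * @param table {@link CatalogTable} instance.
    * @return the configured table sink.
    * @deprecated Use {@link #createTableSink(Context)} instead, which carries
    *     richer context (object identifier, catalog table, configuration).
    */
   @Deprecated
   TableSink<T> createTableSink(ObjectPath tablePath, CatalogTable table);
   ```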


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-16 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r380005079
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/utils/TestContextTableFactory.scala
 ##
 @@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.utils
+
+import org.apache.flink.configuration.{ConfigOption, ConfigOptions}
+import org.apache.flink.table.factories.{StreamTableSinkFactory, 
StreamTableSourceFactory, TableFactoryUtil, TableSinkFactory, 
TableSourceFactory}
+import org.apache.flink.table.sinks.{StreamTableSink, TableSink}
+import org.apache.flink.table.sources.{StreamTableSource, TableSource}
+
+import org.junit.Assert
+
+import java.{lang, util}
+
+/**
+  * Test [[TableSourceFactory]] and [[TableSinkFactory]] for context.
+  */
+class TestContextTableFactory[T]
+extends StreamTableSourceFactory[T] with StreamTableSinkFactory[T] {
+
+  val needContain: ConfigOption[lang.Boolean] =
+ConfigOptions.key("need.contain").booleanType().defaultValue(false)
+  var hasInvokedSource = false
+  var hasInvokedSink = false
+
+  override def createStreamTableSource(
+  properties: util.Map[String, String]): StreamTableSource[T] = {
+throw new UnsupportedOperationException
 
 Review comment:
   This can be removed now, to verify that this method is no longer required 
to be overridden. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-16 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r379996777
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/HiveTableFactoryTest.java
 ##
 @@ -78,9 +83,39 @@ public void testGenericTable() throws Exception {
Optional<TableFactory> opt = catalog.getTableFactory();
assertTrue(opt.isPresent());
HiveTableFactory tableFactory = (HiveTableFactory) opt.get();
-   TableSource tableSource = tableFactory.createTableSource(path, 
table);
+   TableSource tableSource = tableFactory.createTableSource(new 
TableSourceFactory.Context() {
+   @Override
+   public ObjectIdentifier getObjectIdentifier() {
+   return ObjectIdentifier.of("mycatalog", "mydb", 
"mytable");
+   }
+
+   @Override
+   public CatalogTable getTable() {
+   return table;
+   }
+
+   @Override
+   public ReadableConfig getConfiguration() {
+   return new Configuration();
+   }
+   });
assertTrue(tableSource instanceof StreamTableSource);
-   TableSink tableSink = tableFactory.createTableSink(path, table);
+   TableSink tableSink = tableFactory.createTableSink(new 
TableSinkFactory.Context() {
 
 Review comment:
   Can we introduce a `TableSinkFactoryContextImpl` class to avoid creating so 
many anonymous classes?
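   
   A rough sketch of such a helper, based on the `Context` accessors visible in 
the diff above (the constructor shape is an assumption):
   
   ```java
   import org.apache.flink.configuration.ReadableConfig;
   import org.apache.flink.table.catalog.CatalogTable;
   import org.apache.flink.table.catalog.ObjectIdentifier;
   import org.apache.flink.table.factories.TableSinkFactory;

   // Hypothetical helper; the accessor names come from the Context interface
   // implemented anonymously in the test above.
   public class TableSinkFactoryContextImpl implements TableSinkFactory.Context {

       private final ObjectIdentifier identifier;
       private final CatalogTable table;
       private final ReadableConfig config;

       public TableSinkFactoryContextImpl(
               ObjectIdentifier identifier, CatalogTable table, ReadableConfig config) {
           this.identifier = identifier;
           this.table = table;
           this.config = config;
       }

       @Override
       public ObjectIdentifier getObjectIdentifier() {
           return identifier;
       }

       @Override
       public CatalogTable getTable() {
           return table;
       }

       @Override
       public ReadableConfig getConfiguration() {
           return config;
       }
   }
   ```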
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-16 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r37056
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogTableImpl.java
 ##
 @@ -81,4 +81,31 @@ public CatalogBaseTable copy() {
 
return descriptor.asMap();
}
+
+   /**
+* Construct a {@link CatalogTableImpl} from complete properties that 
contains table schema.
+*/
+   public static CatalogTableImpl fromProperties(Map<String, String> properties) {
+   Map<String, String> newProperties = new HashMap<>(properties);
+   TableSchema tableSchema = getSchema(newProperties);
+   
schemaToProperties(tableSchema).keySet().forEach(newProperties::remove);
 
 Review comment:
   I think we can add a `DescriptorProperties#removeKeyPrefix(String prefix)` 
method to drop the schema properties. This can also be used in other places. 
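   
   The suggested helper could be as small as this (a sketch; `removeKeyPrefix` 
does not exist in `DescriptorProperties` yet, and direct access to the backing 
`properties` map is assumed):
   
   ```java
   // Hypothetical addition to DescriptorProperties: drop every property whose
   // key starts with the given prefix, e.g. "schema." for table schema keys.
   public void removeKeyPrefix(String prefix) {
       properties.keySet().removeIf(key -> key.startsWith(prefix));
   }
   ```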


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] shuai-xu commented on issue #11009: [Flink-15873] [cep] fix matches result not returned when existing earlier partial matches

2020-02-16 Thread GitBox
shuai-xu commented on issue #11009: [Flink-15873] [cep] fix matches result not 
returned when existing earlier partial matches
URL: https://github.com/apache/flink/pull/11009#issuecomment-586844458
 
 
   @dianfu Thank you very much for the review. I didn't see any docs saying that 
this is by design. And if so, the result will never be emitted if an earlier 
partial match fails. I agree that the after-match skip strategy needs to be 
refined to have a definite behavior.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-16 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r380001446
 
 

 ##
 File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/factories/StreamTableSourceFactory.java
 ##
 @@ -39,7 +39,20 @@
 * @param properties normalized properties describing a stream table 
source.
 * @return the configured stream table source.
 */
-   StreamTableSource<T> createStreamTableSource(Map<String, String> properties);
+   default StreamTableSource<T> createStreamTableSource(Map<String, String> properties) {
+   return null;
+   }
+
+   /**
+* Creates and configures a {@link StreamTableSource} based on the given {@link Context}.
+*
+* @param context context of this table source.
+* @return the configured table source.
+*/
+   default StreamTableSource<T> createStreamTableSource(Context context) {
 
 Review comment:
   Do we need this interface? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-16 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r38855
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/TableSinkFactory.java
 ##
 @@ -40,7 +42,9 @@
 * @param properties normalized properties describing a table sink.
 * @return the configured table sink.
 */
-   TableSink<T> createTableSink(Map<String, String> properties);
+   default TableSink<T> createTableSink(Map<String, String> properties) {
 
 Review comment:
   I think we can also deprecate this method now, as we did for 
`TableSink#getOutputType` when `getConsumedDataType` was introduced.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-16 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r380009278
 
 

 ##
 File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/utils/TestContextTableFactory.scala
 ##
 @@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.utils
+
+import org.apache.flink.configuration.{ConfigOption, ConfigOptions}
+import org.apache.flink.table.factories.{StreamTableSinkFactory, 
StreamTableSourceFactory, TableFactoryUtil, TableSinkFactory, 
TableSourceFactory}
+import org.apache.flink.table.sinks.TableSink
+import org.apache.flink.table.sources.TableSource
+
+import org.junit.Assert
+
+import java.{lang, util}
+
+/**
+  * Test [[TableSourceFactory]] and [[TableSinkFactory]] for context.
+  */
+class TestContextTableFactory[T]
+extends StreamTableSourceFactory[T] with StreamTableSinkFactory[T] {
+
+  val needContain: ConfigOption[lang.Boolean] =
+ConfigOptions.key("need.contain").booleanType().defaultValue(false)
 
 Review comment:
   Can we give a more descriptive name for the key? e.g. 
   
   ```scala
   val REQUIRED_KEY: ConfigOption[lang.Boolean] = ConfigOptions
   .key("testing.required.key").booleanType().defaultValue(false)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-16 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r380001365
 
 

 ##
 File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/factories/StreamTableSourceFactory.java
 ##
 @@ -48,4 +61,12 @@
default TableSource<T> createTableSource(Map<String, String> properties) {
return createStreamTableSource(properties);
}
+
+   /**
+* Only create a stream table source.
+*/
+   @Override
+   default TableSource<T> createTableSource(Context context) {
+   return createStreamTableSource(context);
 
 Review comment:
   We should validate that the return value of `createStreamTableSource` is not 
null. If it is null, we should throw an exception telling users to implement 
`createTableSource(Context)`, because `createStreamTableSource(Map)` has a 
default implementation now, and users may not have overridden it. 
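   
   For example (a sketch only; the exception type and message are an 
assumption):
   
   ```java
   // TableException is org.apache.flink.table.api.TableException.
   @Override
   default TableSource<T> createTableSource(Context context) {
       StreamTableSource<T> source = createStreamTableSource(context);
       if (source == null) {
           // createStreamTableSource now has a default implementation that
           // returns null, so null here means neither variant was overridden.
           throw new TableException(
               "createStreamTableSource(Context) returned null. Please implement "
                   + "createTableSource(Context) or override createStreamTableSource.");
       }
       return source;
   }
   ```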


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-16 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r380009489
 
 

 ##
 File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/utils/TestContextTableFactory.scala
 ##
 @@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.utils
+
+import org.apache.flink.configuration.{ConfigOption, ConfigOptions}
+import org.apache.flink.table.factories.{StreamTableSinkFactory, 
StreamTableSourceFactory, TableFactoryUtil, TableSinkFactory, 
TableSourceFactory}
+import org.apache.flink.table.sinks.TableSink
+import org.apache.flink.table.sources.TableSource
+
+import org.junit.Assert
+
+import java.{lang, util}
+
+/**
+  * Test [[TableSourceFactory]] and [[TableSinkFactory]] for context.
+  */
+class TestContextTableFactory[T]
+extends StreamTableSourceFactory[T] with StreamTableSinkFactory[T] {
+
+  val needContain: ConfigOption[lang.Boolean] =
+ConfigOptions.key("need.contain").booleanType().defaultValue(false)
+  var hasInvokedSource = false
+  var hasInvokedSink = false
+
+  override def requiredContext(): util.Map[String, String] = {
+throw new UnsupportedOperationException
+  }
+
+  override def supportedProperties(): util.List[String] = {
+throw new UnsupportedOperationException
+  }
+
+  override def createTableSource(context: TableSourceFactory.Context): 
TableSource[T] = {
+Assert.assertTrue(context.getConfiguration.get(needContain))
+hasInvokedSource = true
+TableFactoryUtil.findAndCreateTableSource(context)
+  }
+
+  override def createTableSink(context: TableSinkFactory.Context): 
TableSink[T] = {
+Assert.assertTrue(context.getConfiguration.get(needContain))
 
 Review comment:
   It would be better to verify the `ObjectIdentifier` too. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16106) Add PersistedList to the SDK

2020-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16106:
---
Labels: pull-request-available  (was: )

> Add PersistedList to the SDK
> 
>
> Key: FLINK-16106
> URL: https://issues.apache.org/jira/browse/FLINK-16106
> Project: Flink
>  Issue Type: Task
>  Components: Stateful Functions
>Reporter: Igal Shilman
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
>  Labels: pull-request-available
>
> Now that statefun is not multiplexing state in a single column family,
> we can add a PersistedList to the SDK.
> A persisted list would support addition (add and addAll) and iteration over 
> the items.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-statefun] tzulitai commented on issue #25: [FLINK-16106] Add PersistedList state primitive

2020-02-16 Thread GitBox
tzulitai commented on issue #25: [FLINK-16106] Add PersistedList state primitive
URL: https://github.com/apache/flink-statefun/pull/25#issuecomment-586843804
 
 
   cc @igalshilman 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink-statefun] tzulitai opened a new pull request #25: [FLINK-16106] Add PersistedList state primitive

2020-02-16 Thread GitBox
tzulitai opened a new pull request #25: [FLINK-16106] Add PersistedList state 
primitive
URL: https://github.com/apache/flink-statefun/pull/25
 
 
   This PR introduces a `PersistedList` state primitive to Stateful Functions.
   
   - 3491fea: Adds the user-facing SDK classes
   - 69e8ce6: Introduces the Flink-based accessor for `PersistedList`
   - fc190ca: Completes the addition by letting the `StateBinder` support 
`PersistedList`
   
   ---
   
   This can be verified by the new test `StateBinderTest#bindPersistedList()`.
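   
   For reference, intended usage looks roughly like this (a hedged sketch 
following the pattern of the existing `Persisted*` primitives; the factory 
method, package locations, and iteration API are assumptions and may differ 
from the actual SDK):
   
   ```java
   import java.util.Arrays;

   import org.apache.flink.statefun.sdk.annotations.Persisted;
   import org.apache.flink.statefun.sdk.state.PersistedList;

   public class GreeterFn {

       // Assumed factory method, mirroring PersistedValue.of(name, type).
       @Persisted
       private final PersistedList<String> seenNames =
           PersistedList.of("seen-names", String.class);

       void remember(String name) {
           seenNames.add(name);                       // single addition
           seenNames.addAll(Arrays.asList("a", "b")); // bulk addition
           // the list also supports iterating over its items
       }
   }
   ```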


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16110) LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" and "TIMESTAMP(3) *PROCTIME*"

2020-02-16 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038113#comment-17038113
 ] 

Jark Wu commented on FLINK-16110:
-

{{TIMESTAMP(3) ROWTIME}} is the summary string of TimestampType which is used 
for logging. The serialization string representation doesn't include the ROWTIME 
keyword. It's just metadata, so I don't think we should support parsing 
{{TIMESTAMP(3) ROWTIME}}. In other words, it's not allowed to declare a type 
via {{TIMESTAMP(3) ROWTIME}}.
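To illustrate (the one-argument {{LogicalTypeParser.parse}} overload appears in 
the stack trace below):

{code}
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.utils.LogicalTypeParser;

// Parses fine: the serializable string representation.
LogicalType ok = LogicalTypeParser.parse("TIMESTAMP(3)");

// Throws ValidationException ("Unexpected token: *ROWTIME*"): the summary
// string is for logging only and is not meant to be parsed back.
LogicalType fails = LogicalTypeParser.parse("TIMESTAMP(3) *ROWTIME*");
{code}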

> LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" and "TIMESTAMP(3) 
> *PROCTIME*"
> 
>
> Key: FLINK-16110
> URL: https://issues.apache.org/jira/browse/FLINK-16110
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: godfrey he
>Priority: Major
>
>  {{TIMESTAMP(3) *ROWTIME*}} is the string representation of 
> {{TimestampType(true, TimestampKind.ROWTIME, 3)}} , however 
> {{LogicalTypeParser}} can't convert it to  {{TimestampType(true, 
> TimestampKind.ROWTIME, 3)}}. 
> TIMESTAMP(3) *PROCTIME* is the same case.
> the exception looks like:
> {code}
> org.apache.flink.table.api.ValidationException: Could not parse type at 
> position 12: Unexpected token: *ROWTIME*
>  Input type string: TIMESTAMP(3) *ROWTIME*
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:371)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:380)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parseTokens(LogicalTypeParser.java:357)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.access$000(LogicalTypeParser.java:333)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:106)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:116)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-16 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r379996093
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/ByteBufCumulator.java
 ##
 @@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBufAllocator;
+
+/**
+ * Utility that cumulates data until the specified size is reached.
+ */
+public class ByteBufCumulator {
+
+   /** The buffer used to cumulate data. */
+   private final ByteBuf cumulationBuf;
+
+   /**
+* The flag indicating whether we are in the middle of cumulating. If
+* it is true, we could continue copying the data, otherwise we should
+* first reset the state of the cumulation buffer and then start copying
+* data.
+*/
+   private boolean inCumulating;
+
+   public ByteBufCumulator(ByteBufAllocator alloc, int initBufferSize) {
+   this.cumulationBuf = alloc.buffer(initBufferSize);
+   this.inCumulating = false;
+   }
+
+   public ByteBuf cumulate(ByteBuf src, int expectedSize) {
 
 Review comment:
   This method is more like a utility to be used in the other three classes. We 
can also pass the `cumulationBuf` as an argument, allocated and maintained by 
the upper component, and the `inCumulating` state can actually be derived from 
the position of the passed `cumulationBuf`. Then it is feasible to refactor 
this method as a utility.
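   
   Roughly like this, for example (a sketch of the suggested utility shape; the 
completion semantics of `cumulate` are inferred, not shown in this diff):
   
   ```java
   import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;

   // Hypothetical utility form: the caller owns cumulationBuf, and "in
   // cumulating" is inferred from the buffer's readable bytes.
   static ByteBuf cumulate(ByteBuf cumulationBuf, ByteBuf src, int expectedSize) {
       // Copy at most the bytes still missing to reach the expected size.
       int missing = expectedSize - cumulationBuf.readableBytes();
       cumulationBuf.writeBytes(src, Math.min(missing, src.readableBytes()));

       // Hand back the cumulated buffer once complete; null means "need more data".
       return cumulationBuf.readableBytes() == expectedSize ? cumulationBuf : null;
   }
   ```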


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-16 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r379993355
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/DefaultMessageParser.java
 ##
 @@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBufAllocator;
+
+import java.net.ProtocolException;
+
+/**
+ * The parser for messages without a dedicated parser. It receives the whole
+ * message and then delegates the parsing to the targeted message type.
+ */
+public class DefaultMessageParser implements NettyMessageParser {
+
+   /** The initial size of the message header cumulator buffer. */
+   private static final int INITIAL_MESSAGE_HEADER_BUFFER_LENGTH = 128;
+
+   /** The cumulator of message header. */
+   private ByteBufCumulator messageCumulator;
+
+   /** The type of messages under processing. */
+   private int msgId = -1;
+
+   /** The length of messages under processing. */
+   private int messageLength;
+
+   @Override
+   public void onChannelActive(ByteBufAllocator alloc) {
+   messageCumulator = new ByteBufCumulator(alloc, 
INITIAL_MESSAGE_HEADER_BUFFER_LENGTH);
+   }
+
+   @Override
+   public void startParsingMessage(int msgId, int messageLength) {
+   this.msgId = msgId;
+   this.messageLength = messageLength;
+   }
+
+   @Override
+   public ParseResult onData(ByteBuf data) throws Exception {
+   ByteBuf toDecode = messageCumulator.cumulate(data, 
messageLength);
+
+   if (toDecode == null) {
+   return ParseResult.notFinished();
+   }
+
+   switch (msgId) {
+   case NettyMessage.ErrorResponse.ID:
 
 Review comment:
   I guess it is not necessary to check the message id here, otherwise we also 
need to touch this place when introducing new message types in the future.
   If we remove this, we can also simplify `startParsingMessage` and drop 
the class-level `msgId` field.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16005) Propagate yarn.application.classpath from client to TaskManager Classpath

2020-02-16 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038101#comment-17038101
 ] 

Zhenqiu Huang commented on FLINK-16005:
---

[~fly_in_gis]
Spark actually supports the end-to-end yarn.application.classpath override. It 
is more convenient for applications that need classpath isolation from the 
default yarn classpath. 

From the implementation perspective, you are right that shipping the yarn 
configuration is one of the solutions. But shipping yarn-site is not generic 
for other usages. Passing it as a YARNClusterEntryPoint environment variable is 
probably a simpler solution.

> Propagate yarn.application.classpath from client to TaskManager Classpath
> -
>
> Key: FLINK-16005
> URL: https://issues.apache.org/jira/browse/FLINK-16005
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Zhenqiu Huang
>Priority: Critical
>
> When Flink users want to override the hadoop yarn container classpath, they 
> should just specify the yarn.application.classpath in yarn-site.xml on the cli 
> side. But currently, the classpath setting can only be used in the flink 
> application master; the classpath of the TM is still determined by the setting 
> on the yarn host.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-16 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r379991763
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/ZeroCopyNettyMessageDecoder.java
 ##
 @@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.runtime.io.network.NetworkClientHandler;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.FRAME_HEADER_LENGTH;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.MAGIC_NUMBER;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Decodes messages from the fragmentary netty buffers. This decoder assumes the
+ * messages have the following format:
+ *
+ * +---------------+------------------+------------------------+
+ * | FRAME_HEADER  |  MESSAGE_HEADER  | DATA BUFFER (Optional) |
+ * +---------------+------------------+------------------------+
+ *
+ * This decoder decodes the frame header and delegates the remaining work
+ * to the corresponding message parsers according to the message type. During the
+ * process of decoding, the decoder and parsers try their best to eliminate
+ * copying. For the frame header and message header, data is only cumulated when
+ * they span multiple input buffers. For the buffer part, it copies directly to
+ * the input channels to avoid further copying.
+ *
+ * The format of the frame header is
+ *
+ * +------------------+------------------+--------+
+ * | FRAME LENGTH (4) | MAGIC NUMBER (4) | ID (1) |
+ * +------------------+------------------+--------+
+ */
+public class ZeroCopyNettyMessageDecoder extends ChannelInboundHandlerAdapter {
+
+   /** The message parser for buffer response. */
+private final NettyMessageParser bufferResponseParser;
+
+/** The message parser for other messages other than buffer response. */
+   private final NettyMessageParser defaultMessageParser;
+
+   /** Cumulator for the frame header part. */
+   private ByteBufCumulator frameHeaderCumulator;
 
 Review comment:
   frameHeaderCumulator -> frameHeaderDecoder


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-16 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r379991124
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/ZeroCopyNettyMessageDecoder.java
 ##
 @@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.runtime.io.network.NetworkClientHandler;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+import 
org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter;
+
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.FRAME_HEADER_LENGTH;
+import static 
org.apache.flink.runtime.io.network.netty.NettyMessage.MAGIC_NUMBER;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Decodes messages from the fragmentary netty buffers. This decoder assumes the
+ * messages have the following format:
+ *
+ * +---------------+------------------+------------------------+
+ * | FRAME_HEADER  |  MESSAGE_HEADER  | DATA BUFFER (Optional) |
+ * +---------------+------------------+------------------------+
+ *
+ * This decoder decodes the frame header and delegates the remaining work
+ * to the corresponding message parsers according to the message type. During the
+ * process of decoding, the decoder and parsers try their best to eliminate
+ * copying. For the frame header and message header, data is only cumulated when
+ * they span multiple input buffers. For the buffer part, it copies directly to
+ * the input channels to avoid further copying.
+ *
+ * The format of the frame header is
+ *
+ * +------------------+------------------+--------+
+ * | FRAME LENGTH (4) | MAGIC NUMBER (4) | ID (1) |
+ * +------------------+------------------+--------+
+ */
+public class ZeroCopyNettyMessageDecoder extends ChannelInboundHandlerAdapter {
+
+   /** The message parser for buffer response. */
+private final NettyMessageParser bufferResponseParser;
+
+/** The message parser for other messages other than buffer response. */
+   private final NettyMessageParser defaultMessageParser;
+
+   /** Cumulator for the frame header part. */
+   private ByteBufCumulator frameHeaderCumulator;
 
 Review comment:
   we can make it as final as said here 
https://github.com/apache/flink/pull/7368#issuecomment-586823140


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW edited a comment on issue #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-16 Thread GitBox
zhijiangW edited a comment on issue #7368: [FLINK-10742][network] Let Netty use 
Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#issuecomment-586823140
 
 
   Let's unify the new class naming and refactor the structure a bit first.
   
   - `ZeroCopyNettyMessageDecoder` -> `NettyMessageClientDecoder`: `ZeroCopy` 
is not an easily understood term for other people, and semantically we have 
not actually achieved the goal of zero copy yet. 
   
   - The previous `NettyMessageDecoder` is refactored to the respective 
`NettyMessageServerDecoder` 
   
   - `NettyMessageParser` -> `NettyMessageDecoderDelegate`: The motivation is 
to unify the `Parser` and `Decoder` terms. Further, we can define the abstract 
`NettyMessageDecoderDelegate` to extend `ChannelInboundHandlerAdapter`, so it 
can make use of the existing `channelActive` and `channelInactive` methods and 
avoid re-defining them.
   
   - `BufferResponseParser` -> `BufferResponseDecoderDelegate`
   
   - `DefaultMessageParser` -> `NonBufferResponseDecoderDelegate`?  The 
`Default` term is misleading and we should give it a more precise semantic. 
   
   - `ByteBufCumulator` -> `LengthBasedHeaderDecoder`, which also extends 
`ChannelInboundHandlerAdapter`, so we can make it a final field inside 
`NettyMessageClientDecoder`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on issue #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-16 Thread GitBox
zhijiangW commented on issue #7368: [FLINK-10742][network] Let Netty use 
Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#issuecomment-586823140
 
 
   Let's unify the new class naming and refactor the structure a bit first.
   
   - `ZeroCopyNettyMessageDecoder` -> `NettyMessageClientDecoder`: `ZeroCopy` 
is not an easily understood term for other people, and semantically we have 
not actually achieved the goal of zero copy yet. 
   
   - The previous `NettyMessageDecoder` is refactored to the respective 
`NettyMessageServerDecoder` 
   
   - `NettyMessageParser` -> `NettyMessageDecoderDelegate`: The motivation is 
to unify the `Parser` and `Decoder` terms. Further, we can define the abstract 
`NettyMessageDecoderDelegate` to extend `ChannelInboundHandlerAdapter`, so it 
can make use of the existing `channelActive` and `channelInactive` methods and 
avoid re-defining them.
   
   - `BufferResponseParser` -> `BufferResponseDecoderDelegate`
   
   - `DefaultMessageParser` -> `NonBufferResponseDecoderDelegate`?  The 
`Default` term is misleading and we should give it a more precise semantic. 
   
   - `ByteBufCumulator` -> `LengthBasedHeaderDecoder`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] dianfu merged pull request #11103: [hotfix][docs] Regenerate documentation

2020-02-16 Thread GitBox
dianfu merged pull request #11103: [hotfix][docs] Regenerate documentation
URL: https://github.com/apache/flink/pull/11103
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] dianfu commented on issue #11103: [hotfix][docs] Regenerate documentation

2020-02-16 Thread GitBox
dianfu commented on issue #11103: [hotfix][docs] Regenerate documentation
URL: https://github.com/apache/flink/pull/11103#issuecomment-586818230
 
 
   LGTM.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16110) LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" and "TIMESTAMP(3) *PROCTIME*"

2020-02-16 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-16110:
---
Description: 
 {{TIMESTAMP(3) *ROWTIME*}} is the string representation of 
{{TimestampType(true, TimestampKind.ROWTIME, 3)}}; however, 
{{LogicalTypeParser}} can't convert it back to {{TimestampType(true, 
TimestampKind.ROWTIME, 3)}}.
The same applies to {{TIMESTAMP(3) *PROCTIME*}}.

The exception looks like:

{panel:title=exception}
org.apache.flink.table.api.ValidationException: Could not parse type at 
position 12: Unexpected token: *ROWTIME*
 Input type string: TIMESTAMP(3) *ROWTIME*

at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:371)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:380)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parseTokens(LogicalTypeParser.java:357)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.access$000(LogicalTypeParser.java:333)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:106)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:116)
{panel}


  was:
 {{TIMESTAMP(3) *ROWTIME*}} is the string representation of 
{{TimestampType(true, TimestampKind.ROWTIME, 3)}}; however, 
{{LogicalTypeParser}} can't convert it back to {{TimestampType(true, 
TimestampKind.ROWTIME, 3)}}.
The same applies to {{TIMESTAMP(3) *PROCTIME*}}.


> LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" and "TIMESTAMP(3) 
> *PROCTIME*"
> 
>
> Key: FLINK-16110
> URL: https://issues.apache.org/jira/browse/FLINK-16110
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: godfrey he
>Priority: Major
>
>  {{TIMESTAMP(3) *ROWTIME*}} is the string representation of 
> {{TimestampType(true, TimestampKind.ROWTIME, 3)}}; however, 
> {{LogicalTypeParser}} can't convert it back to {{TimestampType(true, 
> TimestampKind.ROWTIME, 3)}}.
> The same applies to {{TIMESTAMP(3) *PROCTIME*}}.
> The exception looks like:
> {panel:title=exception}
> org.apache.flink.table.api.ValidationException: Could not parse type at 
> position 12: Unexpected token: *ROWTIME*
>  Input type string: TIMESTAMP(3) *ROWTIME*
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:371)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:380)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parseTokens(LogicalTypeParser.java:357)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.access$000(LogicalTypeParser.java:333)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:106)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:116)
> {panel}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16110) LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" and "TIMESTAMP(3) *PROCTIME*"

2020-02-16 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-16110:
---
Description: 
 {{TIMESTAMP(3) *ROWTIME*}} is the string representation of 
{{TimestampType(true, TimestampKind.ROWTIME, 3)}}; however, 
{{LogicalTypeParser}} can't convert it back to {{TimestampType(true, 
TimestampKind.ROWTIME, 3)}}.
The same applies to {{TIMESTAMP(3) *PROCTIME*}}.

The exception looks like:

{code}
org.apache.flink.table.api.ValidationException: Could not parse type at 
position 12: Unexpected token: *ROWTIME*
 Input type string: TIMESTAMP(3) *ROWTIME*

at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:371)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:380)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parseTokens(LogicalTypeParser.java:357)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.access$000(LogicalTypeParser.java:333)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:106)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:116)
{code}


  was:
 {{TIMESTAMP(3) *ROWTIME*}} is the string representation of 
{{TimestampType(true, TimestampKind.ROWTIME, 3)}}; however, 
{{LogicalTypeParser}} can't convert it back to {{TimestampType(true, 
TimestampKind.ROWTIME, 3)}}.
The same applies to {{TIMESTAMP(3) *PROCTIME*}}.

The exception looks like:

{panel:title=exception}
org.apache.flink.table.api.ValidationException: Could not parse type at 
position 12: Unexpected token: *ROWTIME*
 Input type string: TIMESTAMP(3) *ROWTIME*

at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:371)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:380)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parseTokens(LogicalTypeParser.java:357)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.access$000(LogicalTypeParser.java:333)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:106)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:116)
{panel}



> LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" and "TIMESTAMP(3) 
> *PROCTIME*"
> 
>
> Key: FLINK-16110
> URL: https://issues.apache.org/jira/browse/FLINK-16110
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: godfrey he
>Priority: Major
>
>  {{TIMESTAMP(3) *ROWTIME*}} is the string representation of 
> {{TimestampType(true, TimestampKind.ROWTIME, 3)}}; however, 
> {{LogicalTypeParser}} can't convert it back to {{TimestampType(true, 
> TimestampKind.ROWTIME, 3)}}.
> The same applies to {{TIMESTAMP(3) *PROCTIME*}}.
> The exception looks like:
> {code}
> org.apache.flink.table.api.ValidationException: Could not parse type at 
> position 12: Unexpected token: *ROWTIME*
>  Input type string: TIMESTAMP(3) *ROWTIME*
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:371)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:380)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parseTokens(LogicalTypeParser.java:357)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.access$000(LogicalTypeParser.java:333)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:106)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:116)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16110) LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" and "TIMESTAMP(3) *PROCTIME*"

2020-02-16 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-16110:
---
Summary: LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" and 
"TIMESTAMP(3) *PROCTIME*"  (was: LogicalTypeParser can't parse "TIMESTAMP(3) 
*ROWTIME*" and TIMESTAMP(3) *PROCTIME*)

> LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" and "TIMESTAMP(3) 
> *PROCTIME*"
> 
>
> Key: FLINK-16110
> URL: https://issues.apache.org/jira/browse/FLINK-16110
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: godfrey he
>Priority: Major
>
>  {{TIMESTAMP(3) *ROWTIME*}} is the string representation of 
> {{TimestampType(true, TimestampKind.ROWTIME, 3)}}; however, 
> {{LogicalTypeParser}} can't convert it back to {{TimestampType(true, 
> TimestampKind.ROWTIME, 3)}}.
> The same applies to {{TIMESTAMP(3) *PROCTIME*}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16110) LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" and TIMESTAMP(3) *PROCTIME*

2020-02-16 Thread godfrey he (Jira)
godfrey he created FLINK-16110:
--

 Summary: LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" 
and TIMESTAMP(3) *PROCTIME*
 Key: FLINK-16110
 URL: https://issues.apache.org/jira/browse/FLINK-16110
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.10.0
Reporter: godfrey he


 {{TIMESTAMP(3) *ROWTIME*}} is the string representation of 
{{TimestampType(true, TimestampKind.ROWTIME, 3)}}; however, 
{{LogicalTypeParser}} can't convert it back to {{TimestampType(true, 
TimestampKind.ROWTIME, 3)}}.
The same applies to {{TIMESTAMP(3) *PROCTIME*}}.
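
A minimal reproduction sketch (the wrapper class is just for illustration):

{code:java}
import org.apache.flink.table.types.logical.utils.LogicalTypeParser;

public class RowtimeParseRepro {

    public static void main(String[] args) {
        // Fails with ValidationException: "Unexpected token: *ROWTIME*"
        LogicalTypeParser.parse("TIMESTAMP(3) *ROWTIME*");
    }
}
{code}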



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-15961) Introduce Python Physical Correlate RelNodes which are containers for Python TableFunction

2020-02-16 Thread Hequn Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng reassigned FLINK-15961:
---

Assignee: Huang Xingbo

> Introduce Python Physical Correlate RelNodes  which are containers for Python 
> TableFunction
> ---
>
> Key: FLINK-15961
> URL: https://issues.apache.org/jira/browse/FLINK-15961
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Dedicated Python Physical Correlate RelNodes should be introduced for Python 
> TableFunction execution. These nodes exist as containers for Python 
> TableFunctions, which can be executed in a batch; then we can employ 
> PythonTableFunctionOperator for Python TableFunction execution.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-15959) Add min/max number of slots configuration to limit total number of slots

2020-02-16 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038086#comment-17038086
 ] 

Xintong Song edited comment on FLINK-15959 at 2/17/20 4:18 AM:
---

bq. How about put it into ClusterOptions, start with "cluster.*".

I personally prefer "slotmanager.\*", but I'm also ok with "cluster.\*".

I would try to keep {{ClusterOptions}} as concise as possible, containing 
maybe only common configurations that are related to all distributed 
components. But my opinion on this is not strong. As I said, I'm ok with 
either of the two ways.


was (Author: xintongsong):
bq. How about put it into ClusterOptions, start with "cluster.*".

I personally prefer "slotmanager.*", but I'm also ok with "cluster.*".

I would try to keep {{ClusterOptions}} as concise as possible, containing 
maybe only common configurations that are related to all distributed 
components. But my opinion on this is not strong. As I said, I'm ok with 
either of the two ways.

> Add min/max number of slots configuration to limit total number of slots
> 
>
> Key: FLINK-15959
> URL: https://issues.apache.org/jira/browse/FLINK-15959
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: YufeiLiu
>Priority: Major
>
> Flink removed the `-n` option after FLIP-6, changing to a model where the 
> ResourceManager starts a new worker when required. But I think maintaining a 
> certain number of slots is necessary. These workers will start immediately 
> when the ResourceManager starts and would not be released even if all slots 
> are free.
> Here are some reasons:
> # Users actually know how many resources are needed to run a single job, and 
> initializing all workers when the cluster starts can speed up the startup 
> process.
> # Jobs are scheduled in topology order; the next operator won't be scheduled 
> until the prior execution's slot is allocated. The TaskExecutors will start 
> in several batches in some cases, which might slow down the startup.
> # Flink supports 
> [FLINK-12122|https://issues.apache.org/jira/browse/FLINK-12122] [Spread out 
> tasks evenly across all available registered TaskManagers], but it only 
> takes effect if all TMs are registered. Starting all TMs at the beginning 
> can solve this problem.
> *Suggestion:*
> * Add configs "taskmanager.minimum.numberOfTotalSlots" and 
> "taskmanager.maximum.numberOfTotalSlots"; the default behavior stays the 
> same as before.
> * Start enough workers to satisfy the minimum number of slots when the 
> ResourceManager accepts leadership (subtracting recovered workers).
> * Don't complete slot requests until the minimum number of slots is 
> registered, and throw an exception when the maximum is exceeded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15959) Add min/max number of slots configuration to limit total number of slots

2020-02-16 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038086#comment-17038086
 ] 

Xintong Song commented on FLINK-15959:
--

bq. How about put it into ClusterOptions, start with "cluster.*".

I personally prefer "slotmanager.*", but I'm also ok with "cluster.*".

I would try to keep {{ClusterOptions}} as concise as possible, containing 
maybe only common configurations that are related to all distributed 
components. But my opinion on this is not strong. As I said, I'm ok with 
either of the two ways.

> Add min/max number of slots configuration to limit total number of slots
> 
>
> Key: FLINK-15959
> URL: https://issues.apache.org/jira/browse/FLINK-15959
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: YufeiLiu
>Priority: Major
>
> Flink removed the `-n` option after FLIP-6, changing to a model where the 
> ResourceManager starts a new worker when required. But I think maintaining a 
> certain number of slots is necessary. These workers will start immediately 
> when the ResourceManager starts and would not be released even if all slots 
> are free.
> Here are some reasons:
> # Users actually know how many resources are needed to run a single job, and 
> initializing all workers when the cluster starts can speed up the startup 
> process.
> # Jobs are scheduled in topology order; the next operator won't be scheduled 
> until the prior execution's slot is allocated. The TaskExecutors will start 
> in several batches in some cases, which might slow down the startup.
> # Flink supports 
> [FLINK-12122|https://issues.apache.org/jira/browse/FLINK-12122] [Spread out 
> tasks evenly across all available registered TaskManagers], but it only 
> takes effect if all TMs are registered. Starting all TMs at the beginning 
> can solve this problem.
> *Suggestion:*
> * Add configs "taskmanager.minimum.numberOfTotalSlots" and 
> "taskmanager.maximum.numberOfTotalSlots"; the default behavior stays the 
> same as before.
> * Start enough workers to satisfy the minimum number of slots when the 
> ResourceManager accepts leadership (subtracting recovered workers).
> * Don't complete slot requests until the minimum number of slots is 
> registered, and throw an exception when the maximum is exceeded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11105: [FLINK-16109][python] Move the Python scalar operators and table operators to separate package

2020-02-16 Thread GitBox
flinkbot commented on issue #11105: [FLINK-16109][python] Move the Python 
scalar operators and table operators to separate package
URL: https://github.com/apache/flink/pull/11105#issuecomment-586810362
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit ede5137a9d727b3a85c040ab7733da31b62e713d (Mon Feb 17 
04:17:07 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-6726) Allow setting Timers in ProcessWindowFunction

2020-02-16 Thread Manas Kale (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038084#comment-17038084
 ] 

Manas Kale commented on FLINK-6726:
---

Hi, is there any progress on this issue? I have a similar use case.

> Allow setting Timers in ProcessWindowFunction
> -
>
> Key: FLINK-6726
> URL: https://issues.apache.org/jira/browse/FLINK-6726
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Vishnu Viswanath
>Assignee: Vishnu Viswanath
>Priority: Minor
>
> Allow registration of timers in ProcessWindowFunction.
> {code}
> public abstract void registerEventTimeTimer(long time);
> public abstract void registerProcessingTimeTimer(long time);
> {code}
> This is based on one of the use cases that I have, where I need to register 
> an EventTimeTimer that will clean the elements in the window state based on 
> some condition.
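
For illustration, a hypothetical sketch of how the proposed registration could 
look from a window function. {{registerEventTimeTimer}} does not exist on the 
{{Context}} today, and the retention constant is a made-up name, so this does 
NOT compile against current Flink:

{code:java}
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

// Hypothetical: assumes the proposed registerEventTimeTimer were added to the
// ProcessWindowFunction Context.
public class StateCleaningWindowFunction
        extends ProcessWindowFunction<Long, Long, String, TimeWindow> {

    private static final long RETENTION_MS = 60_000L; // illustrative retention

    @Override
    public void process(
            String key, Context ctx, Iterable<Long> elements, Collector<Long> out) {
        // ... emit results from the window elements ...

        // Proposed API: schedule a later cleanup of the window state.
        ctx.registerEventTimeTimer(ctx.window().maxTimestamp() + RETENTION_MS);
    }
}
{code}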



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16058) Could not start TaskManager in flink 1.10.0

2020-02-16 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038083#comment-17038083
 ] 

Yang Wang commented on FLINK-16058:
---

[~blablabla123] Flink depends on commons-cli version 1.3.1, so this exception 
only happens when you ship an old version (e.g. 1.2). I think this is not a 
bug. All the jars in the FLINK_HOME/lib path will be added to the classpath. 
If your user jar depends on a specific version of commons-cli, I think you 
need to bundle it in your user uber jar.

Note: Be careful about putting extra jars in the lib directory, since it can 
bring unknown risks for Flink's system class loading.

> Could not start TaskManager  in flink 1.10.0
> 
>
> Key: FLINK-16058
> URL: https://issues.apache.org/jira/browse/FLINK-16058
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.10.0
>Reporter: BlaBlabla
>Priority: Major
>
> Hello ,
>  
> When I submit a  app on yarn in Flink 1.10.0:
> But there is a error could not find commons-cli package jar:
> {code:java}
> 2020-02-14 18:07:28,045 ERROR org.apache.flink.yarn.YarnResourceManager - 
> Could not start TaskManager in container 
> container_e28_1578502086570_2319694_01_02.
> java.lang.NoSuchMethodError: 
> org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;
> at 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptions.(CommandLineOptions.java:28)
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.lambda$getDynamicPropertiesAsString$0(BootstrapTools.java:647)
> at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545)
> at 
> java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
> at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438)
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.getDynamicPropertiesAsString(BootstrapTools.java:653)
> at 
> org.apache.flink.yarn.YarnResourceManager.createTaskExecutorLaunchContext(YarnResourceManager.java:578)
> at 
> org.apache.flink.yarn.YarnResourceManager.startTaskExecutorInContainer(YarnResourceManager.java:384)
> at java.lang.Iterable.forEach(Iterable.java:75)
> at 
> org.apache.flink.yarn.YarnResourceManager.lambda$onContainersAllocated$1(YarnResourceManager.java:366)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
> at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
> at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> at akka.actor.ActorCell.invoke(ActorCell.scala:561)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
> at akka.dispatch.Mailbox.run(Mailbox.scala:225)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
> at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 2020-02-14 18:07:28,046 INFO org.apache.flink.yarn.YarnResourceManager - 
> Requesting new TaskExecutor container with resources . 
> Number pending requests 1.
> 2020-02-14 18:07:28,047 INFO org.apache.flink.yarn.YarnResourceManager - 
> TaskExecutor container_e28_1578502086570_2319694_01_03 will be started on 
> ip-10-128-158-97.idata-server.shopee.io with TaskExecutorProcessSpec 
> {cpuC

[jira] [Updated] (FLINK-16109) Move the Python scalar operators and table operators to separate package

2020-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16109:
---
Labels: pull-request-available  (was: )

> Move the Python scalar operators and table operators to separate package
> 
>
> Key: FLINK-16109
> URL: https://issues.apache.org/jira/browse/FLINK-16109
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Currently both the Python scalar operators and table operators are under the 
> same package org.apache.flink.table.runtime.operators.python. There are 
> already many operators under this package. After introducing the aggregate 
> function support and Vectorized Python function support in the future, there 
> will be more and more operators under the same package. 
> We could improve it with the following package structure: 
> org.apache.flink.table.runtime.operators.python.scalar
>  org.apache.flink.table.runtime.operators.python.table
> org.apache.flink.table.runtime.operators.python.aggregate (in the future)
> org.apache.flink.table.runtime.operators.python.scalar.arrow (in the future)
> As these classes are internal, it's safe to do so and there are no backwards 
> compatibility issues.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] HuangXingBo opened a new pull request #11105: [FLINK-16109][python] Move the Python scalar operators and table operators to separate package

2020-02-16 Thread GitBox
HuangXingBo opened a new pull request #11105: [FLINK-16109][python] Move the 
Python scalar operators and table operators to separate package
URL: https://github.com/apache/flink/pull/11105
 
 
   ## What is the purpose of the change
   
   *This pull request moves the Python scalar operators and table operators 
to separate packages.*
   
   
   ## Brief change log
   
 - *Move the Python scalar operators and table operators to separate 
packages*
   
   
   ## Verifying this change
   
 - *No change to the logic of the code, so the tox tests are enough*
 
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] HuangXingBo commented on issue #11051: [FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical Correlate RelNodes which are containers for Python TableFunction

2020-02-16 Thread GitBox
HuangXingBo commented on issue #11051: 
[FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical 
Correlate RelNodes which are containers for Python TableFunction
URL: https://github.com/apache/flink/pull/11051#issuecomment-586809475
 
 
   > @HuangXingBo Thanks a lot for the update. The test failed due to the 
problems in `BatchExecCorrelateBase` and `StreamExecCorrelateBase`. See the 
detailed comments below.
   > 
   > As for the last commit that adding implementations for the Python 
Correlate RelNode, maybe it's better to add the commit later. Because in this 
PR, there are no ways to write tests to cover the implementation.
   > 
   > What do you think?
   
   Thanks a lot, @hequn8128. I will move the last commit to FLINK-15972.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 commented on a change in pull request #11051: [FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical Correlate RelNodes which are containers for Python

2020-02-16 Thread GitBox
hequn8128 commented on a change in pull request #11051: 
[FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical 
Correlate RelNodes which are containers for Python TableFunction
URL: https://github.com/apache/flink/pull/11051#discussion_r379227379
 
 

 ##
 File path: 
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamCorrelateBase.scala
 ##
 @@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.plan.nodes.datastream
+
+import org.apache.calcite.plan.{RelOptCluster, RelTraitSet}
+import org.apache.calcite.rel.{RelNode, RelWriter, SingleRel}
+import org.apache.calcite.rel.core.JoinRelType
+import org.apache.calcite.rex.{RexCall, RexNode}
+import org.apache.flink.table.functions.utils.TableSqlFunction
+import org.apache.flink.table.plan.nodes.CommonCorrelate
+import org.apache.flink.table.plan.nodes.logical.FlinkLogicalTableFunctionScan
+import org.apache.flink.table.plan.schema.RowSchema
+
+/**
+  * Base RelNode for data stream correlate.
+  */
+abstract class DataStreamCorrelateBase(
 
 Review comment:
   Hi @twalthr It seems we can't use the implemented methods of a Scala trait 
from Java ([see 
details](https://alvinalexander.com/scala/how-to-wrap-scala-traits-used-accessed-java-classes-methods)), 
which prevents us from turning this class into a Java one. For this class, it 
needs to extend two traits, i.e., CommonCorrelate and DataStreamRel. Maybe we 
can keep this class as a Scala one for now. What do you think?
   
   I have checked that the Rule classes have been implemented in Java in this 
PR.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 commented on a change in pull request #11051: [FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical Correlate RelNodes which are containers for Python

2020-02-16 Thread GitBox
hequn8128 commented on a change in pull request #11051: 
[FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical 
Correlate RelNodes which are containers for Python TableFunction
URL: https://github.com/apache/flink/pull/11051#discussion_r379227379
 
 

 ##
 File path: 
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamCorrelateBase.scala
 ##
 @@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.plan.nodes.datastream
+
+import org.apache.calcite.plan.{RelOptCluster, RelTraitSet}
+import org.apache.calcite.rel.{RelNode, RelWriter, SingleRel}
+import org.apache.calcite.rel.core.JoinRelType
+import org.apache.calcite.rex.{RexCall, RexNode}
+import org.apache.flink.table.functions.utils.TableSqlFunction
+import org.apache.flink.table.plan.nodes.CommonCorrelate
+import org.apache.flink.table.plan.nodes.logical.FlinkLogicalTableFunctionScan
+import org.apache.flink.table.plan.schema.RowSchema
+
+/**
+  * Base RelNode for data stream correlate.
+  */
+abstract class DataStreamCorrelateBase(
 
 Review comment:
   Hi @twalthr It seems we can't use the implemented methods of a Scala trait 
from Java ([see 
details](https://alvinalexander.com/scala/how-to-wrap-scala-traits-used-accessed-java-classes-methods)), 
which prevents us from turning this class into a Java one. For this class, it 
needs to extend two traits, i.e., CommonCorrelate and DataStreamRel. Maybe we 
can keep this class as a Scala one for now?
   
   I have checked that the Rule classes have been implemented in Java in this 
PR.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-16109) Move the Python scalar operators and table operators to separate package

2020-02-16 Thread Dian Fu (Jira)
Dian Fu created FLINK-16109:
---

 Summary: Move the Python scalar operators and table operators to 
separate package
 Key: FLINK-16109
 URL: https://issues.apache.org/jira/browse/FLINK-16109
 Project: Flink
  Issue Type: Improvement
  Components: API / Python
Reporter: Dian Fu
Assignee: Huang Xingbo
 Fix For: 1.11.0


Currently both the Python scalar operators and table operators are under the 
same package org.apache.flink.table.runtime.operators.python. There are already 
many operators under this package. After introducing the aggregate function 
support and Vectorized Python function support in the future, there will be 
more and more operators under the same package. 

We could improve it with the following package structure: 
org.apache.flink.table.runtime.operators.python.scalar
 org.apache.flink.table.runtime.operators.python.table
org.apache.flink.table.runtime.operators.python.aggregate (in the future)
org.apache.flink.table.runtime.operators.python.scalar.arrow (in the future)

As these classes are internal, it's safe to do so and there are no backwards 
compatibility issues.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16108) StreamSQLExample is failed if running in blink planner

2020-02-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16108:
---

Assignee: Jark Wu

> StreamSQLExample is failed if running in blink planner
> --
>
> Key: FLINK-16108
> URL: https://issues.apache.org/jira/browse/FLINK-16108
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Critical
> Fix For: 1.10.1
>
>
> {{StreamSQLExample}} in flink-examples will fail if the specified planner is 
> the blink planner. The exception is as follows:
> {code}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> Field types of query result and registered TableSink  do not match.
> Query schema: [user: BIGINT, product: STRING, amount: INT]
> Sink schema: [amount: INT, product: STRING, user: BIGINT]
>   at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateSchemaAndApplyImplicitCast(TableSinkUtils.scala:96)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:229)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
>   at 
> org.apache.flink.table.api.java.internal.StreamTableEnvironmentImpl.toDataStream(StreamTableEnvironmentImpl.java:361)
>   at 
> org.apache.flink.table.api.java.internal.StreamTableEnvironmentImpl.toAppendStream(StreamTableEnvironmentImpl.java:269)
>   at 
> org.apache.flink.table.api.java.internal.StreamTableEnvironmentImpl.toAppendStream(StreamTableEnvironmentImpl.java:260)
>   at 
> org.apache.flink.table.examples.java.StreamSQLExample.main(StreamSQLExample.java:90)
> Process finished with exit code 1
> {code}
> That's because the blink planner also validates the sink schema even if it 
> comes from {{toAppendStream()}}. However, 
> {{TableSinkUtils#inferSinkPhysicalDataType}} should derive the sink schema 
> from the query schema when the requested type is a POJO [1], because the 
> field order of a POJO is not deterministic.
> [1]: 
> https://github.com/apache/flink/blob/master/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/sinks/TableSinkUtils.scala#L237



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] bowenli86 commented on a change in pull request #11093: [FLINK-16055][hive] Avoid catalog functions when listing Hive built-i…

2020-02-16 Thread GitBox
bowenli86 commented on a change in pull request #11093: [FLINK-16055][hive] 
Avoid catalog functions when listing Hive built-i…
URL: https://github.com/apache/flink/pull/11093#discussion_r379973331
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/client/HiveShimV120.java
 ##
 @@ -148,6 +151,9 @@ public CatalogColumnStatisticsDataDate 
toFlinkDateColStats(ColumnStatisticsData
 
@Override
public Optional getBuiltInFunctionInfo(String name) {
+   if (isCatalogFunctionName(name)) {
+   return Optional.empty();
+   }
Optional functionInfo = getFunctionInfo(name);
 
if (functionInfo.isPresent() && 
isBuiltInFunctionInfo(functionInfo.get())) {
 
 Review comment:
   aren't built-in functions filtered here already?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 commented on a change in pull request #11093: [FLINK-16055][hive] Avoid catalog functions when listing Hive built-i…

2020-02-16 Thread GitBox
bowenli86 commented on a change in pull request #11093: [FLINK-16055][hive] 
Avoid catalog functions when listing Hive built-i…
URL: https://github.com/apache/flink/pull/11093#discussion_r379973331
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/client/HiveShimV120.java
 ##
 @@ -148,6 +151,9 @@ public CatalogColumnStatisticsDataDate 
toFlinkDateColStats(ColumnStatisticsData
 
@Override
public Optional getBuiltInFunctionInfo(String name) {
+   if (isCatalogFunctionName(name)) {
+   return Optional.empty();
+   }
Optional functionInfo = getFunctionInfo(name);
 
if (functionInfo.isPresent() && 
isBuiltInFunctionInfo(functionInfo.get())) {
 
 Review comment:
   aren't built-in functions filtered here already?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15959) Add min/max number of slots configuration to limit total number of slots

2020-02-16 Thread YufeiLiu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038072#comment-17038072
 ] 

YufeiLiu commented on FLINK-15959:
--

[~xintongsong] Got it: check that the total number of executors is less than 
the maximum before startNewWorker, and return an empty list if the limit is 
exceeded.
How about putting it into ClusterOptions, starting with "cluster.*"?
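
A rough sketch of that guard, with all names made up for illustration (this 
is not the actual ResourceManager API):

{code:java}
import java.util.Collection;
import java.util.Collections;

// Illustrative only: a maximum-slot guard around worker allocation.
abstract class SlotLimitedAllocator<P> {

    private final int maxSlotNum;

    SlotLimitedAllocator(int maxSlotNum) {
        this.maxSlotNum = maxSlotNum;
    }

    Collection<P> startNewWorker(P workerSpec) {
        if (numRegisteredSlots() + numPendingSlots() >= maxSlotNum) {
            // Maximum reached: refuse to start another TaskExecutor.
            return Collections.emptyList();
        }
        return requestNewWorker(workerSpec);
    }

    abstract int numRegisteredSlots();

    abstract int numPendingSlots();

    abstract Collection<P> requestNewWorker(P workerSpec);
}
{code}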

> Add min/max number of slots configuration to limit total number of slots
> 
>
> Key: FLINK-15959
> URL: https://issues.apache.org/jira/browse/FLINK-15959
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: YufeiLiu
>Priority: Major
>
> Flink removed the `-n` option after FLIP-6, changing to a model where the 
> ResourceManager starts a new worker when required. But I think maintaining a 
> certain number of slots is necessary. These workers will start immediately 
> when the ResourceManager starts and would not be released even if all slots 
> are free.
> Here are some reasons:
> # Users actually know how many resources are needed to run a single job, and 
> initializing all workers when the cluster starts can speed up the startup 
> process.
> # Jobs are scheduled in topology order; the next operator won't be scheduled 
> until the prior execution's slot is allocated. The TaskExecutors will start 
> in several batches in some cases, which might slow down the startup.
> # Flink supports 
> [FLINK-12122|https://issues.apache.org/jira/browse/FLINK-12122] [Spread out 
> tasks evenly across all available registered TaskManagers], but it only 
> takes effect if all TMs are registered. Starting all TMs at the beginning 
> can solve this problem.
> *Suggestion:*
> * Add configs "taskmanager.minimum.numberOfTotalSlots" and 
> "taskmanager.maximum.numberOfTotalSlots"; the default behavior stays the 
> same as before.
> * Start enough workers to satisfy the minimum number of slots when the 
> ResourceManager accepts leadership (subtracting recovered workers).
> * Don't complete slot requests until the minimum number of slots is 
> registered, and throw an exception when the maximum is exceeded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15959) Add min/max number of slots configuration to limit total number of slots

2020-02-16 Thread YufeiLiu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038071#comment-17038071
 ] 

YufeiLiu commented on FLINK-15959:
--

[~xintongsong] Got it: check that the total number of executors is less than 
the maximum before startNewWorker, and return an empty list if the limit is 
exceeded.

> Add min/max number of slots configuration to limit total number of slots
> 
>
> Key: FLINK-15959
> URL: https://issues.apache.org/jira/browse/FLINK-15959
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: YufeiLiu
>Priority: Major
>
> Flink removed the `-n` option after FLIP-6, changing to a model where the 
> ResourceManager starts a new worker when required. But I think maintaining a 
> certain number of slots is necessary. These workers will start immediately 
> when the ResourceManager starts and would not be released even if all slots 
> are free.
> Here are some reasons:
> # Users actually know how many resources are needed to run a single job, and 
> initializing all workers when the cluster starts can speed up the startup 
> process.
> # Jobs are scheduled in topology order; the next operator won't be scheduled 
> until the prior execution's slot is allocated. The TaskExecutors will start 
> in several batches in some cases, which might slow down the startup.
> # Flink supports 
> [FLINK-12122|https://issues.apache.org/jira/browse/FLINK-12122] [Spread out 
> tasks evenly across all available registered TaskManagers], but it only 
> takes effect if all TMs are registered. Starting all TMs at the beginning 
> can solve this problem.
> *Suggestion:*
> * Add configs "taskmanager.minimum.numberOfTotalSlots" and 
> "taskmanager.maximum.numberOfTotalSlots"; the default behavior stays the 
> same as before.
> * Start enough workers to satisfy the minimum number of slots when the 
> ResourceManager accepts leadership (subtracting recovered workers).
> * Don't complete slot requests until the minimum number of slots is 
> registered, and throw an exception when the maximum is exceeded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-15959) Add min/max number of slots configuration to limit total number of slots

2020-02-16 Thread YufeiLiu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YufeiLiu updated FLINK-15959:
-
Comment: was deleted

(was: [~xintongsong] Got it: check that the total number of executors is less 
than the maximum before startNewWorker, and return an empty list if the limit 
is exceeded.)

> Add min/max number of slots configuration to limit total number of slots
> 
>
> Key: FLINK-15959
> URL: https://issues.apache.org/jira/browse/FLINK-15959
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: YufeiLiu
>Priority: Major
>
> Flink removed the `-n` option after FLIP-6, changing to a model where the 
> ResourceManager starts a new worker when required. But I think maintaining a 
> certain number of slots is necessary. These workers will start immediately 
> when the ResourceManager starts and would not be released even if all slots 
> are free.
> Here are some reasons:
> # Users actually know how many resources are needed to run a single job, and 
> initializing all workers when the cluster starts can speed up the startup 
> process.
> # Jobs are scheduled in topology order; the next operator won't be scheduled 
> until the prior execution's slot is allocated. The TaskExecutors will start 
> in several batches in some cases, which might slow down the startup.
> # Flink supports 
> [FLINK-12122|https://issues.apache.org/jira/browse/FLINK-12122] [Spread out 
> tasks evenly across all available registered TaskManagers], but it only 
> takes effect if all TMs are registered. Starting all TMs at the beginning 
> can solve this problem.
> *Suggestion:*
> * Add configs "taskmanager.minimum.numberOfTotalSlots" and 
> "taskmanager.maximum.numberOfTotalSlots"; the default behavior stays the 
> same as before.
> * Start enough workers to satisfy the minimum number of slots when the 
> ResourceManager accepts leadership (subtracting recovered workers).
> * Don't complete slot requests until the minimum number of slots is 
> registered, and throw an exception when the maximum is exceeded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16108) StreamSQLExample is failed if running in blink planner

2020-02-16 Thread Jark Wu (Jira)
Jark Wu created FLINK-16108:
---

 Summary: StreamSQLExample is failed if running in blink planner
 Key: FLINK-16108
 URL: https://issues.apache.org/jira/browse/FLINK-16108
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Jark Wu
 Fix For: 1.10.1


{{StreamSQLExample}} in flink-examples will fail if the specified planner is 
the blink planner. The exception is as follows:

{code}
Exception in thread "main" org.apache.flink.table.api.ValidationException: 
Field types of query result and registered TableSink  do not match.
Query schema: [user: BIGINT, product: STRING, amount: INT]
Sink schema: [amount: INT, product: STRING, user: BIGINT]
at 
org.apache.flink.table.planner.sinks.TableSinkUtils$.validateSchemaAndApplyImplicitCast(TableSinkUtils.scala:96)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:229)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
at 
org.apache.flink.table.api.java.internal.StreamTableEnvironmentImpl.toDataStream(StreamTableEnvironmentImpl.java:361)
at 
org.apache.flink.table.api.java.internal.StreamTableEnvironmentImpl.toAppendStream(StreamTableEnvironmentImpl.java:269)
at 
org.apache.flink.table.api.java.internal.StreamTableEnvironmentImpl.toAppendStream(StreamTableEnvironmentImpl.java:260)
at 
org.apache.flink.table.examples.java.StreamSQLExample.main(StreamSQLExample.java:90)

Process finished with exit code 1
{code}

That's because the blink planner also validates the sink schema even if it 
comes from {{toAppendStream()}}. However, 
{{TableSinkUtils#inferSinkPhysicalDataType}} should derive the sink schema 
from the query schema when the requested type is a POJO [1], because the field 
order of a POJO is not deterministic.


[1]: 
https://github.com/apache/flink/blob/master/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/sinks/TableSinkUtils.scala#L237
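
A condensed sketch of the failing pattern, assuming a POJO along the lines of 
the example's Order class and an already created blink-planner 
{{StreamTableEnvironment}} named {{tEnv}} with a registered {{Orders}} table:

{code:java}
// POJO shape assumed from the example; declared order is [user, product, amount].
public static class Order {
    public Long user;
    public String product;
    public Integer amount;
}

// The query schema is [user, product, amount], but the sink schema is derived
// from the POJO's reflected field order [amount, product, user], so schema
// validation fails under the blink planner:
Table result = tEnv.sqlQuery("SELECT user, product, amount FROM Orders");
DataStream<Order> stream = tEnv.toAppendStream(result, Order.class);
{code}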






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11104: [FLINK-16051] make Overview-Subtasks ID starts from 1

2020-02-16 Thread GitBox
flinkbot commented on issue #11104: [FLINK-16051] make Overview-Subtasks ID 
starts from 1
URL: https://github.com/apache/flink/pull/11104#issuecomment-586800028
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 19d97a6442b75fe125a7ee0f6cda2da213143c2d (Mon Feb 17 
03:14:54 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16051) Subtask ID in Overview-Subtasks should start from 1

2020-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16051:
---
Labels: pull-request-available  (was: )

> Subtask ID in Overview-Subtasks should start from 1
> ---
>
> Key: FLINK-16051
> URL: https://issues.apache.org/jira/browse/FLINK-16051
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.10.0
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
>  Labels: pull-request-available
> Attachments: backpressureui.png, checkpointui.png, taskui.png, 
> watermarkui.png
>
>
> The subtask ID in the Subtask UI starts from 0, which is not consistent with 
> the other IDs in backpressure / checkpoint / watermark.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] buptljy opened a new pull request #11104: [FLINK-16051] make Overview-Subtasks ID starts from 1

2020-02-16 Thread GitBox
buptljy opened a new pull request #11104: [FLINK-16051] make Overview-Subtasks 
ID starts from 1
URL: https://github.com/apache/flink/pull/11104
 
 
   ## What is the purpose of the change
   
   The subtask ID in the Subtask UI starts from 0, which is not consistent 
with the other IDs in backpressure / checkpoint / watermark.
   
   ## Brief change log
   
   subtasks ID + 1
   
   ## Verifying this change
   
   ## Does this pull request potentially affect one of the following parts
   
   ## Documentation
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16051) Subtask ID in Overview-Subtasks should start from 1

2020-02-16 Thread Jiayi Liao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Liao updated FLINK-16051:
---
Description: 
The subtask ID in the Subtask UI starts from 0, which is not consistent with 
the other IDs in backpressure / checkpoint / watermark.


  was:
The subtask ID in the Subtask UI starts from 0, but in the Checkpoint UI the 
subtask ID starts from 1.




> Subtask ID in Overview-Subtasks should start from 1
> ---
>
> Key: FLINK-16051
> URL: https://issues.apache.org/jira/browse/FLINK-16051
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.10.0
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
> Attachments: backpressureui.png, checkpointui.png, taskui.png, 
> watermarkui.png
>
>
> The subtask ID in the Subtask UI starts from 0, which is not consistent with 
> the other IDs in backpressure / checkpoint / watermark.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hequn8128 commented on a change in pull request #11051: [FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical Correlate RelNodes which are containers for Python

2020-02-16 Thread GitBox
hequn8128 commented on a change in pull request #11051: 
[FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical 
Correlate RelNodes which are containers for Python TableFunction
URL: https://github.com/apache/flink/pull/11051#discussion_r379968783
 
 

 ##
 File path: flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/stream/StreamExecCorrelateBase.scala
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.planner.plan.nodes.physical.stream
+
+import org.apache.flink.table.dataformat.BaseRow
+import org.apache.flink.table.planner.delegation.StreamPlanner
+import org.apache.flink.table.planner.functions.utils.TableSqlFunction
+import org.apache.flink.table.planner.plan.nodes.exec.{ExecNode, StreamExecNode}
+import org.apache.flink.table.planner.plan.nodes.logical.FlinkLogicalTableFunctionScan
+import org.apache.flink.table.planner.plan.utils.RelExplainUtil
+
+import org.apache.calcite.plan.{RelOptCluster, RelTraitSet}
+import org.apache.calcite.rel.`type`.RelDataType
+import org.apache.calcite.rel.core.JoinRelType
+import org.apache.calcite.rel.{RelNode, RelWriter, SingleRel}
+import org.apache.calcite.rex.{RexCall, RexNode, RexProgram}
+
+import java.util
+
+import scala.collection.JavaConversions._
+
+/**
+  * Base Flink RelNode which matches along with join a user defined table function.
+  */
+abstract class StreamExecCorrelateBase(
+    cluster: RelOptCluster,
+    traitSet: RelTraitSet,
+    inputRel: RelNode,
+    val projectProgram: Option[RexProgram],
+    scan: FlinkLogicalTableFunctionScan,
+    condition: Option[RexNode],
+    outputRowType: RelDataType,
+    joinType: JoinRelType)
+  extends SingleRel(cluster, traitSet, inputRel)
+  with StreamPhysicalRel
+  with StreamExecNode[BaseRow] {
+
+  require(joinType == JoinRelType.INNER || joinType == JoinRelType.LEFT)
+
+  override def producesUpdates: Boolean = false
+
+  override def needsUpdatesAsRetraction(input: RelNode): Boolean = false
+
+  override def consumesRetractions: Boolean = false
+
+  override def producesRetractions: Boolean = false
+
+  override def requireWatermark: Boolean = false
+
+  override def deriveRowType(): RelDataType = outputRowType
+
+  override def copy(traitSet: RelTraitSet, inputs: java.util.List[RelNode]): RelNode = {
+    copy(traitSet, inputs.get(0), projectProgram, outputRowType)
+  }
+
+  /**
+    * Note: do not passing member 'child' because singleRel.replaceInput may update 'input' rel.
+    */
+  def copy(
+      traitSet: RelTraitSet,
+      newChild: RelNode,
+      projectProgram: Option[RexProgram],
+      outputType: RelDataType): RelNode
+
+  override def explainTerms(pw: RelWriter): RelWriter = {
+    val rexCall = scan.getCall.asInstanceOf[RexCall]
+    val sqlFunction = rexCall.getOperator.asInstanceOf[TableSqlFunction]
 
 Review comment:
   The test failed due to this line. We can remove it, since `sqlFunction` is never used.
   Remember to remove the now-unused import after removing this line.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16051) Subtask ID in Overview-Subtasks should start from 1

2020-02-16 Thread Jiayi Liao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Liao updated FLINK-16051:
---
Summary: Subtask ID in Overview-Subtasks should start from 1  (was: Subtask 
id in Checkpoint UI not consistent with Subtask UI)

> Subtask ID in Overview-Subtasks should start from 1
> ---
>
> Key: FLINK-16051
> URL: https://issues.apache.org/jira/browse/FLINK-16051
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.10.0
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
> Attachments: backpressureui.png, checkpointui.png, taskui.png, 
> watermarkui.png
>
>
> The subtask id in Subtask UI starts from 0, but in Checkpoint UI subtask id 
> starts from 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hequn8128 commented on a change in pull request #11051: [FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical Correlate RelNodes which are containers for Python

2020-02-16 Thread GitBox
hequn8128 commented on a change in pull request #11051: 
[FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical 
Correlate RelNodes which are containers for Python TableFunction
URL: https://github.com/apache/flink/pull/11051#discussion_r379968881
 
 

 ##
 File path: flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/batch/BatchExecCorrelateBase.scala
 ##
 @@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.planner.plan.nodes.physical.batch
+
+import org.apache.flink.runtime.operators.DamBehavior
+import org.apache.flink.table.dataformat.BaseRow
+import org.apache.flink.table.planner.delegation.BatchPlanner
+import org.apache.flink.table.planner.functions.utils.TableSqlFunction
+import org.apache.flink.table.planner.plan.`trait`.{FlinkRelDistribution, FlinkRelDistributionTraitDef, TraitUtil}
+import org.apache.flink.table.planner.plan.nodes.exec.{BatchExecNode, ExecNode}
+import org.apache.flink.table.planner.plan.nodes.logical.FlinkLogicalTableFunctionScan
+import org.apache.flink.table.planner.plan.utils.RelExplainUtil
+
+import org.apache.calcite.plan.{RelOptCluster, RelOptRule, RelTraitSet}
+import org.apache.calcite.rel.`type`.RelDataType
+import org.apache.calcite.rel.core.{Correlate, JoinRelType}
+import org.apache.calcite.rel.{RelCollationTraitDef, RelDistribution, RelFieldCollation, RelNode, RelWriter, SingleRel}
+import org.apache.calcite.rex.{RexCall, RexInputRef, RexNode, RexProgram}
+import org.apache.calcite.sql.SqlKind
+import org.apache.calcite.util.mapping.{Mapping, MappingType, Mappings}
+
+import java.util
+
+import scala.collection.JavaConversions._
+
+/**
+  * Base Batch physical RelNode for [[Correlate]] (user defined table function).
+  */
+abstract class BatchExecCorrelateBase(
+    cluster: RelOptCluster,
+    traitSet: RelTraitSet,
+    inputRel: RelNode,
+    scan: FlinkLogicalTableFunctionScan,
+    condition: Option[RexNode],
+    projectProgram: Option[RexProgram],
+    outputRowType: RelDataType,
+    joinType: JoinRelType)
+  extends SingleRel(cluster, traitSet, inputRel)
+  with BatchPhysicalRel
+  with BatchExecNode[BaseRow] {
+
+  require(joinType == JoinRelType.INNER || joinType == JoinRelType.LEFT)
+
+  override def deriveRowType(): RelDataType = outputRowType
+
+  override def copy(traitSet: RelTraitSet, inputs: java.util.List[RelNode]): RelNode = {
+    copy(traitSet, inputs.get(0), projectProgram, outputRowType)
+  }
+
+  /**
+    * Note: do not passing member 'child' because singleRel.replaceInput may update 'input' rel.
+    */
+  def copy(
+      traitSet: RelTraitSet,
+      child: RelNode,
+      projectProgram: Option[RexProgram],
+      outputType: RelDataType): RelNode
+
+  override def explainTerms(pw: RelWriter): RelWriter = {
+    val rexCall = scan.getCall.asInstanceOf[RexCall]
+    val sqlFunction = rexCall.getOperator.asInstanceOf[TableSqlFunction]
 
 Review comment:
   ditto


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16051) Subtask id in Checkpoint UI not consistent with Subtask UI

2020-02-16 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang updated FLINK-16051:
-
Issue Type: Improvement  (was: Bug)

> Subtask id in Checkpoint UI not consistent with Subtask UI
> --
>
> Key: FLINK-16051
> URL: https://issues.apache.org/jira/browse/FLINK-16051
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.10.0
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
> Attachments: backpressureui.png, checkpointui.png, taskui.png, 
> watermarkui.png
>
>
> The subtask id in Subtask UI starts from 0, but in Checkpoint UI subtask id 
> starts from 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16051) Subtask id in Checkpoint UI not consistent with Subtask UI

2020-02-16 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-16051:


Assignee: Jiayi Liao

> Subtask id in Checkpoint UI not consistent with Subtask UI
> --
>
> Key: FLINK-16051
> URL: https://issues.apache.org/jira/browse/FLINK-16051
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.10.0
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
> Attachments: backpressureui.png, checkpointui.png, taskui.png, 
> watermarkui.png
>
>
> The subtask id in Subtask UI starts from 0, but in Checkpoint UI subtask id 
> starts from 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16082) Translate "Overview" page of "Streaming Concepts" into Chinese

2020-02-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16082:
---

Assignee: Benchao Li

> Translate "Overview" page of "Streaming Concepts" into Chinese
> --
>
> Key: FLINK-16082
> URL: https://issues.apache.org/jira/browse/FLINK-16082
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Benchao Li
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/streaming/
> The markdown file is located in {{flink/docs/dev/table/streaming/index.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16081) Translate "Overview" page of "Table API & SQL" into Chinese

2020-02-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16081:
---

Assignee: Benchao Li

> Translate "Overview" page of "Table API & SQL" into Chinese
> ---
>
> Key: FLINK-16081
> URL: https://issues.apache.org/jira/browse/FLINK-16081
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Benchao Li
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/
> The markdown file is located in {{flink/docs/dev/table/index.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hequn8128 commented on a change in pull request #10995: [FLINK-15847][ml] Include flink-ml-api and flink-ml-lib in opt

2020-02-16 Thread GitBox
hequn8128 commented on a change in pull request #10995: [FLINK-15847][ml] 
Include flink-ml-api and flink-ml-lib in opt
URL: https://github.com/apache/flink/pull/10995#discussion_r379951593
 
 

 ##
 File path: flink-dist/src/main/assemblies/opt.xml
 ##
 @@ -168,6 +168,14 @@

flink-python_${scala.binary.version}-${project.version}.jar
0644

+
+   
+   
 
 Review comment:
   For the table module, it makes no sense to let any sub-module bundle cep, so an
uber module is used to create the bundled jar. But for the ml module, it makes
sense to let flink-ml-lib bundle the flink-ml-api module, so that we can simply
put the flink-ml-lib jar into opt.
   
   However, as I said, I'm also fine with an uber module and have created one for
flink-ml in the latest PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11103: [hotfix][docs] Regenerate documentation

2020-02-16 Thread GitBox
flinkbot commented on issue #11103: [hotfix][docs] Regenerate documentation
URL: https://github.com/apache/flink/pull/11103#issuecomment-586789425
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 874b88f319fd15672d931e25c187392b0719f135 (Mon Feb 17 
02:14:26 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] HuangXingBo opened a new pull request #11103: [hotfix][docs] Regenerate documentation

2020-02-16 Thread GitBox
HuangXingBo opened a new pull request #11103: [hotfix][docs] Regenerate 
documentation
URL: https://github.com/apache/flink/pull/11103
 
 
   ## What is the purpose of the change
   
   *Regenerate documentation of the Kubernetes configuration*
   
   
   ## Brief change log
   
 - *kubernetes_config_configuration.html*
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   *docs without test*
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16081) Translate "Overview" page of "Table API & SQL" into Chinese

2020-02-16 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038038#comment-17038038
 ] 

Benchao Li commented on FLINK-16081:


[~jark] I can help translate this one.

> Translate "Overview" page of "Table API & SQL" into Chinese
> ---
>
> Key: FLINK-16081
> URL: https://issues.apache.org/jira/browse/FLINK-16081
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/
> The markdown file is located in {{flink/docs/dev/table/index.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16082) Translate "Overview" page of "Streaming Concepts" into Chinese

2020-02-16 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038040#comment-17038040
 ] 

Benchao Li commented on FLINK-16082:


[~jark] I can help translate this one.

> Translate "Overview" page of "Streaming Concepts" into Chinese
> --
>
> Key: FLINK-16082
> URL: https://issues.apache.org/jira/browse/FLINK-16082
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/streaming/
> The markdown file is located in {{flink/docs/dev/table/streaming/index.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11102: [FLINK-16084][docs] Translate /dev/table/streaming/time_attributes.zh.md

2020-02-16 Thread GitBox
flinkbot commented on issue #11102: [FLINK-16084][docs] Translate 
/dev/table/streaming/time_attributes.zh.md
URL: https://github.com/apache/flink/pull/11102#issuecomment-586788166
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7a5de5c091f7f5245b7d0a82328aff36d3a835cb (Mon Feb 17 
02:06:46 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16084) Translate "Time Attributes" page of "Streaming Concepts" into Chinese

2020-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16084:
---
Labels: pull-request-available  (was: )

> Translate "Time Attributes" page of "Streaming Concepts" into Chinese 
> --
>
> Key: FLINK-16084
> URL: https://issues.apache.org/jira/browse/FLINK-16084
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Benchao Li
>Priority: Major
>  Labels: pull-request-available
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/streaming/time_attributes.html
> The markdown file is located in 
> {{flink/docs/dev/table/streaming/time_attributes.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] libenchao opened a new pull request #11102: [FLINK-16084][docs] Translate /dev/table/streaming/time_attributes.zh.md

2020-02-16 Thread GitBox
libenchao opened a new pull request #11102: [FLINK-16084][docs] Translate 
/dev/table/streaming/time_attributes.zh.md
URL: https://github.com/apache/flink/pull/11102
 
 
   
   
   ## What is the purpose of the change
   
Translate /dev/table/streaming/time_attributes.zh.md
   
   ## Brief change log
   
Translate /dev/table/streaming/time_attributes.zh.md
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15959) Add min/max number of slots configuration to limit total number of slots

2020-02-16 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038036#comment-17038036
 ] 

Xintong Song commented on FLINK-15959:
--

Minor:
I would suggest replacing "taskmanager.[minimum|maximum].numberOfTotalSlots"
with "slotmanager.[min|max]-slots". "taskmanager.*" options are usually per-TM
configurations, while what we are discussing are cluster-level min/max limits.
Besides, IIUC, these configuration options would be used mostly by the
resource/slot manager rather than by the task managers.
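
A minimal sketch of how such options might be declared under the suggested
naming. The keys, defaults, and holder class here are illustrative assumptions,
not actual Flink options:
{code}
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

// Hypothetical option declarations following the "slotmanager.*" naming
// suggested above; keys and defaults are illustrative only.
public class SlotManagerLimitOptions {

    public static final ConfigOption<Integer> MIN_SLOTS =
            ConfigOptions.key("slotmanager.min-slots")
                    .intType()
                    .defaultValue(0); // no minimum by default

    public static final ConfigOption<Integer> MAX_SLOTS =
            ConfigOptions.key("slotmanager.max-slots")
                    .intType()
                    .defaultValue(Integer.MAX_VALUE); // effectively unlimited
}
{code}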

> Add min/max number of slots configuration to limit total number of slots
> 
>
> Key: FLINK-15959
> URL: https://issues.apache.org/jira/browse/FLINK-15959
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: YufeiLiu
>Priority: Major
>
> Flink removed the `-n` option after FLIP-6, changing to having the ResourceManager
> start a new worker when required. But I think maintaining a certain number of
> slots is necessary. These workers will start immediately when the ResourceManager
> starts and would not be released even if all slots are free.
> Here are some reasons:
> # Users actually know how many resources are needed when running a single job;
> initializing all workers when the cluster starts can speed up the startup process.
> # Jobs are scheduled in topology order, and the next operator won't be scheduled
> until the prior execution's slot is allocated. The TaskExecutors will start in
> several batches in some cases, which might slow down the startup speed.
> # Flink supports
> [FLINK-12122|https://issues.apache.org/jira/browse/FLINK-12122] [Spread out
> tasks evenly across all available registered TaskManagers], but it only takes
> effect once all TMs are registered. Starting all TMs at the beginning can solve
> this problem.
> *suggestion:*
> * Add configs "taskmanager.minimum.numberOfTotalSlots" and
> "taskmanager.maximum.numberOfTotalSlots"; the default behavior stays the same
> as before.
> * Start enough workers to satisfy the minimum number of slots when the
> ResourceManager acquires leadership (subtracting recovered workers).
> * Don't complete slot requests until the minimum number of slots is registered,
> and throw an exception when the maximum is exceeded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16088) Translate "Query Configuration" page of "Streaming Concepts" into Chinese

2020-02-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16088:
---

Assignee: QinChao

> Translate "Query Configuration" page of "Streaming Concepts" into Chinese 
> --
>
> Key: FLINK-16088
> URL: https://issues.apache.org/jira/browse/FLINK-16088
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: QinChao
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/streaming/query_configuration.html
> The markdown file is located in 
> {{flink/docs/dev/table/streaming/query_configuration.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16088) Translate "Query Configuration" page of "Streaming Concepts" into Chinese

2020-02-16 Thread QinChao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038034#comment-17038034
 ] 

QinChao commented on FLINK-16088:
-

Hi Jark, can you assign this to me? I will finish this as soon as possible.

> Translate "Query Configuration" page of "Streaming Concepts" into Chinese 
> --
>
> Key: FLINK-16088
> URL: https://issues.apache.org/jira/browse/FLINK-16088
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/streaming/query_configuration.html
> The markdown file is located in 
> {{flink/docs/dev/table/streaming/query_configuration.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15959) Add min/max number of slots configuration to limit total number of slots

2020-02-16 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038027#comment-17038027
 ] 

Xintong Song commented on FLINK-15959:
--

[~liuyufei], thanks for updating the proposal.

Regarding the max limit, I think the RM should guarantee that it is not exceeded.
To be specific, the RM can check how many workers / slots are already started,
including registered and pending ones, and refuse to start new workers once the
max limit is reached.

The background of having a max limit is to control the resource consumption of a
batch job, so that it can be executed with far fewer slots than its parallelism
without using up the cluster resources.
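
A minimal sketch of that check, assuming hypothetical counters for registered
and pending slots (the real SlotManager bookkeeping is more involved):
{code}
// Hypothetical guard for the max-slots limit described above; field and
// method names are illustrative, not the actual SlotManager API.
public final class SlotLimitGuard {

    private final int maxSlots;
    private int registeredSlots;
    private int pendingSlots;

    public SlotLimitGuard(int maxSlots) {
        this.maxSlots = maxSlots;
    }

    /** Returns true if a new worker offering the given slots may be started. */
    public boolean canStartWorker(int slotsPerWorker) {
        return registeredSlots + pendingSlots + slotsPerWorker <= maxSlots;
    }

    /** Records a requested-but-not-yet-registered worker. */
    public void onWorkerRequested(int slotsPerWorker) {
        pendingSlots += slotsPerWorker;
    }

    /** Moves a worker's slots from pending to registered once it registers. */
    public void onWorkerRegistered(int slotsPerWorker) {
        pendingSlots -= slotsPerWorker;
        registeredSlots += slotsPerWorker;
    }
}
{code}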

> Add min/max number of slots configuration to limit total number of slots
> 
>
> Key: FLINK-15959
> URL: https://issues.apache.org/jira/browse/FLINK-15959
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: YufeiLiu
>Priority: Major
>
> Flink removed the `-n` option after FLIP-6, changing to having the ResourceManager
> start a new worker when required. But I think maintaining a certain number of
> slots is necessary. These workers will start immediately when the ResourceManager
> starts and would not be released even if all slots are free.
> Here are some reasons:
> # Users actually know how many resources are needed when running a single job;
> initializing all workers when the cluster starts can speed up the startup process.
> # Jobs are scheduled in topology order, and the next operator won't be scheduled
> until the prior execution's slot is allocated. The TaskExecutors will start in
> several batches in some cases, which might slow down the startup speed.
> # Flink supports
> [FLINK-12122|https://issues.apache.org/jira/browse/FLINK-12122] [Spread out
> tasks evenly across all available registered TaskManagers], but it only takes
> effect once all TMs are registered. Starting all TMs at the beginning can solve
> this problem.
> *suggestion:*
> * Add configs "taskmanager.minimum.numberOfTotalSlots" and
> "taskmanager.maximum.numberOfTotalSlots"; the default behavior stays the same
> as before.
> * Start enough workers to satisfy the minimum number of slots when the
> ResourceManager acquires leadership (subtracting recovered workers).
> * Don't complete slot requests until the minimum number of slots is registered,
> and throw an exception when the maximum is exceeded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16106) Add PersistedList to the SDK

2020-02-16 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reassigned FLINK-16106:
---

Assignee: Tzu-Li (Gordon) Tai

> Add PersistedList to the SDK
> 
>
> Key: FLINK-16106
> URL: https://issues.apache.org/jira/browse/FLINK-16106
> Project: Flink
>  Issue Type: Task
>  Components: Stateful Functions
>Reporter: Igal Shilman
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
>
> Now that statefun is no longer multiplexing state in a single column family,
> we can add a PersistedList to the SDK.
> A persisted list would support addition (add and addAll) and iteration over
> the items.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16107) github link on statefun.io should point to https://github.com/apache/flink-statefun

2020-02-16 Thread Bowen Li (Jira)
Bowen Li created FLINK-16107:


 Summary: github link on statefun.io should point to 
https://github.com/apache/flink-statefun
 Key: FLINK-16107
 URL: https://issues.apache.org/jira/browse/FLINK-16107
 Project: Flink
  Issue Type: Improvement
  Components: Stateful Functions
Reporter: Bowen Li
Assignee: Tzu-Li (Gordon) Tai


The GitHub link on the statefun.io website should point to
[https://github.com/apache/flink-statefun] rather than
[https://github.com/ververica/stateful-functions].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lonerzzz commented on issue #11042: FLINK-15744 Some TaskManager Task exceptions are logged as info

2020-02-16 Thread GitBox
lonerzzz commented on issue #11042: FLINK-15744 Some TaskManager Task 
exceptions are logged as info
URL: https://github.com/apache/flink/pull/11042#issuecomment-586757018
 
 
   @zentol @aljoscha Upon reading issue #5399, it didn't seem that any firm
position was taken there. The argument for logging JobManager output at the info
level assumes an ability to recover, which is not true in all cases. Two
situations I have encountered where recovery does not occur, or occurs slowly:
   
   1) Job submission failure - there are many errors from which the submission
will not recover without manual intervention. Because these errors are logged at
the info level, the JobManager must always be run with info-level logging
wherever jobs are regularly submitted, or the errors will not be visible.
   2) Rebalancing errors - in several situations I have encountered, when the
number of task slots is close to the number of tasks, a transient infrastructure
error can leave jobs stuck awaiting deployment and rebalancing for very long
periods. While recovery may happen, it can take a while, and a warning would at
least allow operations staff to take manual corrective action rather than
discovering that a job in a pipeline is not processing because it is awaiting
resources.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol commented on a change in pull request #10995: [FLINK-15847][ml] Include flink-ml-api and flink-ml-lib in opt

2020-02-16 Thread GitBox
zentol commented on a change in pull request #10995: [FLINK-15847][ml] Include 
flink-ml-api and flink-ml-lib in opt
URL: https://github.com/apache/flink/pull/10995#discussion_r379933502
 
 

 ##
 File path: flink-ml-parent/flink-ml-lib/pom.xml
 ##
 @@ -57,4 +57,30 @@ under the License.
1.1.2


+	<build>
+		<plugins>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-shade-plugin</artifactId>
+				<executions>
+					<execution>
+						<id>shade-flink</id>
+						<phase>package</phase>
+						<goals>
+							<goal>shade</goal>
+						</goals>
+						<configuration>
+							<artifactSet>
+								<includes>
+									<include>com.github.fommil.netlib:core</include>
 
 Review comment:
   If the dependency is not bundled in 1.10 then yes, the licensing part of the 
commit should be reverted.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol commented on a change in pull request #10995: [FLINK-15847][ml] Include flink-ml-api and flink-ml-lib in opt

2020-02-16 Thread GitBox
zentol commented on a change in pull request #10995: [FLINK-15847][ml] Include 
flink-ml-api and flink-ml-lib in opt
URL: https://github.com/apache/flink/pull/10995#discussion_r379933420
 
 

 ##
 File path: flink-dist/src/main/assemblies/opt.xml
 ##
 @@ -168,6 +168,14 @@

flink-python_${scala.binary.version}-${project.version}.jar
0644

+
+   
+   
 
 Review comment:
   Where exactly do they differ? They both have multiple sub-modules that you 
want to ship as a single jar.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-16106) Add PersistedList to the SDK

2020-02-16 Thread Igal Shilman (Jira)
Igal Shilman created FLINK-16106:


 Summary: Add PersistedList to the SDK
 Key: FLINK-16106
 URL: https://issues.apache.org/jira/browse/FLINK-16106
 Project: Flink
  Issue Type: Task
  Components: Stateful Functions
Reporter: Igal Shilman


Now that statefun is no longer multiplexing state in a single column family,
we can add a PersistedList to the SDK.
A persisted list would support addition (add and addAll) and iteration over
the items.
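
A minimal sketch of the contract such a type could offer, with an in-memory list
standing in for the persisted state purely for illustration (all names here are
hypothetical, not the final SDK API):
{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Hypothetical PersistedList sketch; the ArrayList backing is a stand-in
// for the actual persisted (e.g. RocksDB-backed) state.
public final class PersistedList<E> {

    private final String name;
    private final List<E> backing = new ArrayList<>();

    private PersistedList(String name) {
        this.name = name;
    }

    public static <E> PersistedList<E> of(String name) {
        return new PersistedList<>(name);
    }

    /** Appends a single item. */
    public void add(E element) {
        backing.add(element);
    }

    /** Appends all given items. */
    public void addAll(Collection<E> elements) {
        backing.addAll(elements);
    }

    /** Iterates over the stored items. */
    public Iterable<E> view() {
        return backing;
    }
}
{code}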




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-13978) Switch to Azure Pipelines as a CI tool for Flink

2020-02-16 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17037916#comment-17037916
 ] 

Robert Metzger edited comment on FLINK-13978 at 2/16/20 6:59 PM:
-

Building pushes / PRs implemented in "master" in 
84fd23d82c2908192d58186d6e061c89b018cda5

Will close this ticket when all related tasks are closed & travis support is 
removed.


was (Author: rmetzger):
Resolved in "master" in 84fd23d82c2908192d58186d6e061c89b018cda5

> Switch to Azure Pipelines as a CI tool for Flink
> 
>
> Key: FLINK-13978
> URL: https://issues.apache.org/jira/browse/FLINK-13978
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> See ML discussion: 
> [https://lists.apache.org/thread.html/b90aa518fcabce94f8e1de4132f46120fae613db6e95a2705f1bd1ea@%3Cdev.flink.apache.org%3E]
>  
> We want to try out Azure Pipelines for the following reasons:
>  * more mature system (compared to travis)
>  * 10 parallel, 6 hrs builds for open source
>  * ability to add custom machines
>  
> (See also INFRA-17030) 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (FLINK-13978) Switch to Azure Pipelines as a CI tool for Flink

2020-02-16 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reopened FLINK-13978:


> Switch to Azure Pipelines as a CI tool for Flink
> 
>
> Key: FLINK-13978
> URL: https://issues.apache.org/jira/browse/FLINK-13978
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> See ML discussion: 
> [https://lists.apache.org/thread.html/b90aa518fcabce94f8e1de4132f46120fae613db6e95a2705f1bd1ea@%3Cdev.flink.apache.org%3E]
>  
> We want to try out Azure Pipelines for the following reasons:
>  * more mature system (compared to travis)
>  * 10 parallel, 6 hrs builds for open source
>  * ability to add custom machines
>  
> (See also INFRA-17030) 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-13978) Switch to Azure Pipelines as a CI tool for Flink

2020-02-16 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger resolved FLINK-13978.

Fix Version/s: 1.11.0
   Resolution: Fixed

Resolved in "master" in 84fd23d82c2908192d58186d6e061c89b018cda5

> Switch to Azure Pipelines as a CI tool for Flink
> 
>
> Key: FLINK-13978
> URL: https://issues.apache.org/jira/browse/FLINK-13978
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> See ML discussion: 
> [https://lists.apache.org/thread.html/b90aa518fcabce94f8e1de4132f46120fae613db6e95a2705f1bd1ea@%3Cdev.flink.apache.org%3E]
>  
> We want to try out Azure Pipelines for the following reasons:
>  * more mature system (compared to travis)
>  * 10 parallel, 6 hrs builds for open source
>  * ability to add custom machines
>  
> (See also INFRA-17030) 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] asfgit closed pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-16 Thread GitBox
asfgit closed pull request #10976: [FLINK-13978][build system] Add experimental 
support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11101: [FLINK-16068][table-planner-blink] Fix computed column with escaped k…

2020-02-16 Thread GitBox
flinkbot edited a comment on issue #11101: [FLINK-16068][table-planner-blink] 
Fix computed column with escaped k…
URL: https://github.com/apache/flink/pull/11101#issuecomment-586719516
 
 
   
   ## CI report:
   
   * 994704592f9aa725e5370210c6df1c6de0c2ddc8 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/149173873) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5227)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-16 Thread GitBox
flinkbot edited a comment on issue #10976: [FLINK-13978][build system] Add 
experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#issuecomment-580319851
 
 
   
   ## CI report:
   
   * 322bf186867ed0ec1a867ba7919e39e3c437aba0 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/146777318) 
   * 0e9afb2b74b214deabb3f09ddd06bca6c3d3b788 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/146897244) 
   * 438cbcb4f6388f66b16e9b49f9e9c1f44dee89d4 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/147715780) 
   * 5a81004ac5c7461921fa679bb1a428c155ae4a25 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/148552963) 
   * 4f4267a436da19e3216dddb529c5ace4c9d0efba Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/148558147) 
   * 0d869f3770aeb5cc565362e98d229003a827bca8 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/148766925) 
   * 56d01e5fd570d28bb86ff7ead98e90f664289bfe Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/148771676) 
   * 93946ab18cece030fa526fce3b7f34206fcca643 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/148988925) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5184)
 
   * 2656304328dfe44278c4bcdd02e07ae9f2786747 UNKNOWN
   * af8c96151b79e85319ed0fa43a60cb58c8927f96 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/149168315) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5225)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

2020-02-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16070:
---

Assignee: godfrey he

> Blink planner can not extract correct unique key for UpsertStreamTableSink 
> ---
>
> Key: FLINK-16070
> URL: https://issues.apache.org/jira/browse/FLINK-16070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Assignee: godfrey he
>Priority: Critical
> Fix For: 1.10.1
>
>
> I reproduced an Elasticsearch6UpsertTableSink issue which a user reported on the
> mailing list [1]: the Blink planner cannot extract the correct unique key for the
> following query, but the legacy planner works well.
> {code:java}
> // user code
> INSERT INTO ES6_ZHANGLE_OUTPUT
> SELECT aggId, pageId, ts_min as ts,
>        count(case when eventId = 'exposure' then 1 else null end) as expoCnt,
>        count(case when eventId = 'click' then 1 else null end) as clkCnt
> FROM (
>     SELECT
>         'ZL_001' as aggId,
>         pageId,
>         eventId,
>         recvTime,
>         ts2Date(recvTime) as ts_min
>     FROM kafka_zl_etrack_event_stream
>     WHERE eventId in ('exposure', 'click')
> ) as t1
> GROUP BY aggId, pageId, ts_min
> {code}
> I found that the blink planner cannot extract the correct unique key in
> `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, while the legacy planner works
> well in
> `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`.
> A simple ETL job to reproduce this issue can be found at [2].
>  
> [1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]
> [2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]
>  
>  
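
For reference, a hedged sketch of the planner-side lookup named above, calling
the blink planner's metadata query (the wrapper class here is only an
illustration, not part of the planner):
{code}
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.util.ImmutableBitSet;
import org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery;

import java.util.Set;

// Illustrative helper: ask the metadata query for the unique keys of the
// plan node feeding the upsert sink. For the query above this would be
// expected to contain {aggId, pageId, ts_min}.
public class UniqueKeyCheck {

    public static Set<ImmutableBitSet> uniqueKeysOf(RelNode relNode) {
        FlinkRelMetadataQuery fmq = FlinkRelMetadataQuery.reuseOrCreate(
                relNode.getCluster().getMetadataQuery());
        return fmq.getUniqueKeys(relNode);
    }
}
{code}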



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11101: [FLINK-16068][table-planner-blink] Fix computed column with escaped k…

2020-02-16 Thread GitBox
flinkbot commented on issue #11101: [FLINK-16068][table-planner-blink] Fix 
computed column with escaped k…
URL: https://github.com/apache/flink/pull/11101#issuecomment-586719516
 
 
   
   ## CI report:
   
   * 994704592f9aa725e5370210c6df1c6de0c2ddc8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-15702) Make sqlClient classloader aligned with other components

2020-02-16 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen closed FLINK-15702.
-
  Assignee: liupengcheng
Resolution: Fixed

master via 946d25ea9c912a1c190540e74dfd28cd9e9bede6

> Make sqlClient classloader aligned with other components
> 
>
> Key: FLINK-15702
> URL: https://issues.apache.org/jira/browse/FLINK-15702
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: liupengcheng
>Assignee: liupengcheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, the Flink sqlClient still uses a hardcoded `parentFirst` classloader
> to load user-specified jars and libraries, which easily causes class conflicts.
> In [FLINK-13749|https://issues.apache.org/jira/browse/FLINK-13749], we already
> made the classloader consistent in both the client and the remote components.
> So I think we should do the same for the sqlClient.
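
For illustration, the cluster-side behavior this aligns with is controlled by the
`classloader.resolve-order` option; a minimal sketch of setting it, assuming the
standard Configuration API:
{code}
import org.apache.flink.configuration.Configuration;

// Sketch only: "child-first" is the default resolve order used by the other
// components; the SQL Client change makes it honor the same setting.
public class ClassLoaderOrderExample {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.setString("classloader.resolve-order", "child-first");
        System.out.println(conf);
    }
}
{code}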



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

