[jira] [Created] (FLINK-11798) Incorrect Kubernetes Documentation

2019-03-03 Thread Pritesh Patel (JIRA)
Pritesh Patel created FLINK-11798:
-

 Summary: Incorrect Kubernetes Documentation
 Key: FLINK-11798
 URL: https://issues.apache.org/jira/browse/FLINK-11798
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Kubernetes
Affects Versions: 1.7.2
Reporter: Pritesh Patel


I have been trying to use the Kubernetes session cluster manifests provided in 
the documentation. The -Dtaskmanager.host flag doesn't seem to be passed 
through, so the TaskManager falls back to the pod name as its host name. This 
won't work.

The current docs state the args should be:

 
{code:java}
args: 
- taskmanager 
- "-Dtaskmanager.host=$(K8S_POD_IP)"
{code}
 

I did manage to get it to work by using this manifest for the taskmanager 
instead. This wasted a lot of time, as the fix was very hard to find.
{code:java}
args: 

- taskmanager.sh
- -Dtaskmanager.host=$(K8S_POD_IP)
- -Djobmanager.rpc.address=$(JOB_MANAGER_RPC_ADDRESS) 
{code}
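
For the {{$(K8S_POD_IP)}} substitution above to resolve, the variable must also be defined on the container. One common way (a minimal sketch, assuming the env var name used in the docs' manifest) is the Kubernetes downward API:

{code:yaml}
# Sketch: define K8S_POD_IP on the taskmanager container so the
# $(K8S_POD_IP) reference in args can be substituted by Kubernetes.
env:
- name: K8S_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
{code}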



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9459) Maven enforcer plugin prevents compilation with HDP's Hadoop

2019-03-03 Thread Truong Duc Kien (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782727#comment-16782727
 ] 

Truong Duc Kien commented on FLINK-9459:


This doesn't seem to happen anymore with Flink 1.7.

> Maven enforcer plugin prevents compilation with HDP's Hadoop
> 
>
> Key: FLINK-9459
> URL: https://issues.apache.org/jira/browse/FLINK-9459
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.5.0
>Reporter: Truong Duc Kien
>Priority: Major
>
> Compiling Flink with Hortonworks HDP's version of Hadoop currently fails 
> because the Maven Enforcer Plugin detects dependency convergence problems in 
> their Hadoop artifacts.
>  
> The command used is
>  
> {noformat}
> mvn clean install -DskipTests -Dcheckstyle.skip=true 
> -Dmaven.javadoc.skip=true  -Pvendor-repos -Dhadoop.version=2.7.3.2.6.5.0-292
> {noformat}
>  
> The problems:
> {noformat}
> Dependency convergence error for 
> com.fasterxml.jackson.core:jackson-core:2.6.0 paths to dependency are:    
>  
> +-org.apache.flink:flink-bucketing-sink-test:1.5-SNAPSHOT   
>  +-org.apache.flink:flink-shaded-hadoop2:1.5-SNAPSHOT 
>    +-com.microsoft.azure:azure-storage:5.4.0 
>  +-com.fasterxml.jackson.core:jackson-core:2.6.0 
> and 
> +-org.apache.flink:flink-bucketing-sink-test:1.5-SNAPSHOT 
>  +-org.apache.flink:flink-shaded-hadoop2:1.5-SNAPSHOT 
>    +-com.fasterxml.jackson.core:jackson-core:2.6.0 
> and   
>   
> +-org.apache.flink:flink-bucketing-sink-test:1.5-SNAPSHOT 
>  +-org.apache.flink:flink-shaded-hadoop2:1.5-SNAPSHOT 
>    +-com.fasterxml.jackson.core:jackson-databind:2.2.3 
>  +-com.fasterxml.jackson.core:jackson-core:2.2.3 
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message: Failed while enforcing releasability. See above detailed 
> error message. 
> [INFO] FAILURE build of project org.apache.flink:flink-bucketing-sink-test   
> {noformat}
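
For readers still hitting the quoted convergence error on older versions: errors like this can typically be resolved by pinning the conflicting artifact in the build's {{dependencyManagement}} section (a hedged sketch, not the fix Flink applied; adjust coordinates and version to your build):

{code:xml}
<dependencyManagement>
  <dependencies>
    <!-- Pin jackson-core so all transitive paths converge on one version. -->
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>2.6.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}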





[jira] [Closed] (FLINK-9459) Maven enforcer plugin prevents compilation with HDP's Hadoop

2019-03-03 Thread Truong Duc Kien (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Truong Duc Kien closed FLINK-9459.
--
   Resolution: Fixed
Fix Version/s: 1.7.3

> Maven enforcer plugin prevents compilation with HDP's Hadoop
> 
>
> Key: FLINK-9459
> URL: https://issues.apache.org/jira/browse/FLINK-9459
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.5.0
>Reporter: Truong Duc Kien
>Priority: Major
> Fix For: 1.7.3
>
>
> Compiling Flink with Hortonworks HDP's version of Hadoop currently fails 
> because the Maven Enforcer Plugin detects dependency convergence problems in 
> their Hadoop artifacts.
>  
> The command used is
>  
> {noformat}
> mvn clean install -DskipTests -Dcheckstyle.skip=true 
> -Dmaven.javadoc.skip=true  -Pvendor-repos -Dhadoop.version=2.7.3.2.6.5.0-292
> {noformat}
>  
> The problems:
> {noformat}
> Dependency convergence error for 
> com.fasterxml.jackson.core:jackson-core:2.6.0 paths to dependency are:    
>  
> +-org.apache.flink:flink-bucketing-sink-test:1.5-SNAPSHOT   
>  +-org.apache.flink:flink-shaded-hadoop2:1.5-SNAPSHOT 
>    +-com.microsoft.azure:azure-storage:5.4.0 
>  +-com.fasterxml.jackson.core:jackson-core:2.6.0 
> and 
> +-org.apache.flink:flink-bucketing-sink-test:1.5-SNAPSHOT 
>  +-org.apache.flink:flink-shaded-hadoop2:1.5-SNAPSHOT 
>    +-com.fasterxml.jackson.core:jackson-core:2.6.0 
> and   
>   
> +-org.apache.flink:flink-bucketing-sink-test:1.5-SNAPSHOT 
>  +-org.apache.flink:flink-shaded-hadoop2:1.5-SNAPSHOT 
>    +-com.fasterxml.jackson.core:jackson-databind:2.2.3 
>  +-com.fasterxml.jackson.core:jackson-core:2.2.3 
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message: Failed while enforcing releasability. See above detailed 
> error message. 
> [INFO] FAILURE build of project org.apache.flink:flink-bucketing-sink-test   
> {noformat}





[GitHub] [flink] KurtYoung commented on a change in pull request #7881: [FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter rules for batch and stream

2019-03-03 Thread GitBox
KurtYoung commented on a change in pull request #7881: 
[FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter 
rules for batch and stream
URL: https://github.com/apache/flink/pull/7881#discussion_r261868739
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/plan/nodes/logical/FlinkLogicalNativeTableScan.scala
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.plan.nodes.logical
+
+import org.apache.flink.table.plan.nodes.FlinkConventions
+import org.apache.flink.table.plan.schema.DataStreamTable
+
+import com.google.common.collect.ImmutableList
+import org.apache.calcite.plan._
+import org.apache.calcite.rel.convert.ConverterRule
+import org.apache.calcite.rel.core.TableScan
+import org.apache.calcite.rel.logical.LogicalTableScan
+import org.apache.calcite.rel.metadata.RelMetadataQuery
+import org.apache.calcite.rel.{RelCollation, RelCollationTraitDef, RelNode}
+import org.apache.calcite.schema.Table
+
+import java.util
+import java.util.function.Supplier
+
+class FlinkLogicalNativeTableScan(
 
 Review comment:
   Can we just name this `FlinkLogicalDataStreamTableScan`? Since we do not 
need to support DataSet tables, we can make the name more explicit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on a change in pull request #7881: [FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter rules for batch and stream

2019-03-03 Thread GitBox
KurtYoung commented on a change in pull request #7881: 
[FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter 
rules for batch and stream
URL: https://github.com/apache/flink/pull/7881#discussion_r261868803
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/plan/nodes/logical/FlinkLogicalNativeTableScan.scala
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.plan.nodes.logical
+
+import org.apache.flink.table.plan.nodes.FlinkConventions
+import org.apache.flink.table.plan.schema.DataStreamTable
+
+import com.google.common.collect.ImmutableList
+import org.apache.calcite.plan._
+import org.apache.calcite.rel.convert.ConverterRule
+import org.apache.calcite.rel.core.TableScan
+import org.apache.calcite.rel.logical.LogicalTableScan
+import org.apache.calcite.rel.metadata.RelMetadataQuery
+import org.apache.calcite.rel.{RelCollation, RelCollationTraitDef, RelNode}
+import org.apache.calcite.schema.Table
+
+import java.util
+import java.util.function.Supplier
+
+class FlinkLogicalNativeTableScan(
+cluster: RelOptCluster,
+traitSet: RelTraitSet,
+table: RelOptTable)
+  extends TableScan(cluster, traitSet, table)
+with FlinkLogicalRel {
 
 Review comment:
   align with extends




[GitHub] [flink] KurtYoung commented on a change in pull request #7881: [FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter rules for batch and stream

2019-03-03 Thread GitBox
KurtYoung commented on a change in pull request #7881: 
[FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter 
rules for batch and stream
URL: https://github.com/apache/flink/pull/7881#discussion_r261868962
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/calcite/FlinkTypeFactory.scala
 ##
 @@ -0,0 +1,244 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.calcite
+
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo._
+import org.apache.flink.api.common.typeinfo._
+import org.apache.flink.api.java.typeutils.ValueTypeInfo._
+import org.apache.flink.table.`type`.DecimalType
+import org.apache.flink.table.api.TableException
+import org.apache.flink.table.calcite.FlinkTypeFactory.typeInfoToSqlTypeName
+import org.apache.flink.table.typeutils._
+
+import org.apache.calcite.avatica.util.TimeUnit
+import org.apache.calcite.jdbc.JavaTypeFactoryImpl
+import org.apache.calcite.rel.`type`._
+import org.apache.calcite.sql.SqlIntervalQualifier
+import org.apache.calcite.sql.`type`.SqlTypeName
+import org.apache.calcite.sql.`type`.SqlTypeName._
+import org.apache.calcite.sql.parser.SqlParserPos
+
+import java.util
+
+import scala.collection.JavaConverters._
+
+/**
+  * Flink specific type factory that represents the interface between Flink's 
[[TypeInformation]]
 
 Review comment:
   We need to add a TODO or JIRA issue to track all the conversions between 
`TypeInformation` and Calcite's `RelDataType`. From my understanding, in most 
cases we only need to convert between `InternalType` and `RelDataType`, and 
the usage of `TypeInformation` should be verified case by case.




[jira] [Assigned] (FLINK-10506) Introduce minimum, target and maximum parallelism to JobGraph

2019-03-03 Thread Gary Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao reassigned FLINK-10506:


Assignee: (was: Gary Yao)

> Introduce minimum, target and maximum parallelism to JobGraph
> -
>
> Key: FLINK-10506
> URL: https://issues.apache.org/jira/browse/FLINK-10506
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.7.0
>Reporter: Till Rohrmann
>Priority: Major
> Fix For: 1.8.0
>
>
> In order to run a job with a variable parallelism, one needs to be able to 
> define the minimum and maximum parallelism for an operator as well as the 
> current target value. In the first implementation, minimum could be 1 and 
> maximum the max parallelism of the operator if no explicit parallelism has 
> been specified for an operator. If a parallelism p has been specified (via 
> setParallelism(p)), then minimum = maximum = p. The target value could be the 
> command line parameter -p or the default parallelism.
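
The rule described above can be sketched as follows (an illustrative Python sketch of the proposed logic only; Flink itself is Java/Scala, and the function and parameter names here are hypothetical):

```python
# Hypothetical sketch of the parallelism bounds described above.
def parallelism_bounds(explicit_parallelism=None, max_parallelism=128,
                       default_parallelism=1):
    """Return (minimum, target, maximum) parallelism for an operator."""
    if explicit_parallelism is not None:
        # setParallelism(p) pins all three values to p.
        return (explicit_parallelism, explicit_parallelism,
                explicit_parallelism)
    # Otherwise: minimum is 1, maximum is the operator's max parallelism,
    # and the target comes from the -p command line parameter or the
    # configured default parallelism.
    return (1, default_parallelism, max_parallelism)
```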





[jira] [Updated] (FLINK-11781) Config option yarn.per-job-cluster.include-user-jar cannot be set to DISABLED

2019-03-03 Thread Gary Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-11781:
-
Description: 
*Description*
 Setting {{yarn.per-job-cluster.include-user-jar: DISABLED}} in 
{{flink-conf.yaml}} is not supported (anymore). Doing so will lead to the job 
jar not being on the system classpath, which is mandatory if Flink is deployed 
in the job mode.

 

  was:
*Description*
Setting {{yarn.per-job-cluster.include-user-jar: DISABLED}} in 
{{flink-conf.yaml}} is not supported (anymore). Doing so will lead to the job 
jar not being on the system classpath, which is mandatory if Flink is deployed 
in the job mode.


> Config option yarn.per-job-cluster.include-user-jar cannot be set to DISABLED
> -
>
> Key: FLINK-11781
> URL: https://issues.apache.org/jira/browse/FLINK-11781
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.6.4, 1.7.2, 1.8.0
>Reporter: Gary Yao
>Assignee: Gary Yao
>Priority: Major
>
> *Description*
>  Setting {{yarn.per-job-cluster.include-user-jar: DISABLED}} in 
> {{flink-conf.yaml}} is not supported (anymore). Doing so will lead to the job 
> jar not being on the system classpath, which is mandatory if Flink is 
> deployed in the job mode.
>  





[GitHub] [flink] Xeli commented on issue #6594: [FLINK-9311] [pubsub] Added PubSub source connector with support for checkpointing (ATLEAST_ONCE)

2019-03-03 Thread GitBox
Xeli commented on issue #6594: [FLINK-9311] [pubsub] Added PubSub source 
connector with support for checkpointing (ATLEAST_ONCE)
URL: https://github.com/apache/flink/pull/6594#issuecomment-469048228
 
 
   Hi @rmetzger 
   
   I've done some performance tests to see if the checkpoint interval limits 
the throughput, and I am not seeing that happen. On a smallish Kubernetes pod 
with parallelism ranging from 1 to 5, I consistently see around 1,000 
messages/s with checkpoint intervals ranging from 50 ms to 1000 ms.
   
   What setup / messages / parallelism did you use to test it? And just to 
double-check: are you sure the latest version of the connector was used? 




[jira] [Updated] (FLINK-11781) Config option yarn.per-job-cluster.include-user-jar cannot be set to DISABLED

2019-03-03 Thread Gary Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-11781:
-
Description: 
*Description*
Setting {{yarn.per-job-cluster.include-user-jar: DISABLED}} in 
{{flink-conf.yaml}} is not supported (anymore). Doing so will lead to the job 
jar not being on the system classpath, which is mandatory if Flink is deployed 
in job mode. The job will never run.

*Expected behavior*
Documentation should reflect that setting 
{{yarn.per-job-cluster.include-user-jar: DISABLED}} does not work.
 

 

  was:
*Description*
 Setting {{yarn.per-job-cluster.include-user-jar: DISABLED}} in 
{{flink-conf.yaml}} is not supported (anymore). Doing so will lead to the job 
jar not being on the system classpath, which is mandatory if Flink is deployed 
in the job mode.

 


> Config option yarn.per-job-cluster.include-user-jar cannot be set to DISABLED
> -
>
> Key: FLINK-11781
> URL: https://issues.apache.org/jira/browse/FLINK-11781
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.6.4, 1.7.2, 1.8.0
>Reporter: Gary Yao
>Assignee: Gary Yao
>Priority: Major
>
> *Description*
> Setting {{yarn.per-job-cluster.include-user-jar: DISABLED}} in 
> {{flink-conf.yaml}} is not supported (anymore). Doing so will lead to the job 
> jar not being on the system classpath, which is mandatory if Flink is 
> deployed in job mode. The job will never run.
> *Expected behavior*
> Documentation should reflect that setting 
> {{yarn.per-job-cluster.include-user-jar: DISABLED}} does not work.
>  
>  





[GitHub] [flink] flinkbot commented on issue #7883: [FLINK-11781][yarn] Remove "DISABLED" as possible value for yarn.per-job-cluster.include-user-jar

2019-03-03 Thread GitBox
flinkbot commented on issue #7883: [FLINK-11781][yarn] Remove "DISABLED" as 
possible value for yarn.per-job-cluster.include-user-jar
URL: https://github.com/apache/flink/pull/7883#issuecomment-469048986
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands

The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-11781) Config option yarn.per-job-cluster.include-user-jar cannot be set to DISABLED

2019-03-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11781:
---
Labels: pull-request-available  (was: )

> Config option yarn.per-job-cluster.include-user-jar cannot be set to DISABLED
> -
>
> Key: FLINK-11781
> URL: https://issues.apache.org/jira/browse/FLINK-11781
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.6.4, 1.7.2, 1.8.0
>Reporter: Gary Yao
>Assignee: Gary Yao
>Priority: Major
>  Labels: pull-request-available
>
> *Description*
> Setting {{yarn.per-job-cluster.include-user-jar: DISABLED}} in 
> {{flink-conf.yaml}} is not supported (anymore). Doing so will lead to the job 
> jar not being on the system classpath, which is mandatory if Flink is 
> deployed in job mode. The job will never run.
> *Expected behavior*
> Documentation should reflect that setting 
> {{yarn.per-job-cluster.include-user-jar: DISABLED}} does not work.
>  
>  





[GitHub] [flink] GJL opened a new pull request #7883: [FLINK-11781][yarn] Remove "DISABLED" as possible value for yarn.per-job-cluster.include-user-jar

2019-03-03 Thread GitBox
GJL opened a new pull request #7883: [FLINK-11781][yarn] Remove "DISABLED" as 
possible value for yarn.per-job-cluster.include-user-jar
URL: https://github.com/apache/flink/pull/7883
 
 
   ## What is the purpose of the change
   
   *This removes `DISABLED` as a possible value for the config option 
`yarn.per-job-cluster.include-user-jar`.*
   
   
   ## Brief change log
   
 - *See commits*
 
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (**yes** / no / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   




[GitHub] [flink] flinkbot commented on issue #7884: [BP-1.8][FLINK-11781][yarn] Remove "DISABLED" as possible value for yarn.per-job-cluster.include-user-jar

2019-03-03 Thread GitBox
flinkbot commented on issue #7884: [BP-1.8][FLINK-11781][yarn] Remove 
"DISABLED" as possible value for yarn.per-job-cluster.include-user-jar
URL: https://github.com/apache/flink/pull/7884#issuecomment-469049149
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands

The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] GJL opened a new pull request #7884: [BP-1.8][FLINK-11781][yarn] Remove "DISABLED" as possible value for yarn.per-job-cluster.include-user-jar

2019-03-03 Thread GitBox
GJL opened a new pull request #7884: [BP-1.8][FLINK-11781][yarn] Remove 
"DISABLED" as possible value for yarn.per-job-cluster.include-user-jar
URL: https://github.com/apache/flink/pull/7884
 
 
   ## What is the purpose of the change
   
   *This removes `DISABLED` as a possible value for the config option 
`yarn.per-job-cluster.include-user-jar`. This feature has been broken since 
Flink 1.5.*
   
   
   ## Brief change log
   
 - *See commits*
 
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (**yes** / no / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   




[jira] [Updated] (FLINK-11781) Config option yarn.per-job-cluster.include-user-jar cannot be set to DISABLED

2019-03-03 Thread Gary Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-11781:
-
Fix Version/s: 1.9.0
   1.8.0

> Config option yarn.per-job-cluster.include-user-jar cannot be set to DISABLED
> -
>
> Key: FLINK-11781
> URL: https://issues.apache.org/jira/browse/FLINK-11781
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.6.4, 1.7.2, 1.8.0
>Reporter: Gary Yao
>Assignee: Gary Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0, 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *Description*
> Setting {{yarn.per-job-cluster.include-user-jar: DISABLED}} in 
> {{flink-conf.yaml}} is not supported (anymore). Doing so will lead to the job 
> jar not being on the system classpath, which is mandatory if Flink is 
> deployed in job mode. The job will never run.
> *Expected behavior*
> Documentation should reflect that setting 
> {{yarn.per-job-cluster.include-user-jar: DISABLED}} does not work.
>  
>  





[jira] [Updated] (FLINK-11781) Config option yarn.per-job-cluster.include-user-jar cannot be set to DISABLED

2019-03-03 Thread Gary Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-11781:
-
Release Note: Remove "DISABLED" from the possible values for config option 
yarn.per-job-cluster.include-user-jar. This feature has been broken since 
Flink 1.5.

> Config option yarn.per-job-cluster.include-user-jar cannot be set to DISABLED
> -
>
> Key: FLINK-11781
> URL: https://issues.apache.org/jira/browse/FLINK-11781
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.6.4, 1.7.2, 1.8.0
>Reporter: Gary Yao
>Assignee: Gary Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0, 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *Description*
> Setting {{yarn.per-job-cluster.include-user-jar: DISABLED}} in 
> {{flink-conf.yaml}} is not supported (anymore). Doing so will lead to the job 
> jar not being on the system classpath, which is mandatory if Flink is 
> deployed in job mode. The job will never run.
> *Expected behavior*
> Documentation should reflect that setting 
> {{yarn.per-job-cluster.include-user-jar: DISABLED}} does not work.
>  
>  





[GitHub] [flink] zentol commented on a change in pull request #7883: [FLINK-11781][yarn] Remove "DISABLED" as possible value for yarn.per-job-cluster.include-user-jar

2019-03-03 Thread GitBox
zentol commented on a change in pull request #7883: [FLINK-11781][yarn] Remove 
"DISABLED" as possible value for yarn.per-job-cluster.include-user-jar
URL: https://github.com/apache/flink/pull/7883#discussion_r261890617
 
 

 ##
 File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnConfigOptions.java
 ##
 @@ -156,7 +155,6 @@ private YarnConfigOptions() {}
 
/** @see YarnConfigOptions#CLASSPATH_INCLUDE_USER_JAR */
public enum UserJarInclusion {
-   DISABLED,
 
 Review comment:
   We should check where this is parsed into the enum and throw a meaningful 
exception explaining that DISABLED has been removed.
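
A sketch of the suggested behavior (illustrative Python only, not the actual Flink code; the function name and the set of remaining enum values are assumptions here):

```python
# Hypothetical sketch: parse a config value into the remaining allowed
# values, failing with a message that explains the removed DISABLED option.
ALLOWED = {"FIRST", "LAST", "ORDER"}

def parse_user_jar_inclusion(value):
    upper = value.strip().upper()
    if upper == "DISABLED":
        # Meaningful error instead of a generic "no such enum constant".
        raise ValueError(
            "yarn.per-job-cluster.include-user-jar no longer supports "
            "'DISABLED'; the user jar must be on the system classpath.")
    if upper not in ALLOWED:
        raise ValueError("Unknown value: " + value)
    return upper
```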




[jira] [Created] (FLINK-11799) KryoSerializer/OperatorChain ignores copy failure resulting in NullPointerException

2019-03-03 Thread Jason Kania (JIRA)
Jason Kania created FLINK-11799:
---

 Summary: KryoSerializer/OperatorChain ignores copy failure 
resulting in NullPointerException
 Key: FLINK-11799
 URL: https://issues.apache.org/jira/browse/FLINK-11799
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.7.2
Reporter: Jason Kania


I encountered NullPointerExceptions because the deserialized object reached my 
ProcessFunction process() method implementation as a null value. Upon 
investigation, I discovered two issues in the implementation of 
KryoSerializer's copy().

1) The 'public T copy(T from)' method swallows the exception if the kryo copy() 
call fails. The code should report the copy error at least once as a warning, 
so users are aware that kryo copy() is failing. I understand that the code is 
there to handle the lack of a copy implementation, but given the potential 
inefficiency of writing and reading the object instead of copying it, this 
seems like useful information to share. A warning is also important in case the 
cause of the copy failure is something that needs to be fixed.

2) The call to 'kryo.readObject(input, from.getClass())' does not handle the 
fact that kryo's readObject(Input input, Class aClass) method may return null 
when something goes wrong. This could be caught with a check or warning in the 
OperatorChain.CopyingChainingOutput.pushToOperator() method, but it is ignored 
there as well, allowing a null value to be passed along without any logged 
reason for it.
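
The two fixes described above can be sketched with a small stand-in for the copy path. This is illustrative only, not the actual KryoSerializer code: the Kryo calls are replaced by injected functions, and the names are hypothetical.

```java
import java.util.function.UnaryOperator;
import java.util.logging.Logger;

// Hedged sketch of the two suggested fixes: (1) warn at least once when the
// direct copy fails instead of silently falling back, and (2) reject a null
// result from the write/read-back fallback instead of passing it downstream.
final class SafeCopier<T> {
    private static final Logger LOG = Logger.getLogger(SafeCopier.class.getName());

    private boolean failureLogged = false;
    private final UnaryOperator<T> kryoCopy;       // may throw, e.g. no copy impl
    private final UnaryOperator<T> writeReadBack;  // serialize + deserialize fallback

    SafeCopier(UnaryOperator<T> kryoCopy, UnaryOperator<T> writeReadBack) {
        this.kryoCopy = kryoCopy;
        this.writeReadBack = writeReadBack;
    }

    T copy(T from) {
        try {
            return kryoCopy.apply(from);
        } catch (RuntimeException e) {
            if (!failureLogged) {   // fix 1: report the failure at least once
                LOG.warning("kryo copy() failed, using write/read fallback: " + e);
                failureLogged = true;
            }
            T result = writeReadBack.apply(from);
            if (result == null) {   // fix 2: do not pass null to downstream operators
                throw new IllegalStateException(
                    "readObject returned null while copying " + from.getClass().getName());
            }
            return result;
        }
    }
}
```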





[jira] [Assigned] (FLINK-10506) Introduce minimum, target and maximum parallelism to JobGraph

2019-03-03 Thread vinoyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-10506:


Assignee: vinoyang

> Introduce minimum, target and maximum parallelism to JobGraph
> -
>
> Key: FLINK-10506
> URL: https://issues.apache.org/jira/browse/FLINK-10506
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.7.0
>Reporter: Till Rohrmann
>Assignee: vinoyang
>Priority: Major
> Fix For: 1.8.0
>
>
> In order to run a job with a variable parallelism, one needs to be able to 
> define the minimum and maximum parallelism for an operator as well as the 
> current target value. In the first implementation, minimum could be 1 and 
> maximum the max parallelism of the operator if no explicit parallelism has 
> been specified for an operator. If a parallelism p has been specified (via 
> setParallelism(p)), then minimum = maximum = p. The target value could be the 
> command line parameter -p or the default parallelism.
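
The rule quoted above can be sketched as a small value class. The names are illustrative, not Flink's actual JobGraph API; the point is the bounds logic: explicit parallelism pins all three values, otherwise minimum defaults to 1, maximum to the operator's max parallelism, and the target comes from -p or the default parallelism.

```java
// Hedged sketch of the parallelism bounds described in the issue.
final class ParallelismBounds {
    final int min, target, max;

    private ParallelismBounds(int min, int target, int max) {
        this.min = min;
        this.target = target;
        this.max = max;
    }

    static ParallelismBounds of(
            Integer explicitParallelism, int maxParallelism, int defaultParallelism) {
        if (explicitParallelism != null) {
            // setParallelism(p) was called: minimum = target = maximum = p
            int p = explicitParallelism;
            return new ParallelismBounds(p, p, p);
        }
        // no explicit parallelism: min 1, max = operator max, target from -p/default
        return new ParallelismBounds(1, defaultParallelism, maxParallelism);
    }
}
```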





[GitHub] [flink] glaksh100 commented on issue #7679: [FLINK-11501][Kafka Connector] Add ratelimiting to Kafka consumer

2019-03-03 Thread GitBox
glaksh100 commented on issue #7679: [FLINK-11501][Kafka Connector] Add 
ratelimiting to Kafka consumer
URL: https://github.com/apache/flink/pull/7679#issuecomment-469089120
 
 
   @tweise Thank you for the patient and thorough review :) 




[jira] [Created] (FLINK-11800) Move table-planner-blink type to table-runtime-blink

2019-03-03 Thread Jingsong Lee (JIRA)
Jingsong Lee created FLINK-11800:


 Summary: Move table-planner-blink type to table-runtime-blink
 Key: FLINK-11800
 URL: https://issues.apache.org/jira/browse/FLINK-11800
 Project: Flink
  Issue Type: Improvement
Reporter: Jingsong Lee
Assignee: Jingsong Lee


We should put types in runtime because runtime code relies heavily on types.





[GitHub] [flink] JingsongLi opened a new pull request #7885: [FLINK-11800][table-runtime-blink] Move table-planner-blink type to table-runtime-blink

2019-03-03 Thread GitBox
JingsongLi opened a new pull request #7885: [FLINK-11800][table-runtime-blink] 
Move table-planner-blink type to table-runtime-blink
URL: https://github.com/apache/flink/pull/7885
 
 
   
   ## What is the purpose of the change
   
   package move
   
   ## Verifying this change
   
   ut
   
   ## Does this pull request potentially affect one of the following parts:
   
   all no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)




[GitHub] [flink] flinkbot commented on issue #7885: [FLINK-11800][table-runtime-blink] Move table-planner-blink type to table-runtime-blink

2019-03-03 Thread GitBox
flinkbot commented on issue #7885: [FLINK-11800][table-runtime-blink] Move 
table-planner-blink type to table-runtime-blink
URL: https://github.com/apache/flink/pull/7885#issuecomment-469094509
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-11800) Move table-planner-blink type to table-runtime-blink

2019-03-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11800:
---
Labels: pull-request-available  (was: )

> Move table-planner-blink type to table-runtime-blink
> 
>
> Key: FLINK-11800
> URL: https://issues.apache.org/jira/browse/FLINK-11800
> Project: Flink
>  Issue Type: Improvement
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
>
> We should put types in runtime because runtime code relies heavily on types.





[GitHub] [flink] JingsongLi closed pull request #7885: [FLINK-11800][table-runtime-blink] Move table-planner-blink type to table-runtime-blink

2019-03-03 Thread GitBox
JingsongLi closed pull request #7885: [FLINK-11800][table-runtime-blink] Move 
table-planner-blink type to table-runtime-blink
URL: https://github.com/apache/flink/pull/7885
 
 
   




[jira] [Closed] (FLINK-11800) Move table-planner-blink type to table-runtime-blink

2019-03-03 Thread Jingsong Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-11800.

Resolution: Invalid

> Move table-planner-blink type to table-runtime-blink
> 
>
> Key: FLINK-11800
> URL: https://issues.apache.org/jira/browse/FLINK-11800
> Project: Flink
>  Issue Type: Improvement
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We should put types in runtime because runtime code relies heavily on types.





[jira] [Created] (FLINK-11801) Port SqlParserException to flink-table-common

2019-03-03 Thread Jark Wu (JIRA)
Jark Wu created FLINK-11801:
---

 Summary: Port SqlParserException to flink-table-common
 Key: FLINK-11801
 URL: https://issues.apache.org/jira/browse/FLINK-11801
 Project: Flink
  Issue Type: New Feature
  Components: API / Table SQL
Reporter: Jark Wu
Assignee: Jark Wu


A more detailed description can be found in FLIP-32.

SqlParserException is used in FlinkPlannerImpl for SQL parsing. We will have 
another FlinkPlannerImpl in flink-table-planner-blink. 
Because {{SqlParserException}} is an API-level exception, we should move 
{{SqlParserException}} to flink-table-common to avoid copying it into 
flink-table-planner-blink.





[GitHub] [flink] wuchong opened a new pull request #7886: [FLINK-11801] [table-common] Port SqlParserException to flink-table-common

2019-03-03 Thread GitBox
wuchong opened a new pull request #7886: [FLINK-11801] [table-common] Port 
SqlParserException to flink-table-common
URL: https://github.com/apache/flink/pull/7886
 
 
   
   
   
   ## What is the purpose of the change
   
   Port `SqlParserException` to flink-table-common
   
   `SqlParserException` is used in `FlinkPlannerImpl` for sql parsing. We will 
have another `FlinkPlannerImpl` in flink-table-planner-blink. 
   
   Because `SqlParserException` is an API-level exception, we should move it to 
`flink-table-common` to avoid copying it into `flink-table-planner-blink`.
   
   ## Brief change log
   
- Port `SqlParserException` to flink-table-common
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   




[GitHub] [flink] flinkbot commented on issue #7886: [FLINK-11801] [table-common] Port SqlParserException to flink-table-common

2019-03-03 Thread GitBox
flinkbot commented on issue #7886: [FLINK-11801] [table-common] Port 
SqlParserException to flink-table-common
URL: https://github.com/apache/flink/pull/7886#issuecomment-469099044
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-11801) Port SqlParserException to flink-table-common

2019-03-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11801:
---
Labels: pull-request-available  (was: )

> Port SqlParserException to flink-table-common
> -
>
> Key: FLINK-11801
> URL: https://issues.apache.org/jira/browse/FLINK-11801
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Table SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Major
>  Labels: pull-request-available
>
> A more detailed description can be found in FLIP-32.
> SqlParserException is used in FlinkPlannerImpl for SQL parsing. We will have 
> another FlinkPlannerImpl in flink-table-planner-blink. 
> Because {{SqlParserException}} is an API-level exception, we should move 
> {{SqlParserException}} to flink-table-common to avoid copying it into 
> flink-table-planner-blink.





[GitHub] [flink] KurtYoung commented on issue #7886: [FLINK-11801] [table-common] Port SqlParserException to flink-table-common

2019-03-03 Thread GitBox
KurtYoung commented on issue #7886: [FLINK-11801] [table-common] Port 
SqlParserException to flink-table-common
URL: https://github.com/apache/flink/pull/7886#issuecomment-469100083
 
 
   @flinkbot attention @twalthr 




[GitHub] [flink] flinkbot edited a comment on issue #7886: [FLINK-11801] [table-common] Port SqlParserException to flink-table-common

2019-03-03 Thread GitBox
flinkbot edited a comment on issue #7886: [FLINK-11801] [table-common] Port 
SqlParserException to flink-table-common
URL: https://github.com/apache/flink/pull/7886#issuecomment-469099044
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❗ 3. Needs [attention] from.
   - Needs attention by @twalthr [PMC]
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Created] (FLINK-11803) Create FlinkTypeFactory for Blink

2019-03-03 Thread Jingsong Lee (JIRA)
Jingsong Lee created FLINK-11803:


 Summary: Create FlinkTypeFactory for Blink
 Key: FLINK-11803
 URL: https://issues.apache.org/jira/browse/FLINK-11803
 Project: Flink
  Issue Type: New Feature
Reporter: Jingsong Lee
Assignee: Jingsong Lee








[jira] [Updated] (FLINK-11803) Improve FlinkTypeFactory for Blink

2019-03-03 Thread Jingsong Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-11803:
-
Summary: Improve FlinkTypeFactory for Blink  (was: Create FlinkTypeFactory 
for Blink)

> Improve FlinkTypeFactory for Blink
> --
>
> Key: FLINK-11803
> URL: https://issues.apache.org/jira/browse/FLINK-11803
> Project: Flink
>  Issue Type: New Feature
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>






[jira] [Updated] (FLINK-11803) Improve FlinkTypeFactory for Blink

2019-03-03 Thread Jingsong Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-11803:
-
Description: We need to change TypeInformation to InternalType.

> Improve FlinkTypeFactory for Blink
> --
>
> Key: FLINK-11803
> URL: https://issues.apache.org/jira/browse/FLINK-11803
> Project: Flink
>  Issue Type: New Feature
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>
> We need to change TypeInformation to InternalType.





[GitHub] [flink] godfreyhe commented on a change in pull request #7881: [FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter rules for batch and stream

2019-03-03 Thread GitBox
godfreyhe commented on a change in pull request #7881: 
[FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter 
rules for batch and stream
URL: https://github.com/apache/flink/pull/7881#discussion_r261910096
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/plan/nodes/logical/FlinkLogicalNativeTableScan.scala
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.plan.nodes.logical
+
+import org.apache.flink.table.plan.nodes.FlinkConventions
+import org.apache.flink.table.plan.schema.DataStreamTable
+
+import com.google.common.collect.ImmutableList
+import org.apache.calcite.plan._
+import org.apache.calcite.rel.convert.ConverterRule
+import org.apache.calcite.rel.core.TableScan
+import org.apache.calcite.rel.logical.LogicalTableScan
+import org.apache.calcite.rel.metadata.RelMetadataQuery
+import org.apache.calcite.rel.{RelCollation, RelCollationTraitDef, RelNode}
+import org.apache.calcite.schema.Table
+
+import java.util
+import java.util.function.Supplier
+
+class FlinkLogicalNativeTableScan(
 
 Review comment:
   good idea




[GitHub] [flink] godfreyhe commented on a change in pull request #7881: [FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter rules for batch and stream

2019-03-03 Thread GitBox
godfreyhe commented on a change in pull request #7881: 
[FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter 
rules for batch and stream
URL: https://github.com/apache/flink/pull/7881#discussion_r261910112
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/plan/nodes/logical/FlinkLogicalNativeTableScan.scala
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.plan.nodes.logical
+
+import org.apache.flink.table.plan.nodes.FlinkConventions
+import org.apache.flink.table.plan.schema.DataStreamTable
+
+import com.google.common.collect.ImmutableList
+import org.apache.calcite.plan._
+import org.apache.calcite.rel.convert.ConverterRule
+import org.apache.calcite.rel.core.TableScan
+import org.apache.calcite.rel.logical.LogicalTableScan
+import org.apache.calcite.rel.metadata.RelMetadataQuery
+import org.apache.calcite.rel.{RelCollation, RelCollationTraitDef, RelNode}
+import org.apache.calcite.schema.Table
+
+import java.util
+import java.util.function.Supplier
+
+class FlinkLogicalNativeTableScan(
+cluster: RelOptCluster,
+traitSet: RelTraitSet,
+table: RelOptTable)
+  extends TableScan(cluster, traitSet, table)
+with FlinkLogicalRel {
 
 Review comment:
   OK




[GitHub] [flink] godfreyhe commented on a change in pull request #7881: [FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter rules for batch and stream

2019-03-03 Thread GitBox
godfreyhe commented on a change in pull request #7881: 
[FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter 
rules for batch and stream
URL: https://github.com/apache/flink/pull/7881#discussion_r261910967
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/calcite/FlinkTypeFactory.scala
 ##
 @@ -0,0 +1,244 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.calcite
+
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo._
+import org.apache.flink.api.common.typeinfo._
+import org.apache.flink.api.java.typeutils.ValueTypeInfo._
+import org.apache.flink.table.`type`.DecimalType
+import org.apache.flink.table.api.TableException
+import org.apache.flink.table.calcite.FlinkTypeFactory.typeInfoToSqlTypeName
+import org.apache.flink.table.typeutils._
+
+import org.apache.calcite.avatica.util.TimeUnit
+import org.apache.calcite.jdbc.JavaTypeFactoryImpl
+import org.apache.calcite.rel.`type`._
+import org.apache.calcite.sql.SqlIntervalQualifier
+import org.apache.calcite.sql.`type`.SqlTypeName
+import org.apache.calcite.sql.`type`.SqlTypeName._
+import org.apache.calcite.sql.parser.SqlParserPos
+
+import java.util
+
+import scala.collection.JavaConverters._
+
+/**
+  * Flink specific type factory that represents the interface between Flink's 
[[TypeInformation]]
 
 Review comment:
   Currently, `FlinkTypeFactory` only contains very basic features based on 
`TypeInformation` to make this PR work. @JingsongLi has created two JIRAs, 
https://issues.apache.org/jira/browse/FLINK-11803 and 
https://issues.apache.org/jira/browse/FLINK-11802, to improve this.




[GitHub] [flink] godfreyhe commented on issue #7881: [FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter rules for batch and stream

2019-03-03 Thread GitBox
godfreyhe commented on issue #7881: [FLINK-11795][table-planner-blink] 
Introduce DataStream nodes and converter rules for batch and stream
URL: https://github.com/apache/flink/pull/7881#issuecomment-469102818
 
 
   Thanks a lot for the review. I have update the PR based on comments




[GitHub] [flink] godfreyhe edited a comment on issue #7881: [FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter rules for batch and stream

2019-03-03 Thread GitBox
godfreyhe edited a comment on issue #7881: [FLINK-11795][table-planner-blink] 
Introduce DataStream nodes and converter rules for batch and stream
URL: https://github.com/apache/flink/pull/7881#issuecomment-469102818
 
 
   Thanks a lot for the review. I have updated the PR based on comments




[GitHub] [flink] godfreyhe commented on a change in pull request #7881: [FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter rules for batch and stream

2019-03-03 Thread GitBox
godfreyhe commented on a change in pull request #7881: 
[FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter 
rules for batch and stream
URL: https://github.com/apache/flink/pull/7881#discussion_r261911680
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/plan/nodes/logical/FlinkLogicalDataStreamTableScan.scala
 ##
 @@ -51,46 +48,30 @@ class FlinkLogicalNativeTableScan(
   }
 }
 
-class FlinkLogicalNativeTableScanConverter
+class FlinkLogicalDataStreamTableScanConverter
   extends ConverterRule(
 classOf[LogicalTableScan],
 Convention.NONE,
 FlinkConventions.LOGICAL,
-"FlinkLogicalNativeTableScanConverter") {
+"FlinkLogicalDataStreamTableScanConverter") {
 
   override def matches(call: RelOptRuleCall): Boolean = {
 val scan: TableScan = call.rel(0)
-FlinkLogicalNativeTableScan.isLogicalNativeTableScan(scan)
+val dataStreamTable = scan.getTable.unwrap(classOf[DataStreamTable[_]])
+dataStreamTable != null
   }
 
   def convert(rel: RelNode): RelNode = {
 val scan = rel.asInstanceOf[TableScan]
-FlinkLogicalNativeTableScan.create(rel.getCluster, scan.getTable)
+val traitSet = rel.getTraitSet.replace(FlinkConventions.LOGICAL)
+new FlinkLogicalDataStreamTableScan(
+  rel.getCluster,
+  traitSet,
+  scan.getTable
+)
   }
 }
 
-object FlinkLogicalNativeTableScan {
-  val CONVERTER = new FlinkLogicalNativeTableScanConverter
-
-  def isLogicalNativeTableScan(scan: TableScan): Boolean = {
-val dataStreamTable = scan.getTable.unwrap(classOf[DataStreamTable[_]])
-dataStreamTable != null
-  }
-
-  def create(cluster: RelOptCluster, relOptTable: RelOptTable): 
FlinkLogicalNativeTableScan = {
-val table = relOptTable.unwrap(classOf[Table])
-val traitSet = cluster.traitSetOf(Convention.NONE).replaceIfs(
-  RelCollationTraitDef.INSTANCE, new Supplier[util.List[RelCollation]]() {
-def get: util.List[RelCollation] = {
-  if (table != null) {
-table.getStatistic.getCollations
-  } else {
-ImmutableList.of[RelCollation]
-  }
-}
-  })
-val scan = new FlinkLogicalNativeTableScan(cluster, traitSet, relOptTable)
-val newTraitSet = 
scan.getTraitSet.replace(FlinkConventions.LOGICAL).simplify()
-scan.copy(newTraitSet, 
scan.getInputs).asInstanceOf[FlinkLogicalNativeTableScan]
-  }
 
 Review comment:
   The implementation looks a bit weird, so I reverted it; it can be added 
back when needed.




[jira] [Created] (FLINK-11802) Create TypeInfo and TypeSerializer for blink data format

2019-03-03 Thread Jingsong Lee (JIRA)
Jingsong Lee created FLINK-11802:


 Summary: Create TypeInfo and TypeSerializer for blink data format
 Key: FLINK-11802
 URL: https://issues.apache.org/jira/browse/FLINK-11802
 Project: Flink
  Issue Type: New Feature
Reporter: Jingsong Lee
Assignee: Jingsong Lee








[jira] [Commented] (FLINK-11803) Improve FlinkTypeFactory for Blink

2019-03-03 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782930#comment-16782930
 ] 

Kurt Young commented on FLINK-11803:


I think we also need to define a clear boundary between `TypeInformation` and 
`InternalType`, to make sure everyone is on the same page about when to use 
`TypeInformation` and when to use `InternalType`.
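As a sketch of what such a boundary could look like (purely illustrative; `InternalType` and `TypeBoundary` here are invented stand-ins, not the real planner classes): API-facing code converts to the internal type exactly once at the boundary, and everything behind it touches only `InternalType`.

```java
/** Illustrative stand-in for the planner-internal type system. */
enum InternalType { INT, LONG, STRING }

/**
 * Hypothetical boundary: convert an API-level type description to the
 * planner-internal type in exactly one place, so internal code never
 * depends on the API-level representation.
 */
final class TypeBoundary {
    static InternalType toInternal(Class<?> apiType) {
        if (apiType == Integer.class) return InternalType.INT;
        if (apiType == Long.class) return InternalType.LONG;
        if (apiType == String.class) return InternalType.STRING;
        throw new IllegalArgumentException("unsupported type: " + apiType);
    }
}
```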

> Improve FlinkTypeFactory for Blink
> --
>
> Key: FLINK-11803
> URL: https://issues.apache.org/jira/browse/FLINK-11803
> Project: Flink
>  Issue Type: New Feature
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>
> We need to change TypeInformation to InternalType.





[GitHub] [flink] KurtYoung commented on issue #7881: [FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter rules for batch and stream

2019-03-03 Thread GitBox
KurtYoung commented on issue #7881: [FLINK-11795][table-planner-blink] 
Introduce DataStream nodes and converter rules for batch and stream
URL: https://github.com/apache/flink/pull/7881#issuecomment-469105633
 
 
   merging...




[GitHub] [flink] KurtYoung closed pull request #7881: [FLINK-11795][table-planner-blink] Introduce DataStream nodes and converter rules for batch and stream

2019-03-03 Thread GitBox
KurtYoung closed pull request #7881: [FLINK-11795][table-planner-blink] 
Introduce DataStream nodes and converter rules for batch and stream
URL: https://github.com/apache/flink/pull/7881
 
 
   




[jira] [Commented] (FLINK-11795) Introduce DataStream nodes and converter rules for batch and stream

2019-03-03 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782934#comment-16782934
 ] 

Kurt Young commented on FLINK-11795:


implemented in 6918d62bac9e421dd06611563ecf4c964fbf2448

> Introduce DataStream nodes and converter rules for batch and stream
> ---
>
> Key: FLINK-11795
> URL: https://issues.apache.org/jira/browse/FLINK-11795
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Table SQL
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 1. adds DataStreamTable
> 2. adds DataStream nodes for batch and stream
> 3. adds convert rules for DataStream nodes
> 4. adds dependent classes: FlinkStatistic, FlinkTypeFactory and InlineTable





[jira] [Updated] (FLINK-11795) Introduce DataStream nodes and converter rules for batch and stream

2019-03-03 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-11795:
---
Component/s: (was: API / Table SQL)
 SQL / Planner

> Introduce DataStream nodes and converter rules for batch and stream
> ---
>
> Key: FLINK-11795
> URL: https://issues.apache.org/jira/browse/FLINK-11795
> Project: Flink
>  Issue Type: New Feature
>  Components: SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 1. adds DataStreamTable
> 2. adds DataStream nodes for batch and stream
> 3. adds convert rules for DataStream nodes
> 4. adds dependent classes: FlinkStatistic, FlinkTypeFactory and InlineTable





[jira] [Closed] (FLINK-11795) Introduce DataStream nodes and converter rules for batch and stream

2019-03-03 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-11795.
--
   Resolution: Implemented
Fix Version/s: 1.9.0

> Introduce DataStream nodes and converter rules for batch and stream
> ---
>
> Key: FLINK-11795
> URL: https://issues.apache.org/jira/browse/FLINK-11795
> Project: Flink
>  Issue Type: New Feature
>  Components: SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> 1. adds DataStreamTable
> 2. adds DataStream nodes for batch and stream
> 3. adds convert rules for DataStream nodes
> 4. adds dependent classes: FlinkStatistic, FlinkTypeFactory and InlineTable





[GitHub] [flink] zhijiangW commented on issue #7713: [FLINK-10995][network] Copy intermediate serialization results only once for broadcast mode

2019-03-03 Thread GitBox
zhijiangW commented on issue #7713: [FLINK-10995][network] Copy intermediate 
serialization results only once for broadcast mode
URL: https://github.com/apache/flink/pull/7713#issuecomment-469110871
 
 
   @pnowojski , thanks for the kind suggestion!
   
   The only current broadcast usages of `RecordWriter` are emitting 
`Watermark` and `StreamStatus`. Your worries might be justified, and I have 
also considered that frequent mode switching might have bad effects in 
special scenarios.
   
   Maybe the conservative way is to make the special improvement path only for 
`BroadcastRecordWriter`, which currently switches the mode when emitting a 
`LatencyMarker`. If you approve of this way, I will adjust the code. :)
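The idea under discussion can be illustrated with a small, self-contained sketch (all names are invented for illustration and are not Flink's actual `RecordWriter` API): a broadcast emit serializes the record once and shares the resulting buffer across all channels, while a targeted emit (as for a `LatencyMarker`) pays a separate serialization for just the selected channel.

```java
import java.util.Arrays;

/**
 * Hypothetical sketch, not Flink's actual classes: a writer that copies
 * intermediate serialization results only once in broadcast mode.
 */
class SketchBroadcastWriter {
    private final byte[][] channelBuffers;
    int serializations = 0; // counts how often the serialization cost is paid

    SketchBroadcastWriter(int channels) {
        channelBuffers = new byte[channels][];
    }

    private byte[] serialize(String record) {
        serializations++;
        return record.getBytes();
    }

    /** Broadcast: serialize once; every channel references the same bytes. */
    void broadcastEmit(String record) {
        byte[] shared = serialize(record);
        Arrays.fill(channelBuffers, shared);
    }

    /** Targeted emit: only the selected channel gets its own copy. */
    void emitTo(String record, int channel) {
        channelBuffers[channel] = serialize(record);
    }

    byte[] buffer(int channel) {
        return channelBuffers[channel];
    }
}
```

Under this sketch, a stream of watermarks broadcast to N channels costs one serialization each, and only the occasional targeted latency marker triggers an extra copy for its single channel.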




[GitHub] [flink] flinkbot edited a comment on issue #7713: [FLINK-10995][network] Copy intermediate serialization results only once for broadcast mode

2019-03-03 Thread GitBox
flinkbot edited a comment on issue #7713: [FLINK-10995][network] Copy 
intermediate serialization results only once for broadcast mode
URL: https://github.com/apache/flink/pull/7713#issuecomment-463979858
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] zhijiangW commented on issue #7186: [FLINK-10941] Keep slots which contain unconsumed result partitions

2019-03-03 Thread GitBox
zhijiangW commented on issue #7186: [FLINK-10941] Keep slots which contain 
unconsumed result partitions
URL: https://github.com/apache/flink/pull/7186#issuecomment-469132652
 
 
   @azagrebin , the most basic problem currently is that a TM can be released 
by the RM before its partition data has finished transporting, which causes 
unnecessary failovers in both streaming and batch modes.
   
   As we know, the rule for releasing a TM by the RM is based on whether all 
of its slots are idle. I think we can still stick to this rule. The root 
problem is then that a finished task does not mean the corresponding slot 
should be released. Both the task and its input/output occupy slot resources, 
and we can regard the network buffers as one part of the resources in the 
`ResourceProfile` of a slot. When a task finishes but its partition data has 
not finished transporting, the network buffers are not yet released, so the 
slot should still be regarded as in use, keeping the TM active on the RM side.
   
   By decoupling task resources from partition/shuffle resources within the 
slot, we can solve the TM-release issue in a natural way without breaking 
anything.
   
   Delayed partition release for inactive scenarios involves another basic 
mechanism for releasing partitions, separate from the mechanisms above for 
releasing TMs and decoupling partition resources. I would make two points:
   
   - A closed connection should only release the network reader views 
(sequence view, sub-partition view), not the sub-partition itself in 
`ResultPartitionManager`. The view could then be re-created to consume the 
partition repeatedly by re-establishing the connection.
   
   - When to release a partition should be decided by the `ShuffleService` on 
the TM or the `ShuffleMaster` on the JM, with different strategies for 
different `ShuffleManager` implementations: transport completion on the 
producer side, consumption completion on the consumer side, various TTLs, 
and so on.
   
   As a priority, I think we should first solve the TM-release issue mentioned 
at the beginning and take the simple approach of releasing a partition once 
its transport has finished. Further on, we could improve the failover case by 
supporting delayed partition release or multiple consumptions of one partition.
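A toy sketch of the slot-idleness rule argued for in this thread (illustrative names only, not Flink's actual classes): a slot counts as idle, and thus eligible for TM release, only once its task has finished AND none of its produced partition data is still pending transport.

```java
/**
 * Hypothetical sketch: a slot holds both a task and its produced result
 * partitions. Task completion alone does not make the slot idle; pending
 * partition data keeps the slot (and therefore the TaskManager) alive.
 */
class SlotSketch {
    boolean taskFinished;
    int unconsumedPartitions; // partitions whose data is still being transported

    boolean isIdle() {
        // Only an idle slot may be counted toward releasing the TM.
        return taskFinished && unconsumedPartitions == 0;
    }

    void partitionConsumed() {
        unconsumedPartitions--;
    }
}
```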




[GitHub] [flink] twalthr commented on issue #7886: [FLINK-11801] [table-common] Port SqlParserException to flink-table-common

2019-03-03 Thread GitBox
twalthr commented on issue #7886: [FLINK-11801] [table-common] Port 
SqlParserException to flink-table-common
URL: https://github.com/apache/flink/pull/7886#issuecomment-469150069
 
 
   @KurtYoung I would move `SqlParserException` to `flink-table-api-java` for 
now. And only move things to common that are really required across many 
different Maven modules. As the description of the PR states, it is an API 
exception.




[GitHub] [flink] JingsongLi opened a new pull request #7887: [FLINK-11802][table-runtime-blink] Create TypeInfo and TypeSerializer for blink data format

2019-03-03 Thread GitBox
JingsongLi opened a new pull request #7887: [FLINK-11802][table-runtime-blink] 
Create TypeInfo and TypeSerializer for blink data format
URL: https://github.com/apache/flink/pull/7887
 
 
   ## What is the purpose of the change
   
   Create TypeInfo and TypeSerializer for blink data format
   
   ## Brief change log
   
   Add some TypeInfo and TypeSerializer
   
   ## Verifying this change
   
   This change is covered by unit tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (yes)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (JavaDocs )
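As a rough illustration of the serializer side of this PR (hypothetical code, not the actual blink `TypeSerializer` implementations): a format-specific serializer must at minimum provide a serialize/deserialize round trip over the binary layout, sketched here for a fixed-width row of two int fields.

```java
import java.nio.ByteBuffer;

/**
 * Illustrative sketch only: a hand-rolled serializer for a fixed-width
 * "binary row" of two int fields, hinting at the round-trip contract a
 * format-specific TypeSerializer must satisfy.
 */
final class TwoIntRowSerializer {
    static final int SERIALIZED_SIZE = 8; // two 4-byte ints

    byte[] serialize(int f0, int f1) {
        return ByteBuffer.allocate(SERIALIZED_SIZE).putInt(f0).putInt(f1).array();
    }

    int[] deserialize(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        return new int[] { buf.getInt(), buf.getInt() };
    }
}
```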
   




[GitHub] [flink] flinkbot commented on issue #7887: [FLINK-11802][table-runtime-blink] Create TypeInfo and TypeSerializer for blink data format

2019-03-03 Thread GitBox
flinkbot commented on issue #7887: [FLINK-11802][table-runtime-blink] Create 
TypeInfo and TypeSerializer for blink data format
URL: https://github.com/apache/flink/pull/7887#issuecomment-469151371
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-11802) Create TypeInfo and TypeSerializer for blink data format

2019-03-03 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11802:
---
Labels: pull-request-available  (was: )

> Create TypeInfo and TypeSerializer for blink data format
> 
>
> Key: FLINK-11802
> URL: https://issues.apache.org/jira/browse/FLINK-11802
> Project: Flink
>  Issue Type: New Feature
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Commented] (FLINK-11439) INSERT INTO flink_sql SELECT * FROM blink_sql

2019-03-03 Thread zhisheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16783076#comment-16783076
 ] 

zhisheng commented on FLINK-11439:
--

Interesting title, leaving a comment here to mark my spot :D

> INSERT INTO flink_sql SELECT * FROM blink_sql
> -
>
> Key: FLINK-11439
> URL: https://issues.apache.org/jira/browse/FLINK-11439
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Table SQL
>Reporter: Timo Walther
>Priority: Major
>
> As Stephan already announced on the [mailing 
> list|https://lists.apache.org/thread.html/2f7330e85d702a53b4a2b361149930b50f2e89d8e8a572f8ee2a0e6d@%3Cdev.flink.apache.org%3E],
>  the Flink community will receive a big code contribution from Alibaba. The 
> {{flink-table}} module is one of the biggest parts that will receive many new 
> features and major architectural improvements. Instead of waiting until the 
> next major version of Flink or introducing big API-breaking changes, we would 
> like to gradually build up the Blink-based planner and runtime while keeping 
> the Table & SQL API mostly stable. Users will be able to play around with the 
> current merge status of the new planner or fall back to the old planner until 
> the new one is stable.
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> In a nutshell:
>  - Split the {{flink-table}} module similar to the proposal of 
> [FLIP-28|https://cwiki.apache.org/confluence/display/FLINK/FLIP-28%3A+Long-term+goal+of+making+flink-table+Scala-free]
>  which is outdated. This is a preparation to separate API from core (targeted 
> for Flink 1.8).
>  - Perform minor API changes to separate API from actual implementation 
> (targeted for Flink 1.8).
>  - Merge a MVP Blink SQL planner given that necessary Flink core/runtime 
> changes have been completed.
>  The merging will happen in stages (e.g. basic planner framework, then 
> operator by operator). The exact merging plan still needs to be determined.
>  - Rework the type system in order to unblock work on unified table 
> environments, UDFs, sources/sinks, and catalog.
>  - Enable full end-to-end batch and stream execution features.
> Our mid-term goal:
> Run full TPC-DS on a unified batch/streaming runtime. Initially, we will only 
> support ingesting data coming from the DataStream API. Once we reworked the 
> sources/sink interfaces, we will target full end-to-end TPC-DS query 
> execution with table connectors.
> This issue is an umbrella issue for tracking the Blink SQL merge efforts.


