[jira] [Resolved] (FLINK-14321) Support to parse watermark statement in SQL DDL

2019-10-24 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-14321.
-
Fix Version/s: 1.10.0
   Resolution: Fixed

1.10.0: bccc4404ec9e73cc03e36631e6b3cf38a8c0e571

> Support to parse watermark statement in SQL DDL
> ---
>
> Key: FLINK-14321
> URL: https://issues.apache.org/jira/browse/FLINK-14321
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Support parsing the watermark syntax in SQL DDL. This can be implemented in 
> the {{flink-sql-parser}} module. 
> The watermark syntax is as follows:
> {{WATERMARK FOR columnName AS <watermark_strategy_expression>}}
> We should also do some validation during parsing, for example, whether the 
> referenced rowtime field exists. We should also support referencing a nested 
> field as the rowtime field. 
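
For illustration, a DDL of the kind this parser change should accept might look 
like the following. The table, fields, and the watermark strategy expression 
below are invented, not taken from the issue:

{code}
// Hypothetical DDL exercising the new WATERMARK clause; all names are invented.
// Note the rowtime may reference a nested field, as required above.
String ddl =
    "CREATE TABLE orders ("
        + "  user_id BIGINT,"
        + "  info ROW<ts TIMESTAMP(3), amount DOUBLE>,"
        + "  WATERMARK FOR info.ts AS info.ts - INTERVAL '5' SECOND"
        + ")";
{code}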



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9952: [FLINK-14321][sql-parser] Support to parse watermark statement in SQL DDL

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9952: [FLINK-14321][sql-parser] Support to 
parse watermark statement in SQL DDL
URL: https://github.com/apache/flink/pull/9952#issuecomment-544436896
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b27e15bfce1229c1b4cdd80a2a8a689561352d2b (Fri Oct 25 
06:55:51 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❗ 3. Needs [attention] from:
   - Needs attention by @danny0405
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong closed pull request #9952: [FLINK-14321][sql-parser] Support to parse watermark statement in SQL DDL

2019-10-24 Thread GitBox
wuchong closed pull request #9952: [FLINK-14321][sql-parser] Support to parse 
watermark statement in SQL DDL
URL: https://github.com/apache/flink/pull/9952
 
 
   




[GitHub] [flink] flinkbot edited a comment on issue #9993: [FLINK-14498][runtime]Introduce NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9993: [FLINK-14498][runtime]Introduce 
NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.
URL: https://github.com/apache/flink/pull/9993#issuecomment-546219804
 
 
   
   ## CI report:
   
   * f47a9f60dff6710ce5a7d5fe341a94d0fffb2d6d : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133499201)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9952: [FLINK-14321][sql-parser] Support to parse watermark statement in SQL DDL

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9952: [FLINK-14321][sql-parser] Support to 
parse watermark statement in SQL DDL
URL: https://github.com/apache/flink/pull/9952#issuecomment-544436896
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b27e15bfce1229c1b4cdd80a2a8a689561352d2b (Fri Oct 25 
06:53:48 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❗ 3. Needs [attention] from:
   - Needs attention by @danny0405
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] wuchong commented on issue #9952: [FLINK-14321][sql-parser] Support to parse watermark statement in SQL DDL

2019-10-24 Thread GitBox
wuchong commented on issue #9952: [FLINK-14321][sql-parser] Support to parse 
watermark statement in SQL DDL
URL: https://github.com/apache/flink/pull/9952#issuecomment-546226951
 
 
   Thanks @KurtYoung and @danny0405 for the review. I will merge this then. 




[GitHub] [flink] KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] 
Partition field names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#discussion_r338907408
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchExecSinkRule.scala
 ##
 @@ -44,32 +42,36 @@ class BatchExecSinkRule extends ConverterRule(
 val sinkNode = rel.asInstanceOf[FlinkLogicalSink]
 val newTrait = rel.getTraitSet.replace(FlinkConventions.BATCH_PHYSICAL)
 var requiredTraitSet = 
sinkNode.getInput.getTraitSet.replace(FlinkConventions.BATCH_PHYSICAL)
-sinkNode.sink match {
-  case partitionSink: PartitionableTableSink
-if partitionSink.getPartitionFieldNames != null &&
-  partitionSink.getPartitionFieldNames.nonEmpty =>
-val partitionFields = partitionSink.getPartitionFieldNames
-val partitionIndices = partitionFields
-  .map(partitionSink.getTableSchema.getFieldNames.indexOf(_))
-// validate
-partitionIndices.foreach { idx =>
-  if (idx < 0) {
-throw new TableException(s"Partitionable sink ${sinkNode.sinkName} 
field " +
-  s"${partitionFields.get(idx)} must be in the schema.")
-  }
-}
+if (sinkNode.catalogTable != null && sinkNode.catalogTable.isPartitioned) {
+  sinkNode.sink match {
+case partitionSink: PartitionableTableSink =>
+  val partKeys = sinkNode.catalogTable.getPartitionKeys
+  if (!partKeys.isEmpty) {
+val partitionIndices =
+  
partKeys.map(partitionSink.getTableSchema.getFieldNames.indexOf(_))
+// validate
+partitionIndices.foreach { idx =>
+  if (idx < 0) {
+throw new TableException(s"Partitionable sink 
${sinkNode.sinkName} field " +
+s"${partKeys.get(idx)} must be in the schema.")
+  }
+}
 
-requiredTraitSet = requiredTraitSet.plus(
-  FlinkRelDistribution.hash(partitionIndices
-.map(Integer.valueOf), requireStrict = false))
+requiredTraitSet = requiredTraitSet.plus(
 
 Review comment:
   move this into `if (partitionSink.configurePartitionGrouping(true)) {`




[GitHub] [flink] KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] 
Partition field names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#discussion_r338906446
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchExecSinkRule.scala
 ##
 @@ -44,32 +42,36 @@ class BatchExecSinkRule extends ConverterRule(
 val sinkNode = rel.asInstanceOf[FlinkLogicalSink]
 val newTrait = rel.getTraitSet.replace(FlinkConventions.BATCH_PHYSICAL)
 var requiredTraitSet = 
sinkNode.getInput.getTraitSet.replace(FlinkConventions.BATCH_PHYSICAL)
-sinkNode.sink match {
-  case partitionSink: PartitionableTableSink
-if partitionSink.getPartitionFieldNames != null &&
-  partitionSink.getPartitionFieldNames.nonEmpty =>
-val partitionFields = partitionSink.getPartitionFieldNames
-val partitionIndices = partitionFields
-  .map(partitionSink.getTableSchema.getFieldNames.indexOf(_))
-// validate
-partitionIndices.foreach { idx =>
-  if (idx < 0) {
-throw new TableException(s"Partitionable sink ${sinkNode.sinkName} 
field " +
-  s"${partitionFields.get(idx)} must be in the schema.")
-  }
-}
+if (sinkNode.catalogTable != null && sinkNode.catalogTable.isPartitioned) {
+  sinkNode.sink match {
+case partitionSink: PartitionableTableSink =>
+  val partKeys = sinkNode.catalogTable.getPartitionKeys
+  if (!partKeys.isEmpty) {
+val partitionIndices =
+  
partKeys.map(partitionSink.getTableSchema.getFieldNames.indexOf(_))
+// validate
+partitionIndices.foreach { idx =>
+  if (idx < 0) {
+throw new TableException(s"Partitionable sink 
${sinkNode.sinkName} field " +
+s"${partKeys.get(idx)} must be in the schema.")
+  }
+}
 
-requiredTraitSet = requiredTraitSet.plus(
-  FlinkRelDistribution.hash(partitionIndices
-.map(Integer.valueOf), requireStrict = false))
+requiredTraitSet = requiredTraitSet.plus(
+  FlinkRelDistribution.hash(partitionIndices
+  .map(Integer.valueOf), requireStrict = false))
 
-if (partitionSink.configurePartitionGrouping(true)) {
-  // default to asc.
-  val fieldCollations = 
partitionIndices.map(FlinkRelOptUtil.ofRelFieldCollation)
-  requiredTraitSet = 
requiredTraitSet.plus(RelCollations.of(fieldCollations: _*))
-}
-  case _ =>
+if (partitionSink.configurePartitionGrouping(true)) {
+  // default to asc.
+  val fieldCollations = 
partitionIndices.map(FlinkRelOptUtil.ofRelFieldCollation)
+  requiredTraitSet = 
requiredTraitSet.plus(RelCollations.of(fieldCollations: _*))
+}
+  }
+case _ => throw new TableException(
+  s"Table(${sinkNode.sinkName}) with partition keys should be a 
PartitionableTableSink.")
 
 Review comment:
   We need PartitionableTableSink to write data to partitioned table: $tableName




[GitHub] [flink] KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] 
Partition field names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#discussion_r338907002
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchExecSinkRule.scala
 ##
 @@ -44,32 +42,36 @@ class BatchExecSinkRule extends ConverterRule(
 val sinkNode = rel.asInstanceOf[FlinkLogicalSink]
 val newTrait = rel.getTraitSet.replace(FlinkConventions.BATCH_PHYSICAL)
 var requiredTraitSet = 
sinkNode.getInput.getTraitSet.replace(FlinkConventions.BATCH_PHYSICAL)
-sinkNode.sink match {
-  case partitionSink: PartitionableTableSink
-if partitionSink.getPartitionFieldNames != null &&
-  partitionSink.getPartitionFieldNames.nonEmpty =>
-val partitionFields = partitionSink.getPartitionFieldNames
-val partitionIndices = partitionFields
-  .map(partitionSink.getTableSchema.getFieldNames.indexOf(_))
-// validate
-partitionIndices.foreach { idx =>
-  if (idx < 0) {
-throw new TableException(s"Partitionable sink ${sinkNode.sinkName} 
field " +
-  s"${partitionFields.get(idx)} must be in the schema.")
-  }
-}
+if (sinkNode.catalogTable != null && sinkNode.catalogTable.isPartitioned) {
+  sinkNode.sink match {
+case partitionSink: PartitionableTableSink =>
+  val partKeys = sinkNode.catalogTable.getPartitionKeys
+  if (!partKeys.isEmpty) {
 
 Review comment:
   we can assert the partition keys are non-empty?




[GitHub] [flink] KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] 
Partition field names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#discussion_r338907291
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchExecSinkRule.scala
 ##
 @@ -44,32 +42,36 @@ class BatchExecSinkRule extends ConverterRule(
 val sinkNode = rel.asInstanceOf[FlinkLogicalSink]
 val newTrait = rel.getTraitSet.replace(FlinkConventions.BATCH_PHYSICAL)
 var requiredTraitSet = 
sinkNode.getInput.getTraitSet.replace(FlinkConventions.BATCH_PHYSICAL)
-sinkNode.sink match {
-  case partitionSink: PartitionableTableSink
-if partitionSink.getPartitionFieldNames != null &&
-  partitionSink.getPartitionFieldNames.nonEmpty =>
-val partitionFields = partitionSink.getPartitionFieldNames
-val partitionIndices = partitionFields
-  .map(partitionSink.getTableSchema.getFieldNames.indexOf(_))
-// validate
-partitionIndices.foreach { idx =>
-  if (idx < 0) {
-throw new TableException(s"Partitionable sink ${sinkNode.sinkName} 
field " +
-  s"${partitionFields.get(idx)} must be in the schema.")
-  }
-}
+if (sinkNode.catalogTable != null && sinkNode.catalogTable.isPartitioned) {
+  sinkNode.sink match {
+case partitionSink: PartitionableTableSink =>
+  val partKeys = sinkNode.catalogTable.getPartitionKeys
+  if (!partKeys.isEmpty) {
+val partitionIndices =
+  
partKeys.map(partitionSink.getTableSchema.getFieldNames.indexOf(_))
+// validate
 
 Review comment:
   We don't have to validate again here; can we move all the validation logic to 
`TableSinkUtils:validate`?




[GitHub] [flink] KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] 
Partition field names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#discussion_r338907827
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/delegation/PlannerBase.scala
 ##
 @@ -254,28 +252,30 @@ abstract class PlannerBase(
 */
   protected def translateToPlan(execNodes: util.List[ExecNode[_, _]]): 
util.List[Transformation[_]]
 
-  private def getTableSink(objectIdentifier: ObjectIdentifier): 
Option[TableSink[_]] = {
-JavaScalaConversionUtil.toScala(catalogManager.getTable(objectIdentifier)) 
match {
+  private def getTableSink(identifier: ObjectIdentifier): 
Option[(CatalogTable, TableSink[_])] = {
+JavaScalaConversionUtil.toScala(catalogManager.getTable(identifier)) match 
{
   case Some(s) if s.isInstanceOf[ConnectorCatalogTable[_, _]] =>
-JavaScalaConversionUtil
-  .toScala(s.asInstanceOf[ConnectorCatalogTable[_, _]].getTableSink)
+val table = s.asInstanceOf[ConnectorCatalogTable[_, _]]
+JavaScalaConversionUtil.toScala(table.getTableSink) match {
+  case Some(sink) => Some(table, sink)
+  case None => None
+}
 
   case Some(s) if s.isInstanceOf[CatalogTable] =>
-
-val catalog = 
catalogManager.getCatalog(objectIdentifier.getCatalogName)
-val catalogTable = s.asInstanceOf[CatalogTable]
+val catalog = catalogManager.getCatalog(identifier.getCatalogName)
+val table = s.asInstanceOf[CatalogTable]
 if (catalog.isPresent && catalog.get().getTableFactory.isPresent) {
-  val objectPath = objectIdentifier.toObjectPath
+  val objectPath = identifier.toObjectPath
   val sink = TableFactoryUtil.createTableSinkForCatalogTable(
 catalog.get(),
-catalogTable,
+table,
 objectPath)
   if (sink.isPresent) {
-return Option(sink.get())
+return Option(table, sink.get())
   }
 }
-val sinkProperties = catalogTable.toProperties
-Option(TableFactoryService.find(classOf[TableSinkFactory[_]], 
sinkProperties)
+val sinkProperties = table.toProperties
+Option(table, TableFactoryService.find(classOf[TableSinkFactory[_]], 
sinkProperties)
 
 Review comment:
   Could you explain the difference between 
`TableFactoryUtil.createTableSinkForCatalogTable` and 
`TableFactoryService.find(classOf[TableSinkFactory[_]], sinkProperties)`?




[GitHub] [flink] KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] 
Partition field names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#discussion_r338906042
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/PushPartitionIntoTableSourceScanRule.scala
 ##
 @@ -62,85 +60,89 @@ class PushPartitionIntoTableSourceScanRule extends 
RelOptRule(
 val filter: Filter = call.rel(0)
 val scan: LogicalTableScan = call.rel(1)
 val table: FlinkRelOptTable = scan.getTable.asInstanceOf[FlinkRelOptTable]
-pushPartitionIntoScan(call, filter, scan, table)
-  }
 
-  private def pushPartitionIntoScan(
-  call: RelOptRuleCall,
-  filter: Filter,
-  scan: LogicalTableScan,
-  relOptTable: FlinkRelOptTable): Unit = {
-
-val tableSourceTable = relOptTable.unwrap(classOf[TableSourceTable[_]])
-val tableSource = 
tableSourceTable.tableSource.asInstanceOf[PartitionableTableSource]
-val partitionFieldNames = tableSource.getPartitionFieldNames.toList.toArray
-val inputFieldType = filter.getInput.getRowType
-
-val relBuilder = call.builder()
-val maxCnfNodeCount = FlinkRelOptUtil.getMaxCnfNodeCount(scan)
-val (partitionPredicate, nonPartitionPredicate) =
-  RexNodeExtractor.extractPartitionPredicates(
-filter.getCondition,
-maxCnfNodeCount,
+val tableSourceTable = table.unwrap(classOf[TableSourceTable[_]])
+
+if (!tableSourceTable.tableSource.isInstanceOf[PartitionableTableSource]) {
+  throw new TableException(s"Table(${table.getQualifiedName}) with 
partition keys" +
 
 Review comment:
   If the table is partitioned but we have a non-PartitionableTableSource, 
couldn't we just skip partition pruning and read the whole data instead?
   Throwing an exception doesn't seem right.




[GitHub] [flink] KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] 
Partition field names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#discussion_r338905610
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/delegation/PlannerBase.scala
 ##
 @@ -254,28 +252,30 @@ abstract class PlannerBase(
 */
   protected def translateToPlan(execNodes: util.List[ExecNode[_, _]]): 
util.List[Transformation[_]]
 
-  private def getTableSink(objectIdentifier: ObjectIdentifier): 
Option[TableSink[_]] = {
-JavaScalaConversionUtil.toScala(catalogManager.getTable(objectIdentifier)) 
match {
+  private def getTableSink(identifier: ObjectIdentifier): 
Option[(CatalogTable, TableSink[_])] = {
 
 Review comment:
   identifier -> tableIdentifier
   
   




[GitHub] [flink] KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] 
Partition field names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#discussion_r338907546
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/stream/StreamExecSinkRule.scala
 ##
 @@ -42,32 +42,36 @@ class StreamExecSinkRule extends ConverterRule(
 val sinkNode = rel.asInstanceOf[FlinkLogicalSink]
 val newTrait = rel.getTraitSet.replace(FlinkConventions.STREAM_PHYSICAL)
 var requiredTraitSet = 
sinkNode.getInput.getTraitSet.replace(FlinkConventions.STREAM_PHYSICAL)
-sinkNode.sink match {
-  case partitionSink: PartitionableTableSink
-if partitionSink.getPartitionFieldNames != null &&
-  partitionSink.getPartitionFieldNames.nonEmpty =>
-val partitionFields = partitionSink.getPartitionFieldNames
-val partitionIndices = partitionFields
-  .map(partitionSink.getTableSchema.getFieldNames.indexOf(_))
-// validate
-partitionIndices.foreach { idx =>
-  if (idx < 0) {
-throw new TableException(s"Partitionable sink ${sinkNode.sinkName} 
field " +
-  s"${partitionFields.get(idx)} must be in the schema.")
-  }
-}
+if (sinkNode.catalogTable != null && sinkNode.catalogTable.isPartitioned) {
 
 Review comment:
   same comments as BatchExecSinkRule




[jira] [Created] (FLINK-14527) Add integration tests for PostgreSQL and MySQL dialects in flink jdbc module

2019-10-24 Thread Jark Wu (Jira)
Jark Wu created FLINK-14527:
---

 Summary: Add integration tests for PostgreSQL and MySQL dialects 
in flink jdbc module
 Key: FLINK-14527
 URL: https://issues.apache.org/jira/browse/FLINK-14527
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / JDBC
Reporter: Jark Wu


Currently, we already support the PostgreSQL, MySQL, and Derby dialects in 
flink-jdbc as both sink and source. However, we only have integration tests for 
Derby. 

We should add integration tests for the PostgreSQL and MySQL dialects too. Maybe 
we can use the JUnit {{Parameterized}} feature to avoid duplicated testing code.
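
For example, a minimal sketch of such a parameterized ITCase; the class name, 
JDBC URLs, and test body below are invented for illustration and are not the 
actual test code:

{code}
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

// Hypothetical parameterized integration test covering all three dialects.
@RunWith(Parameterized.class)
public class JDBCDialectITCase {

    @Parameterized.Parameters(name = "dialect = {0}")
    public static Collection<Object[]> dialects() {
        return Arrays.asList(new Object[][] {
            {"derby", "jdbc:derby:memory:test;create=true"},
            {"postgresql", "jdbc:postgresql://localhost:5432/test"},
            {"mysql", "jdbc:mysql://localhost:3306/test"}
        });
    }

    private final String dialect;
    private final String dbUrl;

    public JDBCDialectITCase(String dialect, String dbUrl) {
        this.dialect = dialect;
        this.dbUrl = dbUrl;
    }

    @Test
    public void testSinkAndSource() throws Exception {
        // Run the shared read/write assertions against dbUrl here, so the
        // same test body covers every dialect.
    }
}
{code}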





[jira] [Resolved] (FLINK-14524) PostgreSQL JDBC sink generates invalid SQL in upsert mode

2019-10-24 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-14524.
-
Fix Version/s: 1.9.2
   1.10.0
   Resolution: Fixed

1.10.0: 1cefbb8b5d0ba0e687635441c04d1790e33ef76c
1.9.2: 7086a8958a40804a604171483d2f374de236cfdf

> PostgreSQL JDBC sink generates invalid SQL in upsert mode
> -
>
> Key: FLINK-14524
> URL: https://issues.apache.org/jira/browse/FLINK-14524
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0, 1.9.1
>Reporter: Fawad Halim
>Assignee: Fawad Halim
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.9.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The "upsert" query generated for the PostgreSQL dialect is missing a closing 
> parenthesis in the ON CONFLICT clause, causing the INSERT statement to error 
> out with the following error:
>  
> {{ERROR o.a.f.s.runtime.tasks.StreamTask - Error during disposal of stream 
> operator.}}
> {{java.lang.RuntimeException: Writing records to JDBC failed.}}
> {{ at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.checkFlushException(JDBCUpsertOutputFormat.java:135)}}
> {{ at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.close(JDBCUpsertOutputFormat.java:184)}}
> {{ at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertSinkFunction.close(JDBCUpsertSinkFunction.java:61)}}
> {{ at 
> org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:43)}}
> {{ at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.dispose(AbstractUdfStreamOperator.java:117)}}
> {{ at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:585)}}
> {{ at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:484)}}
> {{ at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)}}
> {{ at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)}}
> {{ at java.lang.Thread.run(Thread.java:748)}}
> {{Caused by: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO 
> "public.temperature"("id", "timestamp", "temperature") VALUES ('sensor_17', 
> '2019-10-25 00:39:10-05', 20.27573964210997) ON CONFLICT ("id", "timestamp" 
> DO UPDATE SET "id"=EXCLUDED."id", "timestamp"=EXCLUDED."timestamp", 
> "temperature"=EXCLUDED."temperature" was aborted: ERROR: syntax error at or 
> near "DO"}}
> {{ Position: 119 Call getNextException to see other errors in the batch.}}
> {{ at 
> org.postgresql.jdbc.BatchResultHandler.handleCompletion(BatchResultHandler.java:163)}}
> {{ at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:838)}}
> {{ at 
> org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1546)}}
> {{ at 
> org.apache.flink.api.java.io.jdbc.writer.UpsertWriter$UpsertWriterUsingUpsertStatement.internalExecuteBatch(UpsertWriter.java:177)}}
> {{ at 
> org.apache.flink.api.java.io.jdbc.writer.UpsertWriter.executeBatch(UpsertWriter.java:117)}}
> {{ at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.flush(JDBCUpsertOutputFormat.java:159)}}
> {{ at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.lambda$open$0(JDBCUpsertOutputFormat.java:124)}}
> {{ at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)}}
> {{ at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)}}
> {{ at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)}}
> {{ at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)}}
> {{ at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)}}
> {{ at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)}}
> {{ ... 1 common frames omitted}}
> {{Caused by: org.postgresql.util.PSQLException: ERROR: syntax error at or 
> near "DO"}}
> {{ Position: 119}}
> {{ at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2497)}}
> {{ at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2233)}}
> {{ at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:310)}}
> {{ at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:834)}}
> {{ ... 12 common frames omitted}}
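
For reference, the malformed statement in the stack trace shows the missing 
parenthesis directly: the conflict column list in {{ON CONFLICT ("id", 
"timestamp"}} is never closed. A minimal sketch of the corrected statement, 
using the table and columns from the trace above:

{code}
// Broken (from the stack trace):
//   ... ON CONFLICT ("id", "timestamp" DO UPDATE SET ...
// Fixed: the conflict column list is closed before DO UPDATE.
String upsert =
    "INSERT INTO \"public.temperature\"(\"id\", \"timestamp\", \"temperature\") "
        + "VALUES (?, ?, ?) "
        + "ON CONFLICT (\"id\", \"timestamp\") DO UPDATE SET "
        + "\"id\"=EXCLUDED.\"id\", \"timestamp\"=EXCLUDED.\"timestamp\", "
        + "\"temperature\"=EXCLUDED.\"temperature\"";
{code}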





[GitHub] [flink] flinkbot edited a comment on issue #9990: [FLINK-14524] [flink-jdbc] Correct syntax for PostgreSQL dialect "upsert" statement

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9990: [FLINK-14524] [flink-jdbc] Correct 
syntax for PostgreSQL dialect "upsert" statement
URL: https://github.com/apache/flink/pull/9990#issuecomment-546166373
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 56a7c678cf04569e01bf65807b8bb1ffbe7acdd0 (Fri Oct 25 
06:41:37 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] wuchong merged pull request #9990: [FLINK-14524] [flink-jdbc] Correct syntax for PostgreSQL dialect "upsert" statement

2019-10-24 Thread GitBox
wuchong merged pull request #9990: [FLINK-14524] [flink-jdbc] Correct syntax 
for PostgreSQL dialect "upsert" statement
URL: https://github.com/apache/flink/pull/9990
 
 
   




[jira] [Assigned] (FLINK-14524) PostgreSQL JDBC sink generates invalid SQL in upsert mode

2019-10-24 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-14524:
---

Assignee: Fawad Halim

> PostgreSQL JDBC sink generates invalid SQL in upsert mode
> -
>
> Key: FLINK-14524
> URL: https://issues.apache.org/jira/browse/FLINK-14524
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0, 1.9.1
>Reporter: Fawad Halim
>Assignee: Fawad Halim
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The "upsert" query generated for the PostgreSQL dialect is missing a closing 
> parenthesis in the ON CONFLICT clause, causing the INSERT statement to error 
> out with the following error:
>  
> {{ERROR o.a.f.s.runtime.tasks.StreamTask - Error during disposal of stream 
> operator.}}
> {{java.lang.RuntimeException: Writing records to JDBC failed.}}
> {{ at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.checkFlushException(JDBCUpsertOutputFormat.java:135)}}
> {{ at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.close(JDBCUpsertOutputFormat.java:184)}}
> {{ at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertSinkFunction.close(JDBCUpsertSinkFunction.java:61)}}
> {{ at 
> org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:43)}}
> {{ at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.dispose(AbstractUdfStreamOperator.java:117)}}
> {{ at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:585)}}
> {{ at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:484)}}
> {{ at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)}}
> {{ at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)}}
> {{ at java.lang.Thread.run(Thread.java:748)}}
> {{Caused by: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO 
> "public.temperature"("id", "timestamp", "temperature") VALUES ('sensor_17', 
> '2019-10-25 00:39:10-05', 20.27573964210997) ON CONFLICT ("id", "timestamp" 
> DO UPDATE SET "id"=EXCLUDED."id", "timestamp"=EXCLUDED."timestamp", 
> "temperature"=EXCLUDED."temperature" was aborted: ERROR: syntax error at or 
> near "DO"}}
> {{ Position: 119 Call getNextException to see other errors in the batch.}}
> {{ at 
> org.postgresql.jdbc.BatchResultHandler.handleCompletion(BatchResultHandler.java:163)}}
> {{ at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:838)}}
> {{ at 
> org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1546)}}
> {{ at 
> org.apache.flink.api.java.io.jdbc.writer.UpsertWriter$UpsertWriterUsingUpsertStatement.internalExecuteBatch(UpsertWriter.java:177)}}
> {{ at 
> org.apache.flink.api.java.io.jdbc.writer.UpsertWriter.executeBatch(UpsertWriter.java:117)}}
> {{ at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.flush(JDBCUpsertOutputFormat.java:159)}}
> {{ at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.lambda$open$0(JDBCUpsertOutputFormat.java:124)}}
> {{ at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)}}
> {{ at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)}}
> {{ at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)}}
> {{ at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)}}
> {{ at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)}}
> {{ at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)}}
> {{ ... 1 common frames omitted}}
> {{Caused by: org.postgresql.util.PSQLException: ERROR: syntax error at or 
> near "DO"}}
> {{ Position: 119}}
> {{ at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2497)}}
> {{ at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2233)}}
> {{ at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:310)}}
> {{ at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:834)}}
> {{ ... 12 common frames omitted}}





[GitHub] [flink] flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field 
names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#issuecomment-542593114
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 4c59ed745122aa79a0b561aff6f4ec382cb31211 (Fri Oct 25 
06:34:30 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] 
Partition field names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#discussion_r338903410
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/delegation/PlannerBase.scala
 ##
 @@ -254,28 +252,30 @@ abstract class PlannerBase(
 */
   protected def translateToPlan(execNodes: util.List[ExecNode[_, _]]): 
util.List[Transformation[_]]
 
-  private def getTableSink(objectIdentifier: ObjectIdentifier): 
Option[TableSink[_]] = {
-JavaScalaConversionUtil.toScala(catalogManager.getTable(objectIdentifier)) 
match {
+  private def getTableSink(identifier: ObjectIdentifier): 
Option[(CatalogTable, TableSink[_])] = {
 
 Review comment:
   identifier -> tableIdentifier




[GitHub] [flink] flinkbot commented on issue #9993: [FLINK-14498][runtime]Introduce NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.

2019-10-24 Thread GitBox
flinkbot commented on issue #9993: [FLINK-14498][runtime]Introduce 
NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.
URL: https://github.com/apache/flink/pull/9993#issuecomment-546219804
 
 
   
   ## CI report:
   
   * f47a9f60dff6710ce5a7d5fe341a94d0fffb2d6d : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field 
names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#issuecomment-542593114
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 4c59ed745122aa79a0b561aff6f4ec382cb31211 (Fri Oct 25 
06:19:14 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] 
Partition field names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#discussion_r338900074
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/delegation/PlannerBase.scala
 ##
 @@ -192,7 +192,11 @@ abstract class PlannerBase(
 s"${classOf[OverwritableTableSink].getSimpleName} but actually 
got " +
 sink.getClass.getName)
   }
-  LogicalSink.create(input, sink, 
catalogSink.getTablePath.mkString("."))
+  LogicalSink.create(
+input,
+sink,
+catalogSink.getTablePath.mkString("."),
+
catalogManager.getTable(identifier).get().asInstanceOf[CatalogTable])
 
 Review comment:
   Sorry, I didn't refresh the web.




[jira] [Commented] (FLINK-10435) Client sporadically hangs after Ctrl + C

2019-10-24 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959459#comment-16959459
 ] 

Yang Wang commented on FLINK-10435:
---

It seems that the YARN client hangs when calling `killApplication`. I will try 
to reproduce this corner case. [~gjy], could you share more information on how 
to reproduce it, or does it just happen sporadically?

> Client sporadically hangs after Ctrl + C
> 
>
> Key: FLINK-10435
> URL: https://issues.apache.org/jira/browse/FLINK-10435
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client, Deployment / YARN
>Affects Versions: 1.5.5, 1.6.2, 1.7.0, 1.9.1
>Reporter: Gary Yao
>Priority: Major
>
> When submitting a YARN job cluster in attached mode, the client hangs 
> indefinitely if Ctrl + C is pressed at the right time. One can recover from 
> this by sending SIGKILL.
> *Command to submit job*
> {code}
> HADOOP_CLASSPATH=`hadoop classpath` bin/flink run -m yarn-cluster 
> examples/streaming/WordCount.jar
> {code}
>  
> *Output/Stacktrace*
> {code}
> [hadoop@ip-172-31-45-22 flink-1.5.4]$ HADOOP_CLASSPATH=`hadoop classpath` 
> bin/flink run -m yarn-cluster examples/streaming/WordCount.jar
> Setting HADOOP_CONF_DIR=/etc/hadoop/conf because no HADOOP_CONF_DIR was set.
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/home/hadoop/flink-1.5.4/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 2018-09-26 12:01:04,241 INFO  org.apache.hadoop.yarn.client.RMProxy   
>   - Connecting to ResourceManager at 
> ip-172-31-45-22.eu-central-1.compute.internal/172.31.45.22:8032
> 2018-09-26 12:01:04,386 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli   
>   - No path for the flink jar passed. Using the location of class 
> org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2018-09-26 12:01:04,386 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli   
>   - No path for the flink jar passed. Using the location of class 
> org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2018-09-26 12:01:04,402 WARN  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Neither the 
> HADOOP_CONF_DIR nor the YARN_CONF_DIR environment variable is set. The Flink 
> YARN Client needs one of these to be set to properly load the Hadoop 
> configuration for accessing YARN.
> 2018-09-26 12:01:04,598 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Cluster 
> specification: ClusterSpecification{masterMemoryMB=1024, 
> taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}
> 2018-09-26 12:01:04,972 WARN  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - The 
> configuration directory ('/home/hadoop/flink-1.5.4/conf') contains both LOG4J 
> and Logback configuration files. Please delete or rename one of them.
> 2018-09-26 12:01:07,857 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Submitting 
> application master application_1537944258063_0017
> 2018-09-26 12:01:07,913 INFO  
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted 
> application application_1537944258063_0017
> 2018-09-26 12:01:07,913 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Waiting for 
> the cluster to be allocated
> 2018-09-26 12:01:07,916 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Deploying 
> cluster, current state ACCEPTED
> ^C2018-09-26 12:01:08,851 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Cancelling 
> deployment from Deployment Failure Hook
> 2018-09-26 12:01:08,854 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Killing YARN 
> application
> 
>  The program finished with the following exception:
> org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't 
> deploy Yarn session cluster
>   at 
> org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:410)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:258)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:214)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1025)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1101)
>   at java.security.Acce

[GitHub] [flink] flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field 
names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#issuecomment-542593114
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 4c59ed745122aa79a0b561aff6f4ec382cb31211 (Fri Oct 25 
06:15:10 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] JingsongLi commented on a change in pull request #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
JingsongLi commented on a change in pull request #9909: [FLINK-14381][table] 
Partition field names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#discussion_r338899223
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/catalog/DatabaseCalciteSchema.java
 ##
 @@ -143,7 +143,8 @@ private Table convertConnectorTable(
return new TableSourceTable<>(
tableSource,
isStreamingMode,
-   
FlinkStatistic.builder().tableStats(tableStats).build());
+   
FlinkStatistic.builder().tableStats(tableStats).build(),
+   null);
 
 Review comment:
   Hi Jark, I don't quite understand the expression in this source, but I think 
we need a good name to clarify it. It may not be just the `TableSourceTable` 
that saves them together? I think they are all attributes of the table.




[GitHub] [flink] flinkbot commented on issue #9993: [FLINK-14498][runtime]Introduce NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.

2019-10-24 Thread GitBox
flinkbot commented on issue #9993: [FLINK-14498][runtime]Introduce 
NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.
URL: https://github.com/apache/flink/pull/9993#issuecomment-546216825
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit f47a9f60dff6710ce5a7d5fe341a94d0fffb2d6d (Fri Oct 25 
06:13:35 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9977: [FLINK-14497][python] Support primitive data types in Python user-defined functions

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9977: [FLINK-14497][python] Support 
primitive data types in Python user-defined functions
URL: https://github.com/apache/flink/pull/9977#issuecomment-545444286
 
 
   
   ## CI report:
   
   * 89d33a5aeefd6934261cc185c306c2e55b6c8ad2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133195423)
   * 982528a22bd2cfdfdb367561464079946d158a4a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133219672)
   * e8feba0292bd3bd15320ba91f1ee1ef61aaf540f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133308166)
   * bf9706edba4f0dd6a91d5ea4833cf462c2043cbd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133381176)
   * afc9e2739bcf953d433179c2c48dfb1e1b8f4586 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133490256)
   * e9d91aa2f66e91c3a0377274e48eec8c115b68a7 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133494962)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wsry opened a new pull request #9993: [FLINK-14498][runtime]Introduce NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.

2019-10-24 Thread GitBox
wsry opened a new pull request #9993: [FLINK-14498][runtime]Introduce 
NetworkBufferPool#isAvailable() for interacting with LocalBufferPool.
URL: https://github.com/apache/flink/pull/9993
 
 
   ## What is the purpose of the change
   If the LocalBufferPool cannot request an available buffer from the 
NetworkBufferPool, it waits for 2 seconds before trying to request again in a 
loop. Therefore it can introduce some delays in practice.
   To improve this interaction, we introduce NetworkBufferPool#isAvailable, 
which returns a future that is monitored by the LocalBufferPool. Once there 
are available buffers in the NetworkBufferPool, it completes this future to 
notify the LocalBufferPool immediately. 
   At the same time, blocking exclusive buffer requests to the 
NetworkBufferPool will also be notified immediately when buffers are 
recycled.
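
   For readers outside the PR, a minimal, self-contained sketch (toy classes, 
not the actual Flink implementation) of the availability-future pattern 
described above: the global pool hands out a CompletableFuture that is 
completed whenever segments are recycled, so local pools can register 
callbacks instead of polling in a loop.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;

// Toy stand-in for NetworkBufferPool; names and structure are illustrative only.
class ToyGlobalPool {
	private final Queue<byte[]> segments = new ArrayDeque<>();
	private CompletableFuture<Void> availabilityFuture = new CompletableFuture<>();

	// Completed while segments are available, uncompleted otherwise.
	synchronized CompletableFuture<Void> isAvailable() {
		return availabilityFuture;
	}

	// Returns null when empty; callers then register a callback on isAvailable().
	synchronized byte[] requestSegment() {
		byte[] segment = segments.poll();
		if (segments.isEmpty() && availabilityFuture.isDone()) {
			// No segments left: hand out a fresh, uncompleted future.
			availabilityFuture = new CompletableFuture<>();
		}
		return segment;
	}

	synchronized void recycle(byte[] segment) {
		segments.add(segment);
		// Completing the future notifies every waiting local pool immediately.
		availabilityFuture.complete(null);
	}
}
```
   A local pool would then do something like 
`globalPool.isAvailable().thenRun(this::retryRequest)` instead of sleeping and 
retrying in a loop.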
   
   ## Brief change log
   
- A new method, isAvailable, which returns a CompletableFuture, is added to 
the NetworkBufferPool. The future is completed when there are free segments in 
the pool.
- LocalBufferPool will monitor the future returned by isAvailable and will 
be notified to retry requesting MemorySegments from the NetworkBufferPool when 
the future completes.
- The network buffer pool will maintain the state of buffer availability. The 
future will be transitioned to an uncompleted state when there are no available 
segments and will be completed once there are available segments.
- Two unit tests are added to verify the change.
   
   ## Verifying this change
   
   Two new unit tests are added to verify the change. One covers the newly 
added isAvailable method of NetworkBufferPool, verifying that the buffer 
availability is correctly maintained and the future callback is correctly 
processed. The other covers the interaction between LocalBufferPool and 
NetworkBufferPool, verifying that blocking requests from multiple local buffer 
pools can be fulfilled by segments recycled to the global network buffer pool.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14498) Introduce NetworkBufferPool#isAvailable() for interacting with LocalBufferPool

2019-10-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14498:
---
Labels: pull-request-available  (was: )

> Introduce NetworkBufferPool#isAvailable() for interacting with LocalBufferPool
> --
>
> Key: FLINK-14498
> URL: https://issues.apache.org/jira/browse/FLINK-14498
> Project: Flink
>  Issue Type: Task
>  Components: Runtime / Network
>Reporter: zhijiang
>Assignee: Yingjie Cao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> If the LocalBufferPool cannot request an available buffer from the 
> NetworkBufferPool, it waits for 2 seconds before trying to request again in 
> a loop. Therefore it can introduce some delays in practice.
> To improve this interaction, we could introduce NetworkBufferPool#isAvailable 
> to return a future which would be monitored by the LocalBufferPool. Then once 
> there are available buffers in the NetworkBufferPool, it would complete this 
> future to notify the LocalBufferPool immediately. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9853: [FLINK-13904][checkpointing] Avoid competition of checkpoint triggering

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9853: [FLINK-13904][checkpointing] Avoid 
competition of checkpoint triggering
URL: https://github.com/apache/flink/pull/9853#issuecomment-539472188
 
 
   
   ## CI report:
   
   * b10f5603fc55999adb8d5334a3d142d20322de5b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/130941094)
   * bd7553b9d922e246f7b9b7bafe0d8428a9be6a99 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/131514700)
   * 272536f7c0a89d283e76c187ca29cca2b0951318 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132093266)
   * 9c32b0c5e3c5b4c55cbfce6f8ecb3135d2bd1cb5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132284738)
   * 5183203896305067056e68e8dddc3b413e66788b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132312087)
   * d89e09edeeef4059c3ab5e3b3237603c2adda7f5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133323096)
   * 1a730fed3c2d6de865e3a6150493e8665087df23 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133400668)
   * e21219f05751606aa0ba103896f4dfcdd4b379a6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133491804)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9783: [FLINK-14040][travis] Enable MiniCluster tests based on schedulerNG in Flink cron build

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9783: [FLINK-14040][travis] Enable 
MiniCluster tests based on schedulerNG in Flink cron build
URL: https://github.com/apache/flink/pull/9783#issuecomment-535844880
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 0b61d830609a5453020737c8fd894a4ef9ba883d (Fri Oct 25 
05:55:50 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhuzhurk commented on issue #9783: [FLINK-14040][travis] Enable MiniCluster tests based on schedulerNG in Flink cron build

2019-10-24 Thread GitBox
zhuzhurk commented on issue #9783: [FLINK-14040][travis] Enable MiniCluster 
tests based on schedulerNG in Flink cron build
URL: https://github.com/apache/flink/pull/9783#issuecomment-546212165
 
 
   @zentol shall we merge this PR so that we can use it to verify the upcoming 
scheduler changes? 
   I have rebased it onto the latest master commits and the tests look good.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9952: [FLINK-14321][sql-parser] Support to parse watermark statement in SQL DDL

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9952: [FLINK-14321][sql-parser] Support to 
parse watermark statement in SQL DDL
URL: https://github.com/apache/flink/pull/9952#issuecomment-544436896
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b27e15bfce1229c1b4cdd80a2a8a689561352d2b (Fri Oct 25 
05:48:41 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❗ 3. Needs [attention] from.
   - Needs attention by @danny0405
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #9952: [FLINK-14321][sql-parser] Support to parse watermark statement in SQL DDL

2019-10-24 Thread GitBox
danny0405 commented on a change in pull request #9952: 
[FLINK-14321][sql-parser] Support to parse watermark statement in SQL DDL
URL: https://github.com/apache/flink/pull/9952#discussion_r338894035
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlCreateTable.java
 ##
 @@ -310,9 +322,67 @@ private void printIndent(SqlWriter writer) {
 		public List<SqlNode> columnList = new ArrayList<>();
 		public SqlNodeList primaryKeyList = SqlNodeList.EMPTY;
 		public List<SqlNodeList> uniqueKeysList = new ArrayList<>();
+		@Nullable public SqlWatermark watermark;
 	}
 
 	public String[] fullTableName() {
 		return tableName.names.toArray(new String[0]);
 	}
+
+	// ---------------------------------------------------------------------
+
+	private static final class ColumnValidator {
+
+		private final Set<String> allColumnNames = new HashSet<>();
+
+		/**
+		 * Adds a column name to the registered column set. This adds nested column names recursively.
+		 * Nested column names are qualified using the "." separator.
+		 */
+		public void addColumn(SqlNode column) throws SqlValidateException {
+			String columnName;
+			if (column instanceof SqlTableColumn) {
+				SqlTableColumn tableColumn = (SqlTableColumn) column;
+				columnName = tableColumn.getName().getSimple();
+				addNestedColumn(columnName, tableColumn.getType());
+			} else if (column instanceof SqlBasicCall) {
+				SqlBasicCall tableColumn = (SqlBasicCall) column;
+				columnName = tableColumn.getOperands()[1].toString();
+			} else {
+				throw new UnsupportedOperationException("Unsupported column:" + column);
+			}
+
+			addColumnName(columnName, column.getParserPosition());
+		}
+
+		/**
+		 * Returns true if the column name exists in the registered column set.
+		 * This supports qualified column names using the "." separator.
+		 */
+		public boolean contains(String columnName) {
+			return allColumnNames.contains(columnName);
+		}
+
+		private void addNestedColumn(String columnName, SqlDataTypeSpec columnType) throws SqlValidateException {
+			SqlTypeNameSpec typeName = columnType.getTypeNameSpec();
+			// validate composite type
+			if (typeName instanceof ExtendedSqlRowTypeNameSpec) {
 
 Review comment:
   Well, exactly; given the implementation limitation, supporting only the 
nested Record type seems reasonable.
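
   For illustration, a standalone sketch (hypothetical ToyType and 
NestedColumnRegistry classes, not the code in this PR) of the recursive 
"."-qualified registration that the quoted ColumnValidator performs for 
nested Record types:

```java
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Toy nested type: a field is a leaf when "children" is empty, a record otherwise.
class ToyType {
	final Map<String, ToyType> children = new LinkedHashMap<>();
}

class NestedColumnRegistry {
	private final Set<String> allColumnNames = new HashSet<>();

	// Registers the column and, recursively, every nested column qualified with ".".
	void addColumn(String name, ToyType type) {
		allColumnNames.add(name);
		for (Map.Entry<String, ToyType> child : type.children.entrySet()) {
			addColumn(name + "." + child.getKey(), child.getValue());
		}
	}

	// A declaration like `WATERMARK FOR f0.ts AS ...` validates via this lookup.
	boolean contains(String qualifiedName) {
		return allColumnNames.contains(qualifiedName);
	}
}
```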


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field 
names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#issuecomment-542593114
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 4c59ed745122aa79a0b561aff6f4ec382cb31211 (Fri Oct 25 
05:47:41 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
wuchong commented on a change in pull request #9909: [FLINK-14381][table] 
Partition field names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#discussion_r338893966
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/catalog/DatabaseCalciteSchema.java
 ##
 @@ -143,7 +143,8 @@ private Table convertConnectorTable(
 		return new TableSourceTable<>(
 			tableSource,
 			isStreamingMode,
-			FlinkStatistic.builder().tableStats(tableStats).build());
+			FlinkStatistic.builder().tableStats(tableStats).build(),
+			null);
 
 Review comment:
   > source may also need to know compute columns. It needs to know which 
fields it does not read. (except compute columns expressions)
   
   Actually, we will retain some expressions in the source (for rowtime 
generation), so we would have to add another field in `TableSourceTable` to 
keep this information, say `generatedExpressions`; might this be confused with 
the expressions in `CatalogTable`? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9977: [FLINK-14497][python] Support primitive data types in Python user-defined functions

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9977: [FLINK-14497][python] Support 
primitive data types in Python user-defined functions
URL: https://github.com/apache/flink/pull/9977#issuecomment-545444286
 
 
   
   ## CI report:
   
   * 89d33a5aeefd6934261cc185c306c2e55b6c8ad2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133195423)
   * 982528a22bd2cfdfdb367561464079946d158a4a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133219672)
   * e8feba0292bd3bd15320ba91f1ee1ef61aaf540f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133308166)
   * bf9706edba4f0dd6a91d5ea4833cf462c2043cbd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133381176)
   * afc9e2739bcf953d433179c2c48dfb1e1b8f4586 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133490256)
   * e9d91aa2f66e91c3a0377274e48eec8c115b68a7 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9952: [FLINK-14321][sql-parser] Support to parse watermark statement in SQL DDL

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9952: [FLINK-14321][sql-parser] Support to 
parse watermark statement in SQL DDL
URL: https://github.com/apache/flink/pull/9952#issuecomment-544436896
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b27e15bfce1229c1b4cdd80a2a8a689561352d2b (Fri Oct 25 
05:34:29 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❗ 3. Needs [attention] from.
   - Needs attention by @danny0405
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #9952: [FLINK-14321][sql-parser] Support to parse watermark statement in SQL DDL

2019-10-24 Thread GitBox
wuchong commented on a change in pull request #9952: [FLINK-14321][sql-parser] 
Support to parse watermark statement in SQL DDL
URL: https://github.com/apache/flink/pull/9952#discussion_r338891673
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlCreateTable.java
 ##
 @@ -310,9 +322,67 @@ private void printIndent(SqlWriter writer) {
 		public List<SqlNode> columnList = new ArrayList<>();
 		public SqlNodeList primaryKeyList = SqlNodeList.EMPTY;
 		public List<SqlNodeList> uniqueKeysList = new ArrayList<>();
+		@Nullable public SqlWatermark watermark;
 	}
 
 	public String[] fullTableName() {
 		return tableName.names.toArray(new String[0]);
 	}
+
+	// ---------------------------------------------------------------------
+
+	private static final class ColumnValidator {
+
+		private final Set<String> allColumnNames = new HashSet<>();
+
+		/**
+		 * Adds a column name to the registered column set. This adds nested column names recursively.
+		 * Nested column names are qualified using the "." separator.
+		 */
+		public void addColumn(SqlNode column) throws SqlValidateException {
+			String columnName;
+			if (column instanceof SqlTableColumn) {
+				SqlTableColumn tableColumn = (SqlTableColumn) column;
+				columnName = tableColumn.getName().getSimple();
+				addNestedColumn(columnName, tableColumn.getType());
+			} else if (column instanceof SqlBasicCall) {
+				SqlBasicCall tableColumn = (SqlBasicCall) column;
+				columnName = tableColumn.getOperands()[1].toString();
+			} else {
+				throw new UnsupportedOperationException("Unsupported column:" + column);
+			}
+
+			addColumnName(columnName, column.getParserPosition());
+		}
+
+		/**
+		 * Returns true if the column name exists in the registered column set.
+		 * This supports qualified column names using the "." separator.
+		 */
+		public boolean contains(String columnName) {
+			return allColumnNames.contains(columnName);
+		}
+
+		private void addNestedColumn(String columnName, SqlDataTypeSpec columnType) throws SqlValidateException {
+			SqlTypeNameSpec typeName = columnType.getTypeNameSpec();
+			// validate composite type
+			if (typeName instanceof ExtendedSqlRowTypeNameSpec) {
 
 Review comment:
   I think the main reason is at the implementation level. In the 
implementation, we recognize a time attribute field by a special data type, 
`TimeIndicatorType`. But for an `ArrayType`, we can't say the 3rd element in 
the array is `TimeIndicatorType`; all the elements in the array share the same 
data type, `elementType`. That's why we only support a field reference as the 
time attribute field: we can recognize it by data type. 
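
   To make the constraint concrete, a hypothetical type-model sketch 
(illustrative classes, not Flink's real ones): a row keeps one type object per 
field, so a single field can carry the time-indicator marker, while an array 
has a single shared elementType with no per-element slot.

```java
import java.util.List;

abstract class ToyDataType {}

// Marks a field as the rowtime attribute.
class ToyTimeIndicatorType extends ToyDataType {}

class ToyRowType extends ToyDataType {
	final List<String> fieldNames;
	// One type object per field: any single field can be a ToyTimeIndicatorType.
	final List<ToyDataType> fieldTypes;

	ToyRowType(List<String> fieldNames, List<ToyDataType> fieldTypes) {
		this.fieldNames = fieldNames;
		this.fieldTypes = fieldTypes;
	}
}

class ToyArrayType extends ToyDataType {
	// A single type shared by ALL elements: there is no slot to mark only
	// "the 3rd element" as the rowtime attribute.
	final ToyDataType elementType;

	ToyArrayType(ToyDataType elementType) {
		this.elementType = elementType;
	}
}
```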


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9977: [FLINK-14497][python] Support primitive data types in Python user-defined functions

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9977: [FLINK-14497][python] Support 
primitive data types in Python user-defined functions
URL: https://github.com/apache/flink/pull/9977#issuecomment-545428245
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e9d91aa2f66e91c3a0377274e48eec8c115b68a7 (Fri Oct 25 
05:26:20 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9977: [FLINK-14497][python] Support primitive data types in Python user-defined functions

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9977: [FLINK-14497][python] Support 
primitive data types in Python user-defined functions
URL: https://github.com/apache/flink/pull/9977#issuecomment-545444286
 
 
   
   ## CI report:
   
   * 89d33a5aeefd6934261cc185c306c2e55b6c8ad2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133195423)
   * 982528a22bd2cfdfdb367561464079946d158a4a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133219672)
   * e8feba0292bd3bd15320ba91f1ee1ef61aaf540f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133308166)
   * bf9706edba4f0dd6a91d5ea4833cf462c2043cbd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133381176)
   * afc9e2739bcf953d433179c2c48dfb1e1b8f4586 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133490256)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9853: [FLINK-13904][checkpointing] Avoid competition of checkpoint triggering

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9853: [FLINK-13904][checkpointing] Avoid 
competition of checkpoint triggering
URL: https://github.com/apache/flink/pull/9853#issuecomment-539472188
 
 
   
   ## CI report:
   
   * b10f5603fc55999adb8d5334a3d142d20322de5b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/130941094)
   * bd7553b9d922e246f7b9b7bafe0d8428a9be6a99 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/131514700)
   * 272536f7c0a89d283e76c187ca29cca2b0951318 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132093266)
   * 9c32b0c5e3c5b4c55cbfce6f8ecb3135d2bd1cb5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132284738)
   * 5183203896305067056e68e8dddc3b413e66788b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132312087)
   * d89e09edeeef4059c3ab5e3b3237603c2adda7f5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133323096)
   * 1a730fed3c2d6de865e3a6150493e8665087df23 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133400668)
   * e21219f05751606aa0ba103896f4dfcdd4b379a6 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133491804)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9992: [FLINK-14481][RunTime/Network]change Flink's port check to 0 to 65535

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9992: [FLINK-14481][RunTime/Network]change 
Flink's port check to 0 to 65535
URL: https://github.com/apache/flink/pull/9992#issuecomment-546188093
 
 
   
   ## CI report:
   
   * c63d8e2bc6d7bc16c46940a81857211d8788c1fe : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133488582)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (FLINK-14519) Fail on invoking declineCheckpoint remotely

2019-10-24 Thread Jiayi Liao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Liao resolved FLINK-14519.

Resolution: Duplicate

Sorry for not looking into FLINK-14076 carefully. This is already fixed.
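
For context, a minimal reproduction sketch of the failure mode (toy classes 
standing in for the RocksDB ones, not the actual Flink code): serializing a 
Throwable that holds a non-serializable field fails, which is why such causes 
need to be wrapped (e.g. in a SerializedThrowable) before being sent over RPC.

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;

public class NonSerializableCauseDemo {

	// Stands in for org.rocksdb.Status, which does not implement Serializable.
	static class ToyStatus {}

	static class ToyRocksDBException extends Exception {
		final ToyStatus status = new ToyStatus(); // non-serializable field

		ToyRocksDBException(String message) {
			super(message);
		}
	}

	public static void main(String[] args) throws Exception {
		ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream());
		try {
			out.writeObject(new ToyRocksDBException("checkpoint declined"));
		} catch (NotSerializableException e) {
			// Prints the offending class, mirroring the stack trace below.
			System.out.println(e);
		}
	}
}
{code}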

> Fail on invoking declineCheckpoint remotely
> ---
>
> Key: FLINK-14519
> URL: https://issues.apache.org/jira/browse/FLINK-14519
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.9.1
>Reporter: Jiayi Liao
>Priority: Major
>
> On invoking {{declineCheckpoint}}, the reason field of {{DeclineCheckpoint}} 
> is not serializable when it is a {{RocksDBException}}, because 
> {{org.rocksdb.Status}} doesn't implement {{Serializable}}.
> {panel:title=Exception}
> Caused by: java.io.IOException: Could not serialize 0th argument of method 
> declineCheckpoint. This indicates that the argument type [Ljava.lang.Object; 
> is not serializable. Arguments have to be serializable for remote rpc calls.
> at 
> org.apache.flink.runtime.rpc.messages.RemoteRpcInvocation$MethodInvocation.writeObject(RemoteRpcInvocation.java:186)
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1028)
> at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
> at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
> at 
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
> at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
> at 
> org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:586)
> at 
> org.apache.flink.util.SerializedValue.(SerializedValue.java:52)
> at 
> org.apache.flink.runtime.rpc.messages.RemoteRpcInvocation.(RemoteRpcInvocation.java:53)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.createRpcInvocationMessage(AkkaInvocationHandler.java:264)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.invokeRpc(AkkaInvocationHandler.java:200)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.invoke(AkkaInvocationHandler.java:129)
> at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaInvocationHandler.invoke(FencedAkkaInvocationHandler.java:78)
> ... 18 more
> Caused by: java.io.NotSerializableException: org.rocksdb.Status
> at 
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
> at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
> at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
> at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
> at 
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
> at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
> at 
> java.io.ObjectOutputStream.defaultWriteObject(ObjectOutputStream.java:441)
> at java.lang.Throwable.writeObject(Throwable.java:985)
> at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1028)
> at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
> at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
> at 
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
> at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
> at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
> at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
> at 
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
> at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
> at 
> org.apache.flink.runtime.rpc.messages.RemoteRpcInvocation$MethodInvocation.writeObject(RemoteRpcInvocation.java:182)
> ... 33 more
> {panel}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9986: [FLINK-10933][kubernetes] Implement KubernetesSessionClusterEntrypoint.

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9986: [FLINK-10933][kubernetes] Implement 
KubernetesSessionClusterEntrypoint.
URL: https://github.com/apache/flink/pull/9986#issuecomment-545887333
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 593bf42620faf09c1accbd692494646194e3d574 (Fri Oct 25 
05:03:57 UTC 2019)
   
   **Warnings:**
* **4 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wangyang0918 commented on a change in pull request #9986: [FLINK-10933][kubernetes] Implement KubernetesSessionClusterEntrypoint.

2019-10-24 Thread GitBox
wangyang0918 commented on a change in pull request #9986: 
[FLINK-10933][kubernetes] Implement KubernetesSessionClusterEntrypoint.
URL: https://github.com/apache/flink/pull/9986#discussion_r338886907
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesResourceManager.java
 ##
 @@ -0,0 +1,351 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.ConfigurationUtils;
+import org.apache.flink.configuration.TaskManagerOptions;
+import org.apache.flink.kubernetes.configuration.KubernetesConfigOptions;
+import org.apache.flink.kubernetes.kubeclient.FlinkKubeClient;
+import org.apache.flink.kubernetes.kubeclient.KubeClientFactory;
+import org.apache.flink.kubernetes.kubeclient.TaskManagerPodParameter;
+import org.apache.flink.kubernetes.kubeclient.resources.FlinkPod;
+import org.apache.flink.kubernetes.utils.Constants;
+import org.apache.flink.kubernetes.utils.KubernetesUtils;
+import org.apache.flink.runtime.clusterframework.ApplicationStatus;
+import org.apache.flink.runtime.clusterframework.ContaineredTaskManagerParameters;
+import org.apache.flink.runtime.clusterframework.types.ResourceID;
+import org.apache.flink.runtime.clusterframework.types.ResourceProfile;
+import org.apache.flink.runtime.concurrent.FutureUtils;
+import org.apache.flink.runtime.entrypoint.ClusterInformation;
+import org.apache.flink.runtime.entrypoint.parser.CommandLineOptions;
+import org.apache.flink.runtime.heartbeat.HeartbeatServices;
+import org.apache.flink.runtime.highavailability.HighAvailabilityServices;
+import org.apache.flink.runtime.metrics.groups.ResourceManagerMetricGroup;
+import org.apache.flink.runtime.resourcemanager.JobLeaderIdService;
+import org.apache.flink.runtime.resourcemanager.ResourceManager;
+import org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager;
+import org.apache.flink.runtime.rpc.FatalErrorHandler;
+import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.util.FlinkException;
+import org.apache.flink.util.FlinkRuntimeException;
+import org.apache.flink.util.Preconditions;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.io.File;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.atomic.AtomicLong;
+
+/**
+ * Kubernetes specific implementation of the {@link ResourceManager}.
+ */
+public class KubernetesResourceManager extends ResourceManager<KubernetesWorkerNode>
+	implements FlinkKubeClient.PodCallbackHandler {
+
+	private static final Logger LOG = LoggerFactory.getLogger(KubernetesResourceManager.class);
+
+	public static final String ENV_RESOURCE_ID = "RESOURCE_ID";
+
+	private static final String TASK_MANAGER_PREFIX = "taskmanager";
+
+	private final ConcurrentMap<ResourceID, KubernetesWorkerNode> workerNodeMap;
+
+   private final int numberOfTaskSlots;
+
+   private final int defaultTaskManagerMemoryMB;
+
+   private final double defaultCpus;
+
+	private final Collection<ResourceProfile> slotsPerWorker;
+
+   private final Configuration flinkConfig;
+
+   private final AtomicLong maxPodId = new AtomicLong(0);
+
+   private final String clusterId;
+
+   private FlinkKubeClient kubeClient;
+
+   /** The number of pods requested, but not yet granted. */
+   private int numPendingPodRequests;
+
+   public KubernetesResourceManager(
+   RpcService rpcService,
+   String resourceManagerEndpointId,
+   ResourceID resourceId,
+   Configuration flinkConfig,
+   HighAvailabilityServices highAvailabilityServices,
+   HeartbeatServices heartbeatServices,
+   SlotManager slotManager,
+   JobLeaderIdService jobLeaderIdService,
+   ClusterInformation clusterInformation,
+   Fata

[GitHub] [flink] flinkbot edited a comment on issue #9977: [FLINK-14497][python] Support primitive data types in Python user-defined functions

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9977: [FLINK-14497][python] Support 
primitive data types in Python user-defined functions
URL: https://github.com/apache/flink/pull/9977#issuecomment-545444286
 
 
   
   ## CI report:
   
   * 89d33a5aeefd6934261cc185c306c2e55b6c8ad2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133195423)
   * 982528a22bd2cfdfdb367561464079946d158a4a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133219672)
   * e8feba0292bd3bd15320ba91f1ee1ef61aaf540f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133308166)
   * bf9706edba4f0dd6a91d5ea4833cf462c2043cbd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133381176)
   * afc9e2739bcf953d433179c2c48dfb1e1b8f4586 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133490256)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field 
names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#issuecomment-542593114
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 4c59ed745122aa79a0b561aff6f4ec382cb31211 (Fri Oct 25 
04:47:37 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
JingsongLi commented on a change in pull request #9909: [FLINK-14381][table] 
Partition field names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#discussion_r338884634
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/delegation/PlannerBase.scala
 ##
 @@ -192,7 +192,11 @@ abstract class PlannerBase(
 s"${classOf[OverwritableTableSink].getSimpleName} but actually 
got " +
 sink.getClass.getName)
   }
-  LogicalSink.create(input, sink, 
catalogSink.getTablePath.mkString("."))
+  LogicalSink.create(
+input,
+sink,
+catalogSink.getTablePath.mkString("."),
+
catalogManager.getTable(identifier).get().asInstanceOf[CatalogTable])
 
 Review comment:
   Can you check again? I have modified it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field 
names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#issuecomment-542617499
 
 
   
   ## CI report:
   
   * f20e67a260971f17d77c7ecc7c55b04136e432c5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132123750)
   * 3c1480f11f084fb2d24618cb8d8f1b992c955f96 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133312035)
   * e3ba6b1f3e6439e4ecbc62a56b778c58b06a2a67 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133325977)
   * e5c058685d595bd711f3d8f50060a3db136e5ae1 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133357676)
   * 4c59ed745122aa79a0b561aff6f4ec382cb31211 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133486511)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9894: [FLINK-14342][table][python] Remove method FunctionDefinition#getLanguage.

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9894: [FLINK-14342][table][python] Remove 
method FunctionDefinition#getLanguage.
URL: https://github.com/apache/flink/pull/9894#issuecomment-541559889
 
 
   
   ## CI report:
   
   * 89595e0af27e0ae61f0fcd956ec422f203a3ab95 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/131751296)
   * b1aed08d2bbeaf73f6dc2a293f912b368319531b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/131758748)
   * e9c95e6f51ff45b69df5d248782921aa8e6edaab : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132077078)
   * 9c05be8a14dbf6b31fd24f6114be36e1adec3b3a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132082371)
   * a27700b7e163407788a4aac0d66d18d0c0efdefc : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133308152)
   * 8ee158ce22c7f49e0da88cc088261959d5de62e5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133486496)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9853: [FLINK-13904][checkpointing] Avoid competition of checkpoint triggering

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9853: [FLINK-13904][checkpointing] Avoid 
competition of checkpoint triggering
URL: https://github.com/apache/flink/pull/9853#issuecomment-539472188
 
 
   
   ## CI report:
   
   * b10f5603fc55999adb8d5334a3d142d20322de5b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/130941094)
   * bd7553b9d922e246f7b9b7bafe0d8428a9be6a99 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/131514700)
   * 272536f7c0a89d283e76c187ca29cca2b0951318 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132093266)
   * 9c32b0c5e3c5b4c55cbfce6f8ecb3135d2bd1cb5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132284738)
   * 5183203896305067056e68e8dddc3b413e66788b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132312087)
   * d89e09edeeef4059c3ab5e3b3237603c2adda7f5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133323096)
   * 1a730fed3c2d6de865e3a6150493e8665087df23 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133400668)
   * e21219f05751606aa0ba103896f4dfcdd4b379a6 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #9854: [FLINK-14230][task] Change the endInput call of the downstream operator to after the upstream operator closes

2019-10-24 Thread GitBox
zhijiangW commented on a change in pull request #9854: [FLINK-14230][task] 
Change the endInput call of the downstream operator to after the upstream 
operator closes
URL: https://github.com/apache/flink/pull/9854#discussion_r338863841
 
 

 ##
 File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java
 ##
 @@ -1023,44 +999,6 @@ public void asyncInvoke(IN input, ResultFuture 
resultFuture) throws Excepti
return in.transform("async wait operator", outTypeInfo, 
factory);
}
 
-   /**
-* Delay a while before async invocation to check whether end input 
waits for all elements finished or not.
-*/
-   @Test
-   public void testEndInput() throws Exception {
 
 Review comment:
   I am not sure whether we already have tests covering the case that 
`AsyncWaitOperator#close()` waits for the in-flight input to finish. If it is 
not covered yet, we might refactor this test a bit to verify that, because it 
is exactly the case that this PR wants to fix in practice.
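   As a side note for readers following the thread, the property in question 
can be stated in a few lines of plain Java. The sketch below only illustrates 
the intended semantics; the class and method names are hypothetical and this 
is not the real `AsyncWaitOperator` or its test harness.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for an operator with one in-flight async invocation.
public class CloseWaitsForInflightSketch {

    private final CompletableFuture<Void> inflight = new CompletableFuture<>();

    // Simulates asyncInvoke(): the result arrives later on another thread.
    public void asyncInvoke() {
        new Thread(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(100); // artificial processing delay
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            inflight.complete(null);
        }).start();
    }

    // close() joins on the in-flight future, so pending results are not lost
    // when the operator shuts down.
    public void close() {
        inflight.join();
    }

    public static void main(String[] args) {
        CloseWaitsForInflightSketch op = new CloseWaitsForInflightSketch();
        op.asyncInvoke();
        op.close(); // returns only after the delayed invocation has completed
        System.out.println("closed after in-flight work finished");
    }
}
```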


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-14520) Could not find a suitable table factory, but Factory is available

2019-10-24 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959425#comment-16959425
 ] 

Jark Wu edited comment on FLINK-14520 at 10/25/19 4:37 AM:
---

Hi [~mserranom], {{CSV}} doesn't work on filesystem, {{OldCsv}} should work, 
but you should declare the schema on OldCsv again because {{deriveSchema}} 
doesn't work in 1.9. Please try this:


{code:java}
tableEnv
.connect(new FileSystem().path("file://./data.json"))
.withSchema(new Schema().field("f0", Types.LONG()))
.withFormat(new OldCsv().field("f0", Types.LONG()))
.inAppendMode()
.registerTableSink("sink");
{code}



was (Author: jark):
Hi [~mserranom], {{CSV}} doesn't work, {{OldCsv}} should work, but you should 
declare the schema on OldCsv again because {{deriveSchema}} doesn't work in 
1.9. Please try this:


{code:java}
tableEnv
.connect(new FileSystem().path("file://./data.json"))
.withSchema(new Schema().field("f0", Types.LONG()))
.withFormat(new OldCsv().field("f0", Types.LONG()))
.inAppendMode()
.registerTableSink("sink");
{code}


> Could not find a suitable table factory, but Factory is available
> -
>
> Key: FLINK-14520
> URL: https://issues.apache.org/jira/browse/FLINK-14520
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0, 1.9.1
> Environment: MacOS 10.14.5 and Ubuntu 16.10
>Reporter: Miguel Serrano
>Priority: Major
> Attachments: example.zip
>
>
> *Description*
> Flink can't find JSON table factory. 
> {color:#24292e}JsonRowFormatFactory{color} is considered but won't match 
> properties.
>  
> gist with code and error: 
> [https://gist.github.com/mserranom/4b2e0088b6000b892c38bd7f93d4fe73]
> Attached is a zip file for reproduction.
>  
> *Error message excerpt*
> {code:java}
> org.apache.flink.table.api.TableException: findAndCreateTableSink failed.
> at 
> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSink(TableFactoryUtil.java:87)
> at 
> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSink(TableFactoryUtil.java:77)
> ...
> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could 
> not find a suitable table factory for 
> 'org.apache.flink.table.factories.TableSinkFactory' in
> the classpath.
> ...
> The following properties are requested:
> connector.path=file://./data.json
> connector.property-version=1
> connector.type=filesystem
> format.derive-schema=true
> format.fail-on-missing-field=false
> format.property-version=1
> format.type=json
> schema.0.name=f0
> schema.0.type=BIGINT
> update-mode=append
> ...
> The following factories have been considered:
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> ...
> {code}
> *Code*
> {code:java}
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> EnvironmentSettings settings =
> 
> EnvironmentSettings.newInstance().useOldPlanner().inStreamingMode().build();
> StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, 
> settings);
> DataStreamSource<Long> stream = env.fromElements(1L, 21L, 22L);
>
> Table table = tableEnv.fromDataStream(stream);
> tableEnv.registerTable("data", table);
> tableEnv
> .connect(new FileSystem().path("file://./data.json"))
> .withSchema(new Schema().field("f0", Types.LONG))
> .withFormat(new Json().failOnMissingField(false).deriveSchema())
> .inAppendMode()
> .registerTableSink("sink");
> env.execute();
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9952: [FLINK-14321][sql-parser] Support to parse watermark statement in SQL DDL

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9952: [FLINK-14321][sql-parser] Support to 
parse watermark statement in SQL DDL
URL: https://github.com/apache/flink/pull/9952#issuecomment-544436896
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b27e15bfce1229c1b4cdd80a2a8a689561352d2b (Fri Oct 25 
04:38:26 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❗ 3. Needs [attention] from.
   - Needs attention by @danny0405
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #9952: [FLINK-14321][sql-parser] Support to parse watermark statement in SQL DDL

2019-10-24 Thread GitBox
danny0405 commented on a change in pull request #9952: 
[FLINK-14321][sql-parser] Support to parse watermark statement in SQL DDL
URL: https://github.com/apache/flink/pull/9952#discussion_r338883419
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/ddl/SqlCreateTable.java
 ##
 @@ -310,9 +322,67 @@ private void printIndent(SqlWriter writer) {
public List<SqlNode> columnList = new ArrayList<>();
public SqlNodeList primaryKeyList = SqlNodeList.EMPTY;
public List<SqlNodeList> uniqueKeysList = new ArrayList<>();
+   @Nullable public SqlWatermark watermark;
}
 
public String[] fullTableName() {
return tableName.names.toArray(new String[0]);
}
+
+   // 
-
+
+   private static final class ColumnValidator {
+
+   private final Set<String> allColumnNames = new HashSet<>();
+
+   /**
+* Adds a column name to the registered column set. This will add 
nested column names recursively.
+* Nested column names are qualified using the "." separator.
+*/
+   public void addColumn(SqlNode column) throws 
SqlValidateException {
+   String columnName;
+   if (column instanceof SqlTableColumn) {
+   SqlTableColumn tableColumn = (SqlTableColumn) 
column;
+   columnName = tableColumn.getName().getSimple();
+   addNestedColumn(columnName, 
tableColumn.getType());
+   } else if (column instanceof SqlBasicCall) {
+   SqlBasicCall tableColumn = (SqlBasicCall) 
column;
+   columnName = 
tableColumn.getOperands()[1].toString();
+   } else {
+   throw new 
UnsupportedOperationException("Unsupported column:" + column);
+   }
+
+   addColumnName(columnName, column.getParserPosition());
+   }
+
+   /**
+* Returns true if the column name exists in the registered column set.
+* This supports qualified column names using the "." separator.
+*/
+   public boolean contains(String columnName) {
+   return allColumnNames.contains(columnName);
+   }
+
+   private void addNestedColumn(String columnName, SqlDataTypeSpec 
columnType) throws SqlValidateException {
+   SqlTypeNameSpec typeName = columnType.getTypeNameSpec();
+   // validate composite type
+   if (typeName instanceof ExtendedSqlRowTypeNameSpec) {
 
 Review comment:
   For "Does not exist", do you mean the value is null ? A record type field 
can also have null value.
   
   BTW, the user did knows that if the `array[1]` exists, i don't think we 
forbidden the syntax just for a "safe" reason. Actually, in the standard SQL, 
`ARRAY` type has a fixed length, that means, its structure is fixed.
   
   For `MAP` type, i think it's equivalent with a Java POJO or record, we 
support the record but not the map, that does not make sense to me.
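   As background for the validation being discussed here: nested row fields 
can be registered under dot-qualified names (e.g. `ts.q1.t`) so that a 
watermark declaration is checked against the full set. Below is a 
self-contained toy sketch of that idea; the type model and names are 
illustrative only, not the PR's actual `ColumnValidator`.

```java
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class NestedColumnSketch {

    // Toy type model: a leaf column maps to null, a ROW column maps to its
    // own field map. This stands in for the real SQL type spec.
    static void addColumn(String name, Map<String, Object> nestedFields,
                          String prefix, Set<String> allColumnNames) {
        String fullName = prefix.isEmpty() ? name : prefix + "." + name;
        allColumnNames.add(fullName);
        if (nestedFields != null) {
            for (Map.Entry<String, Object> e : nestedFields.entrySet()) {
                @SuppressWarnings("unchecked")
                Map<String, Object> child = (Map<String, Object>) e.getValue();
                addColumn(e.getKey(), child, fullName, allColumnNames);
            }
        }
    }

    public static void main(String[] args) {
        // Encodes: ts ROW< q1 ROW< t TIMESTAMP > >  (toy encoding)
        Map<String, Object> inner = new LinkedHashMap<>();
        inner.put("t", null);
        Map<String, Object> outer = new LinkedHashMap<>();
        outer.put("q1", inner);

        Set<String> all = new HashSet<>();
        addColumn("ts", outer, "", all);

        // A watermark referencing "ts.q1.t" validates; "ts.q1.x" would not.
        System.out.println(all.contains("ts.q1.t")); // true
        System.out.println(all.contains("ts.q1.x")); // false
    }
}
```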


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14520) Could not find a suitable table factory, but Factory is available

2019-10-24 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959425#comment-16959425
 ] 

Jark Wu commented on FLINK-14520:
-

Hi [~mserranom], {{CSV}} doesn't work, {{OldCsv}} should work, but you should 
declare the schema on OldCsv again because {{deriveSchema}} doesn't work in 
1.9. Please try this:


{code:java}
tableEnv
.connect(new FileSystem().path("file://./data.json"))
.withSchema(new Schema().field("f0", Types.LONG()))
.withFormat(new OldCsv().field("f0", Types.LONG()))
.inAppendMode()
.registerTableSink("sink");
{code}


> Could not find a suitable table factory, but Factory is available
> -
>
> Key: FLINK-14520
> URL: https://issues.apache.org/jira/browse/FLINK-14520
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0, 1.9.1
> Environment: MacOS 10.14.5 and Ubuntu 16.10
>Reporter: Miguel Serrano
>Priority: Major
> Attachments: example.zip
>
>
> *Description*
> Flink can't find JSON table factory. 
> {color:#24292e}JsonRowFormatFactory{color} is considered but won't match 
> properties.
>  
> gist with code and error: 
> [https://gist.github.com/mserranom/4b2e0088b6000b892c38bd7f93d4fe73]
> Attached is a zip file for reproduction.
>  
> *Error message excerpt*
> {code:java}
> org.apache.flink.table.api.TableException: findAndCreateTableSink failed.
> at 
> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSink(TableFactoryUtil.java:87)
> at 
> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSink(TableFactoryUtil.java:77)
> ...
> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could 
> not find a suitable table factory for 
> 'org.apache.flink.table.factories.TableSinkFactory' in
> the classpath.
> ...
> The following properties are requested:
> connector.path=file://./data.json
> connector.property-version=1
> connector.type=filesystem
> format.derive-schema=true
> format.fail-on-missing-field=false
> format.property-version=1
> format.type=json
> schema.0.name=f0
> schema.0.type=BIGINT
> update-mode=append
> ...
> The following factories have been considered:
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> ...
> {code}
> *Code*
> {code:java}
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> EnvironmentSettings settings =
> 
> EnvironmentSettings.newInstance().useOldPlanner().inStreamingMode().build();
> StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, 
> settings);
> DataStreamSource<Long> stream = env.fromElements(1L, 21L, 22L);
>
> Table table = tableEnv.fromDataStream(stream);
> tableEnv.registerTable("data", table);
> tableEnv
> .connect(new FileSystem().path("file://./data.json"))
> .withSchema(new Schema().field("f0", Types.LONG))
> .withFormat(new Json().failOnMissingField(false).deriveSchema())
> .inAppendMode()
> .registerTableSink("sink");
> env.execute();
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field 
names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#issuecomment-542593114
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 4c59ed745122aa79a0b561aff6f4ec382cb31211 (Fri Oct 25 
04:36:25 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
KurtYoung commented on a change in pull request #9909: [FLINK-14381][table] 
Partition field names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#discussion_r338883013
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/delegation/PlannerBase.scala
 ##
 @@ -192,7 +192,11 @@ abstract class PlannerBase(
 s"${classOf[OverwritableTableSink].getSimpleName} but actually 
got " +
 sink.getClass.getName)
   }
-  LogicalSink.create(input, sink, 
catalogSink.getTablePath.mkString("."))
+  LogicalSink.create(
+input,
+sink,
+catalogSink.getTablePath.mkString("."),
+
catalogManager.getTable(identifier).get().asInstanceOf[CatalogTable])
 
 Review comment:
   You didn't resolve this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9853: [FLINK-13904][checkpointing] Avoid competition of checkpoint triggering

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9853: [FLINK-13904][checkpointing] Avoid 
competition of checkpoint triggering
URL: https://github.com/apache/flink/pull/9853#issuecomment-539467530
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e21219f05751606aa0ba103896f4dfcdd4b379a6 (Fri Oct 25 
04:29:17 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❗ 3. Needs [attention] from.
   - Needs attention by @pnowojski [committer]
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] ifndef-SleePy commented on issue #9853: [FLINK-13904][checkpointing] Avoid competition of checkpoint triggering

2019-10-24 Thread GitBox
ifndef-SleePy commented on issue #9853: [FLINK-13904][checkpointing] Avoid 
competition of checkpoint triggering
URL: https://github.com/apache/flink/pull/9853#issuecomment-546195770
 
 
   Hi @pnowojski , thanks for reviewing! I have addressed the comments. Could 
you take a look?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9853: [FLINK-13904][checkpointing] Avoid competition of checkpoint triggering

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9853: [FLINK-13904][checkpointing] Avoid 
competition of checkpoint triggering
URL: https://github.com/apache/flink/pull/9853#issuecomment-539467530
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e21219f05751606aa0ba103896f4dfcdd4b379a6 (Fri Oct 25 
04:26:15 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❗ 3. Needs [attention] from.
   - Needs attention by @pnowojski [committer]
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] ifndef-SleePy commented on a change in pull request #9853: [FLINK-13904][checkpointing] Avoid competition of checkpoint triggering

2019-10-24 Thread GitBox
ifndef-SleePy commented on a change in pull request #9853: 
[FLINK-13904][checkpointing] Avoid competition of checkpoint triggering
URL: https://github.com/apache/flink/pull/9853#discussion_r338881552
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java
 ##
 @@ -1466,11 +1477,24 @@ public void testSavepointsAreNotSubsumed() throws 
Exception {
final JobID jid = new JobID();
final long timestamp = System.currentTimeMillis();
 
+   final CountDownLatch checkpointTriggeredLatch1 = new 
CountDownLatch(2);
+   final CountDownLatch checkpointTriggeredLatch2 = new 
CountDownLatch(2);
+   final CountDownLatch checkpointTriggeredLatch3 = new 
CountDownLatch(2);
+   final CountDownLatch checkpointTriggeredLatch4 = new 
CountDownLatch(2);
+   final CountDownLatch checkpointTriggeredLatch5 = new 
CountDownLatch(2);
+   final CoundDownLatchCheckpointConsumers consumers =
+   new CoundDownLatchCheckpointConsumers(
new ArrayDeque<CountDownLatch>() {{
+   add(checkpointTriggeredLatch1);
+   add(checkpointTriggeredLatch2);
+   add(checkpointTriggeredLatch3);
+   add(checkpointTriggeredLatch4);
+   add(checkpointTriggeredLatch5); }});
 
 Review comment:
   That's really a nice suggestion!
   I thought there might be some problems with a manually triggered executor 
for these cases, but after doing some testing I realized all the cases work 
well with it. Moreover, some time-based cases could be simplified a lot this 
way.
   BTW, I kept some "dead code" for now, for example 
`TestingScheduledExecutor` and the changes to 
`SimpleAckingTaskManagerGateway`. They might be useful for a later PR or for others.
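   For anyone unfamiliar with the pattern mentioned above: a manually 
triggered executor queues submitted tasks and runs them only when the test 
says so, which removes timing races from such cases. A minimal sketch under 
assumed names follows; Flink's own test utility is richer than this.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Executor;

// Minimal manually triggered executor for deterministic unit tests.
public class ManualExecutorSketch implements Executor {

    private final Deque<Runnable> queue = new ArrayDeque<>();

    @Override
    public void execute(Runnable command) {
        queue.add(command); // defer instead of running immediately
    }

    // Runs exactly one queued task; the test controls the interleaving.
    public boolean triggerOne() {
        Runnable next = queue.poll();
        if (next == null) {
            return false;
        }
        next.run();
        return true;
    }

    // Drains everything queued so far.
    public void triggerAll() {
        while (triggerOne()) {
            // keep going
        }
    }

    public static void main(String[] args) {
        ManualExecutorSketch executor = new ManualExecutorSketch();
        executor.execute(() -> System.out.println("checkpoint trigger #1"));
        executor.execute(() -> System.out.println("checkpoint trigger #2"));
        // Nothing has run yet; the test decides when:
        executor.triggerAll();
    }
}
```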


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9977: [FLINK-14497][python] Support primitive data types in Python user-defined functions

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9977: [FLINK-14497][python] Support 
primitive data types in Python user-defined functions
URL: https://github.com/apache/flink/pull/9977#issuecomment-545444286
 
 
   
   ## CI report:
   
   * 89d33a5aeefd6934261cc185c306c2e55b6c8ad2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133195423)
   * 982528a22bd2cfdfdb367561464079946d158a4a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133219672)
   * e8feba0292bd3bd15320ba91f1ee1ef61aaf540f : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133308166)
   * bf9706edba4f0dd6a91d5ea4833cf462c2043cbd : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133381176)
   * afc9e2739bcf953d433179c2c48dfb1e1b8f4586 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9853: [FLINK-13904][checkpointing] Avoid competition of checkpoint triggering

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9853: [FLINK-13904][checkpointing] Avoid 
competition of checkpoint triggering
URL: https://github.com/apache/flink/pull/9853#issuecomment-539467530
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e21219f05751606aa0ba103896f4dfcdd4b379a6 (Fri Oct 25 
04:16:06 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❗ 3. Needs [attention] from.
   - Needs attention by @pnowojski [committer]
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (FLINK-14408) In OldPlanner, UDF open method can not be invoke when SQL is optimized

2019-10-24 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-14408.
-
Resolution: Fixed

1.10.0: 12996bd8ef4f83f1c31df06a13aec6512c46999b

> In OldPlanner, UDF open method can not be invoke when SQL is optimized
> --
>
> Key: FLINK-14408
> URL: https://issues.apache.org/jira/browse/FLINK-14408
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.9.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For now, UDF open method can not be invoked when SQL is optimized. For 
> example, a SQL statement as follows:
> {code:java}
> SELECT MyUdf(1) as constantValue FROM MyTable
> {code}
> MyUdf.open can not be invoked in OldPlanner.
> So, we can construct a constantFunctionContext or a constantRuntimeContext 
> and invoke it just like BlinkPlanner.
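
A rough illustration of that idea (every interface and name below is a 
hypothetical sketch, not Flink's actual API): open the UDF eagerly at plan 
time with a context that can only serve constant or default information.

{code:java}
public class ConstantOpenSketch {

    // Hypothetical stand-in for a function context; no runtime exists yet,
    // so only constant/default information can be served during planning.
    interface FunctionContext {
        String getJobParameter(String key, String defaultValue);
    }

    static final FunctionContext CONSTANT_CONTEXT = (key, defaultValue) -> defaultValue;

    abstract static class ScalarFunction {
        void open(FunctionContext context) throws Exception {}
        abstract Object eval(Object... args);
    }

    // Pre-evaluates a call with constant arguments while optimizing,
    // making sure open() is invoked before eval().
    static Object reduceConstantCall(ScalarFunction udf, Object... constantArgs)
            throws Exception {
        udf.open(CONSTANT_CONTEXT);
        return udf.eval(constantArgs);
    }
}
{code}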



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] ifndef-SleePy commented on a change in pull request #9853: [FLINK-13904][checkpointing] Avoid competition of checkpoint triggering

2019-10-24 Thread GitBox
ifndef-SleePy commented on a change in pull request #9853: 
[FLINK-13904][checkpointing] Avoid competition of checkpoint triggering
URL: https://github.com/apache/flink/pull/9853#discussion_r338879934
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
 ##
 @@ -153,8 +155,8 @@
/** A handle to the current periodic trigger, to cancel it when 
necessary. */
private ScheduledFuture<?> currentPeriodicTrigger;
 
-   /** The timestamp (via {@link System#nanoTime()}) when the last 
checkpoint completed. */
-   private long lastCheckpointCompletionNanos;
+   /** The relative timestamp when the next checkpoint could be triggered. 
*/
+   private long earliestRelativeTimeNextCheckpointBeTriggered;
 
 Review comment:
   I think the problem is that this field carries too much meaning, so it's 
not easy to give it a short and clear name. I reverted the field back to 
`lastCheckpointCompletionRelativeTime` and calculate the 
`nextCheckpointTriggerRelativeTime` inside `checkMinPauseBetweenCheckpoints`; 
that is much easier to understand in context. 
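   To make the naming discussion concrete: the coordinator only needs to 
remember, on a monotonic clock, when the last checkpoint completed, and can 
derive the earliest next trigger time on demand. A rough sketch with assumed 
field and method names (not the PR's code):

```java
// Sketch only; field and method names follow the discussion, not the PR.
public class MinPauseSketch {

    private final long minPauseBetweenCheckpointsNanos;
    private long lastCheckpointCompletionRelativeTime; // from System.nanoTime()

    public MinPauseSketch(long minPauseNanos) {
        this.minPauseBetweenCheckpointsNanos = minPauseNanos;
    }

    public void onCheckpointCompleted() {
        lastCheckpointCompletionRelativeTime = System.nanoTime();
    }

    // Derived on demand, so no extra hard-to-name field has to be stored.
    public boolean checkMinPauseBetweenCheckpoints() {
        long nextCheckpointTriggerRelativeTime =
                lastCheckpointCompletionRelativeTime + minPauseBetweenCheckpointsNanos;
        return System.nanoTime() >= nextCheckpointTriggerRelativeTime;
    }
}
```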


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9916: [FLINK-14408][Table-Planner]UDF's open method is invoked to initialize when sql is optimized in oldPlanner

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9916: [FLINK-14408][Table-Planner]UDF's 
open method is invoked to initialize when sql is optimized in oldPlanner
URL: https://github.com/apache/flink/pull/9916#issuecomment-542755222
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7b954e17cb5bc283a18fd218c93e417d59c4cf37 (Fri Oct 25 
04:13:02 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-14408) In OldPlanner, UDF open method can not be invoke when SQL is optimized

2019-10-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14408:
---
Labels: pull-request-available  (was: )

> In OldPlanner, UDF open method can not be invoke when SQL is optimized
> --
>
> Key: FLINK-14408
> URL: https://issues.apache.org/jira/browse/FLINK-14408
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.9.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> For now, UDF open method can not be invoked when SQL is optimized. For 
> example, a SQL statement as follows:
> {code:java}
> SELECT MyUdf(1) as constantValue FROM MyTable
> {code}
> MyUdf.open can not be invoked in OldPlanner.
> So, we can construct a constantFunctionContext or a constantRuntimeContext 
> and invoke it just like BlinkPlanner.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong closed pull request #9916: [FLINK-14408][Table-Planner]UDF's open method is invoked to initialize when sql is optimized in oldPlanner

2019-10-24 Thread GitBox
wuchong closed pull request #9916: [FLINK-14408][Table-Planner]UDF's open 
method is invoked to initialize when sql is optimized in oldPlanner
URL: https://github.com/apache/flink/pull/9916
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-14511) Checking YARN queues should add "root" prefix

2019-10-24 Thread Zhanchun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959404#comment-16959404
 ] 

Zhanchun Zhang edited comment on FLINK-14511 at 10/25/19 4:09 AM:
--

 For fair scheduler, we can use `-yqu root.a.product` and `-yqu a.product` to 
submit applications to queue `root.a.product`.

We cannot use `-yqu product` to submit applications to queue `root.a.product`, 
as queue `product` does not exist; we get an exception like this:

{code:java}
2019-10-25 11:36:02,559 WARN org.apache.flink.yarn.YarnClusterDescriptor - The 
specified queue 'product' does not exist. Available queues: root.a, 
root.a.product, root.default, 
2019-10-25 11:36:03,065 WARN org.apache.flink.yarn.YarnClusterDescriptor - The 
configuration directory ('/home/yarn/flink/conf') contains both LOG4J and 
Logback configuration files. Please delete or rename one of them.
2019-10-25 11:36:04,938 INFO org.apache.flink.yarn.YarnClusterDescriptor - 
Submitting application master application_1571906019943_0102
Error while deploying YARN cluster: Couldn't deploy Yarn session cluster
java.lang.RuntimeException: Couldn't deploy Yarn session cluster
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:389)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:839)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:627)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:624)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:624)
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
application_1571906019943_0102 to YARN : Application rejected by queue 
placement policy, queue product does not exist.
at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:274)
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.startAppMaster(AbstractYarnClusterDescriptor.java:1013)
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployJm(AbstractYarnClusterDescriptor.java:452)
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:418)
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:384)
... 8 more
{code}


If we have a `root.a.product` queue in a YARN cluster with FairScheduler, we 
can use `-yqu root.a.product` and `-yqu a.product` to submit applications to 
the cluster successfully, but cannot use `product`, because that queue does 
not exist.

However, submitting with `-yqu a.product` produces the warning below, because 
the queues returned by the `yarnClient.getAllQueues()` method carry a `root` 
prefix for FairScheduler (like `root.a, root.a.product`) but are plain leaf 
queue names for CapacityScheduler (like `a, product`).

So, I think we should handle the `root` prefix before checking queues.

{code:java}
2019-10-25 12:04:20,711 WARN org.apache.flink.yarn.YarnClusterDescriptor - The 
specified queue 'a.product' does not exist. Available queues: root.a, 
root.a.product, root.default, ...
{code}
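
A minimal sketch of the proposed check (illustrative only, not the actual 
patch): accept a configured queue name if it matches the reported queue name 
either directly or once the `root.` prefix is taken into account.

{code:java}
public class QueueCheckSketch {

    // Accepts a queue if the names match directly or modulo the "root." prefix.
    static boolean queueMatches(String configuredQueue, String reportedQueueName) {
        return reportedQueueName.equals(configuredQueue)
            || reportedQueueName.equals("root." + configuredQueue)
            || ("root." + reportedQueueName).equals(configuredQueue);
    }

    public static void main(String[] args) {
        // FairScheduler reports fully qualified names:
        System.out.println(queueMatches("a.product", "root.a.product")); // true
        // CapacityScheduler reports leaf queue names:
        System.out.println(queueMatches("product", "product"));          // true
        // A non-existent leaf under FairScheduler still fails:
        System.out.println(queueMatches("product", "root.a.product"));   // false
    }
}
{code}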



was (Author: dillon.):
 For fair scheduler, we can use `-yqu root.a.product` and `-yqu a.product` to 
submit applications to queue `root.a.product`.

We cannot use `-yqu product` to submit applications to queue `root.a.product`, 
as queue `product` does not exist; we get an exception like this:

{code:java}
2019-10-25 11:36:02,559 WARN org.apache.flink.yarn.YarnClusterDescriptor - The 
specified queue 'product' does not exist. Available queues: root.a, 
root.a.product, root.default, 
2019-10-25 11:36:03,065 WARN org.apache.flink.yarn.YarnClusterDescriptor - The 
configuration directory ('/home/yarn/flink/1200-test/flink-1.4.2-1200/conf') 
contains both LOG4J and Logback configuration files. Please delete or rename 
one of them.
2019-10-25 11:36:04,938 INFO org.apache.flink.yarn.YarnClusterDescriptor - 
Submitting application master application_1571906019943_0102
Error while deploying YARN cluster: Couldn't deploy Yarn session cluster
java.lang.RuntimeException: Couldn't deploy Yarn session cluster
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:389)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:839)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:627)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.jav

[GitHub] [flink] flinkbot edited a comment on issue #9853: [FLINK-13904][checkpointing] Avoid competition of checkpoint triggering

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9853: [FLINK-13904][checkpointing] Avoid 
competition of checkpoint triggering
URL: https://github.com/apache/flink/pull/9853#issuecomment-539467530
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e21219f05751606aa0ba103896f4dfcdd4b379a6 (Fri Oct 25 
04:09:59 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❗ 3. Needs [attention] from.
   - Needs attention by @pnowojski [committer]
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9992: [FLINK-14481][RunTime/Network]change Flink's port check to 0 to 65535

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9992: [FLINK-14481][RunTime/Network]change 
Flink's port check to 0 to 65535
URL: https://github.com/apache/flink/pull/9992#issuecomment-546188093
 
 
   
   ## CI report:
   
   * c63d8e2bc6d7bc16c46940a81857211d8788c1fe : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133488582)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14511) Checking YARN queues should add "root" prefix

2019-10-24 Thread Zhanchun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959404#comment-16959404
 ] 

Zhanchun Zhang commented on FLINK-14511:


 For fair scheduler, we can use `-yqu root.a.product` and `-yqu a.product` to 
submit applications to queue `root.a.product`.

We cannot use `-yqu product` to submit applications to queue `root.a.product`, 
as queue `product` does not exist; we get an exception like this:

{code:java}
2019-10-25 11:36:02,559 WARN org.apache.flink.yarn.YarnClusterDescriptor - The 
specified queue 'product' does not exist. Available queues: root.a, 
root.a.product, root.default, 
2019-10-25 11:36:03,065 WARN org.apache.flink.yarn.YarnClusterDescriptor - The 
configuration directory ('/home/yarn/flink/1200-test/flink-1.4.2-1200/conf') 
contains both LOG4J and Logback configuration files. Please delete or rename 
one of them.
2019-10-25 11:36:04,938 INFO org.apache.flink.yarn.YarnClusterDescriptor - 
Submitting application master application_1571906019943_0102
Error while deploying YARN cluster: Couldn't deploy Yarn session cluster
java.lang.RuntimeException: Couldn't deploy Yarn session cluster
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:389)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:839)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:627)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:624)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:624)
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
application_1571906019943_0102 to YARN : Application rejected by queue 
placement policy, queue product does not exist.
at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:274)
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.startAppMaster(AbstractYarnClusterDescriptor.java:1013)
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployJm(AbstractYarnClusterDescriptor.java:452)
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:418)
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:384)
... 8 more
{code}


If we have a `root.a.product` queue in a YARN cluster with FairScheduler, we 
can use `-yqu root.a.product` and `-yqu a.product` to submit applications to 
the cluster successfully, but cannot use `product`, because that queue does 
not exist.

However, submitting with `-yqu a.product` produces the warning below, because 
the queues returned by the `yarnClient.getAllQueues()` method carry a `root` 
prefix for FairScheduler (like `root.a, root.a.product`) but are plain leaf 
queue names for CapacityScheduler (like `a, product`).

So, I think we should handle the `root` prefix before checking queues.

{code:java}
2019-10-25 12:04:20,711 WARN org.apache.flink.yarn.YarnClusterDescriptor - The 
specified queue 'a.product' does not exist. Available queues: root.a, 
root.a.product, root.default, ...
{code}


> Checking YARN queues should add "root" prefix
> -
>
> Key: FLINK-14511
> URL: https://issues.apache.org/jira/browse/FLINK-14511
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Reporter: Zhanchun Zhang
>Assignee: Zhanchun Zhang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As we all know, all queues in the YARN cluster are children of the "root" 
> queue. While submitting an application to "root.product" queue with -qu 
> product parameter, the client logs that "The specified queue 'product' does 
> not exist. Available queues". But this queue does exist and we can still 
> submit the application to the YARN cluster, which is confusing for users. So 
> I think that the queue check should add the "root." prefix to the queue name.
> {code:java}
> List<QueueInfo> queues = yarnClient.getAllQueues();
> if (queues.size() > 0 && this.yarnQueue != null) { // check only if there are 
> queues configured in yarn and for this session.
>   boolean queueFound = false;
>   for (QueueInfo queue : queues) {
>   if (queue.getQueueName().equals(this.yarnQueue)) {
>   queueFound = true;
>   break;
>   }
>   }
> 

[GitHub] [flink] flinkbot edited a comment on issue #9986: [FLINK-10933][kubernetes] Implement KubernetesSessionClusterEntrypoint.

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9986: [FLINK-10933][kubernetes] Implement 
KubernetesSessionClusterEntrypoint.
URL: https://github.com/apache/flink/pull/9986#issuecomment-545887333
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 593bf42620faf09c1accbd692494646194e3d574 (Fri Oct 25 
04:04:54 UTC 2019)
   
   **Warnings:**
* **4 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] thousandhu commented on a change in pull request #9986: [FLINK-10933][kubernetes] Implement KubernetesSessionClusterEntrypoint.

2019-10-24 Thread GitBox
thousandhu commented on a change in pull request #9986: 
[FLINK-10933][kubernetes] Implement KubernetesSessionClusterEntrypoint.
URL: https://github.com/apache/flink/pull/9986#discussion_r338878342
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesResourceManager.java
 ##
 @@ -0,0 +1,351 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.ConfigurationUtils;
+import org.apache.flink.configuration.TaskManagerOptions;
+import org.apache.flink.kubernetes.configuration.KubernetesConfigOptions;
+import org.apache.flink.kubernetes.kubeclient.FlinkKubeClient;
+import org.apache.flink.kubernetes.kubeclient.KubeClientFactory;
+import org.apache.flink.kubernetes.kubeclient.TaskManagerPodParameter;
+import org.apache.flink.kubernetes.kubeclient.resources.FlinkPod;
+import org.apache.flink.kubernetes.utils.Constants;
+import org.apache.flink.kubernetes.utils.KubernetesUtils;
+import org.apache.flink.runtime.clusterframework.ApplicationStatus;
+import 
org.apache.flink.runtime.clusterframework.ContaineredTaskManagerParameters;
+import org.apache.flink.runtime.clusterframework.types.ResourceID;
+import org.apache.flink.runtime.clusterframework.types.ResourceProfile;
+import org.apache.flink.runtime.concurrent.FutureUtils;
+import org.apache.flink.runtime.entrypoint.ClusterInformation;
+import org.apache.flink.runtime.entrypoint.parser.CommandLineOptions;
+import org.apache.flink.runtime.heartbeat.HeartbeatServices;
+import org.apache.flink.runtime.highavailability.HighAvailabilityServices;
+import org.apache.flink.runtime.metrics.groups.ResourceManagerMetricGroup;
+import org.apache.flink.runtime.resourcemanager.JobLeaderIdService;
+import org.apache.flink.runtime.resourcemanager.ResourceManager;
+import org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager;
+import org.apache.flink.runtime.rpc.FatalErrorHandler;
+import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.util.FlinkException;
+import org.apache.flink.util.FlinkRuntimeException;
+import org.apache.flink.util.Preconditions;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.io.File;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.atomic.AtomicLong;
+
+/**
+ * Kubernetes specific implementation of the {@link ResourceManager}.
+ */
+public class KubernetesResourceManager extends ResourceManager<KubernetesWorkerNode>
+   implements FlinkKubeClient.PodCallbackHandler {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(KubernetesResourceManager.class);
+
+   public static final String ENV_RESOURCE_ID = "RESOURCE_ID";
+
+   private static final String TASK_MANAGER_PREFIX = "taskmanager";
+
+   private final ConcurrentMap<ResourceID, KubernetesWorkerNode> workerNodeMap;
+
+   private final int numberOfTaskSlots;
+
+   private final int defaultTaskManagerMemoryMB;
+
+   private final double defaultCpus;
+
+   private final Collection<ResourceProfile> slotsPerWorker;
+
+   private final Configuration flinkConfig;
+
+   private final AtomicLong maxPodId = new AtomicLong(0);
+
+   private final String clusterId;
+
+   private FlinkKubeClient kubeClient;
+
+   /** The number of pods requested, but not yet granted. */
+   private int numPendingPodRequests;
+
+   public KubernetesResourceManager(
+   RpcService rpcService,
+   String resourceManagerEndpointId,
+   ResourceID resourceId,
+   Configuration flinkConfig,
+   HighAvailabilityServices highAvailabilityServices,
+   HeartbeatServices heartbeatServices,
+   SlotManager slotManager,
+   JobLeaderIdService jobLeaderIdService,
+   ClusterInformation clusterInformation,
+   FatalE

[GitHub] [flink] flinkbot edited a comment on issue #9876: [FLINK-14134][table] Introduce LimitableTableSource for optimizing limit

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9876: [FLINK-14134][table] Introduce 
LimitableTableSource for optimizing limit
URL: https://github.com/apache/flink/pull/9876#issuecomment-540450322
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c465265044ea264422f0fac4dfd11b3c0b980d19 (Fri Oct 25 
04:02:51 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9894: [FLINK-14342][table][python] Remove method FunctionDefinition#getLanguage.

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9894: [FLINK-14342][table][python] Remove 
method FunctionDefinition#getLanguage.
URL: https://github.com/apache/flink/pull/9894#issuecomment-541555428
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 8ee158ce22c7f49e0da88cc088261959d5de62e5 (Fri Oct 25 
04:00:48 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #9876: [FLINK-14134][table] Introduce LimitableTableSource for optimizing limit

2019-10-24 Thread GitBox
JingsongLi commented on a change in pull request #9876: [FLINK-14134][table] 
Introduce LimitableTableSource for optimizing limit
URL: https://github.com/apache/flink/pull/9876#discussion_r338878083
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/PushLimitIntoTableSourceScanRule.scala
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical
+
+import org.apache.flink.table.api.TableException
+import org.apache.flink.table.plan.stats.TableStats
+import org.apache.flink.table.planner.plan.nodes.logical.{FlinkLogicalSort, 
FlinkLogicalTableSourceScan}
+import org.apache.flink.table.planner.plan.schema.{FlinkRelOptTable, 
TableSourceTable}
+import org.apache.flink.table.planner.plan.stats.FlinkStatistic
+import org.apache.flink.table.sources.LimitableTableSource
+
+import org.apache.calcite.plan.RelOptRule.{none, operand}
+import org.apache.calcite.plan.{RelOptRule, RelOptRuleCall}
+import org.apache.calcite.rel.core.{Sort, TableScan}
+import org.apache.calcite.rex.RexLiteral
+import org.apache.calcite.tools.RelBuilder
+
+/**
+  * Planner rule that tries to push limit into a [[LimitableTableSource]].
+  * The original limit will still be retained.
+  */
+class PushLimitIntoTableSourceScanRule extends RelOptRule(
+  operand(classOf[FlinkLogicalSort],
+operand(classOf[FlinkLogicalTableSourceScan], none)), 
"PushLimitIntoTableSourceScanRule") {
+
+  override def matches(call: RelOptRuleCall): Boolean = {
+val sort = call.rel(0).asInstanceOf[Sort]
+val fetch = sort.fetch
+val offset = sort.offset
+// Only push down a limit whose offset is zero: it is difficult for a
+// source-based push-down to handle a non-zero offset, and a non-zero
+// offset usually appears together with a sort.
+val onlyLimit = sort.getCollation.getFieldCollations.isEmpty &&
+(offset == null || RexLiteral.intValue(offset) == 0) &&
+fetch != null
+
+var supportPushDown = false
+if (onlyLimit) {
+  supportPushDown = call.rel(1).asInstanceOf[TableScan]
+  .getTable.unwrap(classOf[TableSourceTable[_]]) match {
+case table: TableSourceTable[_] =>
+  table.tableSource match {
+case source: LimitableTableSource[_] => !source.isLimitPushedDown
+case _ => false
+  }
+case _ => false
+  }
+}
+supportPushDown
+  }
+
+  override def onMatch(call: RelOptRuleCall): Unit = {
+val sort = call.rel(0).asInstanceOf[Sort]
+val scan = call.rel(1).asInstanceOf[FlinkLogicalTableSourceScan]
+val relOptTable = scan.getTable.asInstanceOf[FlinkRelOptTable]
+val limit = RexLiteral.intValue(sort.fetch)
+val relBuilder = call.builder()
+val newRelOptTable = applyLimit(limit, relOptTable, relBuilder)
+val newScan = scan.copy(scan.getTraitSet, newRelOptTable)
+
+val newTableSource = 
newRelOptTable.unwrap(classOf[TableSourceTable[_]]).tableSource
+val oldTableSource = 
relOptTable.unwrap(classOf[TableSourceTable[_]]).tableSource
+
+if (newTableSource.asInstanceOf[LimitableTableSource[_]].isLimitPushedDown
+&& 
newTableSource.explainSource().equals(oldTableSource.explainSource)) {
+  throw new TableException("Failed to push limit into table source! "
+  + "table source with pushdown capability must override and change "
+  + "explainSource() API to explain the pushdown applied!")
+}
+call.transformTo(newScan)
 
 Review comment:
   Maybe we should really retain the limit. What do you think?
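
   A minimal, hypothetical sketch of what retaining the limit could look like 
(in plain Calcite/Java terms; `sort`, `newScan` and `call` as in `onMatch` 
above; this is an illustration, not the actual implementation):

   // Keep the original Sort (and with it the limit/fetch) on top of the
   // new scan, instead of replacing the whole Sort-plus-Scan subtree.
   RelNode retainedSort = sort.copy(
   sort.getTraitSet(),
   java.util.Collections.singletonList((RelNode) newScan));
   call.transformTo(retainedSort);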


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 commented on issue #9894: [FLINK-14342][table][python] Remove method FunctionDefinition#getLanguage.

2019-10-24 Thread GitBox
hequn8128 commented on issue #9894: [FLINK-14342][table][python] Remove method 
FunctionDefinition#getLanguage.
URL: https://github.com/apache/flink/pull/9894#issuecomment-546191130
 
 
   Merging...
   Thank you all for your review and update. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14519) Fail on invoking declineCheckpoint remotely

2019-10-24 Thread Jiayi Liao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959402#comment-16959402
 ] 

Jiayi Liao commented on FLINK-14519:


The problem is a bit weird because {{DeclineCheckpoint}} already wraps exceptions 
that can't be serialized into a {{SerializedThrowable}} object. But that doesn't 
seem to work for the {{RocksDBException}}; I'm going to add more logs to look 
into it.
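
For reference, a minimal sketch of the defensive wrapping involved (the helper 
name is hypothetical, and the package of {{SerializedThrowable}} is an 
assumption):

{code:java}
import org.apache.flink.runtime.util.SerializedThrowable; // package is an assumption

class DeclineReasonUtil {
    // Hypothetical helper: wrap a possibly non-serializable failure cause
    // (e.g. a RocksDBException holding org.rocksdb.Status) so the decline
    // reason can always cross the RPC serialization boundary.
    static Throwable asSerializable(Throwable reason) {
        return (reason instanceof SerializedThrowable)
            ? reason
            : new SerializedThrowable(reason);
    }
}
{code}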

> Fail on invoking declineCheckpoint remotely
> ---
>
> Key: FLINK-14519
> URL: https://issues.apache.org/jira/browse/FLINK-14519
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.9.1
>Reporter: Jiayi Liao
>Priority: Major
>
> On invoking {{declineCheckpoint}}, the reason field of {{DeclineCheckpoint}} 
> is not serializable when it is a {{RocksDBException}}, because 
> {{org.rocksdb.Status}} doesn't implement {{Serializable}}.
> {panel:title=Exception}
> Caused by: java.io.IOException: Could not serialize 0th argument of method 
> declineCheckpoint. This indicates that the argument type [Ljava.lang.Object; 
> is not serializable. Arguments have to be serializable for remote rpc calls.
> at 
> org.apache.flink.runtime.rpc.messages.RemoteRpcInvocation$MethodInvocation.writeObject(RemoteRpcInvocation.java:186)
> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1028)
> at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
> at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
> at 
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
> at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
> at 
> org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:586)
> at 
> org.apache.flink.util.SerializedValue.(SerializedValue.java:52)
> at 
> org.apache.flink.runtime.rpc.messages.RemoteRpcInvocation.(RemoteRpcInvocation.java:53)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.createRpcInvocationMessage(AkkaInvocationHandler.java:264)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.invokeRpc(AkkaInvocationHandler.java:200)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.invoke(AkkaInvocationHandler.java:129)
> at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaInvocationHandler.invoke(FencedAkkaInvocationHandler.java:78)
> ... 18 more
> Caused by: java.io.NotSerializableException: org.rocksdb.Status
> at 
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
> at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
> at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
> at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
> at 
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
> at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
> at 
> java.io.ObjectOutputStream.defaultWriteObject(ObjectOutputStream.java:441)
> at java.lang.Throwable.writeObject(Throwable.java:985)
> at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1028)
> at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
> at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
> at 
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
> at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
> at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
> at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
> at 
> java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
> at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
> at 
> org.apache.flink.runtime.rpc.messages.RemoteRpcInvocation$MethodInvocation.writeObject(RemoteRpcInvocation.java:182)
> ... 33 more
> {panel}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9977: [FLINK-14497][python] Support primitive data types in Python user-defined functions

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9977: [FLINK-14497][python] Support 
primitive data types in Python user-defined functions
URL: https://github.com/apache/flink/pull/9977#issuecomment-545428245
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit afc9e2739bcf953d433179c2c48dfb1e1b8f4586 (Fri Oct 25 
03:56:43 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] HuangXingBo commented on issue #9977: [FLINK-14497][python] Support primitive data types in Python user-defined functions

2019-10-24 Thread GitBox
HuangXingBo commented on issue #9977: [FLINK-14497][python] Support primitive 
data types in Python user-defined functions
URL: https://github.com/apache/flink/pull/9977#issuecomment-546190451
 
 
   Thanks @hequn8128 @dianfu, I have addressed the comments in the latest push.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field 
names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#issuecomment-542617499
 
 
   
   ## CI report:
   
   * f20e67a260971f17d77c7ecc7c55b04136e432c5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132123750)
   * 3c1480f11f084fb2d24618cb8d8f1b992c955f96 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133312035)
   * e3ba6b1f3e6439e4ecbc62a56b778c58b06a2a67 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133325977)
   * e5c058685d595bd711f3d8f50060a3db136e5ae1 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133357676)
   * 4c59ed745122aa79a0b561aff6f4ec382cb31211 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133486511)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9894: [FLINK-14342][table][python] Remove method FunctionDefinition#getLanguage.

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9894: [FLINK-14342][table][python] Remove 
method FunctionDefinition#getLanguage.
URL: https://github.com/apache/flink/pull/9894#issuecomment-541559889
 
 
   
   ## CI report:
   
   * 89595e0af27e0ae61f0fcd956ec422f203a3ab95 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/131751296)
   * b1aed08d2bbeaf73f6dc2a293f912b368319531b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/131758748)
   * e9c95e6f51ff45b69df5d248782921aa8e6edaab : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132077078)
   * 9c05be8a14dbf6b31fd24f6114be36e1adec3b3a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132082371)
   * a27700b7e163407788a4aac0d66d18d0c0efdefc : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133308152)
   * 8ee158ce22c7f49e0da88cc088261959d5de62e5 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/133486496)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-14526) Support Hive version 1.1.0 and 1.1.1

2019-10-24 Thread Rui Li (Jira)
Rui Li created FLINK-14526:
--

 Summary: Support Hive version 1.1.0 and 1.1.1
 Key: FLINK-14526
 URL: https://issues.apache.org/jira/browse/FLINK-14526
 Project: Flink
  Issue Type: Task
  Components: Connectors / Hive
Reporter: Rui Li






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9783: [FLINK-14040][travis] Enable MiniCluster tests based on schedulerNG in Flink cron build

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9783: [FLINK-14040][travis] Enable 
MiniCluster tests based on schedulerNG in Flink cron build
URL: https://github.com/apache/flink/pull/9783#issuecomment-535848177
 
 
   
   ## CI report:
   
   * 5d9e3113e10bbf0220b4f8e080ecd1cddd9c9059 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/129411044)
   * 55c766190d205798534f6d09170e66463c832082 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/129544930)
   * 8c7b264e3d17dfa2eb2019ecb470cbe4ef9fa339 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/13191)
   * dd55f2f66aec3cbcee7211e2769496d0a5c873e2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/131961829)
   * 69dbf0fa215fa7cb7aaadd596fcaadd3747697d2 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132096181)
   * 16077687afe140b71bc91a99315c50219ac8c732 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/132279941)
   * 0332dd139f1b2857efe5de3852cf4bb4573575a3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132325960)
   * 7542ca7c45594de8ac6f7112e43d0a7bcd8615b6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/132476281)
   * 416f6a5535f1efe984fc57f13ac0d2a84c9b5fa8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/132537704)
   * 1e44c409a68a5746d40a573c754ae7805dc75e2f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/132570553)
   * c71c7a955c898f7e5409481d343478f8bea910e3 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/133151749)
   * 03d0ac5f752f45df7de536619fcb348197d009a0 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133157714)
   * 0b61d830609a5453020737c8fd894a4ef9ba883d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133482493)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9992: [FLINK-14481][RunTime/Network]change Flink's port check to 0 to 65535

2019-10-24 Thread GitBox
flinkbot commented on issue #9992: [FLINK-14481][RunTime/Network]change Flink's 
port check to 0 to 65535
URL: https://github.com/apache/flink/pull/9992#issuecomment-546188093
 
 
   
   ## CI report:
   
   * c63d8e2bc6d7bc16c46940a81857211d8788c1fe : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9991: [FLINK-14478][python] optimize current python test cases to reduce test time.

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9991: [FLINK-14478][python] optimize 
current python test cases to reduce test time.
URL: https://github.com/apache/flink/pull/9991#issuecomment-546171997
 
 
   
   ## CI report:
   
   * df840c57b1035c328f8067b821418cab151f5fe0 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133482472)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field names should be got from CatalogTable instead of source/sink

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9909: [FLINK-14381][table] Partition field 
names should be got from CatalogTable instead of source/sink
URL: https://github.com/apache/flink/pull/9909#issuecomment-542617499
 
 
   
   ## CI report:
   
   * f20e67a260971f17d77c7ecc7c55b04136e432c5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132123750)
   * 3c1480f11f084fb2d24618cb8d8f1b992c955f96 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133312035)
   * e3ba6b1f3e6439e4ecbc62a56b778c58b06a2a67 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133325977)
   * e5c058685d595bd711f3d8f50060a3db136e5ae1 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133357676)
   * 4c59ed745122aa79a0b561aff6f4ec382cb31211 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9894: [FLINK-14342][table][python] Remove method FunctionDefinition#getLanguage.

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9894: [FLINK-14342][table][python] Remove 
method FunctionDefinition#getLanguage.
URL: https://github.com/apache/flink/pull/9894#issuecomment-541559889
 
 
   
   ## CI report:
   
   * 89595e0af27e0ae61f0fcd956ec422f203a3ab95 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/131751296)
   * b1aed08d2bbeaf73f6dc2a293f912b368319531b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/131758748)
   * e9c95e6f51ff45b69df5d248782921aa8e6edaab : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132077078)
   * 9c05be8a14dbf6b31fd24f6114be36e1adec3b3a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/132082371)
   * a27700b7e163407788a4aac0d66d18d0c0efdefc : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133308152)
   * 8ee158ce22c7f49e0da88cc088261959d5de62e5 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9859: [FLINK-11405][rest]rest api can more exceptions by query parameter

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9859: [FLINK-11405][rest]rest api can more 
exceptions by query parameter
URL: https://github.com/apache/flink/pull/9859#issuecomment-539878774
 
 
   
   ## CI report:
   
   * c8bf66050c865904c402bdc9c079a6e4de1a064c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/131080663)
   * ec84f32bdcae0738d97ac60090691789334a279a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133306001)
   * 6a12e5941baf73958146f54365ff3f267c696e11 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133310272)
   * 4318f07b5554bcf1abc44e912345927c65c8f8a1 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133323118)
   * 052e4580c08f9723e2716dedac7d26d031c4b538 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133366819)
   * 27cd52fadf153982bc9771ba27a436caeebb17b8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133480918)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9992: [FLINK-14481][RunTime/Network]change Flink's port check to 0 to 65535

2019-10-24 Thread GitBox
flinkbot commented on issue #9992: [FLINK-14481][RunTime/Network]change Flink's 
port check to 0 to 65535
URL: https://github.com/apache/flink/pull/9992#issuecomment-546183832
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c63d8e2bc6d7bc16c46940a81857211d8788c1fe (Fri Oct 25 
03:18:37 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] AT-Fieldless opened a new pull request #9992: change Flink's port check to 0 to 65535

2019-10-24 Thread GitBox
AT-Fieldless opened a new pull request #9992: change Flink's port check to 0 to 
65535
URL: https://github.com/apache/flink/pull/9992
 
 
   ## What is the purpose of the change
   
   Currently NettyConfig's port check accepts ports from 0 to 65536, and this 
pull request changes it to 0 to 65535, as sketched below.
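   
   A minimal sketch of the corrected range check (class and method names are 
illustrative, not the actual NettyConfig code):
   
   final class PortValidation {
       // 0 is allowed: it asks the OS to pick an ephemeral port;
       // 65535 is the largest valid TCP/UDP port number.
       static void checkValidPort(int port) {
           if (port < 0 || port > 65535) {
               throw new IllegalArgumentException("Invalid port number: " + port);
           }
       }
   }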
   
   ## Brief change log
   
 - *Change Flink's port check to accept ports from 0 to 65535*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (don't know)
 - The runtime per-record code paths (performance sensitive): (don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9990: [FLINK-14524] [flink-jdbc] Correct syntax for PostgreSQL dialect "upsert" statement

2019-10-24 Thread GitBox
flinkbot edited a comment on issue #9990: [FLINK-14524] [flink-jdbc] Correct 
syntax for PostgreSQL dialect "upsert" statement
URL: https://github.com/apache/flink/pull/9990#issuecomment-546166887
 
 
   
   ## CI report:
   
   * 56a7c678cf04569e01bf65807b8bb1ffbe7acdd0 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133480910)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-14525) buffer pool is destroyed

2019-10-24 Thread Jiayi Liao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959386#comment-16959386
 ] 

Jiayi Liao commented on FLINK-14525:


I believe this is not the root cause. "Buffer pool is destroyed" occurs because 
the NettyShuffleEnvironment is closed, which is a fairly "normal" phenomenon 
when an exception is thrown.

It'd be better if you could attach the full logs of the JobManager and 
TaskManager.

> buffer pool is destroyed
> 
>
> Key: FLINK-14525
> URL: https://issues.apache.org/jira/browse/FLINK-14525
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.7.2
>Reporter: Saqib
>Priority: Blocker
>
> Have a Flink app running in standalone mode. The app runs OK in our non-prod 
> env. However, on our prod server it throws this exception:
> Buffer pool is destroyed. 
>
> This error is being thrown as a RuntimeException on the collect call, in the 
> flatmap function. The flatmap is just collecting a Tuple; the Document is an 
> XML Document object.
>
> As mentioned, the non-prod envs (and we have multiple: DEV, QA, UAT) do not 
> show this. The UAT box is spec-ed exactly like our Prod host with 4 CPUs. The 
> Java version is the same too.
>
> Not sure how to proceed.
>
> Thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14525) buffer pool is destroyed

2019-10-24 Thread Saqib (Jira)
Saqib created FLINK-14525:
-

 Summary: buffer pool is destroyed
 Key: FLINK-14525
 URL: https://issues.apache.org/jira/browse/FLINK-14525
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Network
Affects Versions: 1.7.2
Reporter: Saqib


Have a Flink app running in standalone mode. The app runs OK in our non-prod 
env. However, on our prod server it throws this exception:

Buffer pool is destroyed.

This error is being thrown as a RuntimeException on the collect call, in the 
flatmap function. The flatmap is just collecting a Tuple; the Document is an 
XML Document object.

As mentioned, the non-prod envs (and we have multiple: DEV, QA, UAT) do not 
show this. The UAT box is spec-ed exactly like our Prod host with 4 CPUs. The 
Java version is the same too.

Not sure how to proceed.

Thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


  1   2   3   4   5   6   7   8   >