[GitHub] [flink] KurtYoung commented on a change in pull request #10119: [FLINK-14656] [table-planner-blink] blink planner should also fetch catalog statistics for permanent CatalogTableImpl

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10119: [FLINK-14656] 
[table-planner-blink] blink planner should also fetch catalog statistics for 
permanent CatalogTableImpl
URL: https://github.com/apache/flink/pull/10119#discussion_r344433674
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/catalog/CatalogStatisticsTest.java
 ##
 @@ -59,32 +63,75 @@
).build();
 
	@Test
-	public void testGetStatsFromCatalog() throws Exception {
+	public void testGetStatsFromCatalogForConnectorCatalogTable() throws Exception {
		EnvironmentSettings settings = EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build();
		TableEnvironment tEnv = TableEnvironment.create(settings);
		Catalog catalog = tEnv.getCatalog(tEnv.getCurrentCatalog()).orElse(null);
		assertNotNull(catalog);
		catalog.createTable(
-			ObjectPath.fromString("default_database.T1"),
-			ConnectorCatalogTable.source(new TestTableSource(true, tableSchema), true),
-			false);
+			ObjectPath.fromString("default_database.T1"),
+			ConnectorCatalogTable.source(new TestTableSource(true, tableSchema), true),
+			false);
		catalog.createTable(
-			ObjectPath.fromString("default_database.T2"),
-			ConnectorCatalogTable.source(new TestTableSource(true, tableSchema), true),
-			false);
+			ObjectPath.fromString("default_database.T2"),
+			ConnectorCatalogTable.source(new TestTableSource(true, tableSchema), true),
+			false);

+		alterTableStatistics(catalog);
+
+		Table table = tEnv.sqlQuery("select * from T1, T2 where T1.s3 = T2.s3");
+		String result = tEnv.explain(table);
+		// T1 is broadcast side
+		String expected = TableTestUtil.readFromResource("/explain/testGetStatsFromCatalogForConnectorCatalogTable.out");
+		assertEquals(expected, TableTestUtil.replaceStageId(result));
+	}
+
+	@Test
+	public void testGetStatsFromCatalogForCatalogTableImpl() throws Exception {
+		EnvironmentSettings settings = EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build();
+		TableEnvironment tEnv = TableEnvironment.create(settings);
+		Catalog catalog = tEnv.getCatalog(tEnv.getCurrentCatalog()).orElse(null);
+		assertNotNull(catalog);
+
+		Map<String, String> properties = new HashMap<>();
+		properties.put("connector.type", "filesystem");
+		properties.put("connector.property-version", "1");
+		properties.put("connector.path", "/path/to/csv");
+
+		properties.put("format.type", "csv");
+		properties.put("format.property-version", "1");
+		properties.put("format.field-delimiter", ";");
+
+		// schema
+		DescriptorProperties descriptorProperties = new DescriptorProperties(true);
+		descriptorProperties.putTableSchema("format.fields", tableSchema);
+		properties.putAll(descriptorProperties.asMap());
+
+		catalog.createTable(
+			ObjectPath.fromString("default_database.T1"),
+			new CatalogTableImpl(tableSchema, properties, ""),
+			false);
+		catalog.createTable(
+			ObjectPath.fromString("default_database.T2"),
+			new CatalogTableImpl(tableSchema, properties, ""),
+			false);
+
+		alterTableStatistics(catalog);
+
+		Table table = tEnv.sqlQuery("select * from T1, T2 where T1.s3 = T2.s3");
+		String result = tEnv.explain(table);
+		// T1 is broadcast side
 
 Review comment:
   Do we have other choices for validating the table stats? Relying on a small table being broadcast in a join is kind of fragile IMO.
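
A less indirect way to validate statistics is to assert on what the catalog reports rather than on the plan the optimizer picks. A minimal sketch of that pattern, using a hypothetical `StubCatalog`/`TableStats` pair in place of Flink's real `Catalog` API (the names below are illustrative, not Flink's):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: assert directly on catalog statistics instead of relying on the
// planner choosing a broadcast join for the smaller table. StubCatalog is a
// stand-in for a real Catalog; only the assertion pattern is the point.
public class StatsCheck {
    static class TableStats {
        final long rowCount;
        TableStats(long rowCount) { this.rowCount = rowCount; }
    }

    static class StubCatalog {
        private final Map<String, TableStats> stats = new HashMap<>();
        void alterTableStatistics(String path, TableStats s) { stats.put(path, s); }
        TableStats getTableStatistics(String path) { return stats.get(path); }
    }

    static long rowCountOf(StubCatalog catalog, String path) {
        return catalog.getTableStatistics(path).rowCount;
    }

    public static void main(String[] args) {
        StubCatalog catalog = new StubCatalog();
        catalog.alterTableStatistics("default_database.T1", new TableStats(100L));
        catalog.alterTableStatistics("default_database.T2", new TableStats(100_000_000L));
        // Direct check: no dependence on join-strategy heuristics.
        assert rowCountOf(catalog, "default_database.T1") == 100L;
        assert rowCountOf(catalog, "default_database.T2") == 100_000_000L;
    }
}
```

The test then breaks only when the statistics themselves are wrong, not when a planner heuristic changes.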


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344431194
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/calcite/FlinkTypeSystem.scala
 ##
 @@ -53,6 +53,9 @@ class FlinkTypeSystem extends RelDataTypeSystemImpl {
 case SqlTypeName.VARCHAR | SqlTypeName.CHAR | SqlTypeName.VARBINARY | SqlTypeName.BINARY =>
   Int.MaxValue

+// The maximal precision of TIMESTAMP is 3; change it to 9 to support nanosecond precision
+case SqlTypeName.TIMESTAMP => 9
 
 Review comment:
   change 9 to `TimestampType.MAX_PRECISION`?




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344431364
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/GenerateUtils.scala
 ##
 @@ -370,8 +373,56 @@ object GenerateUtils {
     generateNonNullLiteral(literalType, literalValue.toString, literalValue)

   case TIMESTAMP_WITHOUT_TIME_ZONE =>
-    val millis = literalValue.asInstanceOf[Long]
-    generateNonNullLiteral(literalType, millis + "L", millis)
+    def getNanoOfMillisSinceEpoch(timestampString: TimestampString): Int = {
+      val v = timestampString.toString()
+      val length = v.length
+      val nanoOfSeconds = length match {
+        case 19 | 20 => 0
+        case _ =>
+          JInteger.valueOf(v.substring(20)) * pow(10, 9 - (length - 20)).intValue()
+      }
+      nanoOfSeconds % 1000000
+    }
+
+    // TODO: we copied the logic of TimestampString::getMillisSinceEpoch since the copied
+    //  DateTimeUtils.ymdToJulian is wrong.
+    //  SEE CALCITE-1884
+    def getMillisInSecond(timestampString: TimestampString): Int = {
+      val v = timestampString.toString()
+      val length = v.length
+      val milliOfSeconds = length match {
+        case 19 => 0
+        case 21 => JInteger.valueOf(v.substring(20)).intValue() * 100
+        case 22 => JInteger.valueOf(v.substring(20)).intValue() * 10
+        case 20 | 23 | _ => JInteger.valueOf(v.substring(20, 23)).intValue()
+      }
+      milliOfSeconds
+    }
+
+    def getMillisSinceEpoch(timestampString: TimestampString): Long = {
+      val v = timestampString.toString()
+      val year = JInteger.valueOf(v.substring(0, 4))
+      val month = JInteger.valueOf(v.substring(5, 7))
+      val day = JInteger.valueOf(v.substring(8, 10))
+      val h = JInteger.valueOf(v.substring(11, 13))
+      val m = JInteger.valueOf(v.substring(14, 16))
+      val s = JInteger.valueOf(v.substring(17, 19))
+      val ms = getMillisInSecond(timestampString)
+      val d = SqlDateTimeUtils.ymdToJulian(year, month, day)
+      d * 86400000L + h * 3600000L + m * 60000L + s * 1000L + ms.toLong
+    }
+
+    val fieldTerm = newName("timestamp")
+    val millis = literalValue.asInstanceOf[TimestampString].getMillisSinceEpoch
 
 Review comment:
   A little confused here, AFAIK you have changed all internal representations of TIMESTAMP to `SqlTimestamp`, right? Then why is the literal here a `TimestampString`?
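
For reference, the millis-since-epoch arithmetic the snippet re-implements can be sanity-checked against known dates. The sketch below uses the standard proleptic-Gregorian Julian-day formula (an illustration, not Flink's `SqlDateTimeUtils`):

```java
// Sanity check for timestamp-to-epoch-millis conversion built on a
// Julian day number, mirroring the structure of the reviewed snippet.
public class EpochMillis {
    static final long EPOCH_JULIAN = 2440588L;   // Julian day of 1970-01-01
    static final long MILLIS_PER_DAY = 86400000L;

    // Standard proleptic-Gregorian (year, month, day) -> Julian day number.
    static long ymdToJulian(int y, int m, int d) {
        int a = (14 - m) / 12;
        long yy = y + 4800L - a;
        long mm = m + 12L * a - 3;
        return d + (153 * mm + 2) / 5 + 365 * yy + yy / 4 - yy / 100 + yy / 400 - 32045;
    }

    static long millisSinceEpoch(int y, int mo, int d, int h, int mi, int s, int ms) {
        long days = ymdToJulian(y, mo, d) - EPOCH_JULIAN;
        return days * MILLIS_PER_DAY + h * 3600000L + mi * 60000L + s * 1000L + ms;
    }

    public static void main(String[] args) {
        assert millisSinceEpoch(1970, 1, 1, 0, 0, 0, 0) == 0L;
        assert millisSinceEpoch(2000, 1, 1, 0, 0, 0, 0) == 946684800000L;
    }
}
```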




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r34447
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala
 ##
 @@ -37,6 +37,17 @@ import java.util.{Locale, TimeZone}
 
 class TemporalTypesTest extends ExpressionTestBase {
 
 Review comment:
   Could you extract all `Timestamp`-related tests into a dedicated class? Keeping them in `TemporalTypesTest` seems weird.




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344433295
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/ArrayTypeTest.scala
 ##
 @@ -94,6 +96,48 @@ class ArrayTypeTest extends ArrayTypeTestBase {
   "ARRAY[TIMESTAMP '1985-04-11 14:15:16', TIMESTAMP '2018-07-26 17:18:19']",
   "[1985-04-11 14:15:16.000, 2018-07-26 17:18:19.000]")

+    // LocalDateTime uses DateTimeUtils.timestampStringToUnixDate to parse a time string,
+    // which only supports millisecond precision.
+    testTableApi(
+      Array(
+        JLocalDateTime.of(1985, 4, 11, 14, 15, 16, 123456789),
+        JLocalDateTime.of(2018, 7, 26, 17, 18, 19, 123456789)),
+      "[1985-04-11T14:15:16.123456789, 2018-07-26T17:18:19.123456789]")
+
+    testTableApi(
+      Array(
+        JLocalDateTime.of(1985, 4, 11, 14, 15, 16, 123456700),
+        JLocalDateTime.of(2018, 7, 26, 17, 18, 19, 123456700)),
+      "[1985-04-11T14:15:16.123456700, 2018-07-26T17:18:19.123456700]")
+
+    testTableApi(
+      Array(
+        JLocalDateTime.of(1985, 4, 11, 14, 15, 16, 123456000),
+        JLocalDateTime.of(2018, 7, 26, 17, 18, 19, 123456000)),
+      "[1985-04-11T14:15:16.123456, 2018-07-26T17:18:19.123456]")
+
+    testTableApi(
+      Array(
+        JLocalDateTime.of(1985, 4, 11, 14, 15, 16, 123400000),
+        JLocalDateTime.of(2018, 7, 26, 17, 18, 19, 123400000)),
+      "[1985-04-11T14:15:16.123400, 2018-07-26T17:18:19.123400]")
+
+    testSqlApi(
+      "ARRAY[TIMESTAMP '1985-04-11 14:15:16.123456789', TIMESTAMP '2018-07-26 17:18:19.123456789']",
+      "[1985-04-11T14:15:16.123456789, 2018-07-26T17:18:19.123456789]")
+
+    testSqlApi(
+      "ARRAY[TIMESTAMP '1985-04-11 14:15:16.1234567', TIMESTAMP '2018-07-26 17:18:19.1234567']",
+      "[1985-04-11T14:15:16.123456700, 2018-07-26T17:18:19.123456700]")
+
+    testSqlApi(
+      "ARRAY[TIMESTAMP '1985-04-11 14:15:16.123456', TIMESTAMP '2018-07-26 17:18:19.123456']",
+      "[1985-04-11T14:15:16.123456, 2018-07-26T17:18:19.123456]")
+
+    testSqlApi(
+      "ARRAY[TIMESTAMP '1985-04-11 14:15:16.1234', TIMESTAMP '2018-07-26 17:18:19.1234']",
+      "[1985-04-11T14:15:16.123400, 2018-07-26T17:18:19.123400]")
 
 Review comment:
   why the result isn't `1985-04-11T14:15:16.1234`?
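
The padding likely comes from `java.time` itself: `LocalDateTime.toString()` renders the fractional second in groups of 3, 6, or 9 digits, so a fraction with four significant digits is padded out to six. A quick self-contained check (this only demonstrates the `java.time` formatting behavior; whether the planner's output goes through it is an assumption):

```java
import java.time.LocalDateTime;

// java.time prints the fraction in 3-digit groups: nanos 123_400_000
// are not a whole number of millis, so six digits are emitted.
public class FractionDemo {
    public static void main(String[] args) {
        LocalDateTime t = LocalDateTime.of(1985, 4, 11, 14, 15, 16, 123_400_000);
        assert t.toString().equals("1985-04-11T14:15:16.123400");
    }
}
```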




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344433198
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/RexNodeExtractor.scala
 ##
 @@ -335,7 +336,7 @@ class RexNodeToExpressionConverter(
 
   case TIMESTAMP_WITHOUT_TIME_ZONE =>
 val v = literal.getValueAs(classOf[java.lang.Long])
 
 Review comment:
   Why is the literal for `TIMESTAMP_WITHOUT_TIME_ZONE` a `long`? I remember somewhere else you presume the literal is a `TimestampString`.




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344431544
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/MethodCallGen.scala
 ##
 @@ -60,6 +63,10 @@ class MethodCallGen(method: Method) extends CallGenerator {
 // convert String to BinaryString if the return type is String
 if (method.getReturnType == classOf[String]) {
   s"$BINARY_STRING.fromString($call)"
+} else if ((method.getReturnType == classOf[Long]
 
 Review comment:
   add comment?




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344430931
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/ExpressionConverter.java
 ##
 @@ -135,8 +138,26 @@ public RexNode visit(ValueLiteralExpression valueLiteral) {
 			return relBuilder.getRexBuilder().makeTimeLiteral(TimeString.fromCalendarFields(
 				valueAsCalendar(extractValue(valueLiteral, java.sql.Time.class))), 0);
 		case TIMESTAMP_WITHOUT_TIME_ZONE:
-			return relBuilder.getRexBuilder().makeTimestampLiteral(TimestampString.fromCalendarFields(
-				valueAsCalendar(extractValue(valueLiteral, java.sql.Timestamp.class))), 3);
+			TimestampType timestampType = (TimestampType) type;
+			Class clazz = valueLiteral.getOutputDataType().getConversionClass();
+			LocalDateTime datetime = null;
+			if (clazz == LocalDateTime.class) {
+				datetime = valueLiteral.getValueAs(LocalDateTime.class)
+					.orElseThrow(() -> new TableException("Invalid literal."));
+			} else if (clazz == Timestamp.class) {
+				datetime = valueLiteral.getValueAs(Timestamp.class)
+					.orElseThrow(() -> new TableException("Invalid literal.")).toLocalDateTime();
+			} else {
+				throw new TableException("Invalid literal.");
 
 Review comment:
   record the illegal class here?




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344431147
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/calcite/FlinkTypeFactory.scala
 ##
 @@ -424,11 +424,12 @@ object FlinkTypeFactory {
     // blink runner support precision 3, but for consistent with flink runner, we set to 0.
     new TimeType()
   case TIMESTAMP =>
-    if (relDataType.getPrecision > 3) {
+    val precision = relDataType.getPrecision
+    if (precision > 9 || precision < 0) {
       throw new TableException(
-        s"TIMESTAMP precision is not supported: ${relDataType.getPrecision}")
+        s"TIMESTAMP precision is not supported: ${precision}")
 
 Review comment:
   don't need `{}`




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344431214
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/calcite/FlinkTypeSystem.scala
 ##
 @@ -53,6 +53,9 @@ class FlinkTypeSystem extends RelDataTypeSystemImpl {
 case SqlTypeName.VARCHAR | SqlTypeName.CHAR | SqlTypeName.VARBINARY | SqlTypeName.BINARY =>
   Int.MaxValue

+// The maximal precision of TIMESTAMP is 3; change it to 9 to support nanosecond precision
+case SqlTypeName.TIMESTAMP => 9
 
 Review comment:
   BTW, please also check line #46, I remember our default precision for 
TIMESTAMP is 6?




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344433283
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/ArrayTypeTest.scala
 ##
 @@ -94,6 +96,48 @@ class ArrayTypeTest extends ArrayTypeTestBase {
   "ARRAY[TIMESTAMP '1985-04-11 14:15:16', TIMESTAMP '2018-07-26 17:18:19']",
   "[1985-04-11 14:15:16.000, 2018-07-26 17:18:19.000]")

+    // LocalDateTime uses DateTimeUtils.timestampStringToUnixDate to parse a time string,
+    // which only supports millisecond precision.
+    testTableApi(
+      Array(
+        JLocalDateTime.of(1985, 4, 11, 14, 15, 16, 123456789),
+        JLocalDateTime.of(2018, 7, 26, 17, 18, 19, 123456789)),
+      "[1985-04-11T14:15:16.123456789, 2018-07-26T17:18:19.123456789]")
+
+    testTableApi(
+      Array(
+        JLocalDateTime.of(1985, 4, 11, 14, 15, 16, 123456700),
+        JLocalDateTime.of(2018, 7, 26, 17, 18, 19, 123456700)),
+      "[1985-04-11T14:15:16.123456700, 2018-07-26T17:18:19.123456700]")
+
+    testTableApi(
+      Array(
+        JLocalDateTime.of(1985, 4, 11, 14, 15, 16, 123456000),
+        JLocalDateTime.of(2018, 7, 26, 17, 18, 19, 123456000)),
+      "[1985-04-11T14:15:16.123456, 2018-07-26T17:18:19.123456]")
+
+    testTableApi(
+      Array(
+        JLocalDateTime.of(1985, 4, 11, 14, 15, 16, 123400000),
+        JLocalDateTime.of(2018, 7, 26, 17, 18, 19, 123400000)),
+      "[1985-04-11T14:15:16.123400, 2018-07-26T17:18:19.123400]")
+
+    testSqlApi(
+      "ARRAY[TIMESTAMP '1985-04-11 14:15:16.123456789', TIMESTAMP '2018-07-26 17:18:19.123456789']",
+      "[1985-04-11T14:15:16.123456789, 2018-07-26T17:18:19.123456789]")
+
+    testSqlApi(
+      "ARRAY[TIMESTAMP '1985-04-11 14:15:16.1234567', TIMESTAMP '2018-07-26 17:18:19.1234567']",
+      "[1985-04-11T14:15:16.123456700, 2018-07-26T17:18:19.123456700]")
 
 Review comment:
   why the result isn't `1985-04-11T14:15:16.1234567`?




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344431614
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarFunctionCallGen.scala
 ##
 @@ -66,7 +69,15 @@ class ScalarFunctionCallGen(scalarFunction: ScalarFunction) extends CallGenerator
       boxedTypeTermForType(returnType)
     }
     val resultTerm = ctx.addReusableLocalVariable(resultTypeTerm, "result")
-    val evalResult = s"$functionReference.eval(${parameters.map(_.resultTerm).mkString(", ")})"
+    val evalResult =
+      if (returnType.getTypeRoot == LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE) {
 
 Review comment:
   I'm not sure about the changes to this class.
   Why do we have to treat this as a special case? Other types will have similar problems, like `BinaryString`.




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344431670
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
 ##
 @@ -174,8 +174,8 @@ object ScalarOperatorGens {
   (l, r) => s"$l $op ((int) ($r / ${MILLIS_PER_DAY}L))"
 }
   case TIMESTAMP_WITHOUT_TIME_ZONE =>
-generateOperatorIfNotNull(ctx, new TimestampType(), left, right) {
-  (l, r) => s"($l * ${MILLIS_PER_DAY}L) $op $r"
+generateOperatorIfNotNull(ctx, new TimestampType(3), left, right) {
 
 Review comment:
   Are you sure about this? Date plus INTERVAL_DAY_TIME will result in 
TimestampType(3) but not TimestampType(0)?




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344433099
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/functions/utils/UserDefinedFunctionUtils.scala
 ##
 @@ -749,6 +749,9 @@ object UserDefinedFunctionUtils {
     val paraInternalType = fromDataTypeToLogicalType(parameterType)
     if (isAny(internal) && isAny(paraInternalType)) {
       getDefaultExternalClassForType(internal) == getDefaultExternalClassForType(paraInternalType)
+    } else if ((isTimestamp(internal) && isLong(paraInternalType))
 
 Review comment:
   not sure about this. 
   cc @JingsongLi 




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344431350
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/GenerateUtils.scala
 ##
 @@ -370,8 +373,56 @@ object GenerateUtils {
     generateNonNullLiteral(literalType, literalValue.toString, literalValue)

   case TIMESTAMP_WITHOUT_TIME_ZONE =>
-    val millis = literalValue.asInstanceOf[Long]
-    generateNonNullLiteral(literalType, millis + "L", millis)
+    def getNanoOfMillisSinceEpoch(timestampString: TimestampString): Int = {
+      val v = timestampString.toString()
+      val length = v.length
+      val nanoOfSeconds = length match {
+        case 19 | 20 => 0
+        case _ =>
+          JInteger.valueOf(v.substring(20)) * pow(10, 9 - (length - 20)).intValue()
+      }
+      nanoOfSeconds % 1000000
+    }
+
+    // TODO: we copied the logic of TimestampString::getMillisSinceEpoch since the copied
+    //  DateTimeUtils.ymdToJulian is wrong.
 Review comment:
   From the comment of `DateTimeUtils`, I think we are already aware of CALCITE-1884; do you mean we didn't fix it in the copied `DateTimeUtils`?




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344431538
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/MethodCallGen.scala
 ##
 @@ -34,13 +33,17 @@ class MethodCallGen(method: Method) extends CallGenerator {
       returnType: LogicalType): GeneratedExpression = {
     generateCallIfArgsNotNull(ctx, returnType, operands, !method.getReturnType.isPrimitive) {
       originalTerms => {
-        val terms = originalTerms.zip(method.getParameterTypes).map { case (term, clazz) =>
-          // convert the BinaryString parameter to String if the method parameter accepts String
-          if (clazz == classOf[String]) {
-            s"$term.toString()"
-          } else {
-            term
-          }
+        val terms = originalTerms.zipWithIndex.zip(method.getParameterTypes).map {
+          case ((term, i), clazz) =>
+            // convert the BinaryString parameter to String if the method parameter accepts String
+            if (clazz == classOf[String]) {
+              s"$term.toString()"
+            } else if ((clazz == classOf[Long] || clazz == classOf[java.lang.Long]) &&
+                operands(i).resultType.getTypeRoot == LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE) {
+              s"$term.getMillisecond()"
 
 Review comment:
   add comment?




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344431856
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
 ##
 @@ -2184,7 +2236,20 @@ object ScalarOperatorGens {
   case TIME_WITHOUT_TIME_ZONE =>
     s"${qualifyMethod(BuiltInMethods.UNIX_TIME_TO_STRING)}($operandTerm)"
   case TIMESTAMP_WITHOUT_TIME_ZONE => // including rowtime indicator
-    s"${qualifyMethod(BuiltInMethods.TIMESTAMP_TO_STRING)}($operandTerm, 3)"
+    // casting TimestampType to VARCHAR, if precision <= 3, keep the string representation
 
 Review comment:
   why not use precise precision for string representation when precision <= 3?




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344431166
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/calcite/FlinkTypeFactory.scala
 ##
 @@ -424,11 +424,12 @@ object FlinkTypeFactory {
     // blink runner support precision 3, but for consistent with flink runner, we set to 0.
     new TimeType()
   case TIMESTAMP =>
-    if (relDataType.getPrecision > 3) {
+    val precision = relDataType.getPrecision
+    if (precision > 9 || precision < 0) {
 
 Review comment:
   change 9 to `TimestampType.MAX_PRECISION`?




[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344433455
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/BinaryArray.java
 ##
 @@ -191,6 +191,22 @@ public Decimal getDecimal(int pos, int precision, int scale) {
 		return Decimal.readDecimalFieldFromSegments(segments, offset, offsetAndSize, precision, scale);
 	}

+	@Override
+	public SqlTimestamp getTimestamp(int pos, int precision) {
+		assertIndexIsValid(pos);
+
+		if (SqlTimestamp.isCompact(precision)) {
+			return SqlTimestamp.fromEpochMillis(segments[0].getLong(getElementOffset(pos, 8)));
 
 Review comment:
   I think always reading from `segments[0]` is wrong

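The concern is that an element's bytes can live in any segment (or straddle two), so the segment index must be derived from the offset rather than hard-coded to `segments[0]`. A toy sketch of segment-aware reading, with illustrative names that are not Flink's actual `MemorySegment` API:

```java
// Sketch: read a big-endian long from a logical offset spread across
// fixed-size segments. The segment index comes from the offset, so the
// read works even when the 8 bytes straddle a segment boundary.
public class SegmentRead {
    static final int SEGMENT_SIZE = 16; // bytes per segment (toy value)

    static long getLong(byte[][] segments, int offset) {
        long result = 0;
        for (int i = 0; i < 8; i++) {
            int pos = offset + i;
            // pos / SEGMENT_SIZE picks the segment; pos % SEGMENT_SIZE the byte.
            byte b = segments[pos / SEGMENT_SIZE][pos % SEGMENT_SIZE];
            result = (result << 8) | (b & 0xFF);
        }
        return result;
    }

    public static void main(String[] args) {
        byte[][] segments = new byte[2][SEGMENT_SIZE];
        segments[1][7] = 42; // value lives entirely in the second segment
        System.out.println(getLong(segments, 16)); // 42
    }
}
```

Reading unconditionally from the first segment would return garbage for any element whose offset falls beyond it.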



[GitHub] [flink] KurtYoung commented on a change in pull request #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10105: 
[Flink-14599][table-planner-blink] Support precision of TimestampType in blink 
planner
URL: https://github.com/apache/flink/pull/10105#discussion_r344431811
 
 

 ##
 File path: flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala
 ##
 @@ -1073,51 +1124,51 @@ object ScalarOperatorGens {
  (INTEGER, TIMESTAMP_WITHOUT_TIME_ZONE) |
  (BIGINT, TIMESTAMP_WITHOUT_TIME_ZONE) =>
   generateUnaryOperatorIfNotNull(ctx, targetType, operand) {
-operandTerm => s"(((long) $operandTerm) * 1000)"
+operandTerm => s"$SQL_TIMESTAMP.fromEpochMillis(((long) $operandTerm) * 1000)"
   }
 
 // Float -> Timestamp
 // Double -> Timestamp
 case (FLOAT, TIMESTAMP_WITHOUT_TIME_ZONE) |
  (DOUBLE, TIMESTAMP_WITHOUT_TIME_ZONE) =>
   generateUnaryOperatorIfNotNull(ctx, targetType, operand) {
-operandTerm => s"((long) ($operandTerm * 1000))"
+operandTerm => s"$SQL_TIMESTAMP.fromEpochMillis((long) ($operandTerm * 1000))"
   }
 
 // Timestamp -> Tinyint
 case (TIMESTAMP_WITHOUT_TIME_ZONE, TINYINT) =>
   generateUnaryOperatorIfNotNull(ctx, targetType, operand) {
-operandTerm => s"((byte) ($operandTerm / 1000))"
+operandTerm => s"((byte) ($operandTerm.getMillisecond() / 1000))"
   }
 
 // Timestamp -> Smallint
 case (TIMESTAMP_WITHOUT_TIME_ZONE, SMALLINT) =>
   generateUnaryOperatorIfNotNull(ctx, targetType, operand) {
-operandTerm => s"((short) ($operandTerm / 1000))"
+operandTerm => s"((short) ($operandTerm.getMillisecond() / 1000))"
   }
 
 // Timestamp -> Int
 case (TIMESTAMP_WITHOUT_TIME_ZONE, INTEGER) =>
   generateUnaryOperatorIfNotNull(ctx, targetType, operand) {
-operandTerm => s"((int) ($operandTerm / 1000))"
+operandTerm => s"((int) ($operandTerm.getMillisecond() / 1000))"
   }
 
 // Timestamp -> BigInt
 case (TIMESTAMP_WITHOUT_TIME_ZONE, BIGINT) =>
   generateUnaryOperatorIfNotNull(ctx, targetType, operand) {
-operandTerm => s"((long) ($operandTerm / 1000))"
+operandTerm => s"((long) ($operandTerm.getMillisecond() / 1000))"
 
 Review comment:
   Why should this divide by 1000? I thought that when casting a timestamp to a long, the result should be in milliseconds?
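The question hinges on which resolution a timestamp-to-long cast should produce. A small sketch contrasting the two candidate semantics; the method names are illustrative, not Flink's `SqlTimestamp` API:

```java
// Sketch: casting an epoch-millisecond timestamp to long either keeps
// millisecond resolution (no division) or truncates to whole seconds
// (divide by 1000) -- the generated code above currently does the latter.
public class TimestampCast {
    static long castToMillis(long epochMillis) {
        return epochMillis; // keeps millisecond resolution
    }

    static long castToSeconds(long epochMillis) {
        return epochMillis / 1000; // truncates to seconds
    }

    public static void main(String[] args) {
        long ts = 1_573_200_000_123L; // some epoch-millisecond value
        System.out.println(castToMillis(ts));
        System.out.println(castToSeconds(ts));
    }
}
```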




[GitHub] [flink] HuangZhenQiu commented on a change in pull request #9711: [FLINK-14033] upload user artifacts for yarn job cluster

2019-11-08 Thread GitBox
HuangZhenQiu commented on a change in pull request #9711: [FLINK-14033] upload 
user artifacts for yarn job cluster
URL: https://github.com/apache/flink/pull/9711#discussion_r344431794
 
 

 ##
 File path: flink-yarn/src/main/java/org/apache/flink/yarn/AbstractYarnClusterDescriptor.java
 ##
 @@ -738,6 +738,25 @@ public ApplicationReport startAppMaster(
// add user code jars from the provided JobGraph
 : jobGraph.getUserJars().stream().map(f -> f.toUri()).map(File::new).collect(Collectors.toSet());
 
+   // only for per job mode
+   if (jobGraph != null) {
+   final Collection<Tuple2<String, org.apache.flink.core.fs.Path>> userArtifacts = jobGraph.getUserArtifacts().entrySet().stream()
+   .map(entry -> Tuple2.of(entry.getKey(), new org.apache.flink.core.fs.Path(entry.getValue().filePath)))
+   .collect(Collectors.toList());
+
+   for (Tuple2<String, org.apache.flink.core.fs.Path> userArtifact : userArtifacts) {
 
 Review comment:
   @wangyang0918 
   I updated the loop and added an integration test in the yarn test module. Would you please take one more round of review?




[GitHub] [flink] HuangZhenQiu commented on issue #9711: [FLINK-14033] upload user artifacts for yarn job cluster

2019-11-08 Thread GitBox
HuangZhenQiu commented on issue #9711: [FLINK-14033] upload user artifacts for 
yarn job cluster
URL: https://github.com/apache/flink/pull/9711#issuecomment-552072235
 
 
   @wangyang0918 @tillrohrmann 
   Would you please help review this PR? One of our customers wants to use this feature for their job.




[jira] [Commented] (FLINK-14037) Deserializing the input/output formats failed: unread block data

2019-11-08 Thread liupengcheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16970738#comment-16970738
 ] 

liupengcheng commented on FLINK-14037:
--

Can I continue working on it? [~chesnay] 

> Deserializing the input/output formats failed: unread block data
> 
>
> Key: FLINK-14037
> URL: https://issues.apache.org/jira/browse/FLINK-14037
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.9.0
> Environment: flink 1.9.0
> app jar use `flink-shaded-hadoop-2` dependencies to avoid some confilicts
>  
>Reporter: liupengcheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Recently, we encountered the following issue when testing flink 1.9.0:
> {code:java}
> org.apache.flink.client.program.ProgramInvocationException: Could not 
> retrieve the execution result. (JobID: 8ffbc071dda81d6f8005c02be8adde6b)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:255)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:326)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
>   at 
> org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:539)
>   at 
> com.github.ehiggs.spark.terasort.FlinkTeraSort$.main(FlinkTeraSort.scala:89)
>   at 
> com.github.ehiggs.spark.terasort.FlinkTeraSort.main(FlinkTeraSort.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:382)
>   at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>   at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:263)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Internal 
> server error.,  

[GitHub] [flink] flinkbot edited a comment on issue #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10105: [Flink-14599][table-planner-blink] 
Support precision of TimestampType in blink planner
URL: https://github.com/apache/flink/pull/10105#issuecomment-550298200
 
 
   
   ## CI report:
   
   * 3d1de92308358bfc6ff7dbee8b28121322315909 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135247375)
   * 4246b6215399f29a2cabb3f85786edddb0648c2f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135635568)
   * 6127ca1915a079f76eccf2f2597d818ad490ae52 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135751117)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10083: [FLINK-14472][runtime]Implement 
back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#issuecomment-549714188
 
 
   
   ## CI report:
   
   * bdb7952a0a48b4e67f51a04db61cd96a1cbecbbc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134985484)
   * ec925b8f1f82ac3016e60f40a9d4ec37453d494e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135023924)
   * dd225a3645a95f9fb4cb43fff29164cbd7b27f8b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135230081)
   * 326d344996ce0e11547231a7534a97482d6027b7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135504152)
   * ad0bab393757a8f56d9b4addf32a94f508469923 : UNKNOWN
   * 59d85d8289d0b1b34d210440359f209c63cbf799 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135653981)
   * 26e80f7a3dff8ebe0431e7bb46ec7c47b4aeb034 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135750292)
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10105: [Flink-14599][table-planner-blink] 
Support precision of TimestampType in blink planner
URL: https://github.com/apache/flink/pull/10105#issuecomment-550298200
 
 
   
   ## CI report:
   
   * 3d1de92308358bfc6ff7dbee8b28121322315909 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135247375)
   * 4246b6215399f29a2cabb3f85786edddb0648c2f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135635568)
   * 6127ca1915a079f76eccf2f2597d818ad490ae52 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135751117)
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10105: [Flink-14599][table-planner-blink] Support precision of TimestampType in blink planner

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10105: [Flink-14599][table-planner-blink] 
Support precision of TimestampType in blink planner
URL: https://github.com/apache/flink/pull/10105#issuecomment-550298200
 
 
   
   ## CI report:
   
   * 3d1de92308358bfc6ff7dbee8b28121322315909 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135247375)
   * 4246b6215399f29a2cabb3f85786edddb0648c2f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135635568)
   * 6127ca1915a079f76eccf2f2597d818ad490ae52 : UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10083: [FLINK-14472][runtime]Implement 
back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#issuecomment-549714188
 
 
   
   ## CI report:
   
   * bdb7952a0a48b4e67f51a04db61cd96a1cbecbbc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134985484)
   * ec925b8f1f82ac3016e60f40a9d4ec37453d494e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135023924)
   * dd225a3645a95f9fb4cb43fff29164cbd7b27f8b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135230081)
   * 326d344996ce0e11547231a7534a97482d6027b7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135504152)
   * ad0bab393757a8f56d9b4addf32a94f508469923 : UNKNOWN
   * 59d85d8289d0b1b34d210440359f209c63cbf799 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135653981)
   * 26e80f7a3dff8ebe0431e7bb46ec7c47b4aeb034 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135750292)
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10083: [FLINK-14472][runtime]Implement 
back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#issuecomment-549714188
 
 
   
   ## CI report:
   
   * bdb7952a0a48b4e67f51a04db61cd96a1cbecbbc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134985484)
   * ec925b8f1f82ac3016e60f40a9d4ec37453d494e : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135023924)
   * dd225a3645a95f9fb4cb43fff29164cbd7b27f8b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135230081)
   * 326d344996ce0e11547231a7534a97482d6027b7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135504152)
   * ad0bab393757a8f56d9b4addf32a94f508469923 : UNKNOWN
   * 59d85d8289d0b1b34d210440359f209c63cbf799 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135653981)
   * 26e80f7a3dff8ebe0431e7bb46ec7c47b4aeb034 : UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10084: [FLINK-14382][yarn] Incorrect handling of FLINK_PLUGINS_DIR on Yarn

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10084: [FLINK-14382][yarn] Incorrect 
handling of FLINK_PLUGINS_DIR on Yarn
URL: https://github.com/apache/flink/pull/10084#issuecomment-549724300
 
 
   
   ## CI report:
   
   * 0fca3b7914151ce638faa282a6c75096c6f404ed : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134988810)
   * e3825d38d11026310b0fa2e295d1771822a5c63a : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135005005)
   * dfeb8e6057b760a74ea554114d4bf7974a727e9a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135010794)
   * 4a28fdcf8599d49f80cfef13a45c4325a2b7e55b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135157529)
   * 4125d88542be1f188bc85bd92e48abcc72c7bd3d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135230098)
   * 9236fbc6bac84c59028dadc67fa38bf7ebf1e626 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135613273)
   * 6e10fa89a0dd482c6638bf0b66b33a28caf4ce3a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135617417)
   * 1c1ba364d57f2f1dec91606697982732b34a2048 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135628954)
   * 06ad7ff1d844e51dfdc22782eabb75d8dd45224a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135742136)
   
   




[jira] [Commented] (FLINK-10672) Task stuck while writing output to flink

2019-11-08 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16970692#comment-16970692
 ] 

Yun Gao commented on FLINK-10672:
-

Hi Venkata, many thanks for reporting this! From my side, I think that by design the local input channel should not get stuck: when the new source starts producing data again, the source thread (which holds the result partition that the local input channel reads data from) will 
[notify|https://github.com/apache/flink/blob/6322618bb0f1b7942d86cb1b2b7bc55290d9e330/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/LocalInputChannel.java#L202]
 the gate about the new data, which then unblocks the consumer task thread from its wait() call. Have you also changed this part of the code when incorporating the RDMA functionality?
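The notify-unblocks-wait pattern described above can be sketched as follows. Class and method names are illustrative only, not Flink's `LocalInputChannel` internals:

```java
// Sketch: a consumer blocked in wait() on an empty queue is woken when
// the producer enqueues new data and calls notifyAll().
import java.util.ArrayDeque;

public class NotifyOnData {
    private final ArrayDeque<Integer> queue = new ArrayDeque<>();

    synchronized void produce(int value) {
        queue.add(value);
        notifyAll(); // unblock any consumer waiting for data
    }

    synchronized int consume() {
        while (queue.isEmpty()) {
            try {
                wait(); // consumer blocks here until notified
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return queue.poll();
    }

    public static void main(String[] args) throws Exception {
        NotifyOnData channel = new NotifyOnData();
        Thread consumer = new Thread(() -> System.out.println(channel.consume()));
        consumer.start();
        Thread.sleep(100);   // let the consumer block first
        channel.produce(7);  // producer notifies; consumer wakes and prints
        consumer.join();
    }
}
```

If the producer-side notification path were bypassed (for example by a custom transport), the consumer would stay parked in `wait()` indefinitely, which matches the stuck-task symptom in this issue.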

> Task stuck while writing output to flink
> 
>
> Key: FLINK-10672
> URL: https://issues.apache.org/jira/browse/FLINK-10672
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.5.4
> Environment: OS: Debuan rodente 4.17
> Flink version: 1.5.4
> ||Key||Value||
> |jobmanager.heap.mb|1024|
> |jobmanager.rpc.address|localhost|
> |jobmanager.rpc.port|6123|
> |metrics.reporter.jmx.class|org.apache.flink.metrics.jmx.JMXReporter|
> |metrics.reporter.jmx.port|9250-9260|
> |metrics.reporters|jmx|
> |parallelism.default|1|
> |rest.port|8081|
> |taskmanager.heap.mb|1024|
> |taskmanager.numberOfTaskSlots|1|
> |web.tmpdir|/tmp/flink-web-bdb73d6c-5b9e-47b5-9ebf-eed0a7c82c26|
>  
> h1. Overview
> ||Data Port||All Slots||Free Slots||CPU Cores||Physical Memory||JVM Heap 
> Size||Flink Managed Memory||
> |43501|1|0|12|62.9 GB|922 MB|642 MB|
> h1. Memory
> h2. JVM (Heap/Non-Heap)
> ||Type||Committed||Used||Maximum||
> |Heap|922 MB|575 MB|922 MB|
> |Non-Heap|68.8 MB|64.3 MB|-1 B|
> |Total|991 MB|639 MB|922 MB|
> h2. Outside JVM
> ||Type||Count||Used||Capacity||
> |Direct|3,292|105 MB|105 MB|
> |Mapped|0|0 B|0 B|
> h1. Network
> h2. Memory Segments
> ||Type||Count||
> |Available|3,194|
> |Total|3,278|
> h1. Garbage Collection
> ||Collector||Count||Time||
> |G1_Young_Generation|13|336|
> |G1_Old_Generation|1|21|
>Reporter: Ankur Goenka
>Assignee: Yun Gao
>Priority: Major
>  Labels: beam
> Attachments: 0.14_all_jobs.jpg, 1uruvakHxBu.png, 3aDKQ24WvKk.png, 
> Po89UGDn58V.png, WithBroadcastJob.png, jmx_dump.json, jmx_dump_detailed.json, 
> jstack_129827.log, jstack_163822.log, jstack_66985.log
>
>
> I am running a fairly complex pipleline with 200+ task.
> The pipeline works fine with small data (order of 10kb input) but gets stuck 
> with a slightly larger data (300kb input).
>  
> The task gets stuck while writing the output toFlink, more specifically it 
> gets stuck while requesting memory segment in local buffer pool. The Task 
> manager UI shows that it has enough memory and memory segments to work with.
> The relevant stack trace is 
> {quote}"grpc-default-executor-0" #138 daemon prio=5 os_prio=0 
> tid=0x7fedb0163800 nid=0x30b7f in Object.wait() [0x7fedb4f9]
>  java.lang.Thread.State: TIMED_WAITING (on object monitor)
>  at (C/C++) 0x7fef201c7dae (Unknown Source)
>  at (C/C++) 0x7fef1f2aea07 (Unknown Source)
>  at (C/C++) 0x7fef1f241cd3 (Unknown Source)
>  at java.lang.Object.wait(Native Method)
>  - waiting on <0xf6d56450> (a java.util.ArrayDeque)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:247)
>  - locked <0xf6d56450> (a java.util.ArrayDeque)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:204)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:213)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:144)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:107)
>  at 
> org.apache.flink.runtime.operators.shipping.OutputCollector.collect(OutputCollector.java:65)
>  at 
> org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:42)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:26)
>  at 
> org.apache.flink.runtime.operators.chaining.ChainedFlatMapDriver.collect(ChainedFlatMapDriver.java:80)
>  at 
> org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
>  at 
> 

[GitHub] [flink] KurtYoung commented on a change in pull request #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10123: 
[FLINK-14665][table-planner-blink] Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#discussion_r344423706
 
 

 ##
 File path: flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/api/stream/ExplainTest.xml
 ##
 @@ -638,47 +638,44 @@ SortLimit(orderBy=[a ASC], offset=[0], fetch=[5], updateAsRetraction=[false], ac
 LogicalSink(name=[`default_catalog`.`default_database`.`appendSink1`], fields=[a, b])
 +- LogicalProject(id1=[$0], EXPR$1=[$2])
+- LogicalAggregate(group=[{0, 1}], EXPR$1=[LISTAGG($2, $3)])
-  +- LogicalProject(id1=[$0], $f1=[TUMBLE($1, 8000:INTERVAL SECOND)], text=[$2], $f3=[_UTF-16LE'#'])
- +- LogicalProject(id1=[$0], ts=[$2], text=[$1])
-+- LogicalFilter(condition=[AND(=($0, $3), >($2, -($7, 30:INTERVAL MINUTE)), <($2, +($7, 18:INTERVAL MINUTE)))])
-   +- LogicalJoin(condition=[true], joinType=[inner])
-  :- LogicalWatermarkAssigner(fields=[id1, text, rowtime], rowtimeField=[rowtime], watermarkDelay=[0])
-  :  +- LogicalTableScan(table=[[default_catalog, default_database, T1]])
-  +- LogicalWatermarkAssigner(fields=[id2, cnt, name, goods, rowtime], rowtimeField=[rowtime], watermarkDelay=[0])
- +- LogicalTableScan(table=[[default_catalog, default_database, T2]])
+  +- LogicalProject(id1=[$0], $f1=[TUMBLE($2, 8000:INTERVAL SECOND)], text=[$1], $f3=[_UTF-16LE'#'])
 
 Review comment:
   Could you explain why this file changed? I didn't expect any of your changes to affect it.




[GitHub] [flink] KurtYoung commented on a change in pull request #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10123: 
[FLINK-14665][table-planner-blink] Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#discussion_r344422724
 
 

 ##
 File path: flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/calcite/FlinkPlannerImpl.scala
 ##
 @@ -151,37 +151,36 @@ class FlinkPlannerImpl(
 new SqlExprToRexConverterImpl(config, typeFactory, cluster, tableRowType)
   }
 
-  /** Implements [[org.apache.calcite.plan.RelOptTable.ViewExpander]]
-* interface for [[org.apache.calcite.tools.Planner]]. */
-  class ViewExpanderImpl extends ViewExpander {
-
-override def expandView(
-rowType: RelDataType,
-queryString: String,
-schemaPath: util.List[String],
-viewPath: util.List[String]): RelRoot = {
-
-  val sqlNode = parser.parse(queryString)
-  val catalogReader = catalogReaderSupplier.apply(false)
-.withSchemaPath(schemaPath)
-  val validator =
-new FlinkCalciteSqlValidator(operatorTable, catalogReader, typeFactory)
-  validator.setIdentifierExpansion(true)
-  val validatedSqlNode = validator.validate(sqlNode)
-  val sqlToRelConverter = new SqlToRelConverter(
-new ViewExpanderImpl,
-validator,
-catalogReader,
-cluster,
-convertletTable,
-sqlToRelConverterConfig)
-  root = sqlToRelConverter.convertQuery(validatedSqlNode, true, false)
-  root = root.withRel(sqlToRelConverter.flattenTypes(root.project(), true))
-  root = root.withRel(RelDecorrelator.decorrelateQuery(root.project()))
-  FlinkPlannerImpl.this.root
-}
+  override def getCluster: RelOptCluster = cluster
+
+  override def expandView(
+  rowType: RelDataType,
+  queryString: String,
+  schemaPath: util.List[String],
+  viewPath: util.List[String]): RelRoot = {
+
+val sqlNode: SqlNode = parser.parse(queryString)
 
 Review comment:
   According to scala coding style:
   
   > You should almost never annotate the type of a private field or a local 
variable. However, you may wish to still display the type where the assigned 
value has a complex or non-obvious form.
   
   I don't think any local variable in this function needs an explicit type annotation, since most types are already conveyed by the variable names.




[GitHub] [flink] KurtYoung commented on a change in pull request #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10123: 
[FLINK-14665][table-planner-blink] Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#discussion_r344423689
 
 

 ##
 File path: flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/batch/sql/DagOptimizationTest.xml
 ##
 @@ -31,13 +31,11 @@ LogicalSink(name=[`default_catalog`.`default_database`.`sink2`], fields=[b, cnt]
  +- LogicalProject(b=[$1], a=[$0])
 +- LogicalUnion(all=[true])
:- LogicalProject(a=[$0], b=[$1], c=[$2])
-   :  +- LogicalProject(a=[$0], b=[$1], c=[$2])
-   : +- LogicalFilter(condition=[LIKE($2, _UTF-16LE'%hello%')])
-   :+- LogicalTableScan(table=[[default_catalog, default_database, MyTable, source: [TestTableSource(a, b, c)]]])
+   :  +- LogicalFilter(condition=[LIKE($2, _UTF-16LE'%hello%')])
 
 Review comment:
   Could you explain why this file changed? I didn't expect any of your changes to affect it.




[GitHub] [flink] KurtYoung commented on a change in pull request #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10123: 
[FLINK-14665][table-planner-blink] Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#discussion_r344424833
 
 

 ##
 File path: flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/PhysicalTableSourceScan.scala
 ##
 @@ -53,8 +52,9 @@ abstract class PhysicalTableSourceScan(
   protected[flink] val tableSource: TableSource[_] = tableSourceTable.tableSource
 
   override def deriveRowType(): RelDataType = {
-val flinkTypeFactory = cluster.getTypeFactory.asInstanceOf[FlinkTypeFactory]
-tableSourceTable.getRowType(flinkTypeFactory)
+// TableScan row type should always keep same with its
 
 Review comment:
   Not sure about this: if a projection has been pushed down, will the row type 
still be the same as the table's?
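A toy illustration of the reviewer's concern (hypothetical names, not Flink's actual scan/source classes): once projection push-down has happened, the scan produces only the projected fields, so its row type is narrower than the table's schema, and deriving the scan row type directly from the table would be wrong.

```java
import java.util.Arrays;
import java.util.List;

public class ProjectionPushDown {

    // Full table schema: fields (a, b, c).
    static List<String> tableRowType() {
        return Arrays.asList("a", "b", "c");
    }

    // After projection push-down the scan only emits the projected fields,
    // so its row type is a subset of the table's row type.
    static List<String> scanRowType(int[] projectedFields) {
        return Arrays.stream(projectedFields)
                .mapToObj(i -> tableRowType().get(i))
                .collect(java.util.stream.Collectors.toList());
    }

    public static void main(String[] args) {
        // Projecting fields 0 and 2 yields [a, c], not [a, b, c].
        System.out.println(scanRowType(new int[]{0, 2}));
    }
}
```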




[GitHub] [flink] KurtYoung commented on a change in pull request #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10123: 
[FLINK-14665][table-planner-blink] Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#discussion_r344422330
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/operations/SqlToOperationConverter.java
 ##
 @@ -201,30 +201,44 @@ private Operation convertSqlQuery(SqlNode node) {
 *
 * The returned table schema contains columns (a:int, b:varchar, 
c:timestamp).
 *
-* @param sqlCreateTable sql create table node.
+* @param sqlCreateTable sql create table node
 * @return TableSchema
 */
private TableSchema createTableSchema(SqlCreateTable sqlCreateTable) {
-   if (sqlCreateTable.containsComputedColumn()) {
-   throw new SqlConversionException("Computed columns for 
DDL is not supported yet!");
-   }
-   TableSchema.Builder builder = new TableSchema.Builder();
-   SqlValidator validator = flinkPlanner.getOrCreateSqlValidator();
-   // setup table columns
+   // Setup table columns.
SqlNodeList columnList = sqlCreateTable.getColumnList();
-   Map<String, RelDataType> nameToTypeMap = new HashMap<>();
+   // Collect the physical fields info first.
+   Map<String, RelDataType> nameToType = new HashMap<>();
+   final SqlValidator validator = 
flinkPlanner.getOrCreateSqlValidator();
for (SqlNode node : columnList.getList()) {
if (node instanceof SqlTableColumn) {
SqlTableColumn column = (SqlTableColumn) node;
RelDataType relType = column.getType()
.deriveType(validator, 
column.getType().getNullable());
String name = column.getName().getSimple();
-   nameToTypeMap.put(name, relType);
-   DataType dataType = 
TypeConversions.fromLogicalToDataType(
-   
FlinkTypeFactory.toLogicalType(relType));
-   builder.field(name, dataType);
-   } else if (node instanceof SqlBasicCall) {
-   // TODO: computed column ...
+   nameToType.put(name, relType);
+   }
+   }
+   final TableSchema.Builder builder = new TableSchema.Builder();
+   // Build the table schema.
+   for (SqlNode node : columnList) {
+   if (node instanceof SqlTableColumn) {
+   SqlTableColumn column = (SqlTableColumn) node;
+   final String fieldName = 
column.getName().getSimple();
+   assert nameToType.containsKey(fieldName);
+   builder.field(fieldName,
+   TypeConversions.fromLogicalToDataType(
+   
FlinkTypeFactory.toLogicalType(nameToType.get(fieldName))));
+   } else {
+   assert node.getKind() == SqlKind.AS;
 
 Review comment:
   Don't assert here; take the type you want in an `else if () { }` branch, and 
if you are sure no third situation can happen, throw an exception in a final 
`else { }`.
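The pattern the reviewer suggests can be sketched as follows. This is a minimal stand-alone sketch: `SqlNode`, `SqlKind`, and the `classify` step are simplified stand-ins, not the real Calcite/Flink classes. The point is that Java `assert` is disabled by default at runtime, while an explicit `else`-branch exception always fails loudly on the unexpected case.

```java
public class ColumnClassifier {

    enum SqlKind { AS, IDENTIFIER }

    static class SqlNode {
        private final SqlKind kind;
        SqlNode(SqlKind kind) { this.kind = kind; }
        SqlKind getKind() { return kind; }
    }

    static class SqlTableColumn extends SqlNode {
        SqlTableColumn() { super(SqlKind.IDENTIFIER); }
    }

    // Instead of `assert node.getKind() == SqlKind.AS`, match the expected
    // case in `else if`, and throw in a final `else` for anything unexpected.
    static String classify(SqlNode node) {
        if (node instanceof SqlTableColumn) {
            return "physical column";
        } else if (node.getKind() == SqlKind.AS) {
            return "computed column";
        } else {
            throw new IllegalStateException(
                "Unexpected column node: " + node.getKind());
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(new SqlTableColumn()));    // physical column
        System.out.println(classify(new SqlNode(SqlKind.AS))); // computed column
    }
}
```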




[GitHub] [flink] KurtYoung commented on a change in pull request #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10123: 
[FLINK-14665][table-planner-blink] Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#discussion_r344424391
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/schema/FlinkRelOptTable.scala
 ##
 @@ -44,15 +42,16 @@ import org.apache.calcite.util.{ImmutableBitSet, Util}
 import java.util.{List => JList, Set => JSet}
 
 import scala.collection.JavaConversions._
+import scala.collection.JavaConverters._
 
 /**
   * [[FlinkRelOptTable]] wraps a [[FlinkTable]]
   *
-  * @param schema  the [[RelOptSchema]] this table belongs to
-  * @param rowType the type of rows returned by this table
-  * @param names   the identifier for this table. The identifier must be 
unique with
-  *respect to the Connection producing this table.
-  * @param table   wrapped flink table
+  * @param schema the [[RelOptSchema]] this table belongs to
 
 Review comment:
   Revert this? I don't see any problem with the old format.




[GitHub] [flink] KurtYoung commented on a change in pull request #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10123: 
[FLINK-14665][table-planner-blink] Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#discussion_r344421938
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/logical/LegacyTypeInformationType.java
 ##
 @@ -50,7 +50,14 @@
	private final TypeInformation<?> typeInfo;
 
	public LegacyTypeInformationType(LogicalTypeRoot logicalTypeRoot, 
TypeInformation<?> typeInfo) {
-   super(true, logicalTypeRoot);
+   this(true, logicalTypeRoot, typeInfo);
+   }
+
+   public LegacyTypeInformationType(
+   boolean nullable,
 
 Review comment:
   Could you explain why you are adding this? AFAIK TypeInformation doesn't have 
nullability, and I also can't find any place that calls this modified constructor.




[GitHub] [flink] KurtYoung commented on a change in pull request #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10123: 
[FLINK-14665][table-planner-blink] Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#discussion_r344423566
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/stream/sql/MiniBatchIntervalInferTest.xml
 ##
 @@ -221,76 +221,70 @@ Calc(select=[a, b])
 LogicalSink(name=[`default_catalog`.`default_database`.`appendSink1`], 
fields=[a, b])
 +- LogicalProject(id1=[$1], EXPR$1=[$2])
+- LogicalAggregate(group=[{0, 1}], EXPR$1=[LISTAGG($2, $3)])
-  +- LogicalProject($f0=[HOP($2, 12000:INTERVAL SECOND, 4000:INTERVAL 
SECOND)], id1=[$0], text=[$1], $f3=[_UTF-16LE'*'])
- +- LogicalProject(id1=[$1], text=[$2], ts=[TUMBLE_ROWTIME($0)])
-+- LogicalAggregate(group=[{0, 1}], text=[LISTAGG($2, $3)])
-   +- LogicalProject($f0=[TUMBLE($1, 6000:INTERVAL SECOND)], 
id1=[$0], text=[$2], $f3=[_UTF-16LE'#'])
-  +- LogicalProject(id1=[$0], ts=[$1], text=[$2])
- +- LogicalFilter(condition=[AND(=($0, $3), >($1, -($4, 
30:INTERVAL MINUTE)), <($1, +($4, 18:INTERVAL MINUTE)))])
-+- LogicalJoin(condition=[true], joinType=[inner])
-   :- LogicalWatermarkAssigner(fields=[id1, rowtime, 
text], rowtimeField=[rowtime], watermarkDelay=[0])
-   :  +- LogicalTableScan(table=[[default_catalog, 
default_database, T1]])
-   +- LogicalWatermarkAssigner(fields=[id2, rowtime, 
cnt, name, goods], rowtimeField=[rowtime], watermarkDelay=[0])
-  +- LogicalTableScan(table=[[default_catalog, 
default_database, T2]])
+  +- LogicalProject($f0=[HOP(TUMBLE_ROWTIME($0), 12000:INTERVAL SECOND, 
4000:INTERVAL SECOND)], id1=[$1], text=[$2], $f3=[_UTF-16LE'*'])
 
 Review comment:
   Could you explain why this file has changed? I didn't expect any of your 
changes to affect it.




[GitHub] [flink] KurtYoung commented on a change in pull request #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10123: 
[FLINK-14665][table-planner-blink] Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#discussion_r344423549
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/stream/sql/DagOptimizationTest.xml
 ##
 @@ -31,13 +31,11 @@ 
LogicalSink(name=[`default_catalog`.`default_database`.`retractSink`], fields=[b
  +- LogicalProject(b=[$1], a=[$0])
 +- LogicalUnion(all=[true])
:- LogicalProject(a=[$0], b=[$1], c=[$2])
-   :  +- LogicalProject(a=[$0], b=[$1], c=[$2])
-   : +- LogicalFilter(condition=[LIKE($2, _UTF-16LE'%hello%')])
-   :+- LogicalTableScan(table=[[default_catalog, 
default_database, MyTable, source: [TestTableSource(a, b, c)]]])
+   :  +- LogicalFilter(condition=[LIKE($2, _UTF-16LE'%hello%')])
 
 Review comment:
   Could you explain why this file has changed? I didn't expect any of your 
changes to affect it.




[GitHub] [flink] KurtYoung commented on a change in pull request #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10123: 
[FLINK-14665][table-planner-blink] Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#discussion_r344424324
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/schema/FlinkRelOptTable.scala
 ##
 @@ -65,6 +64,16 @@ class FlinkRelOptTable protected(
   // Sets a bigger default value to avoid broadcast join.
   val DEFAULT_ROWCOUNT: Double = 1E8
 
+  lazy val columnExprs: Map[String, String] = table match {
+case cct : TableSourceTable[_] =>
 
 Review comment:
   Why call it `cct`?




[GitHub] [flink] KurtYoung commented on a change in pull request #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10123: 
[FLINK-14665][table-planner-blink] Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#discussion_r344424499
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/schema/FlinkRelOptTable.scala
 ##
 @@ -221,12 +230,40 @@ class FlinkRelOptTable protected(
 val cluster: RelOptCluster = context.getCluster
 if (table.isInstanceOf[TranslatableTable]) {
   table.asInstanceOf[TranslatableTable].toRel(context, this)
-} else if (Hook.ENABLE_BINDABLE.get(false)) {
-  LogicalTableScan.create(cluster, this)
-} else if (CalcitePrepareImpl.ENABLE_ENUMERABLE) {
-  EnumerableTableScan.create(cluster, this)
 } else {
-  throw new AssertionError
+  if (!context.isInstanceOf[FlinkToRelContext]) {
+// If the transform comes from a RelOptRule,
+// returns the scan directly.
+LogicalTableScan.create(cluster, this)
+  } else {
+// Get row type of physical fields.
+val physicalFields = getRowType.getFieldList
 
 Review comment:
   If you are wrapping lines, make each call a separate line, like:
   > val physicalFields = getRowType
   >   .getFieldList
   >   .filter(f => !columnExprs.contains(f.getName))
   >   .toList




[GitHub] [flink] KurtYoung commented on a change in pull request #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-08 Thread GitBox
KurtYoung commented on a change in pull request #10123: 
[FLINK-14665][table-planner-blink] Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#discussion_r344424418
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/schema/FlinkRelOptTable.scala
 ##
 @@ -221,12 +230,40 @@ class FlinkRelOptTable protected(
 val cluster: RelOptCluster = context.getCluster
 if (table.isInstanceOf[TranslatableTable]) {
   table.asInstanceOf[TranslatableTable].toRel(context, this)
-} else if (Hook.ENABLE_BINDABLE.get(false)) {
 
 Review comment:
   Why delete these?




[jira] [Commented] (FLINK-14037) Deserializing the input/output formats failed: unread block data

2019-11-08 Thread liupengcheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16970690#comment-16970690
 ] 

liupengcheng commented on FLINK-14037:
--

[~chesnay] Hi, I notice that this Jira is closed as a duplicate, but we've 
already had a lot of discussion and even a PR reviewed. Does that mean the PR 
will not be accepted?

> Deserializing the input/output formats failed: unread block data
> 
>
> Key: FLINK-14037
> URL: https://issues.apache.org/jira/browse/FLINK-14037
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.9.0
> Environment: flink 1.9.0
> app jar use `flink-shaded-hadoop-2` dependencies to avoid some confilicts
>  
>Reporter: liupengcheng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Recently, we encountered the following issue when testing flink 1.9.0:
> {code:java}
> org.apache.flink.client.program.ProgramInvocationException: Could not 
> retrieve the execution result. (JobID: 8ffbc071dda81d6f8005c02be8adde6b)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:255)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:326)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
>   at 
> org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:539)
>   at 
> com.github.ehiggs.spark.terasort.FlinkTeraSort$.main(FlinkTeraSort.scala:89)
>   at 
> com.github.ehiggs.spark.terasort.FlinkTeraSort.main(FlinkTeraSort.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:382)
>   at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>   at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:263)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at 

[GitHub] [flink] flinkbot edited a comment on issue #10084: [FLINK-14382][yarn] Incorrect handling of FLINK_PLUGINS_DIR on Yarn

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10084: [FLINK-14382][yarn] Incorrect 
handling of FLINK_PLUGINS_DIR on Yarn
URL: https://github.com/apache/flink/pull/10084#issuecomment-549724300
 
 
   
   ## CI report:
   
   * 0fca3b7914151ce638faa282a6c75096c6f404ed : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134988810)
   * e3825d38d11026310b0fa2e295d1771822a5c63a : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135005005)
   * dfeb8e6057b760a74ea554114d4bf7974a727e9a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135010794)
   * 4a28fdcf8599d49f80cfef13a45c4325a2b7e55b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135157529)
   * 4125d88542be1f188bc85bd92e48abcc72c7bd3d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135230098)
   * 9236fbc6bac84c59028dadc67fa38bf7ebf1e626 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135613273)
   * 6e10fa89a0dd482c6638bf0b66b33a28caf4ce3a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135617417)
   * 1c1ba364d57f2f1dec91606697982732b34a2048 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135628954)
   * 06ad7ff1d844e51dfdc22782eabb75d8dd45224a : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135742136)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Created] (FLINK-14684) Add Pinterest to Chinese Powered By page

2019-11-08 Thread Hequn Cheng (Jira)
Hequn Cheng created FLINK-14684:
---

 Summary: Add Pinterest to Chinese Powered By page
 Key: FLINK-14684
 URL: https://issues.apache.org/jira/browse/FLINK-14684
 Project: Flink
  Issue Type: New Feature
  Components: chinese-translation
Reporter: Hequn Cheng
 Fix For: 1.10.0


Pinterest was added to the English Powered By page with commit:
[51f7e3ced85b94dcbe3c051069379d22c88fbc5c|https://github.com/apache/flink-web/pull/281]

It should be added to the Chinese Powered By (and index.html) page as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10090: [FLINK-14560] [runtime] The value of taskmanager.memory.size in flink…

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10090: [FLINK-14560] [runtime] The value of 
taskmanager.memory.size in flink…
URL: https://github.com/apache/flink/pull/10090#issuecomment-549891301
 
 
   
   ## CI report:
   
   * 0947500a3ee71f62fd992e8b856f1f6c9d9d68cb : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135081985)
   * 19aa6c886e3084b1a00351754c9e9eed39fd8019 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135644725)
   * 974d39956940cfd53114c9f8667b80235d5c0a99 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135653995)
   * c1bd67424db724990265fa7e42629c0c94fe0591 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135668832)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10084: [FLINK-14382][yarn] Incorrect handling of FLINK_PLUGINS_DIR on Yarn

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10084: [FLINK-14382][yarn] Incorrect 
handling of FLINK_PLUGINS_DIR on Yarn
URL: https://github.com/apache/flink/pull/10084#issuecomment-549724300
 
 
   
   ## CI report:
   
   * 0fca3b7914151ce638faa282a6c75096c6f404ed : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134988810)
   * e3825d38d11026310b0fa2e295d1771822a5c63a : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135005005)
   * dfeb8e6057b760a74ea554114d4bf7974a727e9a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135010794)
   * 4a28fdcf8599d49f80cfef13a45c4325a2b7e55b : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135157529)
   * 4125d88542be1f188bc85bd92e48abcc72c7bd3d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135230098)
   * 9236fbc6bac84c59028dadc67fa38bf7ebf1e626 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135613273)
   * 6e10fa89a0dd482c6638bf0b66b33a28caf4ce3a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135617417)
   * 1c1ba364d57f2f1dec91606697982732b34a2048 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135628954)
   * 06ad7ff1d844e51dfdc22782eabb75d8dd45224a : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10090: [FLINK-14560] [runtime] The value of taskmanager.memory.size in flink…

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10090: [FLINK-14560] [runtime] The value of 
taskmanager.memory.size in flink…
URL: https://github.com/apache/flink/pull/10090#issuecomment-549891301
 
 
   
   ## CI report:
   
   * 0947500a3ee71f62fd992e8b856f1f6c9d9d68cb : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135081985)
   * 19aa6c886e3084b1a00351754c9e9eed39fd8019 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135644725)
   * 974d39956940cfd53114c9f8667b80235d5c0a99 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135653995)
   * c1bd67424db724990265fa7e42629c0c94fe0591 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135668832)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] faaronzheng commented on issue #10090: [FLINK-14560] [runtime] The value of taskmanager.memory.size in flink…

2019-11-08 Thread GitBox
faaronzheng commented on issue #10090: [FLINK-14560] [runtime] The value of 
taskmanager.memory.size in flink…
URL: https://github.com/apache/flink/pull/10090#issuecomment-552043082
 
 
   > * @flinkbot run travis
   
   @flinkbot run travis




[GitHub] [flink] flinkbot edited a comment on issue #10140: [FLINK-14660][sql cli] add 'SHOW MODULES' to SQL CLI

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10140: [FLINK-14660][sql cli] add 'SHOW 
MODULES' to SQL CLI
URL: https://github.com/apache/flink/pull/10140#issuecomment-552037315
 
 
   
   ## CI report:
   
   * 2e18887577cd377bbb09e24747fa9e793ae4538b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135734757)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10121: [FLINK-14420][doc] Add documentation for pluggable module

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10121: [FLINK-14420][doc] Add documentation 
for pluggable module
URL: https://github.com/apache/flink/pull/10121#issuecomment-551327230
 
 
   
   ## CI report:
   
   * 6af61a5c1ef5db243b2125f40b40ca244c4449ab : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135553093)
   * db91f009fea212a53f5135db46a5f63d48313f88 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135732621)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot commented on issue #10140: [FLINK-14660][sql cli] add 'SHOW MODULES' to SQL CLI

2019-11-08 Thread GitBox
flinkbot commented on issue #10140: [FLINK-14660][sql cli] add 'SHOW MODULES' 
to SQL CLI
URL: https://github.com/apache/flink/pull/10140#issuecomment-552037315
 
 
   
   ## CI report:
   
   * 2e18887577cd377bbb09e24747fa9e793ae4538b : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10121: [FLINK-14420][doc] Add documentation for pluggable module

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10121: [FLINK-14420][doc] Add documentation 
for pluggable module
URL: https://github.com/apache/flink/pull/10121#issuecomment-551327230
 
 
   
   ## CI report:
   
   * 6af61a5c1ef5db243b2125f40b40ca244c4449ab : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135553093)
   * db91f009fea212a53f5135db46a5f63d48313f88 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135732621)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10076: [FLINK-14465][runtime] Let `StandaloneJobClusterEntrypoint` use user code class loader

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10076: [FLINK-14465][runtime] Let 
`StandaloneJobClusterEntrypoint` use user code class loader
URL: https://github.com/apache/flink/pull/10076#issuecomment-549350838
 
 
   
   ## CI report:
   
   * 20b4b8b96eb3e73db1e627b334fb7fb487aa3bb8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134854523)
   * 1ff44182144ddcf9834a60b6961846625190d38d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135437308)
   * b14cfe9f967fae56b9841820c2e3e7b7a23bee01 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135474356)
   * f0b8bf41c69a1b171d29cedfc0a77b88a54028e4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135713991)
   * 6ec2aaa93f8e87cd1fdb8dcbac4b8ff091514221 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135727198)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10121: [FLINK-14420][doc] Add documentation for pluggable module

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10121: [FLINK-14420][doc] Add documentation 
for pluggable module
URL: https://github.com/apache/flink/pull/10121#issuecomment-551327230
 
 
   
   ## CI report:
   
   * 6af61a5c1ef5db243b2125f40b40ca244c4449ab : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135553093)
   * db91f009fea212a53f5135db46a5f63d48313f88 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot commented on issue #10140: [FLINK-14660][sql cli] add 'SHOW MODULES' to SQL CLI

2019-11-08 Thread GitBox
flinkbot commented on issue #10140: [FLINK-14660][sql cli] add 'SHOW MODULES' 
to SQL CLI
URL: https://github.com/apache/flink/pull/10140#issuecomment-552030424
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2e18887577cd377bbb09e24747fa9e793ae4538b (Fri Nov 08 
23:27:18 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-14660) add 'SHOW MODULES' sql command

2019-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14660:
---
Labels: pull-request-available  (was: )

> add 'SHOW MODULES' sql command
> --
>
> Key: FLINK-14660
> URL: https://issues.apache.org/jira/browse/FLINK-14660
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] bowenli86 opened a new pull request #10140: [FLINK-14660][sql cli] add 'SHOW MODULES' to SQL CLI

2019-11-08 Thread GitBox
bowenli86 opened a new pull request #10140: [FLINK-14660][sql cli] add 'SHOW 
MODULES' to SQL CLI
URL: https://github.com/apache/flink/pull/10140
 
 
   ### What is the purpose of the change
   
   Add 'show modules' to the SQL client.
   
   Since all the 'show xxx' commands are implemented in the SQL client and not in 
the SQL parser, we add 'show modules' to the SQL CLI for now. All the 'show xxx' 
commands should be moved to the parser together, all at once.
   
   ## Brief change log
   
   - add 'show modules' to sql cli
   - add `listModules` API to executor
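
   As a sketch of the intended user experience (the output line is illustrative; 
the actual module list depends on the session configuration):

```
Flink SQL> SHOW MODULES;
core
```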
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (docs later)
   




[GitHub] [flink] bowenli86 commented on issue #10140: [FLINK-14660][sql cli] add 'SHOW MODULES' to SQL CLI

2019-11-08 Thread GitBox
bowenli86 commented on issue #10140: [FLINK-14660][sql cli] add 'SHOW MODULES' 
to SQL CLI
URL: https://github.com/apache/flink/pull/10140#issuecomment-552030115
 
 
   @xuefuz @lirui-apache @zjuwangg 




[jira] [Closed] (FLINK-14673) Shouldn't expect HMS client to throw NoSuchObjectException for non-existing function

2019-11-08 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-14673.

Resolution: Fixed

master: 1806a373fcec1e981601bd755d3d3652dc61219a

> Shouldn't expect HMS client to throw NoSuchObjectException for non-existing 
> function
> 
>
> Key: FLINK-14673
> URL: https://issues.apache.org/jira/browse/FLINK-14673
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Assigned] (FLINK-14673) Shouldn't expect HMS client to throw NoSuchObjectException for non-existing function

2019-11-08 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li reassigned FLINK-14673:


Assignee: Rui Li

> Shouldn't expect HMS client to throw NoSuchObjectException for non-existing 
> function
> 
>
> Key: FLINK-14673
> URL: https://issues.apache.org/jira/browse/FLINK-14673
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Closed] (FLINK-14579) enable SQL CLI to configure modules via yaml config

2019-11-08 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-14579.

Resolution: Fixed

master: 501f640454f2841166841cdd809b5c29893caffd

> enable SQL CLI to configure modules via yaml config
> ---
>
> Key: FLINK-14579
> URL: https://issues.apache.org/jira/browse/FLINK-14579
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[GitHub] [flink] bowenli86 closed pull request #10133: [FLINK-14673][hive] Shouldn't expect HMS client to throw NoSuchObject…

2019-11-08 Thread GitBox
bowenli86 closed pull request #10133: [FLINK-14673][hive] Shouldn't expect HMS 
client to throw NoSuchObject…
URL: https://github.com/apache/flink/pull/10133
 
 
   




[GitHub] [flink] bowenli86 commented on issue #10133: [FLINK-14673][hive] Shouldn't expect HMS client to throw NoSuchObject…

2019-11-08 Thread GitBox
bowenli86 commented on issue #10133: [FLINK-14673][hive] Shouldn't expect HMS 
client to throw NoSuchObject…
URL: https://github.com/apache/flink/pull/10133#issuecomment-552027435
 
 
   LGTM, merging




[jira] [Updated] (FLINK-14673) Shouldn't expect HMS client to throw NoSuchObjectException for non-existing function

2019-11-08 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-14673:
-
Fix Version/s: 1.10.0

> Shouldn't expect HMS client to throw NoSuchObjectException for non-existing 
> function
> 
>
> Key: FLINK-14673
> URL: https://issues.apache.org/jira/browse/FLINK-14673
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[GitHub] [flink] bowenli86 closed pull request #10093: [FLINK-14579][sql cli] enable SQL CLI to configure modules via yaml config

2019-11-08 Thread GitBox
bowenli86 closed pull request #10093: [FLINK-14579][sql cli] enable SQL CLI to 
configure modules via yaml config
URL: https://github.com/apache/flink/pull/10093
 
 
   




[GitHub] [flink] bowenli86 commented on issue #10121: [FLINK-14420][doc] Add documentation for pluggable module

2019-11-08 Thread GitBox
bowenli86 commented on issue #10121: [FLINK-14420][doc] Add documentation for 
pluggable module
URL: https://github.com/apache/flink/pull/10121#issuecomment-552026581
 
 
   @sjwiesman thanks, please take another look




[GitHub] [flink] flinkbot edited a comment on issue #10139: [FLINK-13095][state-processor-api] Provide an easy way to read / bootstrap window state using the State Processor API

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10139: [FLINK-13095][state-processor-api] 
Provide an easy way to read / bootstrap window state using the State Processor 
API
URL: https://github.com/apache/flink/pull/10139#issuecomment-552000397
 
 
   
   ## CI report:
   
   * 0169be09c93fd3829577231a69fcfef441395fc0 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135721162)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10076: [FLINK-14465][runtime] Let `StandaloneJobClusterEntrypoint` use user code class loader

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10076: [FLINK-14465][runtime] Let 
`StandaloneJobClusterEntrypoint` use user code class loader
URL: https://github.com/apache/flink/pull/10076#issuecomment-549350838
 
 
   
   ## CI report:
   
   * 20b4b8b96eb3e73db1e627b334fb7fb487aa3bb8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134854523)
   * 1ff44182144ddcf9834a60b6961846625190d38d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135437308)
   * b14cfe9f967fae56b9841820c2e3e7b7a23bee01 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135474356)
   * f0b8bf41c69a1b171d29cedfc0a77b88a54028e4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135713991)
   * 6ec2aaa93f8e87cd1fdb8dcbac4b8ff091514221 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135727198)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10076: [FLINK-14465][runtime] Let `StandaloneJobClusterEntrypoint` use user code class loader

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10076: [FLINK-14465][runtime] Let 
`StandaloneJobClusterEntrypoint` use user code class loader
URL: https://github.com/apache/flink/pull/10076#issuecomment-549350838
 
 
   
   ## CI report:
   
   * 20b4b8b96eb3e73db1e627b334fb7fb487aa3bb8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134854523)
   * 1ff44182144ddcf9834a60b6961846625190d38d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135437308)
   * b14cfe9f967fae56b9841820c2e3e7b7a23bee01 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135474356)
   * f0b8bf41c69a1b171d29cedfc0a77b88a54028e4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135713991)
   * 6ec2aaa93f8e87cd1fdb8dcbac4b8ff091514221 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10139: [FLINK-13095][state-processor-api] Provide an easy way to read / bootstrap window state using the State Processor API

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10139: [FLINK-13095][state-processor-api] 
Provide an easy way to read / bootstrap window state using the State Processor 
API
URL: https://github.com/apache/flink/pull/10139#issuecomment-552000397
 
 
   
   ## CI report:
   
   * 0169be09c93fd3829577231a69fcfef441395fc0 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135721162)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot commented on issue #10139: [FLINK-13095][state-processor-api] Provide an easy way to read / bootstrap window state using the State Processor API

2019-11-08 Thread GitBox
flinkbot commented on issue #10139: [FLINK-13095][state-processor-api] Provide 
an easy way to read / bootstrap window state using the State Processor API
URL: https://github.com/apache/flink/pull/10139#issuecomment-552000397
 
 
   
   ## CI report:
   
   * 0169be09c93fd3829577231a69fcfef441395fc0 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10076: [FLINK-14465][runtime] Let `StandaloneJobClusterEntrypoint` use user code class loader

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10076: [FLINK-14465][runtime] Let 
`StandaloneJobClusterEntrypoint` use user code class loader
URL: https://github.com/apache/flink/pull/10076#issuecomment-549350838
 
 
   
   ## CI report:
   
   * 20b4b8b96eb3e73db1e627b334fb7fb487aa3bb8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134854523)
   * 1ff44182144ddcf9834a60b6961846625190d38d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135437308)
   * b14cfe9f967fae56b9841820c2e3e7b7a23bee01 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135474356)
   * f0b8bf41c69a1b171d29cedfc0a77b88a54028e4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135713991)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] sjwiesman commented on issue #10139: [FLINK-13095][state-processor-api] Provide an easy way to read / bootstrap window state using the State Processor API

2019-11-08 Thread GitBox
sjwiesman commented on issue #10139: [FLINK-13095][state-processor-api] Provide 
an easy way to read / bootstrap window state using the State Processor API
URL: https://github.com/apache/flink/pull/10139#issuecomment-551993951
 
 
   @flinkbot attention @tzulitai 
   
   The docs for the state processor api need a serious overhaul. If it's 
alright with you I'd like to leave this PR with just JavaDoc. I've already set 
aside some time next week to rewrite the docs for this component and will add 
window content at that time. 
   
   




[GitHub] [flink] flinkbot commented on issue #10139: [FLINK-13095][state-processor-api] Provide an easy way to read / bootstrap window state using the State Processor API

2019-11-08 Thread GitBox
flinkbot commented on issue #10139: [FLINK-13095][state-processor-api] Provide 
an easy way to read / bootstrap window state using the State Processor API
URL: https://github.com/apache/flink/pull/10139#issuecomment-551993757
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 0169be09c93fd3829577231a69fcfef441395fc0 (Fri Nov 08 
21:20:15 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-13095) Provide an easy way to read / bootstrap window state using the State Processor API

2019-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13095:
---
Labels: pull-request-available  (was: )

> Provide an easy way to read / bootstrap window state using the State 
> Processor API
> --
>
> Key: FLINK-13095
> URL: https://issues.apache.org/jira/browse/FLINK-13095
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / State Processor
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
>






[GitHub] [flink] sjwiesman opened a new pull request #10139: [FLINK-13095][state-processor-api] Provide an easy way to read / bootstrap window state using the State Processor API

2019-11-08 Thread GitBox
sjwiesman opened a new pull request #10139: [FLINK-13095][state-processor-api] 
Provide an easy way to read / bootstrap window state using the State Processor 
API
URL: https://github.com/apache/flink/pull/10139
 
 
   ## What is the purpose of the change
   
   Provide an easy way to read / bootstrap window state using the State 
Processor API.
   
   ## Brief changelog
   
   e5dcf6f Adds support for listing namespaces for a state type from 
`KeyedStateBackend`. This is required as windows are encoded as namespaces in 
the state backend and up until now there has never been a way to query 
namespaces, all operations assume the calling code already knows what namespace 
they are interested in interacting with.
   
   ffd3983 Refactors the `KeyedStateInputFormat` to support arbitrary user 
function types. It now follows the model of Task -> Operator -> Function where 
the input format is the task, `StateReaderOperator` is the low-level operator 
that can be implemented for any function type. In the future this allows us to 
support reading other low-level encodings such as stateful functions. 
   
   f6348bb Adds support for reading window operator state by implementing a 
`WindowStateReaderOperator` and `WindowReaderFunction`. 
   
   fbd88b7 Extracts the logic from `WindowedStream` into a builder class so 
that there is one definitive way to create and configure the window operator. I 
ran all the end to end tests to ensure everything works as expected.
   
   bf4e549 Cursory refactoring of some test utilities
   
0169be0 Adds support for bootstrapping window state. While this commit 
contains a lot of code it is all boilerplate due to the many ways users can 
configure the window operator. All user-facing changes simply sit on top of the 
`WindowOperatorBuilder`. 
   
   ## Verifying this change
   
   Unit and IT tests cases
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? JavaDoc
   
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10076: [FLINK-14465][runtime] Let `StandaloneJobClusterEntrypoint` use user code class loader

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10076: [FLINK-14465][runtime] Let 
`StandaloneJobClusterEntrypoint` use user code class loader
URL: https://github.com/apache/flink/pull/10076#issuecomment-549350838
 
 
   
   ## CI report:
   
   * 20b4b8b96eb3e73db1e627b334fb7fb487aa3bb8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134854523)
   * 1ff44182144ddcf9834a60b6961846625190d38d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135437308)
   * b14cfe9f967fae56b9841820c2e3e7b7a23bee01 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135474356)
   * f0b8bf41c69a1b171d29cedfc0a77b88a54028e4 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135713991)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Updated] (FLINK-12047) State Processor API (previously named Savepoint Connector) to read / write / process savepoints

2019-11-08 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman updated FLINK-12047:
-
Component/s: (was: Runtime / State Backends)
 (was: API / DataStream)
 API / State Processor

> State Processor API (previously named Savepoint Connector) to read / write / 
> process savepoints
> ---
>
> Key: FLINK-12047
> URL: https://issues.apache.org/jira/browse/FLINK-12047
> Project: Flink
>  Issue Type: New Feature
>  Components: API / State Processor
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Seth Wiesman
>Priority: Major
>
> This JIRA tracks the ongoing efforts and discussions about a means to read / 
> write / process state in savepoints.
> There are already two known existing works (that was mentioned already in the 
> mailing lists) related to this:
> 1. Bravo [1]
> 2. https://github.com/sjwiesman/flink/tree/savepoint-connector
> Essentially, the two tools both provide a connector to read or write a Flink 
> savepoint, and allows to utilize Flink's processing APIs for querying / 
> processing the state in the savepoint.
> We should try to converge the efforts on this, and have a savepoint connector 
> like this in Flink.
> With this connector, the high-level benefits users should be able to achieve 
> with it are:
> 1. Create savepoints using existing data from other systems (i.e. 
> bootstrapping a Flink job's state with data in an external database).
> 2. Derive new state using existing state
> 3. Query state in savepoints, for example for debugging purposes
> 4. Migrate schema of state in savepoints offline, compared to the current 
> more limited approach of online migration on state access.
> 5. Change max parallelism of jobs, or any other kind of fixed configuration, 
> such as operator uids.
> [1] https://github.com/king/bravo





[jira] [Updated] (FLINK-13095) Provide an easy way to read / bootstrap window state using the State Processor API

2019-11-08 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman updated FLINK-13095:
-
Component/s: (was: Runtime / State Backends)
 (was: API / DataStream)
 API / State Processor

> Provide an easy way to read / bootstrap window state using the State 
> Processor API
> --
>
> Key: FLINK-13095
> URL: https://issues.apache.org/jira/browse/FLINK-13095
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / State Processor
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Seth Wiesman
>Priority: Major
>






[GitHub] [flink] flinkbot edited a comment on issue #10076: [FLINK-14465][runtime] Let `StandaloneJobClusterEntrypoint` use user code class loader

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10076: [FLINK-14465][runtime] Let 
`StandaloneJobClusterEntrypoint` use user code class loader
URL: https://github.com/apache/flink/pull/10076#issuecomment-549350838
 
 
   
   ## CI report:
   
   * 20b4b8b96eb3e73db1e627b334fb7fb487aa3bb8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134854523)
   * 1ff44182144ddcf9834a60b6961846625190d38d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135437308)
   * b14cfe9f967fae56b9841820c2e3e7b7a23bee01 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135474356)
   * f0b8bf41c69a1b171d29cedfc0a77b88a54028e4 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Comment Edited] (FLINK-10672) Task stuck while writing output to flink

2019-11-08 Thread venkata subbarao chunduri (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16970360#comment-16970360
 ] 

venkata subbarao chunduri edited comment on FLINK-10672 at 11/8/19 8:28 PM:


Could it be due to the local channel? It looks so.

I am facing the same issue. It seems the local channel [can go to 
wait|https://github.com/apache/flink/blob/6322618bb0f1b7942d86cb1b2b7bc55290d9e330/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGate.java#L539]
 and may never wake up, because data from the local channel is [pulled in the same 
thread|https://github.com/apache/flink/blob/6322618bb0f1b7942d86cb1b2b7bc55290d9e330/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/LocalInputChannel.java#L104].
 Could you confirm?

The issue seems to happen in the following flow:
 1. The source stops producing data for a moment. 
 2. The input gate sees no data on the channel and calls 
[wait()|https://github.com/venkatsc/flink/blob/7297bacfe14b9f814c308d95003c837115a6bdd2/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGate.java#L539]
 expecting a notify in the future. 
 3. The source starts producing data again, consumes all the buffers of the 
partition, and enters an [indefinite 
loop|https://github.com/apache/flink/blob/7297bacfe14b9f814c308d95003c837115a6bdd2/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/LocalBufferPool.java#L251]
 waiting for a new buffer. This stops any further writing for the partition.
{code:java}
at java.lang.Object.wait(Native Method)
 - waiting on <0xf6d56450> (a java.util.ArrayDeque)
 at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:247){code}
4. But no buffers will ever be released, because the local channel is stuck in 
wait.
{code:java}
at 
org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:539){code}
The common factor between the original issue report and my modified Flink code 
is that both have additional threads in the task manager (for example, gRPC 
threads in the original report). Could these additional threads create any 
issues?

The issue is clearly visible in the presence of these additional threads (my 
customization, targeted at RDMA experiments, uses a library that creates 
additional threads). Why could this happen? Does anyone see a reason for it?

When tested with Flink 1.8.1, the same job does not get stuck. But with the 
version that has additional threads in the task manager, the scenario explained 
above occurred and the tasks got stuck forever.
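The lost-wakeup hazard described in steps 1-4 can be illustrated with a minimal, self-contained Java sketch. These are hypothetical classes, not Flink's actual code: the consumer below only stays safe because it re-checks its condition in a loop while holding the monitor, and it would still hang forever if the producer blocked (e.g. waiting for a buffer) before it could enqueue and call notifyAll(), which is exactly the cycle described above.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of the guarded-wait idiom that SingleInputGate
// relies on. The consumer waits under the monitor and re-checks the
// condition in a loop; the producer enqueues and notifies under the
// same monitor. If the producer can never reach notifyAll(), the
// consumer waits indefinitely.
public class GuardedWaitSketch {
    private final Queue<Integer> queue = new ArrayDeque<>();
    private final Object lock = new Object();

    public void produce(int value) {
        synchronized (lock) {
            queue.add(value);
            lock.notifyAll(); // wake any waiting consumer
        }
    }

    public int consume() throws InterruptedException {
        synchronized (lock) {
            // Loop guards against spurious wakeups; wait() releases
            // the monitor so the producer can make progress.
            while (queue.isEmpty()) {
                lock.wait();
            }
            return queue.remove();
        }
    }

    public static void main(String[] args) throws Exception {
        GuardedWaitSketch g = new GuardedWaitSketch();
        Thread producer = new Thread(() -> g.produce(42));
        producer.start();
        int v = g.consume(); // blocks until the producer notifies
        producer.join();
        System.out.println("consumed " + v);
    }
}
```

If the producer thread above were itself blocked on another resource before entering `produce`, `consume` would never return, matching the stack traces quoted in this comment.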


was (Author: venkatsc):
Could it be due to local channel? It looks, so.

I am also faced with the same issue. Seems, the local channel [can go to 
wait|https://github.com/apache/flink/blob/6322618bb0f1b7942d86cb1b2b7bc55290d9e330/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGate.java#L539]
 and may never wake up as data from the [local channel pulled in the same 
thread|https://github.com/apache/flink/blob/6322618bb0f1b7942d86cb1b2b7bc55290d9e330/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/LocalInputChannel.java#L104].
 Could you confirm?

The issue seems to happen in a flow as below:
 1. Source stops producing data for a moment. 
 2. Input gate sees no data on the channel and calls 
[wait()|https://github.com/venkatsc/flink/blob/7297bacfe14b9f814c308d95003c837115a6bdd2/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/SingleInputGate.java#L539]
 expecting a notify in future. 
 3. Source starts producing data and consumes all the buffers on the partition 
and enters [indefinite loop 
|https://github.com/apache/flink/blob/7297bacfe14b9f814c308d95003c837115a6bdd2/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/LocalBufferPool.java#L251]
 waiting for a new buffer. This stops any further writing for the partition.
{code:java}
log: at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:247
 ){code}
4. But, no buffers are going to be released as the local channel is stuck in 
wait.
{code:java}
at 
org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:539){code}

The common factor in issue report and my modified flink code is, it has some 
additional threads in task manager (for example, GRPC threads in original issue 
report). Could these additional threads create any issues? 

This issue is clearly visible in presence of these additional threads (my 
customization, targeted for RDMA experiments uses a library that creates 
additional threads). Why could this happen?. Does anyone see a reason for it?

When tested with flink 1.8.1, the same job does not get stuck.


> Task stuck while writing output to flink
> 
>
> Key: FLINK-10672
> URL: https://issues.apache.org/jira/browse/FLINK-10672
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.5.4
> Environment: OS: Debuan rodente 4.17
> Flink version: 1.5.4
> ||Key||Value||
> |jobmanager.heap.mb|1024|
> |jobmanager.rpc.address|localhost|
> |jobmanager.rpc.port|6123|
> |metrics.reporter.jmx.class|org.apache.flink.metrics.jmx.JMXReporter|
> |metrics.reporter.jmx.port|9250-9260|
> |metrics.reporters|jmx|
> |parallelism.default|1|
> |rest.port|8081|
> |taskmanager.heap.mb|1024|
> |taskmanager.numberOfTaskSlots|1|
> |web.tmpdir|/tmp/flink-web-bdb73d6c-5b9e-47b5-9ebf-eed0a7c82c26|
>  
> h1. Overview
> ||Data Port||All Slots||Free Slots||CPU Cores||Physical Memory||JVM Heap 
> Size||Flink Managed Memory||
> |43501|1|0|12|62.9 GB|922 MB|642 MB|
> h1. Memory
> h2. JVM (Heap/Non-Heap)
> ||Type||Committed||Used||Maximum||
> |Heap|922 MB|575 MB|922 MB|
> |Non-Heap|68.8 MB|64.3 MB|-1 B|
> |Total|991 MB|639 MB|922 MB|
> h2. Outside JVM
> ||Type||Count||Used||Capacity||
> |Direct|3,292|105 MB|105 MB|
> |Mapped|0|0 B|0 B|
> h1. Network
> h2. Memory Segments
> ||Type||Count||
> |Available|3,194|
> |Total|3,278|
> h1. Garbage Collection
> ||Collector||Count||Time||
> |G1_Young_Generation|13|336|
> |G1_Old_Generation|1|21|
>Reporter: Ankur Goenka
>Assignee: Yun Gao
>Priority: Major
>  Labels: beam
> Attachments: 0.14_all_jobs.jpg, 1uruvakHxBu.png, 3aDKQ24WvKk.png, 
> Po89UGDn58V.png, 

[GitHub] [flink] tillrohrmann commented on a change in pull request #10076: [FLINK-14465][runtime] Let `StandaloneJobClusterEntrypoint` use user code class loader

2019-11-08 Thread GitBox
tillrohrmann commented on a change in pull request #10076: 
[FLINK-14465][runtime] Let `StandaloneJobClusterEntrypoint` use user code class 
loader
URL: https://github.com/apache/flink/pull/10076#discussion_r344243667
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/util/ClusterEntrypointUtil.java
 ##
 @@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.util;
+
+import org.apache.flink.configuration.ConfigConstants;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.util.Optional;
+
+/**
+ * Utility class for {@link 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint}.
+ */
+public class ClusterEntrypointUtil {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(ClusterEntrypointUtil.class);
+
+   /**
+* @return the user library directory.
+*/
+   public static Optional<File> getUserLibDirectory() {
 
 Review comment:
   ```suggestion
public static Optional<File> getUsrLibDirectory() {
   ```




[GitHub] [flink] tillrohrmann commented on a change in pull request #10076: [FLINK-14465][runtime] Let `StandaloneJobClusterEntrypoint` use user code class loader

2019-11-08 Thread GitBox
tillrohrmann commented on a change in pull request #10076: 
[FLINK-14465][runtime] Let `StandaloneJobClusterEntrypoint` use user code class 
loader
URL: https://github.com/apache/flink/pull/10076#discussion_r344243302
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/util/ClusterEntrypointUtil.java
 ##
 @@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.util;
+
+import org.apache.flink.configuration.ConfigConstants;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.util.Optional;
+
+/**
+ * Utility class for {@link 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint}.
+ */
+public class ClusterEntrypointUtil {
 
 Review comment:
   ```suggestion
   public final class ClusterEntrypointUtils {
   ```




[GitHub] [flink] tillrohrmann commented on a change in pull request #10076: [FLINK-14465][runtime] Let `StandaloneJobClusterEntrypoint` use user code class loader

2019-11-08 Thread GitBox
tillrohrmann commented on a change in pull request #10076: 
[FLINK-14465][runtime] Let `StandaloneJobClusterEntrypoint` use user code class 
loader
URL: https://github.com/apache/flink/pull/10076#discussion_r344243583
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/util/ClusterEntrypointUtil.java
 ##
 @@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.util;
+
+import org.apache.flink.configuration.ConfigConstants;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.util.Optional;
+
+/**
+ * Utility class for {@link 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint}.
+ */
+public class ClusterEntrypointUtil {
 
 Review comment:
   I'd suggest to add a private constructor as it is a utility class.
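A minimal sketch of that suggestion (class body and exception choice are illustrative, not the actual PR code) is the standard Java idiom for non-instantiable utility classes:

```java
// Hypothetical sketch: a final utility class with a private
// constructor so it can only be used through its static members.
public final class ClusterEntrypointUtils {

    private ClusterEntrypointUtils() {
        // Guards even against reflective instantiation.
        throw new UnsupportedOperationException("Utility class, do not instantiate.");
    }
}
```

Marking the class `final` and throwing from the private constructor makes the "static helpers only" contract explicit at both compile time and run time.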




[GitHub] [flink] flinkbot edited a comment on issue #10138: [FLINK-14635][e2e tests] Use non-relocated imports for AWS SDK Kinesis classes in tests.

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10138: [FLINK-14635][e2e tests] Use 
non-relocated imports for AWS SDK Kinesis classes in tests.
URL: https://github.com/apache/flink/pull/10138#issuecomment-551936400
 
 
   
   ## CI report:
   
   * 73e1c5606a277532d09a4aade4a29b5b40aa1ffe : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135687300)
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10138: [FLINK-14635][e2e tests] Use non-relocated imports for AWS SDK Kinesis classes in tests.

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10138: [FLINK-14635][e2e tests] Use 
non-relocated imports for AWS SDK Kinesis classes in tests.
URL: https://github.com/apache/flink/pull/10138#issuecomment-551936400
 
 
   
   ## CI report:
   
   * 73e1c5606a277532d09a4aade4a29b5b40aa1ffe : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135687300)
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10137: Update FlinkKafkaProducer.java

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10137: Update FlinkKafkaProducer.java
URL: https://github.com/apache/flink/pull/10137#issuecomment-551884117
 
 
   
   ## CI report:
   
   * d0e22963e480b6072edf14dd296bb3e711264628 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135663509)
   
   


[GitHub] [flink] flinkbot commented on issue #10138: [FLINK-14635][e2e tests] Use non-relocated imports for AWS SDK Kinesis classes in tests.

2019-11-08 Thread GitBox
flinkbot commented on issue #10138: [FLINK-14635][e2e tests] Use non-relocated 
imports for AWS SDK Kinesis classes in tests.
URL: https://github.com/apache/flink/pull/10138#issuecomment-551936400
 
 
   
   ## CI report:
   
   * 73e1c5606a277532d09a4aade4a29b5b40aa1ffe : UNKNOWN
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10117: [FLINK-14494][docs] Add ConfigOption type to the documentation generator

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10117: [FLINK-14494][docs] Add ConfigOption 
type to the documentation generator
URL: https://github.com/apache/flink/pull/10117#issuecomment-551124860
 
 
   
   ## CI report:
   
   * 6fb1d5a168ee34b529e33f332291fa3db6ec : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135468483)
   * ab79c3192e4b2eed0b99ec7e9b59eb82be3a5e01 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135621666)
   * 0a6385f7f74396960a213e1e6124a77026b76ce7 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135673454)
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10090: [FLINK-14560] [runtime] The value of taskmanager.memory.size in flink…

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10090: [FLINK-14560] [runtime] The value of 
taskmanager.memory.size in flink…
URL: https://github.com/apache/flink/pull/10090#issuecomment-549891301
 
 
   
   ## CI report:
   
   * 0947500a3ee71f62fd992e8b856f1f6c9d9d68cb : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135081985)
   * 19aa6c886e3084b1a00351754c9e9eed39fd8019 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135644725)
   * 974d39956940cfd53114c9f8667b80235d5c0a99 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135653995)
   * c1bd67424db724990265fa7e42629c0c94fe0591 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135668832)
   
   


[GitHub] [flink] flinkbot edited a comment on issue #9975: [FLINK-14477][coordination] Implement promotion logic on TaskExecutor

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #9975: [FLINK-14477][coordination] Implement 
promotion logic on TaskExecutor 
URL: https://github.com/apache/flink/pull/9975#issuecomment-545348136
 
 
   
   ## CI report:
   
   * 93c53cdbe080b7708e12f2f26dcd386684f8f125 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133151875)
   * 9be6cd8ac4742ac30c0b00d11858bff2183d76a7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133375898)
   * 7f40891d03679b2a6f2d9d2734195a4ba026b66c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134412784)
   * 2dfc9916fd9e5bf114017bae4f539da408af2be2 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135617483)
   * 6235537097817cf1bf600ca43ea3eeae7570feea : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135649460)
   * 465fd123fc2fe819c10d526857b4fddc38cf897a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135654018)
   
   


[GitHub] [flink] flinkbot commented on issue #10138: [FLINK-14635][e2e tests] Use non-relocated imports for AWS SDK Kinesis classes in tests.

2019-11-08 Thread GitBox
flinkbot commented on issue #10138: [FLINK-14635][e2e tests] Use non-relocated 
imports for AWS SDK Kinesis classes in tests.
URL: https://github.com/apache/flink/pull/10138#issuecomment-551927988
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 73e1c5606a277532d09a4aade4a29b5b40aa1ffe (Fri Nov 08 
17:55:54 UTC 2019)
   
   **Warnings:**
* **1 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] StephanEwen opened a new pull request #10138: [FLINK-14635][e2e tests] Use non-relocated imports for AWS SDK Kinesis classes in tests.

2019-11-08 Thread GitBox
StephanEwen opened a new pull request #10138: [FLINK-14635][e2e tests] Use 
non-relocated imports for AWS SDK Kinesis classes in tests.
URL: https://github.com/apache/flink/pull/10138
 
 
   ## What is the purpose of the change
   
   This allows compiling the code base fully in the IDE again by changing the 
way the AWS SDK classes are used in the `KinesisPubsubClient` in the 
Kinesis end-to-end test.
   
   Some imports were referring to relocated classes from the connector, which 
cannot be resolved in the IDE because no shading happens during compilation.
   
   ## Brief change log
   
 - Instead of using relocated imports, use vanilla imports in 
`KinesisPubsubClient`.
 - We add a provided AWS SDK dependency for the non-relocated classes. This 
follows the pattern of not directly referencing transitive dependencies, 
especially shaded ones. That way, we support IDE and unit test compilation / 
execution.
 - For end-2-end test execution, we relocate the AWS SDK classes in the 
shade phase following the exact same pattern as the original relocation in the 
Kinesis connector.
 - We need to make sure we do not operate directly on relocated classes while 
simultaneously using non-relocated classes (class cast exceptions). We use a 
utility in the Kinesis module to directly obtain stringified or byte[]-ified 
versions of the records, rather than objects backed by relocated classes.
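The relocation step in the change log above follows the usual `maven-shade-plugin` pattern. As a sketch only (the `shadedPattern` below mirrors the convention used by the Kinesis connector, but the exact coordinates live in the module's pom, not here):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <!-- Rewrite AWS SDK packages so the shaded test jar matches
               the relocation already done in the Kinesis connector. -->
          <relocation>
            <pattern>com.amazonaws</pattern>
            <shadedPattern>org.apache.flink.kinesis.shaded.com.amazonaws</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Because relocation only happens in the `package` phase, source files must import the vanilla `com.amazonaws` names for the IDE and unit tests to resolve them.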
   
   ## Verifying this change
   
   This changes a test which still runs and works.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): **yes** (in a test)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
 - The serializers: **no**
 - The runtime per-record code paths (performance sensitive): **no**
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: **no**
 - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **no**
 - If yes, how is the feature documented? **not applicable**
   




[GitHub] [flink] StephanEwen commented on issue #10138: [FLINK-14635][e2e tests] Use non-relocated imports for AWS SDK Kinesis classes in tests.

2019-11-08 Thread GitBox
StephanEwen commented on issue #10138: [FLINK-14635][e2e tests] Use 
non-relocated imports for AWS SDK Kinesis classes in tests.
URL: https://github.com/apache/flink/pull/10138#issuecomment-551927094
 
 
   @tweise Do you have any concerns merging this?




[jira] [Updated] (FLINK-14635) Don't use relocated imports in Kinesis End-2-End Tests

2019-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14635:
---
Labels: pull-request-available  (was: )

> Don't use relocated imports in Kinesis End-2-End Tests
> --
>
> Key: FLINK-14635
> URL: https://issues.apache.org/jira/browse/FLINK-14635
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.9.1
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> Using relocated imports in {{KinesisPubsubClient}} makes it not possible to 
> build the code in the IDE any more.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10118: [FLINK-14657] Generalize and move config utils from flink-yarn to flink-core

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10118: [FLINK-14657] Generalize and move 
config utils from flink-yarn to flink-core
URL: https://github.com/apache/flink/pull/10118#issuecomment-551139397
 
 
   
   ## CI report:
   
   * e778ea4b39b82aea97a12a29ea0d7ba4083dfedf : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135474386)
   * f315d71e6007ce786771745d0f17010bb16b4f54 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135649438)
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10117: [FLINK-14494][docs] Add ConfigOption type to the documentation generator

2019-11-08 Thread GitBox
flinkbot edited a comment on issue #10117: [FLINK-14494][docs] Add ConfigOption 
type to the documentation generator
URL: https://github.com/apache/flink/pull/10117#issuecomment-551124860
 
 
   
   ## CI report:
   
   * 6fb1d5a168ee34b529e33f332291fa3db6ec : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135468483)
   * ab79c3192e4b2eed0b99ec7e9b59eb82be3a5e01 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135621666)
   * 0a6385f7f74396960a213e1e6124a77026b76ce7 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135673454)
   
   

