[GitHub] [flink] TisonKun commented on issue #11054: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
TisonKun commented on issue #11054: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/11054#issuecomment-584513302
 
 
   It looks a bit weird that both Travis and Azure complain with
   
   ```
   06:50:19.131 [ERROR]   
ConfigOptionsDocsCompletenessITCase.testCompleteness:75->compareDocumentedAndExistingOptions:179
 Documentation is outdated, please regenerate it according to the instructions 
in flink-docs/README.md.
Problems:
Option kubernetes.container.image.pull-secretes in class 
org.apache.flink.kubernetes.configuration.KubernetesConfigOptions is not 
documented.
Documented option kubernetes.container.image.pull-secrets does 
not exist.
   ```
   
   though I've seen the documentation updated. Could you please run the Maven
   command to regenerate the configuration documentation and check whether
   there is a string mismatch?
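   Judging from the error text alone, the likely mismatch is already visible:
   the option key defined in code ends in `pull-secretes` while the documented
   key ends in `pull-secrets`. A minimal sketch of what the corrected option
   definition might look like (assuming the standard `ConfigOptions` builder;
   the class body and description text here are illustrative, not the actual
   PR code):

   ```java
   import org.apache.flink.configuration.ConfigOption;
   import org.apache.flink.configuration.ConfigOptions;

   public class KubernetesConfigOptionsSketch {

       // Key spelled "pull-secrets" so the code and the generated docs agree.
       public static final ConfigOption<String> CONTAINER_IMAGE_PULL_SECRETS =
               ConfigOptions.key("kubernetes.container.image.pull-secrets")
                       .stringType()
                       .noDefaultValue()
                       .withDescription("Kubernetes secrets used to pull the container image.");
   }
   ```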


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW edited a comment on issue #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-10 Thread GitBox
zhijiangW edited a comment on issue #7368: [FLINK-10742][network] Let Netty use 
Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#issuecomment-584510940
 
 
   Thanks for the update @gaoyunhaii !
   
   While reviewing the core codes of `ZeroCopyNettyMessageDecoder` and 
`NettyMessageParser`, I have several concerns:
   
   1. It is hard to extend with a new message type in the future, because one
   has to understand the implementation details of the above decoder and
   parser so that the new message is judged properly in the different internal
   processes. It would be better to make the decoder implementation details
   transparent to new message extensions.
   
   2. There are many interactions among the decoder, the parser and
   `BufferResponse`, which brings many intermediate states to maintain inside
   the decoder and parser. That makes things more complex to understand and
   can easily cause subtle bugs. E.g. the specific message header length is
   obtained from `NettyMessage` via an interaction with the parser, while the
   intermediate state is maintained inside the decoder.
   
   3. Actually the real decode work is mainly done by the specific
   `NettyMessage#readFrom`, but now the decoder is also involved in part of
   that work, which seems fragmented and not unified.
   
   Another possible option is to make `ZeroCopyNettyMessageDecoder` more
   lightweight and remove `NettyMessageParser`. We can delegate all the decode
   work to the respective `NettyMessage`, so that the current
   `ZeroCopyNettyMessageDecoder` only handles the unified frame header for all
   the messages, as before. We can refactor the current `NettyMessage#readFrom`
   method into an abstract `decode` method instead. In order not to affect the
   existing messages on the sender side, we can define a new
   `NettyClientMessage` that extends `NettyMessage` only for the receiver side.
   
   The newly defined `NettyClientMessage#decode` method takes a similar role
   to `RecordDeserializer`: it is aware of the implementation details needed
   to parse the header and body properly, and it returns the proposed
   `DecodeResult` to indicate whether the header is fulfilled, the body is
   fulfilled, the received `ByteBuf` is fully consumed, etc.
   
   By doing so, `ZeroCopyNettyMessageDecoder` does not need to know the
   details of the different `NettyMessage` instances; it only controls the
   general logic of when to fire the full message to the next handler. And a
   message type added in the future only needs to implement the abstract
   `decode` method to parse itself, without being aware of the upper
   `ZeroCopyNettyMessageDecoder`.
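   A rough sketch of the proposed shape (the names `NettyClientMessage` and
   `DecodeResult` come from this comment; the exact signatures and result
   states below are assumptions, not code from the PR):

   ```java
   import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;

   // Sketch only: each message parses itself and reports how far it got,
   // so the decoder never needs to know the per-message layout.
   enum DecodeResult {
       HEADER_INCOMPLETE,   // more bytes needed before the header is fulfilled
       BODY_INCOMPLETE,     // header done, body still being filled
       MESSAGE_COMPLETE     // full message, ready to fire to the next handler
   }

   abstract class NettyClientMessage {

       /** Consumes as many bytes as possible and reports the decode progress. */
       abstract DecodeResult decode(ByteBuf in);
   }
   ```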


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15987) SELECT 1.0e0 / 0.0e0 throws NumberFormatException

2020-02-10 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-15987:
---

 Summary: SELECT 1.0e0 / 0.0e0 throws NumberFormatException
 Key: FLINK-15987
 URL: https://issues.apache.org/jira/browse/FLINK-15987
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.10.0
Reporter: Caizhi Weng


{code:sql}
SELECT 1.0e0 / 0.0e0
{code}

throws the following exception

{code:java}
Caused by: java.lang.NumberFormatException: Infinite or NaN
    at java.math.BigDecimal.<init>(BigDecimal.java:895)
    at java.math.BigDecimal.<init>(BigDecimal.java:872)
    at org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:189)
    at org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:695)
    at org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:616)
    at org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:301)
    at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:319)
    at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:560)
    at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:419)
    at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:256)
    at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
    at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:215)
    at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:202)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
    at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:83)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:56)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:44)
    at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
    at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:248)
    at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:151)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:682)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:355)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:343)
    at org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428)
    at org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryInternal$11(LocalExecutor.java:610)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:240)
    at ...
{code}
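Judging from the first frames, the failure happens during plan-time constant folding: {{ExpressionReducer}} reduces the expression to the double value Infinity and then passes it to the {{BigDecimal}} constructor, which rejects infinite and NaN values. A minimal reproduction of that last step outside Flink:

{code:java}
public class BigDecimalInfinity {
    public static void main(String[] args) {
        double v = 1.0 / 0.0;        // legal for doubles: Infinity
        System.out.println(v);       // prints "Infinity"
        new java.math.BigDecimal(v); // throws NumberFormatException: Infinite or NaN
    }
}
{code}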

[GitHub] [flink] flinkbot edited a comment on issue #11054: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #11054: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/11054#issuecomment-584462322
 
 
   
   ## CI report:
   
   * 47f64cd0ddbb74874daff1607a30a999f4554ce1 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148308767) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5037)
 
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-10 Thread GitBox
JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] 
Add Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r377474439
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/batch/sql/SinkTest.scala
 ##
 @@ -70,24 +69,82 @@ class SinkTest extends TableTestBase {
   }
 
   @Test
-  def testCatalogTableSink(): Unit = {
-val schemaBuilder = new TableSchema.Builder()
-schemaBuilder.fields(Array("i"), Array(DataTypes.INT()))
-val schema = schemaBuilder.build()
-val sink = util.createCollectTableSink(schema.getFieldNames, Array(INT))
-val catalog = Mockito.spy(new GenericInMemoryCatalog("dummy"))
-val factory = Mockito.mock(classOf[TableSinkFactory[_]])
-
Mockito.when[Optional[_]](catalog.getTableFactory).thenReturn(Optional.of(factory))
-Mockito.when[TableSink[_]](factory.createTableSink(
-  ArgumentMatchers.any(), ArgumentMatchers.any())).thenReturn(sink)
-util.tableEnv.registerCatalog(catalog.getName, catalog)
-util.tableEnv.useCatalog(catalog.getName)
-val catalogTable = new CatalogTableImpl(schema, Map[String, 
String]().asJava, "")
-catalog.createTable(new ObjectPath("default", "tbl"), catalogTable, false)
-util.tableEnv.sqlQuery("select 1").insertInto("tbl")
+  def testTableSourceSinkFactory(): Unit = {
+val factory = new TestContextTableFactory
+util.tableEnv.getConfig.getConfiguration.setBoolean(factory.needContain, 
true)
+util.tableEnv.registerCatalog("cat", new GenericInMemoryCatalog("default") 
{
 
 Review comment:
   I'll modify these things by multi commits.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11054: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #11054: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/11054#issuecomment-584462322
 
 
   
   ## CI report:
   
   * 47f64cd0ddbb74874daff1607a30a999f4554ce1 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/148308767) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5037)
 
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-10 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r377463034
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/batch/sql/SinkTest.scala
 ##
 @@ -70,24 +69,82 @@ class SinkTest extends TableTestBase {
   }
 
   @Test
-  def testCatalogTableSink(): Unit = {
-val schemaBuilder = new TableSchema.Builder()
-schemaBuilder.fields(Array("i"), Array(DataTypes.INT()))
-val schema = schemaBuilder.build()
-val sink = util.createCollectTableSink(schema.getFieldNames, Array(INT))
-val catalog = Mockito.spy(new GenericInMemoryCatalog("dummy"))
-val factory = Mockito.mock(classOf[TableSinkFactory[_]])
-
Mockito.when[Optional[_]](catalog.getTableFactory).thenReturn(Optional.of(factory))
-Mockito.when[TableSink[_]](factory.createTableSink(
-  ArgumentMatchers.any(), ArgumentMatchers.any())).thenReturn(sink)
-util.tableEnv.registerCatalog(catalog.getName, catalog)
-util.tableEnv.useCatalog(catalog.getName)
-val catalogTable = new CatalogTableImpl(schema, Map[String, 
String]().asJava, "")
-catalog.createTable(new ObjectPath("default", "tbl"), catalogTable, false)
-util.tableEnv.sqlQuery("select 1").insertInto("tbl")
+  def testTableSourceSinkFactory(): Unit = {
+val factory = new TestContextTableFactory
+util.tableEnv.getConfig.getConfiguration.setBoolean(factory.needContain, 
true)
+util.tableEnv.registerCatalog("cat", new GenericInMemoryCatalog("default") 
{
 
 Review comment:
   As I mentioned above, this test doesn't cover another path, i.e. 
`CatalogSourceTable#findAndCreateTableSource`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-10 Thread GitBox
JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] 
Add Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r377461934
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/batch/sql/SinkTest.scala
 ##
 @@ -70,24 +69,82 @@ class SinkTest extends TableTestBase {
   }
 
   @Test
-  def testCatalogTableSink(): Unit = {
-val schemaBuilder = new TableSchema.Builder()
-schemaBuilder.fields(Array("i"), Array(DataTypes.INT()))
-val schema = schemaBuilder.build()
-val sink = util.createCollectTableSink(schema.getFieldNames, Array(INT))
-val catalog = Mockito.spy(new GenericInMemoryCatalog("dummy"))
-val factory = Mockito.mock(classOf[TableSinkFactory[_]])
-
Mockito.when[Optional[_]](catalog.getTableFactory).thenReturn(Optional.of(factory))
-Mockito.when[TableSink[_]](factory.createTableSink(
-  ArgumentMatchers.any(), ArgumentMatchers.any())).thenReturn(sink)
-util.tableEnv.registerCatalog(catalog.getName, catalog)
-util.tableEnv.useCatalog(catalog.getName)
-val catalogTable = new CatalogTableImpl(schema, Map[String, 
String]().asJava, "")
-catalog.createTable(new ObjectPath("default", "tbl"), catalogTable, false)
-util.tableEnv.sqlQuery("select 1").insertInto("tbl")
+  def testTableSourceSinkFactory(): Unit = {
+val factory = new TestContextTableFactory
+util.tableEnv.getConfig.getConfiguration.setBoolean(factory.needContain, 
true)
+util.tableEnv.registerCatalog("cat", new GenericInMemoryCatalog("default") 
{
 
 Review comment:
   What do you mean? Does this not count as test coverage?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-10 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r377460964
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyMessage.java
 ##
 @@ -351,38 +352,74 @@ ByteBuf write(ByteBufAllocator allocator) throws 
IOException {
 
CompositeByteBuf composityBuf = 
allocator.compositeDirectBuffer();
composityBuf.addComponent(headerBuf);
-   composityBuf.addComponent(buffer);
+   composityBuf.addComponent(buffer.asByteBuf());
// update writer index since we have data 
written to the components:
-   
composityBuf.writerIndex(headerBuf.writerIndex() + buffer.writerIndex());
+   
composityBuf.writerIndex(headerBuf.writerIndex() + 
buffer.asByteBuf().writerIndex());
return composityBuf;
}
catch (Throwable t) {
if (headerBuf != null) {
headerBuf.release();
}
-   buffer.release();
+   buffer.recycleBuffer();
 
ExceptionUtils.rethrowIOException(t);
return null; // silence the compiler
}
}
 
-   static BufferResponse readFrom(ByteBuf buffer) {
-   InputChannelID receiverId = 
InputChannelID.fromByteBuf(buffer);
-   int sequenceNumber = buffer.readInt();
-   int backlog = buffer.readInt();
-   boolean isBuffer = buffer.readBoolean();
-   boolean isCompressed = buffer.readBoolean();
-   int size = buffer.readInt();
+   /**
+* Parses the message header part and composes a new 
BufferResponse with an empty data buffer. The
+* data buffer will be filled in later. This method is used in 
credit-based network stack.
+*
+* @param messageHeader the serialized message header.
+* @param bufferAllocator the allocator for network buffer.
+* @return a BufferResponse object with the header parsed and 
the data buffer to fill in later. The
+*  data buffer will be null if the target 
channel has been released or the buffer size is 0.
+*/
+   static BufferResponse readFrom(ByteBuf messageHeader, 
NetworkBufferAllocator bufferAllocator) {
+   InputChannelID receiverId = 
InputChannelID.fromByteBuf(messageHeader);
+   int sequenceNumber = messageHeader.readInt();
+   int backlog = messageHeader.readInt();
+   boolean isBuffer = messageHeader.readBoolean();
+   boolean isCompressed = messageHeader.readBoolean();
+   int size = messageHeader.readInt();
+
+   Buffer dataBuffer = null;
+   if (size != 0) {
+   if (isBuffer) {
+   dataBuffer = 
bufferAllocator.allocatePooledNetworkBuffer(receiverId);
+   } else {
+   dataBuffer = 
bufferAllocator.allocateUnPooledNetworkBuffer(size);
+   }
+   } else {
+   dataBuffer = 
bufferAllocator.allocateUnPooledNetworkBuffer(0);
+   }
+
+   if (dataBuffer == null) {
+   dataBuffer = 
bufferAllocator.getPlaceHolderBuffer();
+   }
 
-   ByteBuf retainedSlice = buffer.readSlice(size).retain();
-   return new BufferResponse(retainedSlice, isBuffer, 
isCompressed, sequenceNumber, receiverId, backlog);
+   dataBuffer.setCompressed(isCompressed);
+
+   return new BufferResponse(
+   dataBuffer,
+   isBuffer,
+   isCompressed,
+   sequenceNumber,
+   receiverId,
+   backlog,
+   size);
+   }
+
+   static int getMessageHeaderLength(int lengthWithoutFrameHeader) 
{
 
 Review comment:
   It is not clean to pass in a length which is never used.
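   A minimal sketch of the alternative: the message itself exposes its header
   length as a constant instead of having the caller pass in a length that is
   never used. The field sizes below are inferred from the reads in
   `BufferResponse#readFrom` above, not taken from the PR:

   ```java
   public class HeaderLengthSketch {

       // 16 bytes InputChannelID (two longs) + 4 sequenceNumber + 4 backlog
       // + 1 isBuffer + 1 isCompressed + 4 size
       static final int MESSAGE_HEADER_LENGTH = 16 + 4 + 4 + 1 + 1 + 4;

       public static void main(String[] args) {
           System.out.println(MESSAGE_HEADER_LENGTH); // 30
       }
   }
   ```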


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-10 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r377461190
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyMessage.java
 ##
 @@ -449,6 +486,10 @@ static ErrorResponse readFrom(ByteBuf buffer) throws 
Exception {
}
}
}
+
+   static int getMessageHeaderLength(int lengthWithoutFrameHeader) 
{
+   return lengthWithoutFrameHeader;
 
 Review comment:
   Actually this way is still hacky: the specific message should be aware of
   its own length, but here it is still known by the upper layer, which takes
   a roundabout way to get it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-10 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r377460951
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/batch/sql/SinkTest.scala
 ##
 @@ -70,24 +69,82 @@ class SinkTest extends TableTestBase {
   }
 
   @Test
-  def testCatalogTableSink(): Unit = {
-val schemaBuilder = new TableSchema.Builder()
-schemaBuilder.fields(Array("i"), Array(DataTypes.INT()))
-val schema = schemaBuilder.build()
-val sink = util.createCollectTableSink(schema.getFieldNames, Array(INT))
-val catalog = Mockito.spy(new GenericInMemoryCatalog("dummy"))
-val factory = Mockito.mock(classOf[TableSinkFactory[_]])
-
Mockito.when[Optional[_]](catalog.getTableFactory).thenReturn(Optional.of(factory))
-Mockito.when[TableSink[_]](factory.createTableSink(
-  ArgumentMatchers.any(), ArgumentMatchers.any())).thenReturn(sink)
-util.tableEnv.registerCatalog(catalog.getName, catalog)
-util.tableEnv.useCatalog(catalog.getName)
-val catalogTable = new CatalogTableImpl(schema, Map[String, 
String]().asJava, "")
-catalog.createTable(new ObjectPath("default", "tbl"), catalogTable, false)
-util.tableEnv.sqlQuery("select 1").insertInto("tbl")
+  def testTableSourceSinkFactory(): Unit = {
+val factory = new TestContextTableFactory
+util.tableEnv.getConfig.getConfiguration.setBoolean(factory.needContain, 
true)
+util.tableEnv.registerCatalog("cat", new GenericInMemoryCatalog("default") 
{
 
 Review comment:
   But without a test to cover the new interface, we have no idea whether it 
works as expected.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-10 Thread GitBox
JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] 
Add Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r377458723
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/TableSinkFactory.java
 ##
 @@ -53,4 +55,39 @@
return createTableSink(table.toProperties());
}
 
+   /**
+* Creates and configures a {@link TableSink} based on the given
+{@link Context}.
+*
+* @param context context of this table sink.
+* @return the configured table sink.
+*/
+   default TableSink createTableSink(Context context) {
 
 Review comment:
   I don't know... we can deprecate `createTableSink(ObjectPath, 
CatalogTable)`. CC: @twalthr 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-10 Thread GitBox
JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] 
Add Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r377458562
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/factories/TableFactoryUtil.java
 ##
 @@ -103,4 +103,15 @@
return Optional.empty();
}
 
+   /**
 
 Review comment:
   I'll push another commit to `Support create table source/sink by context in 
legacy planner`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-10 Thread GitBox
JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] 
Add Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r377458333
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/batch/sql/SinkTest.scala
 ##
 @@ -70,24 +69,82 @@ class SinkTest extends TableTestBase {
   }
 
   @Test
-  def testCatalogTableSink(): Unit = {
-val schemaBuilder = new TableSchema.Builder()
-schemaBuilder.fields(Array("i"), Array(DataTypes.INT()))
-val schema = schemaBuilder.build()
-val sink = util.createCollectTableSink(schema.getFieldNames, Array(INT))
-val catalog = Mockito.spy(new GenericInMemoryCatalog("dummy"))
-val factory = Mockito.mock(classOf[TableSinkFactory[_]])
-
Mockito.when[Optional[_]](catalog.getTableFactory).thenReturn(Optional.of(factory))
-Mockito.when[TableSink[_]](factory.createTableSink(
-  ArgumentMatchers.any(), ArgumentMatchers.any())).thenReturn(sink)
-util.tableEnv.registerCatalog(catalog.getName, catalog)
-util.tableEnv.useCatalog(catalog.getName)
-val catalogTable = new CatalogTableImpl(schema, Map[String, 
String]().asJava, "")
-catalog.createTable(new ObjectPath("default", "tbl"), catalogTable, false)
-util.tableEnv.sqlQuery("select 1").insertInto("tbl")
+  def testTableSourceSinkFactory(): Unit = {
+val factory = new TestContextTableFactory
+util.tableEnv.getConfig.getConfiguration.setBoolean(factory.needContain, 
true)
+util.tableEnv.registerCatalog("cat", new GenericInMemoryCatalog("default") 
{
 
 Review comment:
   Yes, you are right, but I want to modify this step by step.
   The previous method `createTableSink(ObjectPath tablePath, CatalogTable table)` only works in `createTableSinkForCatalogTable` too.
   Once these commits look good to you, I will create a follow-up commit to fix this.
   What do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-15933) update content of how generic table schema is stored in hive via HiveCatalog

2020-02-10 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-15933.

Fix Version/s: 1.10.0
   Resolution: Fixed

1.10: 220fc5b6c04ac3c4d9a59b33177092136b9006b1

> update content of how generic table schema is stored in hive via HiveCatalog
> 
>
> Key: FLINK-15933
> URL: https://issues.apache.org/jira/browse/FLINK-15933
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> FLINK-15858 updated how generic table schema is stored in the Hive metastore; we 
> need to go through the documentation and update related content, like
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/table/hive/hive_catalog.html#step-4-start-sql-client-and-create-a-kafka-table-with-flink-sql-ddl]
>  
> cc [~lzljs3620320]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] bowenli86 closed pull request #11029: [FLINK-15933][catalog][doc] Update content of how generic table schem…

2020-02-10 Thread GitBox
bowenli86 closed pull request #11029: [FLINK-15933][catalog][doc] Update 
content of how generic table schem…
URL: https://github.com/apache/flink/pull/11029
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhengcanbin commented on issue #11054: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
zhengcanbin commented on issue #11054: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/11054#issuecomment-584488205
 
 
   @flinkbot run travis


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-10 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r377451955
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/TableSinkFactory.java
 ##
 @@ -53,4 +55,39 @@
return createTableSink(table.toProperties());
}
 
+   /**
+* Creates and configures a {@link TableSink} based on the given
+{@link Context}.
+*
+* @param context context of this table sink.
+* @return the configured table sink.
+*/
+   default TableSink createTableSink(Context context) {
+   return createTableSink(
+   context.getObjectIdentifier().toObjectPath(),
+   context.getTable());
+   }
+
+   /**
+* Context of table sink creation. Contains table information and
+environment information.
+*/
+   interface Context {
+
+   /**
+* @return full identifier of the given {@link CatalogTable}.
+*/
+   ObjectIdentifier getObjectIdentifier();
+
+   /**
+* @return table {@link CatalogTable} instance.
+*/
+   CatalogTable getTable();
+
+   /**
+* @return readable config of this table environment.
 
 Review comment:
   Add a description explaining that this gives the factory instance access to
   `TableConfig#getConfiguration()`, which holds the current TableEnvironment
   session configuration.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-10 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r377454733
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/batch/sql/SinkTest.scala
 ##
 @@ -70,24 +69,82 @@ class SinkTest extends TableTestBase {
   }
 
   @Test
-  def testCatalogTableSink(): Unit = {
-val schemaBuilder = new TableSchema.Builder()
-schemaBuilder.fields(Array("i"), Array(DataTypes.INT()))
-val schema = schemaBuilder.build()
-val sink = util.createCollectTableSink(schema.getFieldNames, Array(INT))
-val catalog = Mockito.spy(new GenericInMemoryCatalog("dummy"))
-val factory = Mockito.mock(classOf[TableSinkFactory[_]])
-
Mockito.when[Optional[_]](catalog.getTableFactory).thenReturn(Optional.of(factory))
-Mockito.when[TableSink[_]](factory.createTableSink(
-  ArgumentMatchers.any(), ArgumentMatchers.any())).thenReturn(sink)
-util.tableEnv.registerCatalog(catalog.getName, catalog)
-util.tableEnv.useCatalog(catalog.getName)
-val catalogTable = new CatalogTableImpl(schema, Map[String, 
String]().asJava, "")
-catalog.createTable(new ObjectPath("default", "tbl"), catalogTable, false)
-util.tableEnv.sqlQuery("select 1").insertInto("tbl")
+  def testTableSourceSinkFactory(): Unit = {
+val factory = new TestContextTableFactory
+util.tableEnv.getConfig.getConfiguration.setBoolean(factory.needContain, 
true)
+util.tableEnv.registerCatalog("cat", new GenericInMemoryCatalog("default") 
{
 
 Review comment:
   This only covers one of the paths, i.e. via
   `TableFactoryUtil#createTableSinkForCatalogTable`, which is only used by Hive.
   However, the other path is not covered, i.e. via
   `CatalogSourceTable#findAndCreateTableSource`, which is used by most users.

   Maybe you can upgrade the existing `TestCollectionTableFactory` to use the
   new interface and expose a session configuration, e.g. `collection.is-bounded`;
   the flag would be accessed via `Context#getConfiguration` and passed to the
   created `CollectionTableSource`.
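   A minimal sketch of that lookup (the option name `collection.is-bounded` is
   from the suggestion above; the helper class and default value are
   hypothetical):

   ```java
   import org.apache.flink.configuration.ConfigOption;
   import org.apache.flink.configuration.ConfigOptions;
   import org.apache.flink.configuration.Configuration;
   import org.apache.flink.configuration.ReadableConfig;

   public class ContextConfigSketch {

       static final ConfigOption<Boolean> IS_BOUNDED =
               ConfigOptions.key("collection.is-bounded").booleanType().defaultValue(true);

       // What a factory could do with the ReadableConfig obtained from
       // Context#getConfiguration.
       static boolean isBounded(ReadableConfig sessionConfig) {
           return sessionConfig.get(IS_BOUNDED);
       }

       public static void main(String[] args) {
           Configuration conf = new Configuration(); // stands in for the session config
           conf.set(IS_BOUNDED, false);
           System.out.println(isBounded(conf)); // prints "false"
       }
   }
   ```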


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-10 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r377452118
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/TableSinkFactory.java
 ##
 @@ -53,4 +55,39 @@
return createTableSink(table.toProperties());
}
 
+   /**
+* Creates and configures a {@link TableSink} based on the given
+{@link Context}.
+*
+* @param context context of this table sink.
+* @return the configured table sink.
+*/
+   default TableSink createTableSink(Context context) {
 
 Review comment:
   Should we deprecate the other `createTableSink` and `createTableSource` 
interfaces?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-10 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r377452557
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/factories/TableFactoryUtil.java
 ##
 @@ -103,4 +103,15 @@
return Optional.empty();
}
 
+   /**
 
 Review comment:
   We should make sure `TableSinkFactory#createTableSink(context)` is the only
   entry point invoked by the planner. However, the above
   `createTableSinkForCatalogTable` method is still calling the old
   `createTableSink` method in the old planner.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #10745: [FLINK-15445][connectors/jdbc] JDBC Table Source didn't work for Type…

2020-02-10 Thread GitBox
wuchong commented on a change in pull request #10745: 
[FLINK-15445][connectors/jdbc] JDBC Table Source didn't work for Type…
URL: https://github.com/apache/flink/pull/10745#discussion_r377450538
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/dialect/JDBCDialects.java
 ##
 @@ -46,10 +59,84 @@
return Optional.empty();
}
 
-   private static class DerbyDialect implements JDBCDialect {
+   private abstract static class AbstractDialect implements JDBCDialect {
+
+   @Override
+   public void validate(TableSchema schema) throws 
ValidationException {
+   for (int i = 0; i < schema.getFieldCount(); i++) {
+   DataType dt = schema.getFieldDataType(i).get();
+   String fieldName = schema.getFieldName(i).get();
+
+   // TODO: We can't convert VARBINARY(n) data 
type to
+   //  
PrimitiveArrayTypeInfo.BYTE_PRIMITIVE_ARRAY_TYPE_INFO in 
LegacyTypeInfoDataTypeConverter
+   //  when n is smaller than Integer.MAX_VALUE
+   if 
(unsupportedTypes().contains(dt.getLogicalType().getTypeRoot()) ||
+   (!(dt.getLogicalType() 
instanceof LegacyTypeInformationType) &&
+   (VARBINARY == 
dt.getLogicalType().getTypeRoot()
+   && Integer.MAX_VALUE != 
((VarBinaryType) dt.getLogicalType()).getLength( {
+   throw new ValidationException(
+   String.format("The 
dialect don't support type: %s.", dt.toString()));
 
 Review comment:
   String.format("The %s dialect doesn't support type: %s.", dialectName, 
dt.toString())


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #10745: [FLINK-15445][connectors/jdbc] JDBC Table Source didn't work for Type…

2020-02-10 Thread GitBox
wuchong commented on a change in pull request #10745: 
[FLINK-15445][connectors/jdbc] JDBC Table Source didn't work for Type…
URL: https://github.com/apache/flink/pull/10745#discussion_r377450437
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/test/java/org/apache/flink/api/java/io/jdbc/JDBCTableSourceITCase.java
 ##
 @@ -18,93 +18,167 @@
 
 package org.apache.flink.api.java.io.jdbc;
 
-import org.apache.flink.streaming.api.datastream.DataStream;
 import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
 import org.apache.flink.table.api.EnvironmentSettings;
-import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableException;
 import org.apache.flink.table.api.java.StreamTableEnvironment;
 import org.apache.flink.table.runtime.utils.StreamITCase;
+import org.apache.flink.test.util.AbstractTestBase;
 import org.apache.flink.types.Row;
 
-import org.junit.BeforeClass;
+import org.junit.After;
+import org.junit.Before;
 import org.junit.Test;
 
-import java.util.ArrayList;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Arrays;
 import java.util.List;
 
+
 /**
- * IT case for {@link JDBCTableSource}.
+ * ITCase for {@link JDBCTableSource}.
  */
-public class JDBCTableSourceITCase extends JDBCTestBase {
-
-   private static final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
-   private static final EnvironmentSettings bsSettings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
-   private static final StreamTableEnvironment tEnv = 
StreamTableEnvironment.create(env, bsSettings);
-
-   static final String TABLE_SOURCE_SQL = "CREATE TABLE books (" +
-   " id int, " +
-   " title varchar, " +
-   " author varchar, " +
-   " price double, " +
-   " qty int " +
-   ") with (" +
-   " 'connector.type' = 'jdbc', " +
-   " 'connector.url' = 'jdbc:derby:memory:ebookshop', " +
-   " 'connector.table' = 'books', " +
-   " 'connector.driver' = 'org.apache.derby.jdbc.EmbeddedDriver' " 
+
-   ")";
-
-   @BeforeClass
-   public static void createTable() {
-   tEnv.sqlUpdate(TABLE_SOURCE_SQL);
+public class JDBCTableSourceITCase extends AbstractTestBase {
+
+   public static final String DRIVER_CLASS = 
"org.apache.derby.jdbc.EmbeddedDriver";
+   public static final String DB_URL = "jdbc:derby:memory:test";
+   public static final String INPUT_TABLE = "jdbcSource";
+
+   @Before
+   public void before() throws ClassNotFoundException, SQLException {
+   System.setProperty("derby.stream.error.field", 
JDBCTestBase.class.getCanonicalName() + ".DEV_NULL");
+   Class.forName(DRIVER_CLASS);
+
+   try (
+   Connection conn = DriverManager.getConnection(DB_URL + 
";create=true");
+   Statement statement = conn.createStatement()) {
+   statement.executeUpdate("CREATE TABLE " + INPUT_TABLE + 
" (" +
+   "id BIGINT NOT NULL," +
+   "timestamp6_col TIMESTAMP, " +
+   "timestamp9_col TIMESTAMP, " +
+   "time_col TIME, " +
+   "real_col FLOAT(23), " +// A 
precision of 23 or less makes FLOAT equivalent to REAL.
+   "double_col FLOAT(24)," +   // A 
precision of 24 or greater makes FLOAT equivalent to DOUBLE PRECISION.
+   "decimal_col DECIMAL(10, 4))");
+   statement.executeUpdate("INSERT INTO " + INPUT_TABLE + 
" VALUES (" +
+   "1, TIMESTAMP('2020-01-01 
15:35:00.123456'), TIMESTAMP('2020-01-01 15:35:00.123456789'), " +
+   "TIME('15:35:00'), 1.175E-37, 
1.79769E+308, 100.1234)");
+   statement.executeUpdate("INSERT INTO " + INPUT_TABLE + 
" VALUES (" +
+   "2, TIMESTAMP('2020-01-01 
15:36:01.123456'), TIMESTAMP('2020-01-01 15:36:01.123456789'), " +
+   "TIME('15:36:01'), -1.175E-37, 
-1.79769E+308, 101.1234)");
+   }
+   }
+
+   @After
+   public void clearOutputTable() throws Exception {
+   Class.forName(DRIVER_CLASS);
+   try (
+   Connection conn = DriverManager.getConnection(DB_URL);
+   Statement stat = conn.createStatement()) {
+   stat.execute("DROP TABLE " + INPUT_TABLE);
+   }
}
 
@Test
-   public void testFieldsProjection() throws Exception {
-  

[GitHub] [flink] wuchong commented on a change in pull request #10745: [FLINK-15445][connectors/jdbc] JDBC Table Source didn't work for Type…

2020-02-10 Thread GitBox
wuchong commented on a change in pull request #10745: 
[FLINK-15445][connectors/jdbc] JDBC Table Source didn't work for Type…
URL: https://github.com/apache/flink/pull/10745#discussion_r377440708
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/dialect/JDBCDialects.java
 ##
 @@ -46,10 +59,84 @@
return Optional.empty();
}
 
-   private static class DerbyDialect implements JDBCDialect {
+   private abstract static class AbstractDialect implements JDBCDialect {
+
+   @Override
+   public void validate(TableSchema schema) throws 
ValidationException {
+   for (int i = 0; i < schema.getFieldCount(); i++) {
+   DataType dt = schema.getFieldDataType(i).get();
+   String fieldName = schema.getFieldName(i).get();
+
+   // TODO: We can't convert VARBINARY(n) data 
type to
+   //  
PrimitiveArrayTypeInfo.BYTE_PRIMITIVE_ARRAY_TYPE_INFO in 
LegacyTypeInfoDataTypeConverter
+   //  when n is smaller than Integer.MAX_VALUE
+   if 
(unsupportedTypes().contains(dt.getLogicalType().getTypeRoot()) ||
+   (!(dt.getLogicalType() 
instanceof LegacyTypeInformationType) &&
+   (VARBINARY == 
dt.getLogicalType().getTypeRoot()
+   && Integer.MAX_VALUE != 
((VarBinaryType) dt.getLogicalType()).getLength( {
+   throw new ValidationException(
+   String.format("The 
dialect don't support type: %s.", dt.toString()));
+   }
+
+   // only validate precision of DECIMAL type for 
blink planner
+   if (!(dt.getLogicalType() instanceof 
LegacyTypeInformationType)
+   && DECIMAL == 
dt.getLogicalType().getTypeRoot()) {
+   int precision = ((DecimalType) 
dt.getLogicalType()).getPrecision();
+   if (precision > maxDecimalPrecision()
+   || precision < 
minDecimalPrecision()) {
+   throw new ValidationException(
+   
String.format("The precision of %s is out of the range [%d, %d].",
+   
fieldName,
+   
minDecimalPrecision(),
+   
maxDecimalPrecision()));
+   }
+   }
+
+   // only validate precision of DECIMAL type for 
blink planner
+   if (!(dt.getLogicalType() instanceof 
LegacyTypeInformationType)
+   && TIMESTAMP_WITHOUT_TIME_ZONE 
== dt.getLogicalType().getTypeRoot()) {
+   int precision = ((TimestampType) 
dt.getLogicalType()).getPrecision();
+   if (precision > maxTimestampPrecision()
+   || precision < 
minTimestampPrecision()) {
+   throw new ValidationException(
+   
String.format("The precision of %s is out of the range [%d, %d].",
 
 Review comment:
   The same applies to the DECIMAL type.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #10745: [FLINK-15445][connectors/jdbc] JDBC Table Source didn't work for Type…

2020-02-10 Thread GitBox
wuchong commented on a change in pull request #10745: 
[FLINK-15445][connectors/jdbc] JDBC Table Source didn't work for Type…
URL: https://github.com/apache/flink/pull/10745#discussion_r377440051
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/dialect/JDBCDialects.java
 ##
 @@ -46,10 +59,84 @@
return Optional.empty();
}
 
-   private static class DerbyDialect implements JDBCDialect {
+   private abstract static class AbstractDialect implements JDBCDialect {
+
+   @Override
+   public void validate(TableSchema schema) throws 
ValidationException {
+   for (int i = 0; i < schema.getFieldCount(); i++) {
+   DataType dt = schema.getFieldDataType(i).get();
+   String fieldName = schema.getFieldName(i).get();
+
+   // TODO: We can't convert VARBINARY(n) data 
type to
+   //  
PrimitiveArrayTypeInfo.BYTE_PRIMITIVE_ARRAY_TYPE_INFO in 
LegacyTypeInfoDataTypeConverter
+   //  when n is smaller than Integer.MAX_VALUE
+   if 
(unsupportedTypes().contains(dt.getLogicalType().getTypeRoot()) ||
+   (!(dt.getLogicalType() 
instanceof LegacyTypeInformationType) &&
+   (VARBINARY == 
dt.getLogicalType().getTypeRoot()
 
 Review comment:
   Could we simply use `dt.getLogicalType() instanceof VarBinaryType` to check
   that it is a VarBinaryType? I think currently there isn't a
   `LegacyTypeInformationType` which is VARBINARY. The same applies to the if
   branches below.
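   A minimal sketch of the simplified check (the helper class is hypothetical;
   note that `DataTypes.BYTES()` is VARBINARY with maximum length, so it would
   not be rejected):

   ```java
   import org.apache.flink.table.api.DataTypes;
   import org.apache.flink.table.types.DataType;
   import org.apache.flink.table.types.logical.LogicalType;
   import org.apache.flink.table.types.logical.VarBinaryType;

   public class VarBinaryCheckSketch {

       // true for VARBINARY(n) with n < Integer.MAX_VALUE, i.e. the case
       // the validation above wants to reject.
       static boolean isBoundedVarBinary(DataType dt) {
           LogicalType t = dt.getLogicalType();
           return t instanceof VarBinaryType
                   && ((VarBinaryType) t).getLength() != Integer.MAX_VALUE;
       }

       public static void main(String[] args) {
           System.out.println(isBoundedVarBinary(DataTypes.VARBINARY(10))); // true
           System.out.println(isBoundedVarBinary(DataTypes.BYTES()));       // false
       }
   }
   ```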


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #10745: [FLINK-15445][connectors/jdbc] JDBC Table Source didn't work for Type…

2020-02-10 Thread GitBox
wuchong commented on a change in pull request #10745: 
[FLINK-15445][connectors/jdbc] JDBC Table Source didn't work for Type…
URL: https://github.com/apache/flink/pull/10745#discussion_r377440632
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/dialect/JDBCDialects.java
 ##
 @@ -46,10 +59,84 @@
return Optional.empty();
}
 
-   private static class DerbyDialect implements JDBCDialect {
+   private abstract static class AbstractDialect implements JDBCDialect {
+
+   @Override
+		public void validate(TableSchema schema) throws ValidationException {
+			for (int i = 0; i < schema.getFieldCount(); i++) {
+				DataType dt = schema.getFieldDataType(i).get();
+				String fieldName = schema.getFieldName(i).get();
+
+				// TODO: We can't convert VARBINARY(n) data type to
+				//  PrimitiveArrayTypeInfo.BYTE_PRIMITIVE_ARRAY_TYPE_INFO in LegacyTypeInfoDataTypeConverter
+				//  when n is smaller than Integer.MAX_VALUE
+				if (unsupportedTypes().contains(dt.getLogicalType().getTypeRoot()) ||
+						(!(dt.getLogicalType() instanceof LegacyTypeInformationType) &&
+							(VARBINARY == dt.getLogicalType().getTypeRoot()
+								&& Integer.MAX_VALUE != ((VarBinaryType) dt.getLogicalType()).getLength()))) {
+					throw new ValidationException(
+						String.format("The dialect doesn't support type: %s.", dt.toString()));
+				}
+
+				// only validate precision of DECIMAL type for blink planner
+				if (!(dt.getLogicalType() instanceof LegacyTypeInformationType)
+						&& DECIMAL == dt.getLogicalType().getTypeRoot()) {
+					int precision = ((DecimalType) dt.getLogicalType()).getPrecision();
+					if (precision > maxDecimalPrecision()
+							|| precision < minDecimalPrecision()) {
+						throw new ValidationException(
+							String.format("The precision of %s is out of the range [%d, %d].",
+								fieldName,
+								minDecimalPrecision(),
+								maxDecimalPrecision()));
+					}
+				}
+
+				// only validate precision of TIMESTAMP type for blink planner
+				if (!(dt.getLogicalType() instanceof LegacyTypeInformationType)
+						&& TIMESTAMP_WITHOUT_TIME_ZONE == dt.getLogicalType().getTypeRoot()) {
+					int precision = ((TimestampType) dt.getLogicalType()).getPrecision();
+					if (precision > maxTimestampPrecision()
+							|| precision < minTimestampPrecision()) {
+						throw new ValidationException(
+							String.format("The precision of %s is out of the range [%d, %d].",
 
 Review comment:
   Improve the error message a bit more:
   
   ```java
   String.format("The precision of filed '%s' is out of the TIMESTAMP precision 
range [%d, %d] supported by the %s dialect.",
   fieldName,
   minTimestampPrecision(),
   maxTimestampPrecision(),
   dialectName);
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15986) support setting or changing session properties in Flink SQL

2020-02-10 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034140#comment-17034140
 ] 

Jark Wu commented on FLINK-15986:
-

Hi [~phoenixjiangnan], could you elaborate on this a bit more? Where do you want 
to add such an API? 
Currently, we already support the {{SET}} command in the SQL CLI to set or change 
properties/configurations.

> support setting or changing session properties in Flink SQL
> ---
>
> Key: FLINK-15986
> URL: https://issues.apache.org/jira/browse/FLINK-15986
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API, Table SQL / Ecosystem
>Reporter: Bowen Li
>Assignee: Kurt Young
>Priority: Critical
> Fix For: 1.11.0
>
>
> As Flink SQL becomes more and more critical for users running batch jobs, 
> experiments, and OLAP exploration, it is more important than ever to support 
> setting and changing session properties in Flink SQL.
>  
> Use cases include switching SQL dialects at runtime, switching the job mode 
> between "streaming" and "batch", and changing other params defined in 
> flink-conf.yaml and default-sql-client.yaml.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-10 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r377447483
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyMessageParser.java
 ##
 @@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.runtime.io.network.NetworkClientHandler;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+
+import java.net.ProtocolException;
+
+/**
+ * Responsible for offering the message header length and parsing the message 
header part of the message.
+ */
+class NettyMessageParser {
+/**
+ * Indicates how to deal with the data buffer part of the current message.
+ */
+public enum DataBufferAction {
 
 Review comment:
   Another concern with this enum is that if we add another new message type in the future, it is hard to know which enum value it belongs to and how to adjust the logic for the new message in `#parseMessageHeader` below.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15986) support setting or changing session properties in Flink SQL

2020-02-10 Thread Bowen Li (Jira)
Bowen Li created FLINK-15986:


 Summary: support setting or changing session properties in Flink 
SQL
 Key: FLINK-15986
 URL: https://issues.apache.org/jira/browse/FLINK-15986
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API, Table SQL / Ecosystem
Reporter: Bowen Li
Assignee: Kurt Young
 Fix For: 1.11.0


As Flink SQL becomes more and more critical for users running batch jobs, 
experiments, and OLAP exploration, it is more important than ever to support setting 
and changing session properties in Flink SQL.

 

Use cases include switching SQL dialects at runtime, switching the job mode between 
"streaming" and "batch", and changing other params defined in flink-conf.yaml and 
default-sql-client.yaml.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11054: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #11054: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/11054#issuecomment-584462322
 
 
   
   ## CI report:
   
   * 47f64cd0ddbb74874daff1607a30a999f4554ce1 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148308767) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5037)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11031: [FLINK-15914][checkpointing][metrics] Miss the checkpoint related metrics for the case of two inputs

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #11031: 
[FLINK-15914][checkpointing][metrics] Miss the checkpoint related metrics for 
the case of two inputs
URL: https://github.com/apache/flink/pull/11031#issuecomment-582869867
 
 
   
   ## CI report:
   
   * 80543b67036d2b2d589bfc677e583199b96e4a61 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/147698790) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4898)
 
   * c411f7735ea814410840a46ecfde839340d4cf3b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148141129) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4987)
 
   * 5cd1f9bc43cf71f6e548b76fadff0ce7f1bfa9e2 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148201366) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5014)
 
   * 54550e7b277e46737782f8c5844ca12c5cc517c3 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148312822) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5038)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-10 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r377444798
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyMessageParser.java
 ##
 @@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.runtime.io.network.NetworkClientHandler;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.shaded.netty4.io.netty.buffer.ByteBuf;
+
+import java.net.ProtocolException;
+
+/**
+ * Responsible for offering the message header length and parsing the message 
header part of the message.
+ */
+class NettyMessageParser {
+/**
+ * Indicates how to deal with the data buffer part of the current message.
+ */
+public enum DataBufferAction {
 
 Review comment:
   This classification actually has two dimensions and would bring ambiguity. One dimension distinguishes between `BufferResponse` and other messages. The other distinguishes, only for `BufferResponse`, whether the respective channel has been released or not.
   
   I guess we can refactor the related process to avoid this enum type; one possible shape is sketched below.
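   A sketch only; `NettyClientMessage`, `DecodeResult`, and `decode` are hypothetical names for illustration, not existing Flink classes:
   
   ```java
   /**
    * Hypothetical receiver-side message that decodes itself, so the decoder no
    * longer needs a per-type enum to drive its parsing logic.
    */
   abstract class NettyClientMessage extends NettyMessage {
   
       /** Outcome of feeding more bytes into {@link #decode}. */
       enum DecodeResult { NEED_MORE_DATA, HEADER_COMPLETE, MESSAGE_COMPLETE }
   
       /**
        * Consumes as many bytes as possible from {@code data}, keeping any
        * partially read state inside the message instance itself.
        */
       abstract DecodeResult decode(ByteBuf data, NetworkBufferAllocator allocator);
   }
   ```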


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-12256) Implement Confluent Schema Registry Catalog

2020-02-10 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li reassigned FLINK-12256:


Assignee: Bowen Li  (was: Artsem Semianenka)

> Implement Confluent Schema Registry Catalog
> ---
>
> Key: FLINK-12256
> URL: https://issues.apache.org/jira/browse/FLINK-12256
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka, Table SQL / Client
>Affects Versions: 1.9.0
>Reporter: Artsem Semianenka
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
>  KafkaReadableCatalog is a special implementation of the ReadableCatalog 
> interface (introduced in 
> [FLIP-30|https://cwiki.apache.org/confluence/display/FLINK/FLIP-30%3A+Unified+Catalog+APIs])
>  to retrieve meta information such as the topic name/schema of the topic from 
> Apache Kafka and Confluent Schema Registry. 
> The new ReadableCatalog allows a user to run SQL queries like:
> {code:java}
> Select * from kafka.topic_name
> {code}
> without the need for manual definition of the table schema.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15985) offload runtime params from DDL to table hints in DML/queries

2020-02-10 Thread Bowen Li (Jira)
Bowen Li created FLINK-15985:


 Summary: offload runtime params from DDL to table hints in 
DML/queries
 Key: FLINK-15985
 URL: https://issues.apache.org/jira/browse/FLINK-15985
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API
Reporter: Bowen Li
Assignee: Danny Chen
 Fix For: 1.11.0


background:

Currently Flink DDL mixes three types of params all together:
 * External data’s metadata: defines what the data looks like (schema), where 
it is (location/url), and how it should be accessed (username/pwd)
 * Source/sink runtime params: define how, and usually how fast, a Flink 
source/sink reads/writes data, without affecting the results
 ** Kafka “sink-partitioner”
 ** Elastic “bulk-flush.interval/max-size/...”
 * Semantics params: define aspects like how much data Flink reads/writes and 
what the result will look like
 ** Kafka “startup-mode”, “offset”
 ** Watermark, timestamp column

 

Problems with the current mix-up: Flink cannot leverage catalogs and external 
system metadata alone to run queries, because all the non-metadata params are 
involved in the DDL. E.g. when we add a catalog for Confluent Schema Registry, the 
expected user experience is that Flink users just configure the catalog with the url 
and usr/pwd and can run queries immediately; however, that’s not the case right 
now, because users still have to use DDL to redundantly define a bunch of params 
like “startup-mode”, “offset”, and the timestamp column, along with the schema. 
We’ve heard many user complaints about this.

 

cc [~ykt836] [~lirui] [~lzljs3620320] [~jark] [~twalthr] [~dwysakowicz]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15984) support hive stream table sink

2020-02-10 Thread Bowen Li (Jira)
Bowen Li created FLINK-15984:


 Summary: support hive stream table sink
 Key: FLINK-15984
 URL: https://issues.apache.org/jira/browse/FLINK-15984
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / Hive
Reporter: Bowen Li
Assignee: Rui Li
 Fix For: 1.11.0


support hive stream table sink for stream processing



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-12814) Support a traditional and scrolling view of result (non-interactive)

2020-02-10 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-12814:
-
Fix Version/s: 1.11.0

> Support a traditional and scrolling view of result (non-interactive)
> 
>
> Key: FLINK-12814
> URL: https://issues.apache.org/jira/browse/FLINK-12814
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client
>Affects Versions: 1.8.0
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In table mode, we want to introduce a non-interactive view (a so-called 
> FinalizedResult), which submits SQL statements (DQLs) in attach mode with a 
> user-defined timeout, fetches results until the job finishes/fails/times out or 
> is interrupted by the user (Ctrl+C), and outputs them in a non-interactive way (the 
> behavior in change-log mode is under discussion).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14148) Investigate pushing predicate/projection to underlying Hive input format

2020-02-10 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-14148:
-
Fix Version/s: 1.11.0

> Investigate pushing predicate/projection to underlying Hive input format
> 
>
> Key: FLINK-14148
> URL: https://issues.apache.org/jira/browse/FLINK-14148
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15983) add native reader for Hive parquet files

2020-02-10 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-15983:
-
Description: 
Add a native reader for Hive parquet files, as well as a benchmark that shows how 
much faster the native reader is compared to the generic Hive reader, and publish a 
blog post about it.

 

cc [~ykt836] [~lirui]

> add native reader for Hive parquet files
> 
>
> Key: FLINK-15983
> URL: https://issues.apache.org/jira/browse/FLINK-15983
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Hive
>Reporter: Bowen Li
>Assignee: Jingsong Lee
>Priority: Critical
> Fix For: 1.11.0
>
>
> Add a native reader for Hive parquet files, as well as a benchmark that shows 
> how much faster the native reader is compared to the generic Hive reader, and 
> publish a blog post about it.
>  
> cc [~ykt836] [~lirui]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15983) add native reader for Hive parquet files

2020-02-10 Thread Bowen Li (Jira)
Bowen Li created FLINK-15983:


 Summary: add native reader for Hive parquet files
 Key: FLINK-15983
 URL: https://issues.apache.org/jira/browse/FLINK-15983
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / Hive
Reporter: Bowen Li
Assignee: Jingsong Lee
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11054: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #11054: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/11054#issuecomment-584462322
 
 
   
   ## CI report:
   
   * 47f64cd0ddbb74874daff1607a30a999f4554ce1 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148308767) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5037)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11031: [FLINK-15914][checkpointing][metrics] Miss the checkpoint related metrics for the case of two inputs

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #11031: 
[FLINK-15914][checkpointing][metrics] Miss the checkpoint related metrics for 
the case of two inputs
URL: https://github.com/apache/flink/pull/11031#issuecomment-582869867
 
 
   
   ## CI report:
   
   * 80543b67036d2b2d589bfc677e583199b96e4a61 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/147698790) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4898)
 
   * c411f7735ea814410840a46ecfde839340d4cf3b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148141129) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4987)
 
   * 5cd1f9bc43cf71f6e548b76fadff0ce7f1bfa9e2 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148201366) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5014)
 
   * 54550e7b277e46737782f8c5844ca12c5cc517c3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-10 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r377439121
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyMessage.java
 ##
 @@ -518,7 +559,7 @@ public String toString() {
 
static class TaskEventRequest extends NettyMessage {
 
-   private static final byte ID = 3;
 
 Review comment:
   irrelevant change


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (FLINK-15546) Obscure error message from ScalarOperatorGens::generateCast

2020-02-10 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-15546.
-
Fix Version/s: 1.11.0
   Resolution: Fixed

Fixed in 1.11.0 (master): 5614f2160c7756b78b37e66ef8a543c2c48550bd

> Obscure error message from ScalarOperatorGens::generateCast
> ---
>
> Key: FLINK-15546
> URL: https://issues.apache.org/jira/browse/FLINK-15546
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Rui Li
>Assignee: Zhenghua Gao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Consider the following case:
> {noformat}
> Flink SQL> describe foo;
> root
>  |-- x: ROW<`f1` DOUBLE, `f2` VARCHAR(10)>
> Flink SQL> insert into foo select row(1.1,'abc');
> [INFO] Submitting SQL update statement to the cluster...
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.planner.codegen.CodeGenException: Unsupported cast 
> from 'ROW' to 'ROW'.
> {noformat}
> Users are unlikely to figure out what goes wrong from the above error 
> message. Something like {{Unsupported cast from 'ROW<DECIMAL(2, 1), CHAR(3)>' 
> to 'ROW<DOUBLE, VARCHAR(10)>'}} will be more helpful.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-10 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r377438918
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NetworkBufferAllocator.java
 ##
 @@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.io.network.netty;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.runtime.io.network.NetworkClientHandler;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
+import org.apache.flink.runtime.io.network.buffer.NetworkBuffer;
+import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
+import 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * An allocator used for requesting buffers in the receiver side of netty 
handlers.
+ */
+public class NetworkBufferAllocator {
+private final NetworkClientHandler partitionRequestClientHandler;
+   private final Buffer placeHolderBuffer;
 
 Review comment:
   nit: formatting


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong merged pull request #11021: [FLINK-15546][table-planner-blink] Fix obscure error message from Sca…

2020-02-10 Thread GitBox
wuchong merged pull request #11021: [FLINK-15546][table-planner-blink] Fix 
obscure error message from Sca…
URL: https://github.com/apache/flink/pull/11021
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] Let Netty use Flink's buffers directly in credit-based mode

2020-02-10 Thread GitBox
zhijiangW commented on a change in pull request #7368: [FLINK-10742][network] 
Let Netty use Flink's buffers directly in credit-based mode
URL: https://github.com/apache/flink/pull/7368#discussion_r377438545
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyMessage.java
 ##
 @@ -84,7 +84,7 @@
 *  {@link NettyMessage} subclass ID
 *
 * @return a newly allocated direct buffer with header data written for {@link
-* NettyMessageDecoder}
+* NettyMessageEncoder}
 
 Review comment:
   This change is irrelevant and should be a separate hotfix commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10687: [FLINK-15364] Introduce streaming task using heap backend e2e tests for Mesos

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #10687: [FLINK-15364] Introduce streaming 
task using heap backend e2e tests for Mesos
URL: https://github.com/apache/flink/pull/10687#issuecomment-568881372
 
 
   
   ## CI report:
   
   * 016f5e1ce6dffdbe7dd69e63fae99895520a6f5c Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142313148) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3912)
 
   * d4c4910fcf29e762fbb6e27a0de9fd3529014670 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142444248) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3953)
 
   * d7a4237329fccbd820c01c7688df5a5822409d42 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/148304690) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5036)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 commented on issue #11029: [FLINK-15933][catalog][doc] Update content of how generic table schem…

2020-02-10 Thread GitBox
bowenli86 commented on issue #11029: [FLINK-15933][catalog][doc] Update content 
of how generic table schem…
URL: https://github.com/apache/flink/pull/11029#issuecomment-584468322
 
 
   > > > @bowenli86 We don't have any console output for "DESCRIBE mykafka;" 
now. I'll add the output for "DESCRIBE FORMATTED mykafka;" instead.
   > > 
   > > 
   > > why we don't have any console output for describing a table? Seems like 
a bug?
   > 
   > Because we have changed to store the generic table schema as properties. 
So from Hive perspective, the table doesn't have any columns.
   
   my bad, I thought it's a flink cli...
   
   LGTM, merging


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 commented on issue #11052: FLINK-15975 Use LinkedHashMap for deterministic iterations

2020-02-10 Thread GitBox
bowenli86 commented on issue #11052: FLINK-15975 Use LinkedHashMap for 
deterministic iterations
URL: https://github.com/apache/flink/pull/11052#issuecomment-584467896
 
 
   Hi, which test has been failing?
   
   There shouldn't be any guarantee on the iteration order of key-value pairs in a map. Thus, instead of changing the source code, we'd be better off changing the assertions in the test code to tolerate the indeterminacy, e.g. removing any assertion on ordering, as in the sketch below.
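   For example, the test could compare map contents directly instead of relying on iteration order (a hypothetical sketch; `buildProperties()` stands in for whatever the failing test actually builds):
   
   ```java
   import static org.junit.Assert.assertEquals;
   
   import java.util.HashMap;
   import java.util.Map;
   
   import org.junit.Test;
   
   public class OrderInsensitiveAssertionTest {
   
       @Test
       public void testPropertiesIgnoringOrder() {
           Map<String, String> expected = new HashMap<>();
           expected.put("connector.type", "kafka");
           expected.put("format.type", "json");
   
           // Map#equals compares entries regardless of iteration order, so the
           // test no longer depends on HashMap vs LinkedHashMap internals.
           assertEquals(expected, buildProperties());
       }
   
       // Hypothetical helper standing in for the code under test.
       private Map<String, String> buildProperties() {
           Map<String, String> properties = new HashMap<>();
           properties.put("format.type", "json");
           properties.put("connector.type", "kafka");
           return properties;
       }
   }
   ```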


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11054: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #11054: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/11054#issuecomment-584462322
 
 
   
   ## CI report:
   
   * 47f64cd0ddbb74874daff1607a30a999f4554ce1 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/148308767) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5037)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15981) Control the direct memory in FileChannelBoundedData.FileBufferReader

2020-02-10 Thread zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034110#comment-17034110
 ] 

zhijiang commented on FLINK-15981:
--

Thanks for reporting this issue [~lzljs3620320]

Actually we also noticed this potential concern before, but have not had time 
to focus on this improvement yet. It is feasible to make use of the existing 
`LocalBufferPool` for the blocking partition. We can even reduce the buffer amount 
for every subpartition from the current 2 to 1, which would further reduce the total 
required memory.

+1 to make it happen in release-1.11 and release-1.10.1 if possible.
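
A rough sketch of the difference (illustrative only; the real FileBufferReader fields and buffer-pool wiring would differ):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.flink.runtime.io.network.buffer.Buffer;
import org.apache.flink.runtime.io.network.buffer.BufferPool;

class FileBufferReaderMemorySketch {

	// Today (simplified): each reader allocates its own 64 KB direct buffers,
	// which are not accounted against any Flink memory budget.
	ByteBuffer allocateToday() {
		return ByteBuffer.allocateDirect(64 * 1024);
	}

	// Idea: request buffers from a LocalBufferPool carved out of the network
	// buffer pool, so the memory is bounded and accounted for.
	Buffer allocateFromPool(BufferPool pool) throws IOException {
		return pool.requestBuffer(); // may return null under memory pressure
	}
}
{code}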

> Control the direct memory in FileChannelBoundedData.FileBufferReader
> 
>
> Key: FLINK-15981
> URL: https://issues.apache.org/jira/browse/FLINK-15981
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Jingsong Lee
>Priority: Critical
> Fix For: 1.10.1, 1.11.0
>
>
> Now, the default blocking BoundedData is FileChannelBoundedData. Its reader 
> creates a new 64 KB direct buffer.
> When the parallelism is greater than 100, users need to configure 
> "taskmanager.memory.task.off-heap.size" to avoid a direct memory OOM. It is 
> hard to configure, and it costs a lot of memory. With a parallelism of 1000, 
> we may need 1 GB+ for a task manager.
> This is not conducive to scenarios with few slots and large parallelism. 
> Batch jobs could run little by little, but the memory shortage would consume 
> a lot.
> If we provide N-input operators, things may get worse, because the number of 
> subpartitions that can be requested at the same time will be even higher. 
> We have no idea how much memory that needs.
> Here are my rough thoughts:
>  * Obtain memory from the network buffers.
>  * Provide "the maximum number of subpartitions that can be requested at the 
> same time".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15982) 'Quickstarts Java nightly end-to-end test' is failed on travis

2020-02-10 Thread Jark Wu (Jira)
Jark Wu created FLINK-15982:
---

 Summary: 'Quickstarts Java nightly end-to-end test' is failed on 
travis
 Key: FLINK-15982
 URL: https://issues.apache.org/jira/browse/FLINK-15982
 Project: Flink
  Issue Type: Bug
  Components: Tests
Reporter: Jark Wu


{code:java}
==
Running 'Quickstarts Java nightly end-to-end test'
==
TEST_DATA_DIR: 
/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-42718423491
Flink dist directory: 
/home/travis/build/apache/flink/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
22:16:44.021 [INFO] Scanning for projects...
22:16:44.095 [INFO] 

22:16:44.095 [INFO] BUILD FAILURE
22:16:44.095 [INFO] 

22:16:44.098 [INFO] Total time: 0.095 s
22:16:44.099 [INFO] Finished at: 2020-02-10T22:16:44+00:00
22:16:44.143 [INFO] Final Memory: 5M/153M
22:16:44.143 [INFO] 

22:16:44.144 [ERROR] The goal you specified requires a project to execute but 
there is no POM in this directory 
(/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-42718423491).
 Please verify you invoked Maven from the correct directory. -> [Help 1]
22:16:44.144 [ERROR] 
22:16:44.145 [ERROR] To see the full stack trace of the errors, re-run Maven 
with the -e switch.
22:16:44.145 [ERROR] Re-run Maven using the -X switch to enable full debug 
logging.
22:16:44.145 [ERROR] 
22:16:44.145 [ERROR] For more information about the errors and possible 
solutions, please read the following articles:
22:16:44.145 [ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MissingProjectException
/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test_quickstarts.sh:
 line 57: cd: flink-quickstart-java: No such file or directory
cp: cannot create regular file 
'/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-42718423491/flink-quickstart-java/src/main/java/org/apache/flink/quickstart/Elasticsearch5SinkExample.java':
 No such file or directory
sed: can't read 
/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-42718423491/flink-quickstart-java/src/main/java/org/apache/flink/quickstart/Elasticsearch5SinkExample.java:
 No such file or directory
awk: fatal: cannot open file `pom.xml' for reading (No such file or directory)
sed: can't read pom.xml: No such file or directory
sed: can't read pom.xml: No such file or directory
22:16:45.312 [INFO] Scanning for projects...
22:16:45.386 [INFO] 

22:16:45.386 [INFO] BUILD FAILURE
22:16:45.386 [INFO] 

22:16:45.391 [INFO] Total time: 0.097 s
22:16:45.391 [INFO] Finished at: 2020-02-10T22:16:45+00:00
22:16:45.438 [INFO] Final Memory: 5M/153M
22:16:45.438 [INFO] 

22:16:45.440 [ERROR] The goal you specified requires a project to execute but 
there is no POM in this directory 
(/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-42718423491).
 Please verify you invoked Maven from the correct directory. -> [Help 1]
22:16:45.440 [ERROR] 
22:16:45.440 [ERROR] To see the full stack trace of the errors, re-run Maven 
with the -e switch.
22:16:45.440 [ERROR] Re-run Maven using the -X switch to enable full debug 
logging.
22:16:45.440 [ERROR] 
22:16:45.440 [ERROR] For more information about the errors and possible 
solutions, please read the following articles:
22:16:45.440 [ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MissingProjectException
/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/test_quickstarts.sh:
 line 73: cd: target: No such file or directory
java.io.FileNotFoundException: flink-quickstart-java-0.1.jar (No such file or 
directory)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.(ZipFile.java:225)
at java.util.zip.ZipFile.(ZipFile.java:155)
at java.util.zip.ZipFile.(ZipFile.java:126)
at sun.tools.jar.Main.list(Main.java:1115)
at sun.tools.jar.Main.run(Main.java:293)
at sun.tools.jar.Main.main(Main.java:1288)
Success: There are no flink core classes are contained in the jar.
Failure: Since Elasticsearch5SinkExample.class and other user classes are not 
included in the jar. 
[FAIL] Test script contains errors.
{code}

Here are some instances:
- 

[jira] [Commented] (FLINK-15966) Capture the call stack of RPC ask() calls.

2020-02-10 Thread zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034106#comment-17034106
 ] 

zhijiang commented on FLINK-15966:
--

Very helpful improvement, looking forward to it.

> Capture the call stack of RPC ask() calls.
> --
>
> Key: FLINK-15966
> URL: https://issues.apache.org/jira/browse/FLINK-15966
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> Currently, when an RPC ask() call fails, we get a rather unhelpful exception 
> with a stack trace from akka's internal scheduler.
> Instead, we should capture the call stack during the invocation and use it to 
> give a helpful error message when the RPC call fails. This is especially 
> helpful in cases where the future (and future handlers) are passed on for later 
> asynchronous result handling (which is the common case in most components: JM 
> / TM / RM).
> The option should have a flag to turn it off, because with a lot of 
> concurrent ask calls (hundreds of thousands, during large deploy phases), the 
> captured call stacks may add significant overhead.
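
A rough illustration of the idea described above (the names here are illustrative, not the actual Flink API):

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

class AskCallStackSketch {

	/**
	 * Captures the caller's stack before the ask() call and attaches it if
	 * the future fails, instead of surfacing akka's scheduler stack.
	 */
	static <T> CompletableFuture<T> withCallSite(CompletableFuture<T> askFuture) {
		final Exception callSite = new Exception("RPC ask() was invoked here");
		return askFuture.exceptionally(failure -> {
			CompletionException enriched = new CompletionException(failure);
			enriched.addSuppressed(callSite);
			throw enriched;
		});
	}
}
{code}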



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11054: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
flinkbot commented on issue #11054: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/11054#issuecomment-584462322
 
 
   
   ## CI report:
   
   * 47f64cd0ddbb74874daff1607a30a999f4554ce1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on issue #10730: [FLINK-14802][orc][hive] Multi vectorized read version support for hive orc read

2020-02-10 Thread GitBox
lirui-apache commented on issue #10730: [FLINK-14802][orc][hive] Multi 
vectorized read version support for hive orc read
URL: https://github.com/apache/flink/pull/10730#issuecomment-584462175
 
 
   Thanks for updating. LGTM. Just left some minor comments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15981) Control the direct memory in FileChannelBoundedData.FileBufferReader

2020-02-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034103#comment-17034103
 ] 

Xintong Song commented on FLINK-15981:
--

CC: [~azagrebin]

> Control the direct memory in FileChannelBoundedData.FileBufferReader
> 
>
> Key: FLINK-15981
> URL: https://issues.apache.org/jira/browse/FLINK-15981
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Jingsong Lee
>Priority: Critical
> Fix For: 1.10.1, 1.11.0
>
>
> Now, the default blocking BoundedData is FileChannelBoundedData. Its reader 
> creates a new 64 KB direct buffer.
> When the parallelism is greater than 100, users need to configure 
> "taskmanager.memory.task.off-heap.size" to avoid a direct memory OOM. It is 
> hard to configure, and it costs a lot of memory. With a parallelism of 1000, 
> we may need 1 GB+ for a task manager.
> This is not conducive to scenarios with few slots and large parallelism. 
> Batch jobs could run little by little, but the memory shortage would consume 
> a lot.
> If we provide N-input operators, things may get worse, because the number of 
> subpartitions that can be requested at the same time will be even higher. 
> We have no idea how much memory that needs.
> Here are my rough thoughts:
>  * Obtain memory from the network buffers.
>  * Provide "the maximum number of subpartitions that can be requested at the 
> same time".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache commented on a change in pull request #10730: [FLINK-14802][orc][hive] Multi vectorized read version support for hive orc read

2020-02-10 Thread GitBox
lirui-apache commented on a change in pull request #10730: 
[FLINK-14802][orc][hive] Multi vectorized read version support for hive orc read
URL: https://github.com/apache/flink/pull/10730#discussion_r377427396
 
 

 ##
 File path: 
flink-formats/flink-orc/src/main/java/org/apache/flink/orc/shim/OrcShim.java
 ##
 @@ -46,22 +47,24 @@ RecordReader createRecordReader(
long splitStart,
long splitLength) throws IOException;
 
+	OrcVectorizedBatchWrapper<BATCH> createBatchWrapper(TypeDescription schema, int batchSize);
+
/**
 * Read the next row batch.
 */
-	boolean nextBatch(RecordReader reader, VectorizedRowBatch rowBatch) throws IOException;
+	boolean nextBatch(RecordReader reader, BATCH rowBatch) throws IOException;
 
/**
 * Default with orc dependent, we should use v2.3.0.
 */
-   static OrcShim defaultShim() {
+   static OrcShimV230 defaultShim() {
 
 Review comment:
   Can we change the return type to `OrcShim`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on a change in pull request #10730: [FLINK-14802][orc][hive] Multi vectorized read version support for hive orc read

2020-02-10 Thread GitBox
lirui-apache commented on a change in pull request #10730: 
[FLINK-14802][orc][hive] Multi vectorized read version support for hive orc read
URL: https://github.com/apache/flink/pull/10730#discussion_r377427787
 
 

 ##
 File path: 
flink-formats/flink-orc-nohive/src/main/java/org/apache/flink/orc/nohive/shim/OrcNoHiveShim.java
 ##
 @@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.orc.nohive.shim;
+
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.orc.OrcSplitReader;
+import org.apache.flink.orc.nohive.vector.OrcNoHiveBatchWrapper;
+import org.apache.flink.orc.shim.OrcShim;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.orc.OrcConf;
+import org.apache.orc.OrcFile;
+import org.apache.orc.Reader;
+import org.apache.orc.RecordReader;
+import org.apache.orc.TypeDescription;
+import org.apache.orc.storage.ql.exec.vector.VectorizedRowBatch;
+
+import java.io.IOException;
+import java.util.List;
+
+import static org.apache.flink.orc.shim.OrcShimV200.computeProjectionMask;
+import static org.apache.flink.orc.shim.OrcShimV200.getOffsetAndLengthForSplit;
+
+/**
+ * Shim for orc reader without hive dependents.
+ */
+public class OrcNoHiveShim implements OrcShim {
 
 Review comment:
   Misses a serialVersionUID
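   For example (the value is illustrative):
   
   ```java
   private static final long serialVersionUID = 1L;
   ```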


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on a change in pull request #10730: [FLINK-14802][orc][hive] Multi vectorized read version support for hive orc read

2020-02-10 Thread GitBox
lirui-apache commented on a change in pull request #10730: 
[FLINK-14802][orc][hive] Multi vectorized read version support for hive orc read
URL: https://github.com/apache/flink/pull/10730#discussion_r377429861
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/read/HiveVectorizedOrcSplitReader.java
 ##
 @@ -62,18 +62,30 @@ public HiveVectorizedOrcSplitReader(
			throw new IllegalArgumentException("Unknown split type: " + hadoopSplit);
		}

-		this.reader = genPartColumnarRowReader(
-			hiveVersion,
-			conf,
-			fieldNames,
-			fieldTypes,
-			split.getHiveTablePartition().getPartitionSpec(),
-			selectedFields,
-			new ArrayList<>(),
-			DEFAULT_SIZE,
-			new Path(fileSplit.getPath().toString()),
-			fileSplit.getStart(),
-			fileSplit.getLength());
+		this.reader = hiveVersion.startsWith("1.") ?
+			OrcNoHiveSplitReaderUtil.genPartColumnarRowReader(
+				conf,
+				fieldNames,
+				fieldTypes,
+				split.getHiveTablePartition().getPartitionSpec(),
+				selectedFields,
+				new ArrayList<>(),
+				DEFAULT_SIZE,
+				new Path(fileSplit.getPath().toString()),
+				fileSplit.getStart(),
+				fileSplit.getLength()) :
+			org.apache.flink.orc.OrcSplitReaderUtil.genPartColumnarRowReader(
 
 Review comment:
   I don't think we need the full package path here


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15981) Control the direct memory in FileChannelBoundedData.FileBufferReader

2020-02-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034102#comment-17034102
 ] 

Xintong Song commented on FLINK-15981:
--

Thanks for creating this ticket, [~lzljs3620320].

+1 for obtaining memory from network buffer pool.

I think the alternative of limiting the number of partitions read concurrently 
probably reduces the chance of a direct memory OOM. Even so, these read 
buffers are still not accounted into {{-XX:MaxDirectMemorySize}}. It would be 
good to account these read buffers into something already covered by 
{{-XX:MaxDirectMemorySize}}, i.e. the network buffer pool.

> Control the direct memory in FileChannelBoundedData.FileBufferReader
> 
>
> Key: FLINK-15981
> URL: https://issues.apache.org/jira/browse/FLINK-15981
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Jingsong Lee
>Priority: Critical
> Fix For: 1.10.1, 1.11.0
>
>
> Now, the default blocking BoundedData is FileChannelBoundedData. Its reader 
> creates a new 64 KB direct buffer.
> When the parallelism is greater than 100, users need to configure 
> "taskmanager.memory.task.off-heap.size" to avoid a direct memory OOM. It is 
> hard to configure, and it costs a lot of memory. With a parallelism of 1000, 
> we may need 1 GB+ for a task manager.
> This is not conducive to scenarios with few slots and large parallelism. 
> Batch jobs could run little by little, but the memory shortage would consume 
> a lot.
> If we provide N-input operators, things may get worse, because the number of 
> subpartitions that can be requested at the same time will be even higher. 
> We have no idea how much memory that needs.
> Here are my rough thoughts:
>  * Obtain memory from the network buffers.
>  * Provide "the maximum number of subpartitions that can be requested at the 
> same time".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10963: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #10963: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/10963#issuecomment-579814507
 
 
   
   ## CI report:
   
   * b687d5e1835912c7866547f19fe4aec7c1399433 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/146612052) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4658)
 
   * 876ba140fa47ade2f628818d97e5a41ad18afeb3 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/148304674) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5035)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15981) Control the direct memory in FileChannelBoundedData.FileBufferReader

2020-02-10 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034099#comment-17034099
 ] 

Jingsong Lee commented on FLINK-15981:
--

CC: [~xintongsong] [~zjwang] [~pnowojski] [~sewen]

> Control the direct memory in FileChannelBoundedData.FileBufferReader
> 
>
> Key: FLINK-15981
> URL: https://issues.apache.org/jira/browse/FLINK-15981
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Jingsong Lee
>Priority: Critical
> Fix For: 1.10.1, 1.11.0
>
>
> Now, the default blocking BoundedData is FileChannelBoundedData. Its reader 
> creates a new 64 KB direct buffer.
> When the parallelism is greater than 100, users need to configure 
> "taskmanager.memory.task.off-heap.size" to avoid a direct memory OOM. It is 
> hard to configure, and it costs a lot of memory. With a parallelism of 1000, 
> we may need 1 GB+ for a task manager.
> This is not conducive to scenarios with few slots and large parallelism. 
> Batch jobs could run little by little, but the memory shortage would consume 
> a lot.
> If we provide N-input operators, things may get worse, because the number of 
> subpartitions that can be requested at the same time will be even higher. 
> We have no idea how much memory that needs.
> Here are my rough thoughts:
>  * Obtain memory from the network buffers.
>  * Provide "the maximum number of subpartitions that can be requested at the 
> same time".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15981) Control the direct memory in FileChannelBoundedData.FileBufferReader

2020-02-10 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-15981:
-
Fix Version/s: 1.10.1

> Control the direct memory in FileChannelBoundedData.FileBufferReader
> 
>
> Key: FLINK-15981
> URL: https://issues.apache.org/jira/browse/FLINK-15981
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Jingsong Lee
>Priority: Critical
> Fix For: 1.10.1, 1.11.0
>
>
> Now, the default blocking BoundedData is FileChannelBoundedData. Its reader 
> creates a new 64 KB direct buffer.
> When the parallelism is greater than 100, users need to configure 
> "taskmanager.memory.task.off-heap.size" to avoid a direct memory OOM. It is 
> hard to configure, and it costs a lot of memory. With a parallelism of 1000, 
> we may need 1 GB+ for a task manager.
> This is not conducive to scenarios with few slots and large parallelism. 
> Batch jobs could run little by little, but the memory shortage would consume 
> a lot.
> If we provide N-input operators, things may get worse, because the number of 
> subpartitions that can be requested at the same time will be even higher. 
> We have no idea how much memory that needs.
> Here are my rough thoughts:
>  * Obtain memory from the network buffers.
>  * Provide "the maximum number of subpartitions that can be requested at the 
> same time".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-15980) The notFollowedBy at the end of a GroupPattern may be ignored

2020-02-10 Thread zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhijiang reassigned FLINK-15980:


Assignee: shuai.xu

> The notFollowedBy at the end of a GroupPattern may be ignored
> ---
>
> Key: FLINK-15980
> URL: https://issues.apache.org/jira/browse/FLINK-15980
> Project: Flink
>  Issue Type: Bug
>  Components: Library / CEP
>Affects Versions: 1.9.0
>Reporter: shuai.xu
>Assignee: shuai.xu
>Priority: Major
>
> If we write a Pattern like this:
> Pattern group = Pattern.begin("A").notFollowedBy("B");
> Pattern pattern = Pattern.begin(group).followedBy("C");
> i.e., notFollowedBy is the last part of a GroupPattern.
> This pattern compiles normally, but the notFollowedBy("B") doesn't work 
> in fact.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15981) Control the direct memory in FileChannelBoundedData.FileBufferReader

2020-02-10 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-15981:


 Summary: Control the direct memory in 
FileChannelBoundedData.FileBufferReader
 Key: FLINK-15981
 URL: https://issues.apache.org/jira/browse/FLINK-15981
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Network
Affects Versions: 1.10.0
Reporter: Jingsong Lee
 Fix For: 1.11.0


Now, the default blocking BoundedData is FileChannelBoundedData. Its reader 
creates a new 64KB direct buffer.

When parallelism is greater than 100, users need to configure 
"taskmanager.memory.task.off-heap.size" to avoid a direct memory OOM. It is hard 
to configure, and it costs a lot of memory. With a parallelism of 1000, we may 
need 1GB+ for a task manager.

This is not conducive to the scenario of few slots and large parallelism. 
Batch jobs could run little by little, but the memory shortage would cost a lot.

If we provide N-input operators, things may get worse, since the number of 
subpartitions that can be requested at the same time will be even larger. We 
have no idea how much memory will be needed.

Here are my rough thoughts:
 * Obtain memory from the network buffers.
 * Provide "the maximum number of subpartitions that can be requested at the 
same time".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15980) The notFollowedBy at the end of a GroupPattern may be ignored

2020-02-10 Thread shuai.xu (Jira)
shuai.xu created FLINK-15980:


 Summary: The notFollowedBy at the end of a GroupPattern may be 
ignored
 Key: FLINK-15980
 URL: https://issues.apache.org/jira/browse/FLINK-15980
 Project: Flink
  Issue Type: Bug
  Components: Library / CEP
Affects Versions: 1.9.0
Reporter: shuai.xu


If we write a Pattern like this:
Pattern group = Pattern.begin("A").notFollowedBy("B");
Pattern pattern = Pattern.begin(group).followedBy("C");
i.e., notFollowedBy is the last part of a GroupPattern.

This pattern compiles normally, but the notFollowedBy("B") doesn't work 
in fact.
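
A minimal, self-contained repro sketch (the `Event` POJO is hypothetical, added 
only to give the pattern something to match):

```java
import org.apache.flink.cep.pattern.Pattern;

public class NotFollowedByGroupRepro {
    // Hypothetical event type, just to make the snippet compile.
    public static class Event {
        public String name;
    }

    public static void main(String[] args) {
        Pattern<Event, Event> group = Pattern.<Event>begin("A").notFollowedBy("B");
        Pattern<Event, Event> pattern = Pattern.begin(group).followedBy("C");
        // Building the pattern succeeds, but per this report the trailing
        // notFollowedBy("B") inside the group has no effect at match time.
        System.out.println(pattern.getName());
    }
}
```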



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-15875) Bump Beam to 2.19.0

2020-02-10 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-15875.
---
Resolution: Resolved

Merged to master via:
ef364bbb0b7a73fcbac25f4cc298aea861ad774a
d1bc10f60b7c53a34c4d2192df14f8b5a66868b7

> Bump Beam to 2.19.0
> ---
>
> Key: FLINK-15875
> URL: https://issues.apache.org/jira/browse/FLINK-15875
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently PyFlink depends on Beam's portability framework for Python UDF 
> execution. The current dependent version is 2.15.0. We should bump it to 
> 2.19.0 (the latest version) as it includes several critical features/fixes, 
> e.g.
> 1) BEAM-7951: It allows skipping serialization of the window/timestamp/pane 
> info between the Java operator and the Python worker, which could definitely 
> improve the performance a lot.
> 2) BEAM-8935: It allows failing fast if the Python worker fails to start up. 
> Currently it takes 2 minutes to detect the failure if the Python worker 
> failed to start.
> 3) BEAM-7948: It supports periodically flushing the data between the Java 
> operator and the Python worker. This feature is especially useful for 
> streaming jobs and could improve the latency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10687: [FLINK-15364] Introduce streaming task using heap backend e2e tests for Mesos

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #10687: [FLINK-15364] Introduce streaming 
task using heap backend e2e tests for Mesos
URL: https://github.com/apache/flink/pull/10687#issuecomment-568881372
 
 
   
   ## CI report:
   
   * 016f5e1ce6dffdbe7dd69e63fae99895520a6f5c Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142313148) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3912)
 
   * d4c4910fcf29e762fbb6e27a0de9fd3529014670 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142444248) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3953)
 
   * d7a4237329fccbd820c01c7688df5a5822409d42 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/148304690) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5036)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11054: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
flinkbot commented on issue #11054: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/11054#issuecomment-584457234
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 47f64cd0ddbb74874daff1607a30a999f4554ce1 (Tue Feb 11 
02:44:28 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15875) Bump Beam to 2.19.0

2020-02-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15875:
---
Labels: pull-request-available  (was: )

> Bump Beam to 2.19.0
> ---
>
> Key: FLINK-15875
> URL: https://issues.apache.org/jira/browse/FLINK-15875
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Currently PyFlink depends on Beam's portability framework for Python UDF 
> execution. The current dependent version is 2.15.0. We should bump it to 
> 2.19.0 (the latest version) as it includes several critical features/fixes, 
> e.g.
> 1) BEAM-7951: It allows skipping serialization of the window/timestamp/pane 
> info between the Java operator and the Python worker, which could definitely 
> improve the performance a lot.
> 2) BEAM-8935: It allows failing fast if the Python worker fails to start up. 
> Currently it takes 2 minutes to detect the failure if the Python worker 
> failed to start.
> 3) BEAM-7948: It supports periodically flushing the data between the Java 
> operator and the Python worker. This feature is especially useful for 
> streaming jobs and could improve the latency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu closed pull request #11050: [FLINK-15875][python] Bump Beam to 2.19.0

2020-02-10 Thread GitBox
dianfu closed pull request #11050: [FLINK-15875][python] Bump Beam to 2.19.0
URL: https://github.com/apache/flink/pull/11050
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhengcanbin commented on issue #11054: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
zhengcanbin commented on issue #11054: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/11054#issuecomment-584457006
 
 
   @flinkbot run travis


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhengcanbin opened a new pull request #11054: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
zhengcanbin opened a new pull request #11054: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/11054
 
 
   ## What is the purpose of the change
   
   It's likely that one would pull images from private image registries; 
credentials can be passed with the Pod specification through the 
`imagePullSecrets` parameter, which refers to a Kubernetes secret by name. 
Implementation-wise, we expose a new configuration option to the users and 
then pass it along to Kubernetes.
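
   For illustration only (not the actual PR code): with the Fabric8 client, the 
configured secret names could be turned into Pod-level references roughly as 
below. The parsing helper is hypothetical; `LocalObjectReference` is the 
Fabric8 model type that `PodSpec#setImagePullSecrets` expects.

```java
import io.fabric8.kubernetes.api.model.LocalObjectReference;

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ImagePullSecretsSketch {
    // Turns e.g. "my-registry-secret,backup-secret" into the references
    // that are attached to the Pod spec via PodSpec#setImagePullSecrets.
    public static List<LocalObjectReference> parse(String configuredSecrets) {
        return Arrays.stream(configuredSecrets.split(","))
                .map(String::trim)
                .filter(name -> !name.isEmpty())
                .map(LocalObjectReference::new)
                .collect(Collectors.toList());
    }
}
```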
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / **docs** / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10963: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #10963: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/10963#issuecomment-579814507
 
 
   
   ## CI report:
   
   * b687d5e1835912c7866547f19fe4aec7c1399433 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/146612052) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4658)
 
   * 876ba140fa47ade2f628818d97e5a41ad18afeb3 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/148304674) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5035)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] dianfu commented on issue #11050: [FLINK-15875][python] Bump Beam to 2.19.0

2020-02-10 Thread GitBox
dianfu commented on issue #11050: [FLINK-15875][python] Bump Beam to 2.19.0
URL: https://github.com/apache/flink/pull/11050#issuecomment-584455888
 
 
   @sunjincheng121 Thanks a lot for the review. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhengcanbin closed pull request #10963: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
zhengcanbin closed pull request #10963: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/10963
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15979) Fix the inaccurate merged count in CountDistinctWithMerge

2020-02-10 Thread Jark Wu (Jira)
Jark Wu created FLINK-15979:
---

 Summary: Fix the inaccurate merged count in 
CountDistinctWithMerge 
 Key: FLINK-15979
 URL: https://issues.apache.org/jira/browse/FLINK-15979
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Legacy Planner
Reporter: Jark Wu


As discussed in the user ML: 
https://lists.apache.org/thread.html/rc4b06c9931656c94dc993b124da3ff00f04099e41201c64788936c24%40%3Cuser.flink.apache.org%3E.

The current implementation of 
{{org.apache.flink.table.runtime.utils.JavaUserDefinedAggFunctions.CountDistinctWithMerge#merge}}
 in the old planner is not correct and will produce a wrong merged count.

The test 
(org.apache.flink.table.runtime.stream.table.GroupWindowITCase#testEventTimeSessionGroupWindowOverTime),
 which uses this UDAF, can't expose the bug because there are no distinct values 
in the test data.

The class {{CountDistinctWithMerge}} is only a testing implementation, so this is 
not a critical problem. The Blink planner has a correct implementation: 
https://github.com/apache/flink/blob/master/flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/plan/utils/JavaUserDefinedAggFunctions.java#L369
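
A sketch of what an accurate merge looks like, assuming a simplified accumulator 
holding the distinct count plus a value-to-occurrences map (this mirrors the 
idea of the Blink implementation linked above, not its exact code):

```java
import java.util.HashMap;
import java.util.Map;

public class CountDistinctMergeSketch {
    // Simplified accumulator: distinct count plus value -> occurrences.
    public static class Acc {
        public long count;
        public Map<String, Integer> map = new HashMap<>();
    }

    public static void merge(Acc acc, Iterable<Acc> others) {
        for (Acc other : others) {
            for (Map.Entry<String, Integer> e : other.map.entrySet()) {
                Integer existing = acc.map.get(e.getKey());
                if (existing == null) {
                    // First time this value is seen: one more distinct value.
                    acc.count += 1;
                    acc.map.put(e.getKey(), e.getValue());
                } else {
                    // Already counted as distinct: only merge the occurrences.
                    acc.map.put(e.getKey(), existing + e.getValue());
                }
            }
        }
    }
}
```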



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] sunjincheng121 commented on issue #11050: [FLINK-15875][python] Bump Beam to 2.19.0

2020-02-10 Thread GitBox
sunjincheng121 commented on issue #11050: [FLINK-15875][python] Bump Beam to 
2.19.0
URL: https://github.com/apache/flink/pull/11050#issuecomment-584455531
 
 
   @flinkbot approve all


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10687: [FLINK-15364] Introduce streaming task using heap backend e2e tests for Mesos

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #10687: [FLINK-15364] Introduce streaming 
task using heap backend e2e tests for Mesos
URL: https://github.com/apache/flink/pull/10687#issuecomment-568881372
 
 
   
   ## CI report:
   
   * 016f5e1ce6dffdbe7dd69e63fae99895520a6f5c Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142313148) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3912)
 
   * d4c4910fcf29e762fbb6e27a0de9fd3529014670 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142444248) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3953)
 
   * d7a4237329fccbd820c01c7688df5a5822409d42 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11053: [FLINK-15977][docs] Update pull request template to include Kubernetes as deployment candidates

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #11053: [FLINK-15977][docs] Update pull 
request template to include Kubernetes as deployment candidates
URL: https://github.com/apache/flink/pull/11053#issuecomment-584419067
 
 
   
   ## CI report:
   
   * be90c0f90a7a7cd752787c4710a0c4e5acbf74fe Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/148294930) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5034)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10963: [FLINK-15652][K8s] Support for imagePullSecrets K8s option.

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #10963: [FLINK-15652][K8s] Support for 
imagePullSecrets K8s option.
URL: https://github.com/apache/flink/pull/10963#issuecomment-579814507
 
 
   
   ## CI report:
   
   * b687d5e1835912c7866547f19fe4aec7c1399433 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/146612052) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4658)
 
   * 876ba140fa47ade2f628818d97e5a41ad18afeb3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15978) Publish Dockerfiles for release 1.10.0

2020-02-10 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-15978:
--
Labels: pull-request-available  (was: )

> Publish Dockerfiles for release 1.10.0
> -
>
> Key: FLINK-15978
> URL: https://issues.apache.org/jira/browse/FLINK-15978
> Project: Flink
>  Issue Type: Task
>  Components: Release System
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> Publish the Dockerfiles for 1.10.0 after the RC voting has passed, to finalize 
> the release process as 
> [documented|https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-docker] carp84 commented on issue #6: [FLINK-15978] Add GPG key and Dockerfiles for 1.10.0 release

2020-02-10 Thread GitBox
carp84 commented on issue #6: [FLINK-15978] Add GPG key and Dockerfiles for 
1.10.0 release
URL: https://github.com/apache/flink-docker/pull/6#issuecomment-584446961
 
 
   Note: we need to hold off merging until the vote on the release candidate 
has passed. Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink-docker] carp84 opened a new pull request #6: [FLINK-15978] Add GPG key and Dockerfiles for 1.10.0 release

2020-02-10 Thread GitBox
carp84 opened a new pull request #6: [FLINK-15978] Add GPG key and Dockerfiles 
for 1.10.0 release
URL: https://github.com/apache/flink-docker/pull/6
 
 
   Add GPG key and Dockerfiles for Flink 1.10.0 release


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15978) Publish Dockerfiles for release 1.10.0

2020-02-10 Thread Yu Li (Jira)
Yu Li created FLINK-15978:
-

 Summary: Publish Dockerfiles for release 1.10.0
 Key: FLINK-15978
 URL: https://issues.apache.org/jira/browse/FLINK-15978
 Project: Flink
  Issue Type: Task
  Components: Release System
Affects Versions: 1.10.0
Reporter: Yu Li
Assignee: Yu Li
 Fix For: 1.10.0


Publish the Dockerfiles for 1.10.0 after the RC voting has passed, to finalize the 
release process as 
[documented|https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] sunjincheng121 commented on issue #11050: [FLINK-15875][python] Bump Beam to 2.19.0

2020-02-10 Thread GitBox
sunjincheng121 commented on issue #11050: [FLINK-15875][python] Bump Beam to 
2.19.0
URL: https://github.com/apache/flink/pull/11050#issuecomment-584443039
 
 
   @flinkbot approve-until architecture


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11053: [FLINK-15977][docs] Update pull request template to include Kubernetes as deployment candidates

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #11053: [FLINK-15977][docs] Update pull 
request template to include Kubernetes as deployment candidates
URL: https://github.com/apache/flink/pull/11053#issuecomment-584419067
 
 
   
   ## CI report:
   
   * be90c0f90a7a7cd752787c4710a0c4e5acbf74fe Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/148294930) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5034)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on issue #11029: [FLINK-15933][catalog][doc] Update content of how generic table schem…

2020-02-10 Thread GitBox
lirui-apache commented on issue #11029: [FLINK-15933][catalog][doc] Update 
content of how generic table schem…
URL: https://github.com/apache/flink/pull/11029#issuecomment-584434817
 
 
   > > @bowenli86 We don't have any console output for "DESCRIBE mykafka;" now. 
I'll add the output for "DESCRIBE FORMATTED mykafka;" instead.
   > 
   > why we don't have any console output for describing a table? Seems like a 
bug?
   
   Because we have changed to store the generic table schema as properties, 
from Hive's perspective the table doesn't have any columns.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11053: [FLINK-15977][docs] Update pull request template to include Kubernetes as deployment candidates

2020-02-10 Thread GitBox
flinkbot edited a comment on issue #11053: [FLINK-15977][docs] Update pull 
request template to include Kubernetes as deployment candidates
URL: https://github.com/apache/flink/pull/11053#issuecomment-584419067
 
 
   
   ## CI report:
   
   * be90c0f90a7a7cd752787c4710a0c4e5acbf74fe Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/148294930) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5034)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11053: [FLINK-15977][docs] Update pull request template to include Kubernetes as deployment candidates

2020-02-10 Thread GitBox
flinkbot commented on issue #11053: [FLINK-15977][docs] Update pull request 
template to include Kubernetes as deployment candidates
URL: https://github.com/apache/flink/pull/11053#issuecomment-584419067
 
 
   
   ## CI report:
   
   * be90c0f90a7a7cd752787c4710a0c4e5acbf74fe UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on issue #10963: [FLINK-15652][Kubernetes] Support for imagePullSecrets k8s option.

2020-02-10 Thread GitBox
TisonKun commented on issue #10963: [FLINK-15652][Kubernetes] Support for 
imagePullSecrets k8s option.
URL: https://github.com/apache/flink/pull/10963#issuecomment-584412625
 
 
   Thanks for your contribution @zhengcanbin . Changes LGTM.
   
   Would you please rebase on master and retest, since the build fails above? 
Ping me once the tests pass.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11053: [FLINK-15977][docs] Update pull request template to include Kubernetes as deployment candidates

2020-02-10 Thread GitBox
flinkbot commented on issue #11053: [FLINK-15977][docs] Update pull request 
template to include Kubernetes as deployment candidates
URL: https://github.com/apache/flink/pull/11053#issuecomment-584412257
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit be90c0f90a7a7cd752787c4710a0c4e5acbf74fe (Mon Feb 10 
23:38:27 UTC 2020)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15977) Update pull request template to include Kubernetes as deployment candidates

2020-02-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15977:
---
Labels: pull-request-available  (was: )

> Update pull request template to include Kubernetes as deployment candidates
> ---
>
> Key: FLINK-15977
> URL: https://issues.apache.org/jira/browse/FLINK-15977
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] TisonKun opened a new pull request #11053: [FLINK-15977][docs] Update pull request template to include Kubernetes as deployment candidates

2020-02-10 Thread GitBox
TisonKun opened a new pull request #11053: [FLINK-15977][docs] Update pull 
request template to include Kubernetes as deployment candidates
URL: https://github.com/apache/flink/pull/11053
 
 
   ## What is the purpose of the change
   
   Update pull request template to include Kubernetes as deployment candidates
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   
   cc @zentol 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15977) Update pull request template to include Kubernetes as deployment candidates

2020-02-10 Thread Zili Chen (Jira)
Zili Chen created FLINK-15977:
-

 Summary: Update pull request template to include Kubernetes as 
deployment candidates
 Key: FLINK-15977
 URL: https://issues.apache.org/jira/browse/FLINK-15977
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Zili Chen
Assignee: Zili Chen






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] TisonKun commented on a change in pull request #10956: [FLINK-15646][client]Configurable K8s context support.

2020-02-10 Thread GitBox
TisonKun commented on a change in pull request #10956: 
[FLINK-15646][client]Configurable K8s context support.
URL: https://github.com/apache/flink/pull/10956#discussion_r377378089
 
 

 ##
 File path: flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/KubeClientFactory.java
 ##
 @@ -42,18 +42,23 @@ public static FlinkKubeClient fromConfiguration(Configuration flinkConfig) {
 
 		final Config config;
 
+		final String kubeContext = flinkConfig.getString(KubernetesConfigOptions.CONTEXT);
+		if (kubeContext != null) {
+			LOG.info("Configuring K8S client using context {}.", kubeContext);
+		}
+
 		final String kubeConfigFile = flinkConfig.getString(KubernetesConfigOptions.KUBE_CONFIG_FILE);
 		if (kubeConfigFile != null) {
 			LOG.debug("Trying to load kubernetes config from file: {}.", kubeConfigFile);
 			try {
-				config = Config.fromKubeconfig(KubernetesUtils.getContentFromFile(kubeConfigFile));
+				config = Config.fromKubeconfig(kubeContext, KubernetesUtils.getContentFromFile(kubeConfigFile), null);
 			} catch (IOException e) {
 				throw new KubernetesClientException("Load kubernetes config failed.", e);
 			}
 		} else {
 			LOG.debug("Trying to load default kubernetes config.");
 			// Null means load from default context
 
 Review comment:
   We should update the comment here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TisonKun commented on a change in pull request #10956: [FLINK-15646][client]Configurable K8s context support.

2020-02-10 Thread GitBox
TisonKun commented on a change in pull request #10956: 
[FLINK-15646][client]Configurable K8s context support.
URL: https://github.com/apache/flink/pull/10956#discussion_r377378601
 
 

 ##
 File path: flink-kubernetes/src/main/java/org/apache/flink/kubernetes/configuration/KubernetesConfigOptions.java
 ##
 @@ -29,6 +29,14 @@
 @PublicEvolving
 public class KubernetesConfigOptions {
 
+	public static final ConfigOption<String> CONTEXT =
+		key("kubernetes.context")
+			.stringType()
+			.noDefaultValue()
+			.withDescription("The desired context from your K8s config file used to configure the K8s client for " +
 
 Review comment:
   and here s/K8s/Kubernetes/


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

