[jira] [Commented] (FLINK-19116) Support more kinds of data for expressions.lit in the Python Table API

2020-09-08 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192680#comment-17192680
 ] 

Dian Fu commented on FLINK-19116:
-

After discussing with [~zhongwei] offline, I think this problem doesn't exist 
and we can work around it. For example, if users want to create a literal of 
Date type, they can use `lit("2013-01-01").to_date`. I'm closing this ticket 
for now.

> Support more kinds of data for expressions.lit in the Python Table API
> --
>
> Key: FLINK-19116
> URL: https://issues.apache.org/jira/browse/FLINK-19116
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Dian Fu
>Priority: Major
> Fix For: 1.12.0
>
>
> As discussed in 
> https://github.com/apache/flink/pull/13278#discussion_r481005165
> For expressions.lit,
> {code}
> The current implementation only supports basic types like int, str, and bool. It
> would be better to support more data types such as BigDecimal, Timestamp, and
> other atomic/complex data types. All supported types can be found in the Java
> class org.apache.flink.table.types.utils.ValueDataTypeConverter.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-19116) Support more kinds of data for expressions.lit in the Python Table API

2020-09-08 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-19116.
---
Fix Version/s: (was: 1.12.0)
   Resolution: Not A Problem

> Support more kinds of data for expressions.lit in the Python Table API
> --
>
> Key: FLINK-19116
> URL: https://issues.apache.org/jira/browse/FLINK-19116
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Dian Fu
>Priority: Major
>
> As discussed in 
> https://github.com/apache/flink/pull/13278#discussion_r481005165
> For expressions.lit,
> {code}
> The current implementation only supports basic types like int, str, and bool. It
> would be better to support more data types such as BigDecimal, Timestamp, and
> other atomic/complex data types. All supported types can be found in the Java
> class org.apache.flink.table.types.utils.ValueDataTypeConverter.
> {code}





[jira] [Updated] (FLINK-19167) Process Function Example could not work

2020-09-08 Thread tinny cat (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tinny cat updated FLINK-19167:
--
Description: 
Section "*Process Function Example*" of 
[https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/process_function.html]
 currently reads:
{code:java}
// Some comments here
@Override
public void processElement(
        Tuple2<String, String> value,
        Context ctx,
        Collector<Tuple2<String, Long>> out) throws Exception {

    // retrieve the current count
    CountWithTimestamp current = state.value();
    if (current == null) {
        current = new CountWithTimestamp();
        current.key = value.f0;
    }

    // update the state's count
    current.count++;

    // set the state's timestamp to the record's assigned event time timestamp
    current.lastModified = ctx.timestamp();

    // write the state back
    state.update(current);

    // schedule the next timer 60 seconds from the current event time
    ctx.timerService().registerEventTimeTimer(current.lastModified + 60000);
}

@Override
public void onTimer(
        long timestamp,
        OnTimerContext ctx,
        Collector<Tuple2<String, Long>> out) throws Exception {

    // get the state for the key that scheduled the timer
    CountWithTimestamp result = state.value();

    // check if this is an outdated timer or the latest timer
    // this will never happen
    if (timestamp == result.lastModified + 60000) {
        // emit the state on timeout
        out.collect(new Tuple2<String, Long>(result.key, result.count));
    }
}
{code}
however, it should be: 
{code:java}
@Override
public void processElement(
        Tuple2<String, String> value,
        Context ctx,
        Collector<Tuple2<String, Long>> out) throws Exception {

    // retrieve the current count
    CountWithTimestamp current = state.value();
    if (current == null) {
        current = new CountWithTimestamp();
        current.key = value.f0;
    }

    // update the state's count
    current.count++;

    // set the state's timestamp to the record's assigned event time timestamp
    // it should be the previous watermark
    current.lastModified = ctx.timerService().currentWatermark();

    // write the state back
    state.update(current);

    // schedule the next timer 60 seconds from the current event time
    ctx.timerService().registerEventTimeTimer(current.lastModified + 60000);
}

@Override
public void onTimer(
        long timestamp,
        OnTimerContext ctx,
        Collector<Tuple2<String, Long>> out) throws Exception {

    // get the state for the key that scheduled the timer
    CountWithTimestamp result = state.value();

    // check if this is an outdated timer or the latest timer
    if (timestamp == result.lastModified + 60000) {
        // emit the state on timeout
        out.collect(new Tuple2<String, Long>(result.key, result.count));
    }
}
{code}

`current.lastModified = ctx.timestamp();` should be `current.lastModified = 
ctx.timerService().currentWatermark();`. Otherwise, `timestamp == 
result.lastModified + 60000` would never be true.
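Whether the documented example or the proposed fix is right hinges on Flink's event-time timer semantics: a timer registered for time t fires once the watermark reaches t, and `onTimer` receives t itself as its `timestamp` argument. A minimal stdlib-only Java simulation of just those semantics (the `SimpleTimerService` and `TimerDemo` names are invented for illustration; this is not Flink code):

```java
import java.util.TreeSet;
import java.util.function.LongConsumer;

// Toy model of event-time timer semantics: a timer registered for time t
// fires once the watermark advances to >= t, and the callback receives t
// itself as its `timestamp` argument.
class SimpleTimerService {
    private final TreeSet<Long> timers = new TreeSet<>();
    private long currentWatermark = Long.MIN_VALUE;

    void registerEventTimeTimer(long time) { timers.add(time); }

    long currentWatermark() { return currentWatermark; }

    // Advance the watermark and fire every timer whose time has been reached.
    void advanceWatermark(long watermark, LongConsumer onTimer) {
        currentWatermark = watermark;
        while (!timers.isEmpty() && timers.first() <= watermark) {
            onTimer.accept(timers.pollFirst());
        }
    }
}

public class TimerDemo {
    // Register a timer 60 seconds after lastModified, advance the watermark,
    // and return the timestamp passed to onTimer (-1 if the timer never fired).
    public static long firedAt(long lastModified, long watermark) {
        SimpleTimerService svc = new SimpleTimerService();
        svc.registerEventTimeTimer(lastModified + 60_000);
        long[] fired = {-1};
        svc.advanceWatermark(watermark, t -> fired[0] = t);
        return fired[0];
    }

    public static void main(String[] args) {
        System.out.println(firedAt(1_000, 50_000)); // watermark too low: nothing fires
        System.out.println(firedAt(1_000, 61_000)); // fires with the registered time
    }
}
```

Note that in this model the firing callback always observes the registered time `lastModified + 60_000`, regardless of how the watermark relates to the element timestamps.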



[GitHub] [flink] flinkbot edited a comment on pull request #13348: [FLINK-19119][python][docs] Update the documentation to use Expression instead of strings in the Python Table API

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13348:
URL: https://github.com/apache/flink/pull/13348#issuecomment-688594233


   
   ## CI report:
   
   * 2e3b454f97a9db63b5fc12d7504ebffe091f9035 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6319)
 
   * eb8f73886475ac78f0ed6e6384c8439371a5a1d6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] leonardBang commented on pull request #13289: [FLINK-18548][table-planner] support flexible syntax for Temporal table join

2020-09-08 Thread GitBox


leonardBang commented on pull request #13289:
URL: https://github.com/apache/flink/pull/13289#issuecomment-689347470


   @danny0405 I have addressed your comments, could you take another look?







[GitHub] [flink-docker] rmetzger commented on pull request #23: [FLINK-14241][test]Add arm64 support for docker e2e test

2020-09-08 Thread GitBox


rmetzger commented on pull request #23:
URL: https://github.com/apache/flink-docker/pull/23#issuecomment-689347085


   I'm sorry that I have to say it again, but I don't think the current 
approach is correct.
   
   The `generate.sh` script generates a Dockerfile and metadata file, then this 
file is used to create a stackbrew file, which is committed here: 
https://github.com/docker-library/official-images/blob/master/library/flink.
   Then DockerHub does some magic to turn this into the official images.
   
   If we want to add support for arm64 to the official Flink docker images, we 
probably need to generate an additional Dockerfile and metadata file with an 
updated FROM, and a correct `release.metadata` file.
   
   I believe that your change works on ARM64 with the Flink e2e tests, but this 
repository is mostly used for building the official Flink docker images, and we 
need to make sure we are not breaking them with this change.
   







[GitHub] [flink] flinkbot edited a comment on pull request #13341: [FLINK-15854][hive][table-planner-blink] Use the new type inference f…

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13341:
URL: https://github.com/apache/flink/pull/13341#issuecomment-688346241


   
   ## CI report:
   
   * b12ee5981dccf9dce8e59d457d00088b77016b83 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6358)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Created] (FLINK-19168) Upgrade Kafka client version

2020-09-08 Thread darion yaphet (Jira)
darion yaphet created FLINK-19168:
-

 Summary: Upgrade Kafka client version
 Key: FLINK-19168
 URL: https://issues.apache.org/jira/browse/FLINK-19168
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Reporter: darion yaphet


Currently we are using Kafka client 0.11.0.2, which was released in 2017, while 
the latest version is 2.6.0. Is there a reason it hasn't been updated?





[jira] [Created] (FLINK-19167) Process Function Example could not work

2020-09-08 Thread tinny cat (Jira)
tinny cat created FLINK-19167:
-

 Summary: Process Function Example could not work
 Key: FLINK-19167
 URL: https://issues.apache.org/jira/browse/FLINK-19167
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.11.1
Reporter: tinny cat


Section "*Process Function Example*" of 
[https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/process_function.html]
 currently reads:
{code:java}
// Some comments here
@Override
public void processElement(
        Tuple2<String, String> value,
        Context ctx,
        Collector<Tuple2<String, Long>> out) throws Exception {

    // retrieve the current count
    CountWithTimestamp current = state.value();
    if (current == null) {
        current = new CountWithTimestamp();
        current.key = value.f0;
    }

    // update the state's count
    current.count++;

    // set the state's timestamp to the record's assigned event time timestamp
    current.lastModified = ctx.timestamp();

    // write the state back
    state.update(current);

    // schedule the next timer 60 seconds from the current event time
    ctx.timerService().registerEventTimeTimer(current.lastModified + 60000);
}

@Override
public void onTimer(
        long timestamp,
        OnTimerContext ctx,
        Collector<Tuple2<String, Long>> out) throws Exception {

    // get the state for the key that scheduled the timer
    CountWithTimestamp result = state.value();

    // check if this is an outdated timer or the latest timer
    // this will never happen
    if (timestamp == result.lastModified + 60000) {
        // emit the state on timeout
        out.collect(new Tuple2<String, Long>(result.key, result.count));
    }
}
{code}
however, it should be:
{code:java}
@Override
public void processElement(
        Tuple2<String, String> value,
        Context ctx,
        Collector<Tuple2<String, Long>> out) throws Exception {

    // retrieve the current count
    CountWithTimestamp current = state.value();
    if (current == null) {
        current = new CountWithTimestamp();
        current.key = value.f0;
    }

    // update the state's count
    current.count++;

    // set the state's timestamp to the record's assigned event time timestamp
    // it should be the previous watermark
    current.lastModified = ctx.timerService().currentWatermark();

    // write the state back
    state.update(current);

    // schedule the next timer 60 seconds from the current event time
    ctx.timerService().registerEventTimeTimer(current.lastModified + 60000);
}

@Override
public void onTimer(
        long timestamp,
        OnTimerContext ctx,
        Collector<Tuple2<String, Long>> out) throws Exception {

    // get the state for the key that scheduled the timer
    CountWithTimestamp result = state.value();

    // check if this is an outdated timer or the latest timer
    if (timestamp == result.lastModified + 60000) {
        // emit the state on timeout
        out.collect(new Tuple2<String, Long>(result.key, result.count));
    }
}
{code}





[GitHub] [flink] zhuzhurk commented on pull request #13044: [FLINK-18641][checkpointing] Fix CheckpointCoordinator to work with ExternallyInducedSource

2020-09-08 Thread GitBox


zhuzhurk commented on pull request #13044:
URL: https://github.com/apache/flink/pull/13044#issuecomment-689338596


   Thanks for preparing and merging this fix! @becketqin @pnowojski 







[GitHub] [flink] rmetzger commented on a change in pull request #13275: [FLINK-19064][hbase] HBaseRowDataInputFormat is leaking resources

2020-09-08 Thread GitBox


rmetzger commented on a change in pull request #13275:
URL: https://github.com/apache/flink/pull/13275#discussion_r485362609



##
File path: 
flink-connectors/flink-connector-hbase/src/main/java/org/apache/flink/connector/hbase/source/AbstractTableInputFormat.java
##
@@ -94,25 +102,24 @@ public 
AbstractTableInputFormat(org.apache.hadoop.conf.Configuration hConf) {
 */
protected abstract T mapResultToOutType(Result r);
 
-   /**
-* Creates a {@link Scan} object and opens the {@link HTable} 
connection.
-*
-* These are opened here because they are needed in the 
createInputSplits
-* which is called before the openInputFormat method.
-*
-* The connection is opened in this method and closed in {@link 
#closeInputFormat()}.
-*
-* @param parameters The configuration that is to be used
-* @see Configuration
-*/
-   public abstract void configure(Configuration parameters);
+   @Override
+   public void configure(Configuration parameters) {
+   }
 
protected org.apache.hadoop.conf.Configuration getHadoopConfiguration() 
{
return 
HBaseConfigurationUtil.deserializeConfiguration(serializedConfig, 
HBaseConfigurationUtil.getHBaseConfiguration());
}
 
+   /**
+* Creates a {@link Scan} object and opens the {@link HTable} 
connection.
+* The connection is opened in this method and closed in {@link 
#close()}.
+*
+* @param split The split to be opened.
+* @throws IOException Thrown, if the split could not be opened due to an I/O problem.
+*/
@Override
public void open(TableInputSplit split) throws IOException {
+   initTable();
if (table == null) {
throw new IOException("The HBase table has not been 
opened! " +
"This needs to be done in configure().");

Review comment:
   I believe these (and the below) exception messages are no longer correct.

##
File path: 
flink-connectors/flink-connector-hbase/src/main/java/org/apache/flink/connector/hbase/source/AbstractTableInputFormat.java
##
@@ -186,24 +193,22 @@ public void close() throws IOException {
if (resultScanner != null) {
resultScanner.close();
}
-   } finally {
-   resultScanner = null;
-   }
-   }
-
-   @Override
-   public void closeInputFormat() throws IOException {
-   try {
if (table != null) {
table.close();
}
+   if (connection != null) {
+   connection.close();
+   }
} finally {
+   resultScanner = null;
table = null;
+   connection = null;
}
}
 
@Override
public TableInputSplit[] createInputSplits(final int minNumSplits) 
throws IOException {
+   initTable();
if (table == null) {
throw new IOException("The HBase table has not been 
opened! " +
"This needs to be done in configure().");

Review comment:
   Revisit exception message.

##
File path: 
flink-connectors/flink-connector-hbase/src/main/java/org/apache/flink/connector/hbase/source/AbstractTableInputFormat.java
##
@@ -68,6 +71,11 @@ public 
AbstractTableInputFormat(org.apache.hadoop.conf.Configuration hConf) {
serializedConfig = 
HBaseConfigurationUtil.serializeConfiguration(hConf);
}
 
+   /**
+* Creates a {@link Scan} object and opens the {@link HTable} 
connection to initialize the HBase table.
+*/
+   protected abstract void initTable();

Review comment:
   If you make this method declare `throws IOException`, you don't need to wrap the 
IOExceptions in HBaseRowDataInputFormat.connectToTable() in RuntimeExceptions.
   
   Throwing an IOException is not a problem, because initTable is called in 
open(), which throws an IOException as well.
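The refactoring suggested here can be sketched with plain JDK types; `TableFormat`, `RowDataFormat`, and `openFails` below are simplified stand-ins for the Flink/HBase classes, not their real APIs:

```java
import java.io.IOException;

// Simplified stand-in for AbstractTableInputFormat: because initTable()
// declares IOException, subclasses can propagate I/O failures directly
// instead of wrapping them in RuntimeException.
abstract class TableFormat {
    // Declared with `throws IOException`, as suggested in the review.
    protected abstract void initTable() throws IOException;

    // open() already throws IOException, so propagating is free.
    public void open(String split) throws IOException {
        initTable();
    }
}

// Simplified stand-in for HBaseRowDataInputFormat.
class RowDataFormat extends TableFormat {
    private final boolean reachable;

    RowDataFormat(boolean reachable) { this.reachable = reachable; }

    @Override
    protected void initTable() throws IOException {
        // No `throw new RuntimeException(e)` wrapper needed here.
        if (!reachable) {
            throw new IOException("cannot connect to table");
        }
    }
}

public class InitTableDemo {
    // Returns true if open() propagated an IOException for this format.
    public static boolean openFails(boolean reachable) {
        try {
            new RowDataFormat(reachable).open("split-0");
            return false;
        } catch (IOException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("unreachable table fails open: " + openFails(false));
    }
}
```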

##
File path: 
flink-connectors/flink-connector-hbase/src/main/java/org/apache/flink/connector/hbase/source/AbstractTableInputFormat.java
##
@@ -186,24 +193,22 @@ public void close() throws IOException {
if (resultScanner != null) {
resultScanner.close();
}
-   } finally {
-   resultScanner = null;
-   }
-   }
-
-   @Override
-   public void closeInputFormat() throws IOException {
-   try {
if (table != null) {
table.close();
}
+  

[GitHub] [flink] dianfu commented on a change in pull request #13348: [FLINK-19119][python][docs] Update the documentation to use Expression instead of strings in the Python Table API

2020-09-08 Thread GitBox


dianfu commented on a change in pull request #13348:
URL: https://github.com/apache/flink/pull/13348#discussion_r485368137



##
File path: docs/dev/python/table_api_tutorial.md
##
@@ -124,10 +124,12 @@ The table `mySink` has two columns, word and count, and 
writes data to the file
 You can now create a job which reads input from table `mySource`, performs 
some transformations, and writes the results to table `mySink`.
 
 {% highlight python %}
-t_env.from_path('mySource') \
-.group_by('word') \
-.select('word, count(1)') \
-.insert_into('mySink')
+from pyflink.table.expressions import lit
+
+tab = t_env.from_path('mySource')
+tab.group_by(tab.word) \
+   .select(tab.word, lit(1).count) \

Review comment:
   I think there is no need to use 4 spaces here.









[GitHub] [flink] pnowojski closed pull request #13044: [FLINK-18641][checkpointing] Fix CheckpointCoordinator to work with ExternallyInducedSource

2020-09-08 Thread GitBox


pnowojski closed pull request #13044:
URL: https://github.com/apache/flink/pull/13044


   







[GitHub] [flink] pnowojski commented on pull request #13044: [FLINK-18641][checkpointing] Fix CheckpointCoordinator to work with ExternallyInducedSource

2020-09-08 Thread GitBox


pnowojski commented on pull request #13044:
URL: https://github.com/apache/flink/pull/13044#issuecomment-689332349


   Merged, thanks for the fix @becketqin :)







[GitHub] [flink] flinkbot edited a comment on pull request #13311: [FLINK-18721] Migrate YarnResourceManager to the new YarnResourceManagerDriver

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13311:
URL: https://github.com/apache/flink/pull/13311#issuecomment-686196434


   
   ## CI report:
   
   * b7660009b48d48261b767855d8973c63741ab493 UNKNOWN
   * 5ff0956ceca9eea61cf43bdb881b84a179937abd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6282)
 
   * f16d78f22d3d66e763f5629a377936bcb20b80c4 UNKNOWN
   * dc944c1a5ce38b566ae8863d58e43b3843e43ab1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6361)
 
   * 0e0b615e8e1349efbbf9b69e1ded4388dc9fcc2b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6363)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13311: [FLINK-18721] Migrate YarnResourceManager to the new YarnResourceManagerDriver

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13311:
URL: https://github.com/apache/flink/pull/13311#issuecomment-686196434


   
   ## CI report:
   
   * b7660009b48d48261b767855d8973c63741ab493 UNKNOWN
   * 5ff0956ceca9eea61cf43bdb881b84a179937abd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6282)
 
   * f16d78f22d3d66e763f5629a377936bcb20b80c4 UNKNOWN
   * dc944c1a5ce38b566ae8863d58e43b3843e43ab1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6361)
 
   * 0e0b615e8e1349efbbf9b69e1ded4388dc9fcc2b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] wuchong commented on a change in pull request #13275: [FLINK-19064][hbase] HBaseRowDataInputFormat is leaking resources

2020-09-08 Thread GitBox


wuchong commented on a change in pull request #13275:
URL: https://github.com/apache/flink/pull/13275#discussion_r485350217



##
File path: 
flink-connectors/flink-connector-hbase/src/main/java/org/apache/flink/connector/hbase/source/AbstractTableInputFormat.java
##
@@ -186,24 +193,22 @@ public void close() throws IOException {
if (resultScanner != null) {
resultScanner.close();
}
-   } finally {
-   resultScanner = null;
-   }
-   }
-
-   @Override
-   public void closeInputFormat() throws IOException {
-   try {
if (table != null) {
table.close();
}
+   if (connection != null) {
+   connection.close();
+   }
} finally {
+   resultScanner = null;
table = null;
+   connection = null;
}
}
 
@Override
public TableInputSplit[] createInputSplits(final int minNumSplits) 
throws IOException {
+   initTable();

Review comment:
   I think it is called on the 
[JobMaster](https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionJobVertex.java#L258),
 so we should use a local connection and close the connection in the method. 
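The pattern suggested here, a connection that is opened and closed entirely inside `createInputSplits` rather than kept in task-side lifecycle methods, can be sketched with a stand-in connection type (`FakeConnection` and the method shapes below are illustrative, not the actual HBase API):

```java
// Illustrative sketch only: FakeConnection stands in for an HBase Connection.
class FakeConnection implements AutoCloseable {
    boolean closed = false;

    String[] listSplits() { return new String[]{"split-0", "split-1"}; }

    @Override
    public void close() { closed = true; }
}

public class LocalConnectionDemo {
    static FakeConnection lastConnection; // kept only so we can verify close()

    // createInputSplits runs on the JobMaster, so it uses its own local
    // connection and closes it before returning (try-with-resources).
    public static String[] createInputSplits() {
        try (FakeConnection conn = new FakeConnection()) {
            lastConnection = conn;
            return conn.listSplits();
        }
    }

    public static boolean lastConnectionClosed() {
        return lastConnection != null && lastConnection.closed;
    }

    public static void main(String[] args) {
        String[] splits = createInputSplits();
        System.out.println(splits.length + " splits; connection closed: "
                + lastConnectionClosed());
    }
}
```

The try-with-resources block guarantees the connection is closed even if split enumeration throws, which is the leak the review is about.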













[GitHub] [flink] flinkbot edited a comment on pull request #13347: [FLINK-19151][yarn]Fix the unit value according to different yarn scheduler

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13347:
URL: https://github.com/apache/flink/pull/13347#issuecomment-688594190


   
   ## CI report:
   
   * 741eefecca8abcf6f4b5092623599856850449c1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6362)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13275: [FLINK-19064][hbase] HBaseRowDataInputFormat is leaking resources

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13275:
URL: https://github.com/apache/flink/pull/13275#issuecomment-682391096


   
   ## CI report:
   
   * 27d9c90c0393f1febaec84f28e850867eed6222d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6357)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #13347: [FLINK-19151][yarn]Fix the unit value according to different yarn scheduler

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13347:
URL: https://github.com/apache/flink/pull/13347#issuecomment-688594190


   
   ## CI report:
   
   * 08cde36b22f34a53c41f7a864f50ef6cd33ca9fe Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6318)
 
   * 741eefecca8abcf6f4b5092623599856850449c1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6362)
 
   
   




[GitHub] [flink] SteNicholas commented on a change in pull request #13275: [FLINK-19064][hbase] HBaseRowDataInputFormat is leaking resources

2020-09-08 Thread GitBox


SteNicholas commented on a change in pull request #13275:
URL: https://github.com/apache/flink/pull/13275#discussion_r485337684



##
File path: 
flink-connectors/flink-connector-hbase/src/main/java/org/apache/flink/connector/hbase/source/AbstractTableInputFormat.java
##
@@ -186,24 +193,22 @@ public void close() throws IOException {
if (resultScanner != null) {
resultScanner.close();
}
-   } finally {
-   resultScanner = null;
-   }
-   }
-
-   @Override
-   public void closeInputFormat() throws IOException {
-   try {
if (table != null) {
table.close();
}
+   if (connection != null) {
+   connection.close();
+   }
} finally {
+   resultScanner = null;
table = null;
+   connection = null;
}
}
 
@Override
public TableInputSplit[] createInputSplits(final int minNumSplits) 
throws IOException {
+   initTable();

Review comment:
   > This is never closed ?
   
   @wuchong This is closed when calling `close` method. I thought that in the 
lifecycle, `createInputSplits` would be called before `close`. Therefore, I 
didn't close the connection and table.
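
The cleanup contract in the diff above — close each live resource in order, then null out the references in a `finally` block so a second `close()` is a harmless no-op — can be mirrored in a toy Python sketch (illustrative only, not the actual `AbstractTableInputFormat` code):

```python
class ScannerResources:
    """Toy mirror of the close() logic in the diff above."""

    def __init__(self, result_scanner=None, table=None, connection=None):
        self.result_scanner = result_scanner
        self.table = table
        self.connection = connection

    def close(self):
        try:
            # Close each resource only if it is still live.
            if self.result_scanner is not None:
                self.result_scanner.close()
            if self.table is not None:
                self.table.close()
            if self.connection is not None:
                self.connection.close()
        finally:
            # Clearing the fields makes a repeated close() a no-op.
            self.result_scanner = None
            self.table = None
            self.connection = None
```

Note that, as in the Java diff, if an earlier `close()` raises, the later resources are skipped; the `finally` block still clears all references.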












[GitHub] [flink] flinkbot edited a comment on pull request #13347: [FLINK-19151][yarn]Fix the unit value according to different yarn scheduler

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13347:
URL: https://github.com/apache/flink/pull/13347#issuecomment-688594190


   
   ## CI report:
   
   * 08cde36b22f34a53c41f7a864f50ef6cd33ca9fe Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6318)
 
   * 741eefecca8abcf6f4b5092623599856850449c1 UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #13311: [FLINK-18721] Migrate YarnResourceManager to the new YarnResourceManagerDriver

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13311:
URL: https://github.com/apache/flink/pull/13311#issuecomment-686196434


   
   ## CI report:
   
   * b7660009b48d48261b767855d8973c63741ab493 UNKNOWN
   * 5ff0956ceca9eea61cf43bdb881b84a179937abd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6282)
 
   * f16d78f22d3d66e763f5629a377936bcb20b80c4 UNKNOWN
   * dc944c1a5ce38b566ae8863d58e43b3843e43ab1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6361)
 
   
   




[GitHub] [flink] wuchong commented on a change in pull request #13275: [FLINK-19064][hbase] HBaseRowDataInputFormat is leaking resources

2020-09-08 Thread GitBox


wuchong commented on a change in pull request #13275:
URL: https://github.com/apache/flink/pull/13275#discussion_r485332171



##
File path: 
flink-connectors/flink-connector-hbase/src/main/java/org/apache/flink/connector/hbase/source/AbstractTableInputFormat.java
##
@@ -186,24 +193,22 @@ public void close() throws IOException {
if (resultScanner != null) {
resultScanner.close();
}
-   } finally {
-   resultScanner = null;
-   }
-   }
-
-   @Override
-   public void closeInputFormat() throws IOException {
-   try {
if (table != null) {
table.close();
}
+   if (connection != null) {
+   connection.close();
+   }
} finally {
+   resultScanner = null;
table = null;
+   connection = null;
}
}
 
@Override
public TableInputSplit[] createInputSplits(final int minNumSplits) 
throws IOException {
+   initTable();

Review comment:
   This is never closed ?

##
File path: 
flink-connectors/flink-connector-hbase/src/main/java/org/apache/flink/connector/hbase/source/TableInputSplit.java
##
@@ -26,7 +26,7 @@
  * references to row below refer to the key of the row.
  */
 @Internal
-class TableInputSplit extends LocatableInputSplit {
+public class TableInputSplit extends LocatableInputSplit {

Review comment:
   We don't need to declare it `public`.









[GitHub] [flink] flinkbot edited a comment on pull request #13311: [FLINK-18721] Migrate YarnResourceManager to the new YarnResourceManagerDriver

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13311:
URL: https://github.com/apache/flink/pull/13311#issuecomment-686196434


   
   ## CI report:
   
   * b7660009b48d48261b767855d8973c63741ab493 UNKNOWN
   * 5ff0956ceca9eea61cf43bdb881b84a179937abd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6282)
 
   * f16d78f22d3d66e763f5629a377936bcb20b80c4 UNKNOWN
   * dc944c1a5ce38b566ae8863d58e43b3843e43ab1 UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #13128: [FLINK-18795][hbase] Support for HBase 2

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13128:
URL: https://github.com/apache/flink/pull/13128#issuecomment-672766836


   
   ## CI report:
   
   * 313b80f4474455d7013b0852929b5a8458f391a1 UNKNOWN
   * 860de517377c2666a8b44c1229c09e0c3a72959b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6355)
 
   
   




[GitHub] [flink] lsyldliu commented on a change in pull request #11830: [FLINK-17096] [table] Support state ttl for Mini-Batch Group Agg using StateTtlConfig

2020-09-08 Thread GitBox


lsyldliu commented on a change in pull request #11830:
URL: https://github.com/apache/flink/pull/11830#discussion_r485324183



##
File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/aggregate/GroupAggFunction.java
##
@@ -83,61 +85,57 @@
// stores the accumulators
private transient ValueState accState = null;
 
+   private final StateTtlConfig ttlConfig;

Review comment:
   get









[GitHub] [flink] flinkbot edited a comment on pull request #13327: [FLINK-19134][python] Add BasicArrayTypeInfo and coder for PrimitiveArrayTypeInfo for Python DataStream API.

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13327:
URL: https://github.com/apache/flink/pull/13327#issuecomment-687006937


   
   ## CI report:
   
   * 352380e8ab8941541fbc1773bcd29820bcf646c3 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6204)
 
   * 11b0d42c7e32ea83fa364245f5fa5dced702d5dc Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6360)
 
   
   




[GitHub] [flink] lsyldliu commented on a change in pull request #11830: [FLINK-17096] [table] Support state ttl for Mini-Batch Group Agg using StateTtlConfig

2020-09-08 Thread GitBox


lsyldliu commented on a change in pull request #11830:
URL: https://github.com/apache/flink/pull/11830#discussion_r485319896



##
File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/aggregate/GroupAggFunction.java
##
@@ -83,61 +85,57 @@
// stores the accumulators
private transient ValueState accState = null;
 
+   private final StateTtlConfig ttlConfig;
+
/**
 * Creates a {@link GroupAggFunction}.
 *
-* @param minRetentionTime minimal state idle retention time.
-* @param maxRetentionTime maximal state idle retention time.
 * @param genAggsHandler The code generated function used to handle 
aggregates.
 * @param genRecordEqualiser The code generated equaliser used to equal 
RowData.
 * @param accTypes The accumulator types.
 * @param indexOfCountStar The index of COUNT(*) in the aggregates.
 *  -1 when the input doesn't contain COUNT(*), 
i.e. doesn't contain retraction messages.
 *  We make sure there is a COUNT(*) if input 
stream contains retraction.
 * @param generateUpdateBefore Whether this operator will generate 
UPDATE_BEFORE messages.
+* @param minRetentionTime minimal state idle retention time which unit 
is MILLISECONDS.
 */
public GroupAggFunction(
-   long minRetentionTime,
-   long maxRetentionTime,
GeneratedAggsHandleFunction genAggsHandler,
GeneratedRecordEqualiser genRecordEqualiser,
LogicalType[] accTypes,
int indexOfCountStar,
-   boolean generateUpdateBefore) {
-   super(minRetentionTime, maxRetentionTime);
+   boolean generateUpdateBefore,
+   long minRetentionTime) {

Review comment:
   OK
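
The `minRetentionTime` parameter in the diff above is a minimal state idle retention time in milliseconds: with a `StateTtlConfig` built from it, state that has been idle longer than the retention time is treated as expired. A toy Python model of that idea (an illustrative sketch with an injectable clock, not Flink's state backend):

```python
class TtlValueState:
    """Toy value state that expires after `min_retention_ms` of idleness."""

    def __init__(self, min_retention_ms, clock):
        self.retention_s = min_retention_ms / 1000.0
        self.clock = clock          # injectable clock, e.g. time.monotonic
        self._value = None
        self._last_access = None

    def update(self, value):
        self._value = value
        self._last_access = self.clock()

    def value(self):
        if self._last_access is None:
            return None
        if self.clock() - self._last_access > self.retention_s:
            # Idle longer than the retention time: the state is cleared.
            self._value, self._last_access = None, None
            return None
        return self._value
```

In the real operator the TTL is enforced by the state backend itself rather than on read, but the observable effect is the same: values idle longer than `minRetentionTime` disappear.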









[GitHub] [flink] flinkbot edited a comment on pull request #13358: [FLINK-11779] CLI ignores -m parameter if high-availability is ZOOKEEPER

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13358:
URL: https://github.com/apache/flink/pull/13358#issuecomment-689274090


   
   ## CI report:
   
   * af452f632d2b733634a88f2a91c65fe9837f99d6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6359)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #13311: [FLINK-18721] Migrate YarnResourceManager to the new YarnResourceManagerDriver

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13311:
URL: https://github.com/apache/flink/pull/13311#issuecomment-686196434


   
   ## CI report:
   
   * b7660009b48d48261b767855d8973c63741ab493 UNKNOWN
   * 5ff0956ceca9eea61cf43bdb881b84a179937abd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6282)
 
   * f16d78f22d3d66e763f5629a377936bcb20b80c4 UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #13327: [FLINK-19134][python] Add BasicArrayTypeInfo and coder for PrimitiveArrayTypeInfo for Python DataStream API.

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13327:
URL: https://github.com/apache/flink/pull/13327#issuecomment-687006937


   
   ## CI report:
   
   * 352380e8ab8941541fbc1773bcd29820bcf646c3 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6204)
 
   * 11b0d42c7e32ea83fa364245f5fa5dced702d5dc UNKNOWN
   
   




[GitHub] [flink] shuiqiangchen commented on pull request #13327: [FLINK-19134][python] Add BasicArrayTypeInfo and coder for PrimitiveArrayTypeInfo for Python DataStream API.

2020-09-08 Thread GitBox


shuiqiangchen commented on pull request #13327:
URL: https://github.com/apache/flink/pull/13327#issuecomment-689275593


   @HuangXingBo Thanks a lot for your comments, I have updated the PR according 
to your suggestions, please have a look at it.







[GitHub] [flink] WeiZhong94 commented on a change in pull request #13348: [FLINK-19119][python][docs] Update the documentation to use Expression instead of strings in the Python Table API

2020-09-08 Thread GitBox


WeiZhong94 commented on a change in pull request #13348:
URL: https://github.com/apache/flink/pull/13348#discussion_r485301884



##
File path: docs/dev/python/table_api_tutorial.md
##
@@ -124,10 +124,12 @@ The table `mySink` has two columns, word and count, and 
writes data to the file
 You can now create a job which reads input from table `mySource`, preforms 
some transformations, and writes the results to table `mySink`.
 
 {% highlight python %}
-t_env.from_path('mySource') \
-.group_by('word') \
-.select('word, count(1)') \
-.insert_into('mySink')
+from pyflink.table.expressions import lit
+
+tab = t_env.from_path('mySource')
+tab.group_by(tab.word) \
+   .select(tab.word, lit(1).count) \

Review comment:
   The indent should be 4 spaces.

##
File path: docs/dev/python/table_api_tutorial.md
##
@@ -167,10 +170,10 @@ t_env.connect(FileSystem().path('/tmp/output')) \
  .field('count', DataTypes.BIGINT())) \
 .create_temporary_table('mySink')
 
-t_env.from_path('mySource') \
-.group_by('word') \
-.select('word, count(1)') \
-.insert_into('mySink')
+tab = t_env.from_path('mySource')
+tab.group_by(tab.word) \
+   .select(tab.word, lit(1).count) \

Review comment:
   ditto

##
File path: docs/dev/python/table_api_tutorial.zh.md
##
@@ -171,10 +174,10 @@ t_env.connect(FileSystem().path('/tmp/output')) \
  .field('count', DataTypes.BIGINT())) \
 .create_temporary_table('mySink')
 
-t_env.from_path('mySource') \
-.group_by('word') \
-.select('word, count(1)') \
-.insert_into('mySink')
+tab = t_env.from_path('mySource')
+tab.group_by(tab.word) \
+   .select(tab.word, lit(1).count) \

Review comment:
   ditto

##
File path: docs/dev/table/tableApi.md
##
@@ -2305,7 +2326,7 @@ The following example shows how to define a window 
aggregation with additional g
 # define window with alias w, group the table by attribute a and window w,
 # then aggregate
 table = input.window([w: GroupWindow].alias("w")) \
- .group_by("w, a").select("b.sum")
+ .group_by(col('w'), col('a')).select(input.b.sum)

Review comment:
   col('a') -> input.a ?

##
File path: docs/dev/python/table-api-users-guide/conversion_of_pandas.md
##
@@ -74,7 +74,7 @@ import numpy as np
 
 # Create a PyFlink Table
 pdf = pd.DataFrame(np.random.rand(1000, 2))
-table = t_env.from_pandas(pdf, ["a", "b"]).filter("a > 0.5")
+table = t_env.from_pandas(pdf, ["a", "b"]).filter(col('a' > 0.5))

Review comment:
   col('a' > 0.5) -> col('a') > 0.5 ?
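
   The parenthesization matters here: `col` expects a column *name*, and the comparison is then built by operator overloading on the expression object that `col` returns. In `col('a' > 0.5)` the string comparison `'a' > 0.5` is evaluated first, which raises a `TypeError` in Python 3. A toy sketch of how such an expression builder works (illustrative only, not PyFlink's actual `col` implementation):

```python
class Expression:
    """Minimal stand-in for a Table API column expression."""

    def __init__(self, text):
        self._text = text

    def __gt__(self, other):
        # Overloaded `>` builds a new expression node instead of a bool.
        return Expression(f"greaterThan({self._text}, {other!r})")

    def __repr__(self):
        return self._text


def col(name: str) -> Expression:
    # `col` takes a column *name*; in col('a' > 0.5) the comparison
    # 'a' > 0.5 runs first and fails before col is even called.
    return Expression(f"col('{name}')")


print(repr(col('a') > 0.5))  # → greaterThan(col('a'), 0.5)
```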

##
File path: docs/dev/table/tableApi.md
##
@@ -1131,22 +1141,23 @@ result = 
orders.over_window(Over.partition_by("a").order_by("rowtime")
   
 Similar to a SQL DISTINCT aggregation clause such as COUNT(DISTINCT 
a). Distinct aggregation declares that an aggregation function (built-in or 
user-defined) is only applied on distinct input values. Distinct can be applied 
to GroupBy Aggregation, GroupBy Window Aggregation and Over 
Window Aggregation.
 {% highlight python %}
+from pyflink.table.expressions import col, lit, UNBOUNDED_RANGE
+
 orders = t_env.from_path("Orders")
 # Distinct aggregation on group by
-group_by_distinct_result = orders.group_by("a") \
- .select("a, b.sum.distinct as d")
+group_by_distinct_result = orders.group_by(orders.a) \
+ .select(orders.a, 
orders.b.sum.distinct.alias('d'))
 # Distinct aggregation on time window group by
 group_by_window_distinct_result = orders.window(
-Tumble.over("5.minutes").on("rowtime").alias("w")).group_by("a, w") \
-.select("a, b.sum.distinct as d")
+
Tumble.over(lit(5).minutes).on(orders.rowtime).alias("w")).group_by(col('a'), 
col('w')) \

Review comment:
   col('a') -> orders.a ?

##
File path: docs/dev/python/table_api_tutorial.zh.md
##
@@ -128,10 +128,12 @@ t_env.sql_update(my_sink_ddl)
 接下来,我们介绍如何创建一个作业:该作业读取表`mySource`中的数据,进行一些变换,然后将结果写入表`mySink`。
 
 {% highlight python %}
-t_env.scan('mySource') \
-.group_by('word') \
-.select('word, count(1)') \
-.insert_into('mySink')
+from pyflink.table.expressions import lit
+
+tab = t_env.from_path('mySource')
+tab.group_by(tab.word) \
+   .select(tab.word, lit(1).count) \

Review comment:
   ditto

##
File path: docs/dev/table/tableApi.md
##
@@ -202,13 +202,15 @@ val result: Table = orders
 
 {% highlight python %}
 # specify table program
+from pyflink.table.expressions import col, lit
+
 orders = t_env.from_path("Orders")  # schema (a, b, c, rowtime)
 
-result = orders.filter("a.isNotNull && b.isNotNull && c.isNotNull") \
-   .select("a.lowerCase() as a, b, rowtime") \
-   
.window(Tumble.over("1.hour").on("rowtime").alias("hourlyWindow")) \
-   .group_by("hourlyWindow, a") \
-   .select("a, hourlyWindow.end as hour, b.avg

[GitHub] [flink] flinkbot commented on pull request #13358: [FLINK-11779] CLI ignores -m parameter if high-availability is ZOOKEEPER

2020-09-08 Thread GitBox


flinkbot commented on pull request #13358:
URL: https://github.com/apache/flink/pull/13358#issuecomment-689274090


   
   ## CI report:
   
   * af452f632d2b733634a88f2a91c65fe9837f99d6 UNKNOWN
   
   




[jira] [Commented] (FLINK-16048) Support read/write confluent schema registry avro data from Kafka

2020-09-08 Thread Xingcan Cui (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192590#comment-17192590
 ] 

Xingcan Cui commented on FLINK-16048:
-

Hi all, thanks for your effort on this feature. I wonder if it's possible to 
provide the authentication info (i.e., {{value.converter.basic.auth.user.info}}) 
via this new schema class?

> Support read/write confluent schema registry avro data  from Kafka
> --
>
> Key: FLINK-16048
> URL: https://issues.apache.org/jira/browse/FLINK-16048
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.12.0
>
>
> *The background*
> I found the SQL Kafka connector cannot consume Avro data that was serialized by 
> `KafkaAvroSerializer`; it can only consume Row data with an Avro schema, because 
> we use `AvroRowDeserializationSchema`/`AvroRowSerializationSchema` to serialize/deserialize 
> data in `AvroRowFormatFactory`. 
> I think we should support this because `KafkaAvroSerializer` is very common 
> in Kafka, and someone met the same question on Stack Overflow [1].
> [[1]https://stackoverflow.com/questions/56452571/caused-by-org-apache-avro-avroruntimeexception-malformed-data-length-is-negat/56478259|https://stackoverflow.com/questions/56452571/caused-by-org-apache-avro-avroruntimeexception-malformed-data-length-is-negat/56478259]
> *The format details*
> _The factory identifier (or format id)_
> There are 2 candidates now ~
> - {{avro-sr}}: the pattern borrowed from KSQL {{JSON_SR}} format [1]
> - {{avro-confluent}}: the pattern borrowed from Clickhouse {{AvroConfluent}} 
> [2]
> Personally, I would prefer {{avro-sr}} because it is more concise, and 
> Confluent is a company name, which I think is not that suitable for a format 
> name.
> _The format attributes_
> || Options || required || Remark ||
> | schema-registry.url | true | URL to connect to schema registry service |
> | schema-registry.subject | false | Subject name to write to the Schema 
> Registry service, required for sink |



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19166) StreamingFileWriter should register Listener before the initialization of buckets

2020-09-08 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-19166:
-
Issue Type: Bug  (was: New Feature)

> StreamingFileWriter should register Listener before the initialization of 
> buckets
> -
>
> Key: FLINK-19166
> URL: https://issues.apache.org/jira/browse/FLINK-19166
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.1
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.11.2
>
>
> In 
> [http://apache-flink.147419.n8.nabble.com/StreamingFileSink-hive-metadata-td6898.html]
> User feedback indicates that some partitions have not been committed 
> since the job failed.
> This may be due to FLINK-18110, which fixed {{Buckets}} but forgot to fix 
> {{StreamingFileWriter}}: it should register the listener before the 
> initialization of buckets; otherwise, listening will be lost as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19166) StreamingFileWriter should register Listener before the initialization of buckets

2020-09-08 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-19166:


 Summary: StreamingFileWriter should register Listener before the 
initialization of buckets
 Key: FLINK-19166
 URL: https://issues.apache.org/jira/browse/FLINK-19166
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Runtime
Affects Versions: 1.11.1
Reporter: Jingsong Lee
Assignee: Jingsong Lee
 Fix For: 1.11.2


In 
[http://apache-flink.147419.n8.nabble.com/StreamingFileSink-hive-metadata-td6898.html]

User feedback indicates that some partitions have not been committed 
since the job failed.

This may be due to FLINK-18110, which fixed {{Buckets}} but forgot to fix 
{{StreamingFileWriter}}: it should register the listener before the 
initialization of buckets; otherwise, listening will be lost as well.
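
The ordering bug described here — initializing the buckets before the listener is registered, so events emitted during initialization (such as restored partitions) are never observed — can be sketched as a toy model (illustrative only, not the StreamingFileWriter code):

```python
class Buckets:
    """On construction (think: state restore), a bucket event is emitted."""

    def __init__(self, listener=None):
        self.listener = listener
        self._emit("partition-restored")   # happens during initialization

    def _emit(self, event):
        if self.listener is not None:
            self.listener(event)


seen = []

# Buggy order: initialize buckets first, register the listener afterwards.
buckets = Buckets()
buckets.listener = seen.append        # too late: restore event already lost

# Fixed order: the listener is registered before buckets are initialized.
buckets = Buckets(listener=seen.append)

print(seen)  # → ['partition-restored']  (only the fixed order observed it)
```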



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13341: [FLINK-15854][hive][table-planner-blink] Use the new type inference f…

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13341:
URL: https://github.com/apache/flink/pull/13341#issuecomment-688346241


   
   ## CI report:
   
   * 5a1c81663755cdf214e41663d486f9f9fa01db51 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6298)
 
   * b12ee5981dccf9dce8e59d457d00088b77016b83 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6358)
 
   
   




[GitHub] [flink] flinkbot commented on pull request #13358: [FLINK-11779] CLI ignores -m parameter if high-availability is ZOOKEEPER

2020-09-08 Thread GitBox


flinkbot commented on pull request #13358:
URL: https://github.com/apache/flink/pull/13358#issuecomment-689269505


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit af452f632d2b733634a88f2a91c65fe9837f99d6 (Wed Sep 09 
03:00:39 UTC 2020)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] guoweiM opened a new pull request #13358: [FLINK-11779] CLI ignores -m parameter if high-availability is ZOOKEEPER

2020-09-08 Thread GitBox


guoweiM opened a new pull request #13358:
URL: https://github.com/apache/flink/pull/13358


   
   
   ## What is the purpose of the change
   
   1. Change the help messages of the -m parameter of `DefaultCLI` and 
`FlinkYarnSessionCli`.
   2. Document that "rest.address" and "rest.address.port" are respected only if 
the high-availability is NONE.
   
   
   ## Brief change log
   
 - Remove the "-m" option from `AbstractCustomCommandLine`
 - Add the "-m" option to `DefaultCLI` with the description `This 
option is respected only if the high-availability is NONE.`
 - Add the "-m" option to `FlinkYarnSessionCli` with the description 
`Specify the yarn cluster mode if the argument equals yarn-cluster`
 - Document that "rest.address" and "rest.address.port" are respected only 
if the high-availability is NONE.
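
   For context, a hedged sketch of how the `-m` option described above is 
typically used on the command line; the host, port, and jar path are 
placeholders, and per this change the address is respected only when the 
high-availability mode is NONE:

```shell
# Standalone/session cluster: point the client at a specific JobManager.
# Host, port, and jar path below are placeholders, not from this PR.
flink run -m jobmanager-host:8081 ./examples/streaming/WordCount.jar

# YARN: the special value "yarn-cluster" selects the YARN cluster mode.
flink run -m yarn-cluster ./examples/streaming/WordCount.jar
```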
   
   ## Verifying this change
   
   Verify the help message manually.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (docs)
   
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #13341: [FLINK-15854][hive][table-planner-blink] Use the new type inference f…

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13341:
URL: https://github.com/apache/flink/pull/13341#issuecomment-688346241


   
   ## CI report:
   
   * 5a1c81663755cdf214e41663d486f9f9fa01db51 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6298)
 
   * b12ee5981dccf9dce8e59d457d00088b77016b83 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] leonardBang commented on pull request #13128: [FLINK-18795][hbase] Support for HBase 2

2020-09-08 Thread GitBox


leonardBang commented on pull request #13128:
URL: https://github.com/apache/flink/pull/13128#issuecomment-689259287


   Hi, @miklosgergely 
   I reviewed the HBase 1 connector, and maybe I can offer some help with this PR.
   Please let me know once you have a final version and need someone to 
review.







[GitHub] [flink] JingsongLi closed pull request #13016: [FLINK-18508][table] Dynamic table supports statistics and parallelism report

2020-09-08 Thread GitBox


JingsongLi closed pull request #13016:
URL: https://github.com/apache/flink/pull/13016


   







[jira] [Closed] (FLINK-14012) Failed to start job for consuming Secure Kafka after the job cancel

2020-09-08 Thread Daebeom Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daebeom Lee closed FLINK-14012.
---
Fix Version/s: 1.10.0
   Resolution: Fixed

It is fixed in recent versions (1.10.x, 1.11.x).

> Failed to start job for consuming Secure Kafka after the job cancel
> ---
>
> Key: FLINK-14012
> URL: https://issues.apache.org/jira/browse/FLINK-14012
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.9.0
> Environment: * Kubernetes 1.13.2
>  * Flink 1.9.0
>  * Kafka client library 2.2.0
>Reporter: Daebeom Lee
>Priority: Minor
> Fix For: 1.10.0
>
>
> Hello, this is Daebeom Lee.
> h2. Background
> I installed Flink 1.9.0 on our Kubernetes cluster.
> We use a Flink session cluster: we build a fat JAR, upload it via the UI, and 
> run several jobs.
> At first, our jobs start fine.
> But when we cancel some jobs, they fail to start again.
> This is the error:
> {code:java}
> // code placeholder
> java.lang.NoClassDefFoundError: 
> org/apache/kafka/common/security/scram/internals/ScramSaslClient
> at 
> org.apache.kafka.common.security.scram.internals.ScramSaslClient$ScramSaslClientFactory.createSaslClient(ScramSaslClient.java:235)
> at javax.security.sasl.Sasl.createSaslClient(Sasl.java:384)
> at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.lambda$createSaslClient$0(SaslClientAuthenticator.java:180)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslClient(SaslClientAuthenticator.java:176)
> at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.(SaslClientAuthenticator.java:168)
> at 
> org.apache.kafka.common.network.SaslChannelBuilder.buildClientAuthenticator(SaslChannelBuilder.java:254)
> at 
> org.apache.kafka.common.network.SaslChannelBuilder.lambda$buildChannel$1(SaslChannelBuilder.java:202)
> at 
> org.apache.kafka.common.network.KafkaChannel.(KafkaChannel.java:140)
> at 
> org.apache.kafka.common.network.SaslChannelBuilder.buildChannel(SaslChannelBuilder.java:210)
> at 
> org.apache.kafka.common.network.Selector.buildAndAttachKafkaChannel(Selector.java:334)
> at 
> org.apache.kafka.common.network.Selector.registerChannel(Selector.java:325)
> at org.apache.kafka.common.network.Selector.connect(Selector.java:257)
> at 
> org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:920)
> at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:287)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.trySend(ConsumerNetworkClient.java:474)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:255)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:292)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1803)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1771)
> at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaPartitionDiscoverer.getAllPartitionsForTopics(KafkaPartitionDiscoverer.java:77)
> at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.discoverPartitions(AbstractPartitionDiscoverer.java:131)
> at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.open(FlinkKafkaConsumerBase.java:508)
> at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
> at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:529)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:393)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> h2. Our workaround
>  * I think that this is a Flink JVM classloader issue.
>  * The classloader unloads when a job is canceled, and the Kafka client 
> library is included in the fat JAR.
>  * So, I placed the Kafka client library in /opt/flink/lib 
>  ** /opt/flink/lib/kafka-clients-2.2.0.jar
>  * And then all issues were solved.
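
The workaround above can be sketched as a shell step (a hedged sketch: the 
Maven repository source path is an assumption, and the jar must also be 
excluded from the fat JAR, e.g. with `provided` scope, so it is loaded only 
by the parent classloader):

```shell
# Copy the Kafka client jar onto Flink's parent classpath so it is not
# unloaded together with the job's user-code classloader on cancel.
KAFKA_CLIENTS_VERSION="2.2.0"
JAR="kafka-clients-${KAFKA_CLIENTS_VERSION}.jar"
# Assumed source location (local Maven repository); adjust as needed.
SRC="${HOME}/.m2/repository/org/apache/kafka/kafka-clients/${KAFKA_CLIENTS_VERSION}/${JAR}"
DEST="/opt/flink/lib/${JAR}"
# Print the copy command instead of running it, so the sketch is safe to dry-run.
echo "cp ${SRC} ${DEST}"
```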

[jira] [Commented] (FLINK-14012) Failed to start job for consuming Secure Kafka after the job cancel

2020-09-08 Thread Daebeom Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192568#comment-17192568
 ] 

Daebeom Lee commented on FLINK-14012:
-

I have used 1.11.0 and 1.10.x. Maybe this issue is resolved in those versions.

I'll close this issue.

Thank you for your comment.

> Failed to start job for consuming Secure Kafka after the job cancel
> ---
>
> Key: FLINK-14012
> URL: https://issues.apache.org/jira/browse/FLINK-14012
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.9.0
> Environment: * Kubernetes 1.13.2
>  * Flink 1.9.0
>  * Kafka client library 2.2.0
>Reporter: Daebeom Lee
>Priority: Minor
>
> Hello, this is Daebeom Lee.
> h2. Background
> I installed Flink 1.9.0 on our Kubernetes cluster.
> We use a Flink session cluster: we build a fat JAR, upload it via the UI, and 
> run several jobs.
> At first, our jobs start fine.
> But when we cancel some jobs, they fail to start again.
> This is the error:
> {code:java}
> // code placeholder
> java.lang.NoClassDefFoundError: 
> org/apache/kafka/common/security/scram/internals/ScramSaslClient
> at 
> org.apache.kafka.common.security.scram.internals.ScramSaslClient$ScramSaslClientFactory.createSaslClient(ScramSaslClient.java:235)
> at javax.security.sasl.Sasl.createSaslClient(Sasl.java:384)
> at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.lambda$createSaslClient$0(SaslClientAuthenticator.java:180)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslClient(SaslClientAuthenticator.java:176)
> at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.(SaslClientAuthenticator.java:168)
> at 
> org.apache.kafka.common.network.SaslChannelBuilder.buildClientAuthenticator(SaslChannelBuilder.java:254)
> at 
> org.apache.kafka.common.network.SaslChannelBuilder.lambda$buildChannel$1(SaslChannelBuilder.java:202)
> at 
> org.apache.kafka.common.network.KafkaChannel.(KafkaChannel.java:140)
> at 
> org.apache.kafka.common.network.SaslChannelBuilder.buildChannel(SaslChannelBuilder.java:210)
> at 
> org.apache.kafka.common.network.Selector.buildAndAttachKafkaChannel(Selector.java:334)
> at 
> org.apache.kafka.common.network.Selector.registerChannel(Selector.java:325)
> at org.apache.kafka.common.network.Selector.connect(Selector.java:257)
> at 
> org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:920)
> at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:287)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.trySend(ConsumerNetworkClient.java:474)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:255)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:292)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1803)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1771)
> at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaPartitionDiscoverer.getAllPartitionsForTopics(KafkaPartitionDiscoverer.java:77)
> at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.discoverPartitions(AbstractPartitionDiscoverer.java:131)
> at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.open(FlinkKafkaConsumerBase.java:508)
> at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
> at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:529)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:393)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> h2. Our workaround
>  * I think that this is a Flink JVM classloader issue.
>  * The classloader unloads when a job is canceled, and the Kafka client 
> library is included in the fat JAR.
>  * So, I placed the Kafka client library in /opt/flink/lib 
>  ** /opt/flink/

[GitHub] [flink-playgrounds] shuiqiangchen commented on pull request #16: [FLINK-19145][walkthroughs] Add PyFlink-walkthrough to Flink playground.

2020-09-08 Thread GitBox


shuiqiangchen commented on pull request #16:
URL: https://github.com/apache/flink-playgrounds/pull/16#issuecomment-689249936


   Hi @sjwiesman , could you please help review this pull request?  Thank you : 
)







[GitHub] [flink] flinkbot edited a comment on pull request #13275: [FLINK-19064][hbase] HBaseRowDataInputFormat is leaking resources

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13275:
URL: https://github.com/apache/flink/pull/13275#issuecomment-682391096


   
   ## CI report:
   
   * 74ac3764e017be069dd2eae07e893a0c8cad40e4 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6313)
 
   * 27d9c90c0393f1febaec84f28e850867eed6222d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6357)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] dianfu commented on pull request #13193: [FLINK-18918][python][docs] Add dedicated connector documentation for Python Table API

2020-09-08 Thread GitBox


dianfu commented on pull request #13193:
URL: https://github.com/apache/flink/pull/13193#issuecomment-689245491


   @sjwiesman So sorry to ping you again. Could you help to take a look at this 
PR when it's convenient for you?







[jira] [Closed] (FLINK-18641) "Failure to finalize checkpoint" error in MasterTriggerRestoreHook

2020-09-08 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin closed FLINK-18641.

Resolution: Fixed

Master: d850871518..99cd44f203

release-1.11: 3dc019eaff9a30..7286c662af636

> "Failure to finalize checkpoint" error in MasterTriggerRestoreHook
> --
>
> Key: FLINK-18641
> URL: https://issues.apache.org/jira/browse/FLINK-18641
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Brian Zhou
>Assignee: Jiangjie Qin
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.2
>
>
> https://github.com/pravega/flink-connectors is a Pravega connector for Flink. 
> The ReaderCheckpointHook[1] class uses the Flink `MasterTriggerRestoreHook` 
> interface to trigger the Pravega checkpoint during Flink checkpoints to make 
> sure the data recovery. The checkpoint recovery tests are running fine in 
> Flink 1.10, but it has below issues in Flink 1.11 causing the tests time out. 
> Suspect it is related to the checkpoint coordinator thread model changes in 
> Flink 1.11
> Error stacktrace:
> {code}
> 2020-07-09 15:39:39,999 30945 [jobmanager-future-thread-5] WARN  
> o.a.f.runtime.jobmaster.JobMaster - Error while processing checkpoint 
> acknowledgement message
> org.apache.flink.runtime.checkpoint.CheckpointException: Could not finalize 
> the pending checkpoint 3. Failure reason: Failure to finalize checkpoint.
>  at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1033)
>  at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveAcknowledgeMessage(CheckpointCoordinator.java:948)
>  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.lambda$acknowledgeCheckpoint$4(SchedulerBase.java:802)
>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.util.SerializedThrowable: Pending checkpoint has 
> not been fully acknowledged yet
>  at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:195)
>  at 
> org.apache.flink.runtime.checkpoint.PendingCheckpoint.finalizeCheckpoint(PendingCheckpoint.java:298)
>  at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1021)
>  ... 9 common frames omitted
> {code}
> More detail in this mailing thread: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Pravega-connector-cannot-recover-from-the-checkpoint-due-to-quot-Failure-to-finalize-checkpoint-quot-td36652.html
> Also in https://github.com/pravega/flink-connectors/issues/387



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] leonardBang commented on pull request #13275: [FLINK-19064][hbase] HBaseRowDataInputFormat is leaking resources

2020-09-08 Thread GitBox


leonardBang commented on pull request #13275:
URL: https://github.com/apache/flink/pull/13275#issuecomment-689242852


   Thanks @SteNicholas, LGTM once the pipeline passes.
   Before merging this, I want to CC @rmetzger and @wuchong to have a look; they 
gave guidance in the discussion. 







[GitHub] [flink] flinkbot edited a comment on pull request #13275: [FLINK-19064][hbase] HBaseRowDataInputFormat is leaking resources

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13275:
URL: https://github.com/apache/flink/pull/13275#issuecomment-682391096


   
   ## CI report:
   
   * 74ac3764e017be069dd2eae07e893a0c8cad40e4 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6313)
 
   * 27d9c90c0393f1febaec84f28e850867eed6222d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink-docker] wangxiyuan edited a comment on pull request #23: [FLINK-14241][test]Add arm64 support for docker e2e test

2020-09-08 Thread GitBox


wangxiyuan edited a comment on pull request #23:
URL: https://github.com/apache/flink-docker/pull/23#issuecomment-689237514


   > I'm not sure if this PR works as-is. The updated generator doesn't seem to 
set the architecture flag correctly: 
https://github.com/apache/flink-docker/pull/23/files#diff-0c5f825c10334c2eff0a90687f513c69R93
   > 
   > also, the architecture seems to depend on the architecture of the machine 
executing the `generate.sh` script. But this script only generates the 
Dockerfile and metadata file, not the actual docker image.
   
   Yeah, many docker images support multi-arch now, for example 
`openjdk:11-jre` [1]. For this kind of image, we don't need to change any code 
here, of course. Different machines pull the matching image automatically.
   
   But there are still some images that don't support arm64, for example the 
image `openjdk:8-jre` [2], which the e2e test uses. For this kind of image, we 
should use the image from the other official repo `arm64v8` [3].
   
   Now OpenLab runs the Flink e2e tests on arm64 with my forked `flink-docker` 
repo [4], and the tests run well [5].
   
   P.S.  In openlab test,  the 
`swr.ap-southeast-3.myhuaweicloud.com/openlab/arm64v8` is just a mirror for 
`arm64v8` to speed up the test.
   
   [1]: 
https://hub.docker.com/layers/openjdk/library/openjdk/11-jre/images/sha256-9ea29bab2ef2eebaf0d9693cfab073fbdc560f2abe37be5885eede04a41188f8?context=explore
   [2]: 
https://hub.docker.com/layers/openjdk/library/openjdk/8-jre/images/sha256-41aefd23ef94a79df24c0d534d80eefd3c1fbc4f3810f3c2211dffb2e2736fe1?context=explore
   [3]: 
https://hub.docker.com/layers/arm64v8/openjdk/8-jre/images/sha256-b915aa8067ed0f7bdadb94701109b4408ee6d90642181c87ad37ff4036e0aed5?context=explore
   [4]: 
https://github.com/wangxiyuan/flink-docker/blob/dev-master/generator.sh#L18-L22
   [5]: 
http://status.openlabtesting.org/builds?job_name=flink-end-to-end-test-cron-hadoop313
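
   An illustrative sketch (not code from the PR) of the base-image selection 
the comment describes; the fallback to the `arm64v8` repo is only needed for 
images that do not publish multi-arch manifests, such as `openjdk:8-jre`:

```shell
# Choose the base image for the generated Dockerfile by CPU architecture.
# Image names are real Docker Hub repos; the selection logic is illustrative.
select_base_image() {
  case "$1" in
    aarch64|arm64) echo "arm64v8/openjdk:8-jre" ;;  # no multi-arch manifest for openjdk:8-jre
    *)             echo "openjdk:8-jre" ;;
  esac
}

echo "FROM $(select_base_image "$(uname -m)")"
```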







[GitHub] [flink-docker] wangxiyuan edited a comment on pull request #23: [FLINK-14241][test]Add arm64 support for docker e2e test

2020-09-08 Thread GitBox


wangxiyuan edited a comment on pull request #23:
URL: https://github.com/apache/flink-docker/pull/23#issuecomment-689237514


   > I'm not sure if this PR works as-is. The updated generator doesn't seem to 
set the architecture flag correctly: 
https://github.com/apache/flink-docker/pull/23/files#diff-0c5f825c10334c2eff0a90687f513c69R93
   > 
   > also, the architecture seems to depend on the architecture of the machine 
executing the `generate.sh` script. But this script only generates the 
Dockerfile and metadata file, not the actual docker image.
   
   Yeah, many docker images support multi-arch now, for example 
`openjdk:11-jre` [1]. For this kind of image, we don't need to change any code 
here, of course. Different machines pull the matching image automatically.
   
   But there are still some images that don't support arm64, for example the 
image `openjdk:8-jre` [2], which the e2e test uses. For this kind of image, we 
should use the image from the other official repo `arm64v8` [3].
   
   Now OpenLab runs the Flink e2e tests on arm64 with my forked `flink-docker` 
repo [4], and the tests run well [5].
   
   
   [1]: 
https://hub.docker.com/layers/openjdk/library/openjdk/11-jre/images/sha256-9ea29bab2ef2eebaf0d9693cfab073fbdc560f2abe37be5885eede04a41188f8?context=explore
   [2]: 
https://hub.docker.com/layers/openjdk/library/openjdk/8-jre/images/sha256-41aefd23ef94a79df24c0d534d80eefd3c1fbc4f3810f3c2211dffb2e2736fe1?context=explore
   [3]: 
https://hub.docker.com/layers/arm64v8/openjdk/8-jre/images/sha256-b915aa8067ed0f7bdadb94701109b4408ee6d90642181c87ad37ff4036e0aed5?context=explore
   [4]: 
https://github.com/wangxiyuan/flink-docker/blob/dev-master/generator.sh#L18-L22
   [5]: 
http://status.openlabtesting.org/builds?job_name=flink-end-to-end-test-cron-hadoop313







[GitHub] [flink-docker] wangxiyuan commented on pull request #23: [FLINK-14241][test]Add arm64 support for docker e2e test

2020-09-08 Thread GitBox


wangxiyuan commented on pull request #23:
URL: https://github.com/apache/flink-docker/pull/23#issuecomment-689237514


   > I'm not sure if this PR works as-is. The updated generator doesn't seem to 
set the architecture flag correctly: 
https://github.com/apache/flink-docker/pull/23/files#diff-0c5f825c10334c2eff0a90687f513c69R93
   > 
   > also, the architecture seems to depend on the architecture of the machine 
executing the `generate.sh` script. But this script only generates the 
Dockerfile and metadata file, not the actual docker image.
   
   Actually, many docker images support multi-arch currently. For example, 
`openjdk:11-jre`
   
   
   
https://github.com/wangxiyuan/flink-docker/blob/dev-master/generator.sh#L18-L22







[GitHub] [flink-docker] wangxiyuan closed pull request #23: [FLINK-14241][test]Add arm64 support for docker e2e test

2020-09-08 Thread GitBox


wangxiyuan closed pull request #23:
URL: https://github.com/apache/flink-docker/pull/23


   







[GitHub] [flink] flinkbot edited a comment on pull request #13128: [FLINK-18795][hbase] Support for HBase 2

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13128:
URL: https://github.com/apache/flink/pull/13128#issuecomment-672766836


   
   ## CI report:
   
   * 313b80f4474455d7013b0852929b5a8458f391a1 UNKNOWN
   * bed4e3388e10406af39aef4edb136fa84864dca7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6340)
 
   * 860de517377c2666a8b44c1229c09e0c3a72959b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6355)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13128: [FLINK-18795][hbase] Support for HBase 2

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13128:
URL: https://github.com/apache/flink/pull/13128#issuecomment-672766836


   
   ## CI report:
   
   * 313b80f4474455d7013b0852929b5a8458f391a1 UNKNOWN
   * bed4e3388e10406af39aef4edb136fa84864dca7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6340)
 
   * 860de517377c2666a8b44c1229c09e0c3a72959b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13228: [FLINK-19026][network] Improve threading model of CheckpointBarrierUnaligner

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13228:
URL: https://github.com/apache/flink/pull/13228#issuecomment-679099456


   
   ## CI report:
   
   * d141f10ef060063162dc073b1cd66729f5f75a3b UNKNOWN
   * 11cb1939f8a98340acab9b795c6f1894808fb606 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6351)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13321: [FLINK-14870] Remove nullable assumption of task slot sharing group

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13321:
URL: https://github.com/apache/flink/pull/13321#issuecomment-686567896


   
   ## CI report:
   
   * d79f152fa91b8bc555af6fb2a00a8d62b184be5a UNKNOWN
   * 181e73983ad41f287d91792136d13c59b2ea037d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6347)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Comment Edited] (FLINK-19164) Release scripts break other dependency versions unintentionally

2020-09-08 Thread Serhat Soydan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192484#comment-17192484
 ] 

Serhat Soydan edited comment on FLINK-19164 at 9/8/20, 9:13 PM:


[~rmetzger], thanks for the comment. Although it is only a helper script and 
the results are reviewed by many eyes during the official release process, it 
still exists in the project and is open for anyone to use, e.g. within forked 
repos or in other scenarios. The possible side effects are not obvious, and 
everyone should be able to use it without worrying that it may break something. 
We actually experienced this issue in our project and spent hours trying to 
figure out where the problem was (there are around 185 poms that need to be 
reviewed).

The line below seems to update the versions of all modules in the project 
(within the pom files) by a blind find & replace:
 {color:#de350b}find . -name 'pom.xml' -type f -exec perl -pi -e 
's#(.*)'$OLD_VERSION'(.*)#${1}'$NEW_VERSION'${2}#'
 {} \;{color}

*A possible solution is to replace it with the Versions Maven Plugin, running 
the command below within the script (I tried it locally and it seems to work 
properly):*

{color:#de350b}mvn versions:set -DnewVersion=$NEW_VERSION 
-DprocessAllModules{color}

or invoke the plugin directly from the script, without adding it to the 
project:

{color:#de350b}mvn org.codehaus.mojo:versions-maven-plugin:2.8.1:set 
-DnewVersion=$NEW_VERSION -DprocessAllModules{color}


was (Author: ssoydan):
[~rmetzger], thanks for the comment. Although it is only a helper script and 
the results are reviewed by many eyes during the official release process, it 
still lives in the project and is open for use by people working in forked 
repos or other contexts. The possible side effect is not obvious, and everyone 
should be able to use the script without worrying that it may break something. 
We actually hit this issue in our project and spent hours trying to figure out 
where the problem was (there are around 185 poms that need to be reviewed).

The line below seems to update the versions of all modules in the project 
(within the pom files) by a blind find & replace:
{color:#de350b}find . -name 'pom.xml' -type f -exec perl -pi -e 
's#(.*)'$OLD_VERSION'(.*)#${1}'$NEW_VERSION'${2}#'
 {} \;{color}

*A possible solution is to replace it with the Versions Maven Plugin, running 
the command below within the script (I tried it locally and it seems to work 
properly):*

{color:#de350b}mvn versions:set -DnewVersion=$NEW_VERSION 
-DprocessAllModules{color}

or invoke the plugin directly from the script, without adding it to the 
project:

{color:#de350b}mvn org.codehaus.mojo:versions-maven-plugin:2.8.1:set 
-DnewVersion=$NEW_VERSION -DprocessAllModules{color}

> Release scripts break other dependency versions unintentionally
> ---
>
> Key: FLINK-19164
> URL: https://issues.apache.org/jira/browse/FLINK-19164
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Scripts, Release System
>Reporter: Serhat Soydan
>Priority: Minor
>
> All the scripts below have a line that changes the old version to the new 
> version in the pom files.
> [https://github.com/apache/flink/blob/master/tools/change-version.sh#L31]
> [https://github.com/apache/flink/blob/master/tools/releasing/create_release_branch.sh#L60]
> [https://github.com/apache/flink/blob/master/tools/releasing/update_branch_version.sh#L52]
>  
> They work as a plain find & replace, so they are prone to unintentional 
> errors. Any dependency whose version equals the "old version" might be 
> automatically changed to the "new version". See below for how to reproduce a 
> similar case. 
>  
> +How to reproduce the bug:+
>  * Clone/Fork Flink repo and for example checkout version v*1.11.1* 
>  * Apply any changes you need
>  * Run "create_release_branch.sh" script with OLD_VERSION=*1.11.1* 
> NEW_VERSION={color:#de350b}*1.12.0*{color}
>  ** In the parent pom.xml, the maven-dependency-analyzer version will be 
> rewritten automatically and *unintentionally*, which will break the 
> build.
>  
> <dependency>
>   <groupId>org.apache.maven.shared</groupId>
>   <artifactId>maven-dependency-analyzer</artifactId>
>   <version>*1.11.1*</version>
> </dependency>
>  
> <dependency>
>   <groupId>org.apache.maven.shared</groupId>
>   <artifactId>maven-dependency-analyzer</artifactId>
>   <version>{color:#de350b}*1.12.0*{color}</version>
> </dependency>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19164) Release scripts break other dependency versions unintentionally

2020-09-08 Thread Serhat Soydan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192484#comment-17192484
 ] 

Serhat Soydan commented on FLINK-19164:
---

[~rmetzger], thanks for the comment. Although it is only a helper script and 
the results are reviewed by many eyes during the official release process, it 
still lives in the project and is open for use by people working in forked 
repos or other contexts. The possible side effect is not obvious, and everyone 
should be able to use the script without worrying that it may break something. 
We actually hit this issue in our project and spent hours trying to figure out 
where the problem was (there are around 185 poms that need to be reviewed).

The line below seems to update the versions of all modules in the project 
(within the pom files) by a blind find & replace:
{color:#de350b}find . -name 'pom.xml' -type f -exec perl -pi -e 
's#(.*)'$OLD_VERSION'(.*)#${1}'$NEW_VERSION'${2}#'
 {} \;{color}

*A possible solution is to replace it with the Versions Maven Plugin, running 
the command below within the script (I tried it locally and it seems to work 
properly):*

{color:#de350b}mvn versions:set -DnewVersion=$NEW_VERSION 
-DprocessAllModules{color}

or invoke the plugin directly from the script, without adding it to the 
project:

{color:#de350b}mvn org.codehaus.mojo:versions-maven-plugin:2.8.1:set 
-DnewVersion=$NEW_VERSION -DprocessAllModules{color}

> Release scripts break other dependency versions unintentionally
> ---
>
> Key: FLINK-19164
> URL: https://issues.apache.org/jira/browse/FLINK-19164
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Scripts, Release System
>Reporter: Serhat Soydan
>Priority: Minor
>
> All the scripts below have a line that changes the old version to the new 
> version in the pom files.
> [https://github.com/apache/flink/blob/master/tools/change-version.sh#L31]
> [https://github.com/apache/flink/blob/master/tools/releasing/create_release_branch.sh#L60]
> [https://github.com/apache/flink/blob/master/tools/releasing/update_branch_version.sh#L52]
>  
> They work as a plain find & replace, so they are prone to unintentional 
> errors. Any dependency whose version equals the "old version" might be 
> automatically changed to the "new version". See below for how to reproduce a 
> similar case. 
>  
> +How to reproduce the bug:+
>  * Clone/Fork Flink repo and for example checkout version v*1.11.1* 
>  * Apply any changes you need
>  * Run "create_release_branch.sh" script with OLD_VERSION=*1.11.1* 
> NEW_VERSION={color:#de350b}*1.12.0*{color}
>  ** In the parent pom.xml, the maven-dependency-analyzer version will be 
> rewritten automatically and *unintentionally*, which will break the 
> build.
>  
> <dependency>
>   <groupId>org.apache.maven.shared</groupId>
>   <artifactId>maven-dependency-analyzer</artifactId>
>   <version>*1.11.1*</version>
> </dependency>
>  
> <dependency>
>   <groupId>org.apache.maven.shared</groupId>
>   <artifactId>maven-dependency-analyzer</artifactId>
>   <version>{color:#de350b}*1.12.0*{color}</version>
> </dependency>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13228: [FLINK-19026][network] Improve threading model of CheckpointBarrierUnaligner

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13228:
URL: https://github.com/apache/flink/pull/13228#issuecomment-679099456


   
   ## CI report:
   
   * d141f10ef060063162dc073b1cd66729f5f75a3b UNKNOWN
   * 16c32eb1e2d49fa3c84cd4a82380fd72d5dcf5c0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6349)
 
   * 11cb1939f8a98340acab9b795c6f1894808fb606 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6351)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13356: [FLINK-16789][runtime][rest] Enable JMX RMI port retrieval via REST API

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13356:
URL: https://github.com/apache/flink/pull/13356#issuecomment-688955548


   
   ## CI report:
   
   * 656789f8c2f1288e3b9cf369f82d0698b249f179 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6346)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13228: [FLINK-19026][network] Improve threading model of CheckpointBarrierUnaligner

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13228:
URL: https://github.com/apache/flink/pull/13228#issuecomment-679099456


   
   ## CI report:
   
   * d141f10ef060063162dc073b1cd66729f5f75a3b UNKNOWN
   * b3a1520089c241fc74837902b6440d84a9636c14 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6333)
 
   * 16c32eb1e2d49fa3c84cd4a82380fd72d5dcf5c0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6349)
 
   * 11cb1939f8a98340acab9b795c6f1894808fb606 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6351)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13354: [FLINK-19135] Strip ExecutionException in (Stream)ExecutionEnvironment.execute()

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13354:
URL: https://github.com/apache/flink/pull/13354#issuecomment-688863968


   
   ## CI report:
   
   * 0b67901e6fb6811ffc7b8e85fef7dc20129f3184 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6345)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13228: [FLINK-19026][network] Improve threading model of CheckpointBarrierUnaligner

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13228:
URL: https://github.com/apache/flink/pull/13228#issuecomment-679099456


   
   ## CI report:
   
   * d141f10ef060063162dc073b1cd66729f5f75a3b UNKNOWN
   * b3a1520089c241fc74837902b6440d84a9636c14 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6333)
 
   * 16c32eb1e2d49fa3c84cd4a82380fd72d5dcf5c0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6349)
 
   * 11cb1939f8a98340acab9b795c6f1894808fb606 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13355: [FLINK-17554] Allow to register release hooks for the classloader

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13355:
URL: https://github.com/apache/flink/pull/13355#issuecomment-688910168


   
   ## CI report:
   
   * 7280d757646b7541c5810ebef0e4b752edbb4f5f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6344)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13340: [FLINK-19112][table] Improve usability during constant expression reduction

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13340:
URL: https://github.com/apache/flink/pull/13340#issuecomment-688291075


   
   ## CI report:
   
   * 69e2cdd9970bd479ae081b5a15f5e8239c111bec Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6343)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13228: [FLINK-19026][network] Improve threading model of CheckpointBarrierUnaligner

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13228:
URL: https://github.com/apache/flink/pull/13228#issuecomment-679099456


   
   ## CI report:
   
   * d141f10ef060063162dc073b1cd66729f5f75a3b UNKNOWN
   * b3a1520089c241fc74837902b6440d84a9636c14 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6333)
 
   * 16c32eb1e2d49fa3c84cd4a82380fd72d5dcf5c0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6349)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-14942) State Processing API: add an option to make deep copy

2020-09-08 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192368#comment-17192368
 ] 

Seth Wiesman commented on FLINK-14942:
--

merged in master: 597f5027c5b0277a80448f988c11f314449d270f

> State Processing API: add an option to make deep copy
> -
>
> Key: FLINK-14942
> URL: https://issues.apache.org/jira/browse/FLINK-14942
> Project: Flink
>  Issue Type: Improvement
>  Components: API / State Processor
>Affects Versions: 1.11.0
>Reporter: Jun Qin
>Assignee: Jun Qin
>Priority: Blocker
>  Labels: pull-request-available, usability
> Fix For: 1.12.0
>
>
> Currently, when a new savepoint is created based on a source savepoint, the 
> new savepoint contains references to the source savepoint. Here is what the 
> [State Processing API 
> doc|https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/libs/state_processor_api.html]
>  says: 
> bq. Note: When basing a new savepoint on existing state, the state processor 
> api makes a shallow copy of the pointers to the existing operators. This 
> means that both savepoints share state and one cannot be deleted without 
> corrupting the other!
> This JIRA is to request an option to have a deep copy (instead of shallow 
> copy) such that the new savepoint is self-contained. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] sjwiesman commented on pull request #13309: [Flink-14942][state-processor-api] savepoint deep copy

2020-09-08 Thread GitBox


sjwiesman commented on pull request #13309:
URL: https://github.com/apache/flink/pull/13309#issuecomment-689048049







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-14942) State Processing API: add an option to make deep copy

2020-09-08 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman closed FLINK-14942.

Resolution: Fixed

> State Processing API: add an option to make deep copy
> -
>
> Key: FLINK-14942
> URL: https://issues.apache.org/jira/browse/FLINK-14942
> Project: Flink
>  Issue Type: Improvement
>  Components: API / State Processor
>Affects Versions: 1.11.0
>Reporter: Jun Qin
>Assignee: Jun Qin
>Priority: Blocker
>  Labels: pull-request-available, usability
> Fix For: 1.12.0
>
>
> Currently, when a new savepoint is created based on a source savepoint, the 
> new savepoint contains references to the source savepoint. Here is what the 
> [State Processing API 
> doc|https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/libs/state_processor_api.html]
>  says: 
> bq. Note: When basing a new savepoint on existing state, the state processor 
> api makes a shallow copy of the pointers to the existing operators. This 
> means that both savepoints share state and one cannot be deleted without 
> corrupting the other!
> This JIRA is to request an option to have a deep copy (instead of shallow 
> copy) such that the new savepoint is self-contained. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] sjwiesman closed pull request #13309: [Flink-14942][state-processor-api] savepoint deep copy

2020-09-08 Thread GitBox


sjwiesman closed pull request #13309:
URL: https://github.com/apache/flink/pull/13309


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13357: [FLINK-19165] Refactor the UnilateralSortMerger

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13357:
URL: https://github.com/apache/flink/pull/13357#issuecomment-689034696


   
   ## CI report:
   
   * 54ee6987c7054a42fa52d04ec61fae74a816792c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6348)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13357: [FLINK-19165] Refactor the UnilateralSortMerger

2020-09-08 Thread GitBox


flinkbot commented on pull request #13357:
URL: https://github.com/apache/flink/pull/13357#issuecomment-689034696


   
   ## CI report:
   
   * 54ee6987c7054a42fa52d04ec61fae74a816792c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13357: [FLINK-19165] Refactor the UnilateralSortMerger

2020-09-08 Thread GitBox


flinkbot commented on pull request #13357:
URL: https://github.com/apache/flink/pull/13357#issuecomment-689028555


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 54ee6987c7054a42fa52d04ec61fae74a816792c (Tue Sep 08 
17:31:51 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19165) Clean up the UnilateralSortMerger

2020-09-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19165:
---
Labels: pull-request-available  (was: )

> Clean up the UnilateralSortMerger
> -
>
> Key: FLINK-19165
> URL: https://issues.apache.org/jira/browse/FLINK-19165
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> This is a preparation step for 
> [FLIP-140|https://cwiki.apache.org/confluence/display/FLINK/FLIP-140%3A+Introduce+bounded+style+execution+for+keyed+streams].
>  The purpose of the task is two-fold:
> * break down the implementation into more composable pieces
> * introduce a way to produce records in a push-based manner instead of a 
> pull-based one with an additional reading thread.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dawidwys opened a new pull request #13357: [FLINK-19165] Refactor the UnilateralSortMerger

2020-09-08 Thread GitBox


dawidwys opened a new pull request #13357:
URL: https://github.com/apache/flink/pull/13357


   ## What is the purpose of the change
   
   This is a preparation for introducing sorted inputs in BATCH execution
   mode. It refactors the UnilateralSortMerger into more composable pieces.
   Additionally, it adds the option to use a push-based approach instead of
   spawning a Reader thread that consumes from an input iterator.
   
   The PR still has a couple of shortcomings:
   
   * the combine function is not being opened (however, this did not work well 
even before this change: the function was being closed before the merge stage)
   * it misses tests for the push-based approach (however the reading thread 
simply encapsulates the push-based producer)
   * it misses a test for testing closing all threads on an exception (the 
exception handler was reworked)
   
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   Some tests are still to be added.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (**yes** / no 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / 
**JavaDocs** / not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13321: [FLINK-14870] Remove nullable assumption of task slot sharing group

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13321:
URL: https://github.com/apache/flink/pull/13321#issuecomment-686567896


   
   ## CI report:
   
   * d79f152fa91b8bc555af6fb2a00a8d62b184be5a UNKNOWN
   * 96e983d10c7095847b404461abd01b4d1ebb0aac Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6324)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6284)
 
   * 181e73983ad41f287d91792136d13c59b2ea037d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6347)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-19165) Clean up the UnilateralSortMerger

2020-09-08 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-19165:


 Summary: Clean up the UnilateralSortMerger
 Key: FLINK-19165
 URL: https://issues.apache.org/jira/browse/FLINK-19165
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.12.0


This is a preparation step for 
[FLIP-140|https://cwiki.apache.org/confluence/display/FLINK/FLIP-140%3A+Introduce+bounded+style+execution+for+keyed+streams].
 The purpose of the task is two-fold:
* break down the implementation into more composable pieces
* introduce a way to produce records in a push-based manner instead of a 
pull-based one with an additional reading thread.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13321: [FLINK-14870] Remove nullable assumption of task slot sharing group

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13321:
URL: https://github.com/apache/flink/pull/13321#issuecomment-686567896


   
   ## CI report:
   
   * d79f152fa91b8bc555af6fb2a00a8d62b184be5a UNKNOWN
   * 96e983d10c7095847b404461abd01b4d1ebb0aac Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6324)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6284)
 
   * 181e73983ad41f287d91792136d13c59b2ea037d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13228: [FLINK-19026][network] Improve threading model of CheckpointBarrierUnaligner

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13228:
URL: https://github.com/apache/flink/pull/13228#issuecomment-679099456


   
   ## CI report:
   
   * d141f10ef060063162dc073b1cd66729f5f75a3b UNKNOWN
   * b3a1520089c241fc74837902b6440d84a9636c14 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6333)
 
   * 16c32eb1e2d49fa3c84cd4a82380fd72d5dcf5c0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #13284: [FLINK-17016][runtime] Enable pipelined region scheduling

2020-09-08 Thread GitBox


zhuzhurk commented on a change in pull request #13284:
URL: https://github.com/apache/flink/pull/13284#discussion_r485070778



##
File path: 
flink-tests/src/test/java/org/apache/flink/test/scheduling/PipelinedRegionSchedulingITCase.java
##
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.test.scheduling;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.client.program.MiniClusterClient;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.JobManagerOptions;
+import org.apache.flink.configuration.RestOptions;
+import org.apache.flink.runtime.io.network.partition.ResultPartitionType;
+import org.apache.flink.runtime.jobgraph.DistributionPattern;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.runtime.jobgraph.JobVertex;
+import org.apache.flink.runtime.jobgraph.ScheduleMode;
+import org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException;
+import org.apache.flink.runtime.jobmanager.scheduler.SlotSharingGroup;
+import org.apache.flink.runtime.jobmaster.JobResult;
+import org.apache.flink.runtime.minicluster.MiniCluster;
+import org.apache.flink.runtime.minicluster.MiniClusterConfiguration;
+import org.apache.flink.runtime.testtasks.NoOpInvokable;
+import org.apache.flink.util.ExceptionUtils;
+import org.apache.flink.util.TestLogger;
+
+import org.junit.Test;
+
+import java.util.Optional;
+import java.util.concurrent.CompletableFuture;
+
+import static org.hamcrest.CoreMatchers.containsString;
+import static org.hamcrest.CoreMatchers.is;
+import static org.junit.Assert.assertThat;
+
+/**
+ * IT case for pipelined region scheduling.
+ */
+public class PipelinedRegionSchedulingITCase extends TestLogger {
+
+   @Test
+   public void testSuccessWithSlotsNoFewerThanTheMaxRegionRequired() 
throws Exception {
+   final JobResult jobResult = executeSchedulingTest(2);
+   assertThat(jobResult.getSerializedThrowable().isPresent(), 
is(false));
+   }
+
+   @Test
+   public void testFailsOnInsufficientSlots() throws Exception {
+   final JobResult jobResult = executeSchedulingTest(1);
+   assertThat(jobResult.getSerializedThrowable().isPresent(), 
is(true));
+
+   final Throwable jobFailure = jobResult
+   .getSerializedThrowable()
+   .get()
+   .deserializeError(ClassLoader.getSystemClassLoader());
+
+   final Optional<NoResourceAvailableException> cause = ExceptionUtils.findThrowable(
+   jobFailure,
+   NoResourceAvailableException.class);
+   assertThat(cause.isPresent(), is(true));
+   assertThat(cause.get().getMessage(), containsString("Slot request bulk is not fulfillable!"));
+   }
+
+   private JobResult executeSchedulingTest(int numSlots) throws Exception {
+   final Configuration configuration = new Configuration();
+   configuration.setString(RestOptions.BIND_PORT, "0");
+   configuration.setLong(JobManagerOptions.SLOT_REQUEST_TIMEOUT, 
5000L);

Review comment:
   The main concern is the time needed to establish the connections between the JM and TMs and to complete the slot offers, which can only happen after the job is launched.
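   The behavior this IT case exercises can be sketched with a toy model: a pipelined region requests all of its slots as one bulk, so the bulk is fulfillable only if the cluster has at least as many slots as the largest region needs. The class and method names below are invented for illustration; this is not Flink's actual scheduler code.

```java
// Illustrative model only -- not Flink's real scheduler classes.
// A pipelined region's tasks must all get slots at once (a "bulk" request);
// if the cluster has fewer slots than the largest region needs, the
// request can never be fulfilled and the job fails fast.
public class RegionSchedulingModel {

    /** Returns true if every region's bulk slot request can be fulfilled. */
    public static boolean isSchedulable(int[] regionSizes, int totalSlots) {
        for (int size : regionSizes) {
            if (size > totalSlots) {
                // analogous to "Slot request bulk is not fulfillable!"
                return false;
            }
        }
        return true;
    }
}
```

   With 2 slots a 2-task region fits and scheduling succeeds; with 1 slot the bulk request is unfulfillable, matching the `NoResourceAvailableException` asserted in the test above.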









[GitHub] [flink] zhuzhurk commented on a change in pull request #13284: [FLINK-17016][runtime] Enable pipelined region scheduling

2020-09-08 Thread GitBox


zhuzhurk commented on a change in pull request #13284:
URL: https://github.com/apache/flink/pull/13284#discussion_r485065322



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/minicluster/MiniClusterITCase.java
##
@@ -400,16 +400,23 @@ public void 
testJobWithAnOccasionallyFailingSenderVertex() throws Exception {
try (final MiniCluster miniCluster = new MiniCluster(cfg)) {
miniCluster.start();
 
+   // putting sender and receiver vertex in the same slot 
sharing group is required
+   // to ensure all senders can be deployed. Otherwise 
this case can fail if the
+   // expected failing sender is not deployed.

Review comment:
   Yes, by default each JobVertex is in a different slot sharing group, which matches the previous behavior for a null slot sharing group before we apply #13321.

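   The default-vs-shared grouping discussed above can be sketched in a few self-contained lines. The names here are invented for illustration and are not Flink's `JobVertex`/`SlotSharingGroup` API: unless vertices are explicitly placed in a shared group, each one gets a fresh group of its own.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model, not Flink's real API: by default each vertex resolves to a
// fresh group of its own; explicitly assigning sender and receiver to one
// shared group lets one subtask of each run in the same slot.
public class SlotSharingModel {
    private final Map<String, String> vertexToGroup = new HashMap<>();
    private int nextGroupId = 0;

    /** Explicitly place a vertex in a named group (shared if reused). */
    public void setGroup(String vertex, String group) {
        vertexToGroup.put(vertex, group);
    }

    /** Resolve a vertex's group, creating a private one by default. */
    public String groupOf(String vertex) {
        return vertexToGroup.computeIfAbsent(vertex, v -> "group-" + nextGroupId++);
    }
}
```

   Under the default, sender and receiver land in different groups; only an explicit shared assignment puts them together, which is why the test comment above calls the shared group out as required.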









[GitHub] [flink] flinkbot edited a comment on pull request #13344: [FLINK-10740] FLIP-27 File Source and various Updates on the Split Reader API

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13344:
URL: https://github.com/apache/flink/pull/13344#issuecomment-688495541


   
   ## CI report:
   
   * 15a81e82a4a42cd255baad4ffd3e7e20343b5389 UNKNOWN
   * d154ff654bd927988a42f2881e8f690a121ea536 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6341)
 
   
   
   







[GitHub] [flink] zhuzhurk commented on a change in pull request #13321: [FLINK-14870] Remove nullable assumption of task slot sharing group

2020-09-08 Thread GitBox


zhuzhurk commented on a change in pull request #13321:
URL: https://github.com/apache/flink/pull/13321#discussion_r485060703



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/AbstractExecutionSlotAllocatorTest.java
##
@@ -114,38 +118,30 @@ public void 
testCompletedExecutionVertexAssignmentWillBeUnregistered() {
public void testComputeAllPriorAllocationIds() {
final List<AllocationID> expectAllocationIds = Arrays.asList(new AllocationID(), new AllocationID());
final List<ExecutionVertexSchedulingRequirements> testSchedulingRequirements = Arrays.asList(
-   new ExecutionVertexSchedulingRequirements.Builder().
-   withExecutionVertexId(new ExecutionVertexID(new 
JobVertexID(), 0)).
-   
withPreviousAllocationId(expectAllocationIds.get(0)).
-   build(),
-   new ExecutionVertexSchedulingRequirements.Builder().
-   withExecutionVertexId(new ExecutionVertexID(new 
JobVertexID(), 1)).
-   
withPreviousAllocationId(expectAllocationIds.get(0)).
-   build(),
-   new ExecutionVertexSchedulingRequirements.Builder().
-   withExecutionVertexId(new ExecutionVertexID(new 
JobVertexID(), 2)).
-   
withPreviousAllocationId(expectAllocationIds.get(1)).
-   build(),
-   new ExecutionVertexSchedulingRequirements.Builder().
-   withExecutionVertexId(new ExecutionVertexID(new 
JobVertexID(), 3)).
-   build()
+   createSchedulingRequirement(new ExecutionVertexID(new 
JobVertexID(), 0), expectAllocationIds.get(0)),
+   createSchedulingRequirement(new ExecutionVertexID(new 
JobVertexID(), 1), expectAllocationIds.get(0)),
+   createSchedulingRequirement(new ExecutionVertexID(new 
JobVertexID(), 2), expectAllocationIds.get(1)),
+   createSchedulingRequirement(new ExecutionVertexID(new 
JobVertexID(), 3))
);
 
final Set<AllocationID> allPriorAllocationIds =

AbstractExecutionSlotAllocator.computeAllPriorAllocationIds(testSchedulingRequirements);
assertThat(allPriorAllocationIds, 
containsInAnyOrder(expectAllocationIds.toArray()));
}
 
-   private List<ExecutionVertexSchedulingRequirements> createSchedulingRequirements(
-   final ExecutionVertexID... executionVertexIds) {
-
-   final List<ExecutionVertexSchedulingRequirements> schedulingRequirements = new ArrayList<>(executionVertexIds.length);
+   private ExecutionVertexSchedulingRequirements 
createSchedulingRequirement(
+   final ExecutionVertexID executionVertexId) {
+   return createSchedulingRequirement(executionVertexId, null);
+   }

Review comment:
   done.

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/scheduler/SchedulerTestBase.java
##
@@ -161,7 +161,7 @@ public TaskManagerLocation addTaskManager(int numberSlots) {
 
public void releaseTaskManager(ResourceID resourceId) {
try {
-   supplyInMainThreadExecutor(() -> 
slotPool.releaseTaskManager(resourceId, null));
+   supplyInMainThreadExecutor(() -> 
slotPool.releaseTaskManager(resourceId, new Exception("Test Exception")));

Review comment:
   done.









[GitHub] [flink] zhuzhurk commented on a change in pull request #13321: [FLINK-14870] Remove nullable assumption of task slot sharing group

2020-09-08 Thread GitBox


zhuzhurk commented on a change in pull request #13321:
URL: https://github.com/apache/flink/pull/13321#discussion_r485058886



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/OneSlotPerExecutionSlotAllocatorTest.java
##
@@ -224,6 +224,7 @@ public void testCoLocationConstraintThrowsException() {
final List<ExecutionVertexSchedulingRequirements> schedulingRequirements = Collections.singletonList(
new ExecutionVertexSchedulingRequirements.Builder()
.withExecutionVertexId(new 
ExecutionVertexID(new JobVertexID(), 0))
+   .withSlotSharingGroupId(new 
SlotSharingGroupId())

Review comment:
   In the production code path, a non-null `SlotSharingGroupId` is always obtainable from the vertex.
   A random `SlotSharingGroupId` is only needed in tests, so I'd like to avoid creating it here. This is similar to how we deal with `ExecutionVertexID`.









[GitHub] [flink] flinkbot edited a comment on pull request #13294: [FLINK-19002][canal][json] Support to only read changelogs of specific database and table for canal-json format

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13294:
URL: https://github.com/apache/flink/pull/13294#issuecomment-684586137


   
   ## CI report:
   
   * e2c5268981c4bf459bffeccac3443da6c209b13a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6339)
 
   
   
   







[GitHub] [flink] zhuzhurk commented on a change in pull request #13321: [FLINK-14870] Remove nullable assumption of task slot sharing group

2020-09-08 Thread GitBox


zhuzhurk commented on a change in pull request #13321:
URL: https://github.com/apache/flink/pull/13321#discussion_r485052102



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java
##
@@ -364,25 +365,29 @@ public void 
addOperatorCoordinator(SerializedValue
 * @param grp The slot sharing group to associate the vertex with.
 */
public void setSlotSharingGroup(SlotSharingGroup grp) {
+   checkNotNull(grp);
+
if (this.slotSharingGroup != null) {

this.slotSharingGroup.removeVertexFromGroup(this.getID(), 
this.getMinResources());
}
 
+   grp.addVertexToGroup(this.getID(), this.getMinResources());
this.slotSharingGroup = grp;
-   if (grp != null) {
-   grp.addVertexToGroup(this.getID(), 
this.getMinResources());
-   }
}
 
/**
 * Gets the slot sharing group that this vertex is associated with. 
Different vertices in the same
-* slot sharing group can run one subtask each in the same slot. If the 
vertex is not associated with
-* a slot sharing group, this method returns {@code null}.
+* slot sharing group can run one subtask each in the same slot.
 *
-* @return The slot sharing group to associate the vertex with, or 
{@code null}, if not associated with one.
+* @return The slot sharing group to associate the vertex with
 */
-   @Nullable
public SlotSharingGroup getSlotSharingGroup() {
+   if (slotSharingGroup == null) {
+   // create a new slot sharing group for this vertex if 
it was in no other slot sharing group.
+   // this should only happen in testing cases at the 
moment because production code path will
+   // always set a value to it before used

Review comment:
   I think so, there are 200+ usages of JobVertex constructors, distributed 
in 40+ test classes.
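   The lazy-default pattern in the diff above can be modeled in a self-contained way; the names below are invented stand-ins, not Flink's real `JobVertex` class. The getter lazily creates a default group so callers never observe null, while the setter rejects null outright.

```java
import java.util.Objects;

// Toy stand-in (names invented) for the diff's pattern: a null group is
// replaced by a fresh private group on first read, so the getter never
// returns null even for vertices built directly by tests.
public class LazyGroupVertex {
    private Object slotSharingGroup; // null until set or first read

    public void setSlotSharingGroup(Object group) {
        this.slotSharingGroup = Objects.requireNonNull(group);
    }

    public Object getSlotSharingGroup() {
        if (slotSharingGroup == null) {
            // mostly reached by tests that construct vertices directly;
            // production code assigns a group before the first read
            slotSharingGroup = new Object();
        }
        return slotSharingGroup;
    }
}
```

   This keeps the many existing test constructions working without touching each call site, at the cost of a getter with a side effect.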









[GitHub] [flink] zhuzhurk commented on a change in pull request #13321: [FLINK-14870] Remove nullable assumption of task slot sharing group

2020-09-08 Thread GitBox


zhuzhurk commented on a change in pull request #13321:
URL: https://github.com/apache/flink/pull/13321#discussion_r485052102



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java
##
@@ -364,25 +365,29 @@ public void 
addOperatorCoordinator(SerializedValue
 * @param grp The slot sharing group to associate the vertex with.
 */
public void setSlotSharingGroup(SlotSharingGroup grp) {
+   checkNotNull(grp);
+
if (this.slotSharingGroup != null) {

this.slotSharingGroup.removeVertexFromGroup(this.getID(), 
this.getMinResources());
}
 
+   grp.addVertexToGroup(this.getID(), this.getMinResources());
this.slotSharingGroup = grp;
-   if (grp != null) {
-   grp.addVertexToGroup(this.getID(), 
this.getMinResources());
-   }
}
 
/**
 * Gets the slot sharing group that this vertex is associated with. 
Different vertices in the same
-* slot sharing group can run one subtask each in the same slot. If the 
vertex is not associated with
-* a slot sharing group, this method returns {@code null}.
+* slot sharing group can run one subtask each in the same slot.
 *
-* @return The slot sharing group to associate the vertex with, or 
{@code null}, if not associated with one.
+* @return The slot sharing group to associate the vertex with
 */
-   @Nullable
public SlotSharingGroup getSlotSharingGroup() {
+   if (slotSharingGroup == null) {
+   // create a new slot sharing group for this vertex if 
it was in no other slot sharing group.
+   // this should only happen in testing cases at the 
moment because production code path will
+   // always set a value to it before used

Review comment:
   I think so, there are 200+ usages of JobVertex constructors, distributed 
in tens of tests.









[GitHub] [flink] flinkbot edited a comment on pull request #13315: [FLINK-19070][hive] Hive connector should throw a meaningful exception if user reads/writes ACID tables

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13315:
URL: https://github.com/apache/flink/pull/13315#issuecomment-686398823


   
   ## CI report:
   
   * 2ed647fc736a248d536d6f5422e0a50a7119045f UNKNOWN
   * c724d8f21242ae5573136e88e0c983dc776d4cf3 UNKNOWN
   * 6d8f174b90ce3efd668bb7d53709d55eab2cbfa0 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6338)
 
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #13356: [FLINK-16789][runtime][rest] Enable JMX RMI port retrieval via REST API

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13356:
URL: https://github.com/apache/flink/pull/13356#issuecomment-688955548


   
   ## CI report:
   
   * 656789f8c2f1288e3b9cf369f82d0698b249f179 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6346)
 
   
   
   







[GitHub] [flink] flinkbot commented on pull request #13356: [FLINK-16789][runtime][rest] Enable JMX RMI port retrieval via REST API

2020-09-08 Thread GitBox


flinkbot commented on pull request #13356:
URL: https://github.com/apache/flink/pull/13356#issuecomment-688955548


   
   ## CI report:
   
   * 656789f8c2f1288e3b9cf369f82d0698b249f179 UNKNOWN
   
   
   







[GitHub] [flink] dawidwys commented on pull request #13340: [FLINK-19112][table] Improve usability during constant expression reduction

2020-09-08 Thread GitBox


dawidwys commented on pull request #13340:
URL: https://github.com/apache/flink/pull/13340#issuecomment-688945647


   +1 from my side for the whole PR including the exception handling in 
constant expressions reduction







[GitHub] [flink] azagrebin commented on a change in pull request #13321: [FLINK-14870] Remove nullable assumption of task slot sharing group

2020-09-08 Thread GitBox


azagrebin commented on a change in pull request #13321:
URL: https://github.com/apache/flink/pull/13321#discussion_r484984687



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmanager/scheduler/SchedulerTestBase.java
##
@@ -161,7 +161,7 @@ public TaskManagerLocation addTaskManager(int numberSlots) {
 
public void releaseTaskManager(ResourceID resourceId) {
try {
-   supplyInMainThreadExecutor(() -> 
slotPool.releaseTaskManager(resourceId, null));
+   supplyInMainThreadExecutor(() -> 
slotPool.releaseTaskManager(resourceId, new Exception("Test Exception")));

Review comment:
   Maybe `Releasing TaskManager in SlotPool for tests`?

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/AbstractExecutionSlotAllocatorTest.java
##
@@ -114,38 +118,30 @@ public void 
testCompletedExecutionVertexAssignmentWillBeUnregistered() {
public void testComputeAllPriorAllocationIds() {
final List<AllocationID> expectAllocationIds = Arrays.asList(new AllocationID(), new AllocationID());
final List<ExecutionVertexSchedulingRequirements> testSchedulingRequirements = Arrays.asList(
-   new ExecutionVertexSchedulingRequirements.Builder().
-   withExecutionVertexId(new ExecutionVertexID(new 
JobVertexID(), 0)).
-   
withPreviousAllocationId(expectAllocationIds.get(0)).
-   build(),
-   new ExecutionVertexSchedulingRequirements.Builder().
-   withExecutionVertexId(new ExecutionVertexID(new 
JobVertexID(), 1)).
-   
withPreviousAllocationId(expectAllocationIds.get(0)).
-   build(),
-   new ExecutionVertexSchedulingRequirements.Builder().
-   withExecutionVertexId(new ExecutionVertexID(new 
JobVertexID(), 2)).
-   
withPreviousAllocationId(expectAllocationIds.get(1)).
-   build(),
-   new ExecutionVertexSchedulingRequirements.Builder().
-   withExecutionVertexId(new ExecutionVertexID(new 
JobVertexID(), 3)).
-   build()
+   createSchedulingRequirement(new ExecutionVertexID(new 
JobVertexID(), 0), expectAllocationIds.get(0)),
+   createSchedulingRequirement(new ExecutionVertexID(new 
JobVertexID(), 1), expectAllocationIds.get(0)),
+   createSchedulingRequirement(new ExecutionVertexID(new 
JobVertexID(), 2), expectAllocationIds.get(1)),
+   createSchedulingRequirement(new ExecutionVertexID(new 
JobVertexID(), 3))
);
 
final Set<AllocationID> allPriorAllocationIds =

AbstractExecutionSlotAllocator.computeAllPriorAllocationIds(testSchedulingRequirements);
assertThat(allPriorAllocationIds, 
containsInAnyOrder(expectAllocationIds.toArray()));
}
 
-   private List<ExecutionVertexSchedulingRequirements> createSchedulingRequirements(
-   final ExecutionVertexID... executionVertexIds) {
-
-   final List<ExecutionVertexSchedulingRequirements> schedulingRequirements = new ArrayList<>(executionVertexIds.length);
+   private ExecutionVertexSchedulingRequirements 
createSchedulingRequirement(
+   final ExecutionVertexID executionVertexId) {
+   return createSchedulingRequirement(executionVertexId, null);
+   }

Review comment:
   ```suggestion
private ExecutionVertexSchedulingRequirements 
createSchedulingRequirement(
final int subtaskIndex) {
return createSchedulingRequirement(new ExecutionVertexID(new 
JobVertexID(), subtaskIndex), null);
}
   ```

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java
##
@@ -364,25 +365,29 @@ public void 
addOperatorCoordinator(SerializedValue
 * @param grp The slot sharing group to associate the vertex with.
 */
public void setSlotSharingGroup(SlotSharingGroup grp) {
+   checkNotNull(grp);
+
if (this.slotSharingGroup != null) {

this.slotSharingGroup.removeVertexFromGroup(this.getID(), 
this.getMinResources());
}
 
+   grp.addVertexToGroup(this.getID(), this.getMinResources());
this.slotSharingGroup = grp;
-   if (grp != null) {
-   grp.addVertexToGroup(this.getID(), 
this.getMinResources());
-   }
}
 
/**
 * Gets the slot sharing group that this vertex is associated with. 
Different vertices in the same
-* slot sharing group can run one subtask each in the same slot. If the 
vertex is not associated with
-* 

[GitHub] [flink] flinkbot edited a comment on pull request #13354: [FLINK-19135] Strip ExecutionException in (Stream)ExecutionEnvironment.execute()

2020-09-08 Thread GitBox


flinkbot edited a comment on pull request #13354:
URL: https://github.com/apache/flink/pull/13354#issuecomment-688863968


   
   ## CI report:
   
   * 493d4e645176ab876b572fc7722ee04c423ecd41 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6342)
 
   * 0b67901e6fb6811ffc7b8e85fef7dc20129f3184 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6345)
 
   
   
   






