[jira] [Assigned] (FLINK-17790) flink-connector-kafka-base does not compile on Java11

2020-05-18 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-17790:


Assignee: Chesnay Schepler

> flink-connector-kafka-base does not compile on Java11
> -
>
> Key: FLINK-17790
> URL: https://issues.apache.org/jira/browse/FLINK-17790
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.11.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1657&view=logs&j=946871de-358d-5815-3994-8175615bc253&t=e0240c62-4570-5d1c-51af-dd63d2093da1
> [ERROR] 
> /__w/3/s/flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaOptions.java:[271,41]
>  incompatible types: cannot infer type-variable(s) U,T,T,T,T
> (argument mismatch; bad return type in lambda expression
>   
> java.util.Optional
>  cannot be converted to java.util.Optional<org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner>)
> [INFO] 1 error



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dawidwys commented on a change in pull request #12201: [hotfix] Remove raw class usages in Configuration.

2020-05-18 Thread GitBox


dawidwys commented on a change in pull request #12201:
URL: https://github.com/apache/flink/pull/12201#discussion_r426407452



##
File path: 
flink-core/src/main/java/org/apache/flink/configuration/Configuration.java
##
@@ -909,12 +909,10 @@ private void loggingFallback(FallbackKey fallbackKey, 
ConfigOption configOpti
List listOfRawProperties = 
StructuredOptionsSplitter.splitEscaped(o.toString(), ',');
return listOfRawProperties.stream()
.map(s -> 
StructuredOptionsSplitter.splitEscaped(s, ':'))
-   .map(pair -> {
+   .peek(pair -> {

Review comment:
   Peek and map behave exactly the same with respect to the use cases 
described in that thread. I'd say the thread discusses the declarative aspect 
of the Stream API. 
   
   In code like:
   ```
   stream()
   .map()/peek()
   .count()
   ```
   neither `map` nor `peek` will be executed (this depends on the JVM, though), as 
they cannot change the result of `count()`. The linked thread rather compares 
`forEach` vs `peek`, in my opinion.
   
   In our use case in particular, the purpose of the `peek` is to add a sanity 
check right before the collection. We do not have any side effects in the 
`peek` that we want applied to all results in the input collection; we want it 
applied only to the results we will collect. In this case, in my opinion, it is 
absolutely safe to use `peek`, and the code with `peek` is no different from the 
previous version with `map` (minus the return value).
   
   Personally, I'd prefer to change it to `peek`, as it removes an IDE warning.
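   The sanity-check-before-collect pattern described above can be sketched as follows (a hypothetical, self-contained example; `parsePairs` and the input format are illustrative, not Flink's actual code):

   ```java
   import java.util.Arrays;
   import java.util.List;
   import java.util.stream.Collectors;

   public class PeekSanityCheck {

       // Uses peek() purely as a sanity check right before collection:
       // it validates each element but does not transform the stream.
       static List<String[]> parsePairs(List<String> raw) {
           return raw.stream()
                   .map(s -> s.split(":"))
                   .peek(parts -> {
                       if (parts.length != 2) {
                           throw new IllegalArgumentException(
                                   "Could not parse pair: " + Arrays.toString(parts));
                       }
                   })
                   .collect(Collectors.toList());
       }

       public static void main(String[] args) {
           System.out.println(parsePairs(List.of("a:1", "b:2")).size()); // prints 2
       }
   }
   ```

   Because `collect()` consumes every element, the `peek` check here runs for each collected element; the JVM-dependent elision mentioned above only applies to terminal operations such as `count()` that can compute their result without traversing the stream.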





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12132: [FLINK-17593][Connectors/FileSystem] Support arbitrary recovery mechanism for PartFileWriter

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12132:
URL: https://github.com/apache/flink/pull/12132#issuecomment-628151415


   
   ## CI report:
   
   * 449b8494248924ab0c9a4a5187458933902a13a3 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1626)
 
   * 10ef0c696350fcd84866fde27f19ed2a0312ee4b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1683)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-17527) kubernetes-session.sh uses log4j-console.properties

2020-05-18 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109953#comment-17109953
 ] 

Yang Wang commented on FLINK-17527:
---

[~trohrmann] If you agree, I will rename {{log4j-yarn-session.properties}} to 
{{log4j-session.properties}} and {{logback-yarn.xml}} to {{logback-session.xml}}. 
This will also make the logback configuration aligned with the log4j configuration.

For unifying {{log4j.properties}} and {{log4j-cli.properties}}, we could do 
that in a separate ticket. BTW, logback does not differentiate between these two.

> kubernetes-session.sh uses log4j-console.properties
> ---
>
> Key: FLINK-17527
> URL: https://issues.apache.org/jira/browse/FLINK-17527
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Major
> Fix For: 1.11.0, 1.10.2
>
>
> It is a bit confusing that {{kubernetes-session.sh}} uses the 
> {{log4j-console.properties}}. At the moment, {{flink}} uses 
> {{log4j-cli.properties}}, {{yarn-session.sh}} uses 
> {{log4j-yarn-session.properties}}, and {{kubernetes-session.sh}} uses 
> {{log4j-console.properties}}.
> I would suggest letting all scripts use the same logger configuration file 
> (e.g. {{log4j-cli.properties}}).





[GitHub] [flink] flinkbot edited a comment on pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12181:
URL: https://github.com/apache/flink/pull/12181#issuecomment-629344595


   
   ## CI report:
   
   * 9a73076f072352ba5539bf558f90a94572fb6c36 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1510)
 
   * b954ba073cba912b98c5992b05caec91e7657871 UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12186: [FLINK-16383][task] Do not relay notifyCheckpointComplete to closed operators

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12186:
URL: https://github.com/apache/flink/pull/12186#issuecomment-629492236


   
   ## CI report:
   
   * f512eeef60f86107d945f975b1ca8dead57db9c4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1639)
 
   * 26b98ea10d229f1a49fbbc232dc5cdb83572ac3b UNKNOWN
   
   







[jira] [Closed] (FLINK-17595) JobExceptionsInfo. ExecutionExceptionInfo miss getter method

2020-05-18 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao closed FLINK-17595.

Resolution: Won't Do

> JobExceptionsInfo. ExecutionExceptionInfo miss getter method
> 
>
> Key: FLINK-17595
> URL: https://issues.apache.org/jira/browse/FLINK-17595
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.10.0
>Reporter: Wei Zhang
>Priority: Minor
> Fix For: 1.11.0
>
>
> {code:java}
>   public static final class ExecutionExceptionInfo {
>   public static final String FIELD_NAME_EXCEPTION = "exception";
>   public static final String FIELD_NAME_TASK = "task";
>   public static final String FIELD_NAME_LOCATION = "location";
>   public static final String FIELD_NAME_TIMESTAMP = "timestamp";
>   @JsonProperty(FIELD_NAME_EXCEPTION)
>   private final String exception;
>   @JsonProperty(FIELD_NAME_TASK)
>   private final String task;
>   @JsonProperty(FIELD_NAME_LOCATION)
>   private final String location;
>   @JsonProperty(FIELD_NAME_TIMESTAMP)
>   private final long timestamp;
>   @JsonCreator
>   public ExecutionExceptionInfo(
>   @JsonProperty(FIELD_NAME_EXCEPTION) String exception,
>   @JsonProperty(FIELD_NAME_TASK) String task,
>   @JsonProperty(FIELD_NAME_LOCATION) String location,
>   @JsonProperty(FIELD_NAME_TIMESTAMP) long timestamp) {
>   this.exception = Preconditions.checkNotNull(exception);
>   this.task = Preconditions.checkNotNull(task);
>   this.location = Preconditions.checkNotNull(location);
>   this.timestamp = timestamp;
>   }
>   @Override
>   public boolean equals(Object o) {
>   if (this == o) {
>   return true;
>   }
>   if (o == null || getClass() != o.getClass()) {
>   return false;
>   }
>   JobExceptionsInfo.ExecutionExceptionInfo that = 
> (JobExceptionsInfo.ExecutionExceptionInfo) o;
>   return timestamp == that.timestamp &&
>   Objects.equals(exception, that.exception) &&
>   Objects.equals(task, that.task) &&
>   Objects.equals(location, that.location);
>   }
>   @Override
>   public int hashCode() {
>   return Objects.hash(timestamp, exception, task, 
> location);
>   }
> {code}
> I found that JobExceptionsInfo.ExecutionExceptionInfo has no getter methods for 
> its fields; are they missing?
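The getters being requested would be simple accessors for the four fields shown above. The following is a hypothetical sketch using a minimal stand-in class (not Flink's actual code; the issue was resolved as "Won't Do"):

```java
public class ExecutionExceptionInfoSketch {
    // Minimal stand-in mirroring the fields in the snippet above.
    private final String exception;
    private final String task;
    private final String location;
    private final long timestamp;

    public ExecutionExceptionInfoSketch(String exception, String task, String location, long timestamp) {
        this.exception = exception;
        this.task = task;
        this.location = location;
        this.timestamp = timestamp;
    }

    // The getters the reporter is asking about.
    public String getException() { return exception; }
    public String getTask() { return task; }
    public String getLocation() { return location; }
    public long getTimestamp() { return timestamp; }

    public static void main(String[] args) {
        ExecutionExceptionInfoSketch info =
                new ExecutionExceptionInfoSketch("NPE", "map-task", "host-1", 42L);
        System.out.println(info.getTask()); // prints map-task
    }
}
```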





[jira] [Updated] (FLINK-17595) JobExceptionsInfo. ExecutionExceptionInfo miss getter method

2020-05-18 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao updated FLINK-17595:
-
Fix Version/s: (was: 1.11.0)

> JobExceptionsInfo. ExecutionExceptionInfo miss getter method
> 
>
> Key: FLINK-17595
> URL: https://issues.apache.org/jira/browse/FLINK-17595
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.10.0
>Reporter: Wei Zhang
>Priority: Minor
>
> {code:java}
>   public static final class ExecutionExceptionInfo {
>   public static final String FIELD_NAME_EXCEPTION = "exception";
>   public static final String FIELD_NAME_TASK = "task";
>   public static final String FIELD_NAME_LOCATION = "location";
>   public static final String FIELD_NAME_TIMESTAMP = "timestamp";
>   @JsonProperty(FIELD_NAME_EXCEPTION)
>   private final String exception;
>   @JsonProperty(FIELD_NAME_TASK)
>   private final String task;
>   @JsonProperty(FIELD_NAME_LOCATION)
>   private final String location;
>   @JsonProperty(FIELD_NAME_TIMESTAMP)
>   private final long timestamp;
>   @JsonCreator
>   public ExecutionExceptionInfo(
>   @JsonProperty(FIELD_NAME_EXCEPTION) String exception,
>   @JsonProperty(FIELD_NAME_TASK) String task,
>   @JsonProperty(FIELD_NAME_LOCATION) String location,
>   @JsonProperty(FIELD_NAME_TIMESTAMP) long timestamp) {
>   this.exception = Preconditions.checkNotNull(exception);
>   this.task = Preconditions.checkNotNull(task);
>   this.location = Preconditions.checkNotNull(location);
>   this.timestamp = timestamp;
>   }
>   @Override
>   public boolean equals(Object o) {
>   if (this == o) {
>   return true;
>   }
>   if (o == null || getClass() != o.getClass()) {
>   return false;
>   }
>   JobExceptionsInfo.ExecutionExceptionInfo that = 
> (JobExceptionsInfo.ExecutionExceptionInfo) o;
>   return timestamp == that.timestamp &&
>   Objects.equals(exception, that.exception) &&
>   Objects.equals(task, that.task) &&
>   Objects.equals(location, that.location);
>   }
>   @Override
>   public int hashCode() {
>   return Objects.hash(timestamp, exception, task, 
> location);
>   }
> {code}
> I found that JobExceptionsInfo.ExecutionExceptionInfo has no getter methods for 
> its fields; are they missing?





[jira] [Reopened] (FLINK-17595) JobExceptionsInfo. ExecutionExceptionInfo miss getter method

2020-05-18 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao reopened FLINK-17595:
--

> JobExceptionsInfo. ExecutionExceptionInfo miss getter method
> 
>
> Key: FLINK-17595
> URL: https://issues.apache.org/jira/browse/FLINK-17595
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.10.0
>Reporter: Wei Zhang
>Priority: Minor
> Fix For: 1.11.0
>
>
> {code:java}
>   public static final class ExecutionExceptionInfo {
>   public static final String FIELD_NAME_EXCEPTION = "exception";
>   public static final String FIELD_NAME_TASK = "task";
>   public static final String FIELD_NAME_LOCATION = "location";
>   public static final String FIELD_NAME_TIMESTAMP = "timestamp";
>   @JsonProperty(FIELD_NAME_EXCEPTION)
>   private final String exception;
>   @JsonProperty(FIELD_NAME_TASK)
>   private final String task;
>   @JsonProperty(FIELD_NAME_LOCATION)
>   private final String location;
>   @JsonProperty(FIELD_NAME_TIMESTAMP)
>   private final long timestamp;
>   @JsonCreator
>   public ExecutionExceptionInfo(
>   @JsonProperty(FIELD_NAME_EXCEPTION) String exception,
>   @JsonProperty(FIELD_NAME_TASK) String task,
>   @JsonProperty(FIELD_NAME_LOCATION) String location,
>   @JsonProperty(FIELD_NAME_TIMESTAMP) long timestamp) {
>   this.exception = Preconditions.checkNotNull(exception);
>   this.task = Preconditions.checkNotNull(task);
>   this.location = Preconditions.checkNotNull(location);
>   this.timestamp = timestamp;
>   }
>   @Override
>   public boolean equals(Object o) {
>   if (this == o) {
>   return true;
>   }
>   if (o == null || getClass() != o.getClass()) {
>   return false;
>   }
>   JobExceptionsInfo.ExecutionExceptionInfo that = 
> (JobExceptionsInfo.ExecutionExceptionInfo) o;
>   return timestamp == that.timestamp &&
>   Objects.equals(exception, that.exception) &&
>   Objects.equals(task, that.task) &&
>   Objects.equals(location, that.location);
>   }
>   @Override
>   public int hashCode() {
>   return Objects.hash(timestamp, exception, task, 
> location);
>   }
> {code}
> I found that JobExceptionsInfo.ExecutionExceptionInfo has no getter methods for 
> its fields; are they missing?





[GitHub] [flink] flinkbot edited a comment on pull request #12209: [FLINK-17520] [core] Extend CompositeTypeSerializerSnapshot to support migration

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12209:
URL: https://github.com/apache/flink/pull/12209#issuecomment-629971018


   
   ## CI report:
   
   * c91cdd32db0464d8b60d0efc0763078388a15daf Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1684)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12202: [FLINK-17791][table][streaming] Support collecting query results under all execution and network environments

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12202:
URL: https://github.com/apache/flink/pull/12202#issuecomment-629806443


   
   ## CI report:
   
   * 9385209a72fa1314e604a3998a149d93a12617d9 UNKNOWN
   * 1b46780c0bf016a524a379cec83d120563ddcb9d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1636)
 
   * a13b735e5ecbb1c90479056ebda8d738a1a43584 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1661)
 
   * dc4b338596d9d3dee5dae9b1dfa3bebe1a5d902d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1662)
 
   * 3a44fbe9824e576068a4f172ca5738a7dd5cf9d1 UNKNOWN
   
   







[jira] [Commented] (FLINK-17789) DelegatingConfiguration should remove prefix instead of add prefix in toMap

2020-05-18 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109957#comment-17109957
 ] 

Chesnay Schepler commented on FLINK-17789:
--

I disagree. The current behavior is simple: take an option, apply the prefix, look 
it up in the backing config.

If you now start trimming keys that _happen_ to match the configured 
prefix, you just introduce additional edge cases.

Following your example, an option `k0.prefix.prefix.v1` will never work since 
you're removing the second `prefix`.
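The edge case being described can be illustrated with a plain map. This is a hedged sketch, not Flink's `DelegatingConfiguration`; `stripPrefix` is a hypothetical helper that does what a prefix-trimming `toMap()` would do:

```java
import java.util.HashMap;
import java.util.Map;

public class PrefixTrimEdgeCase {

    // Hypothetical helper: returns all entries under 'prefix' with the
    // prefix removed, as a prefix-trimming toMap() would.
    static Map<String, String> stripPrefix(Map<String, String> backing, String prefix) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : backing.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                out.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> backing = new HashMap<>();
        backing.put("prefix.k1", "v1");
        backing.put("prefix.prefix.k1", "v2");

        Map<String, String> stripped = stripPrefix(backing, "prefix.");
        // After trimming, "prefix.prefix.k1" becomes "prefix.k1" - a key that,
        // once the prefix is re-applied on lookup, collides with the original
        // "prefix.prefix.k1" entry. This is the kind of edge case introduced
        // by trimming keys that happen to start with the prefix.
        System.out.println(stripped.get("k1"));        // prints v1
        System.out.println(stripped.get("prefix.k1")); // prints v2
    }
}
```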

> DelegatingConfiguration should remove prefix instead of add prefix in toMap
> ---
>
> Key: FLINK-17789
> URL: https://issues.apache.org/jira/browse/FLINK-17789
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.11.0
>
>
> {code:java}
> Configuration conf = new Configuration();
> conf.setString("k0", "v0");
> conf.setString("prefix.k1", "v1");
> DelegatingConfiguration dc = new DelegatingConfiguration(conf, "prefix.");
> System.out.println(dc.getString("k0", "empty")); // empty
> System.out.println(dc.getString("k1", "empty")); // v1
> System.out.println(dc.toMap().get("k1")); // should be v1, but null
> System.out.println(dc.toMap().get("prefix.prefix.k1")); // should be null, 
> but v1
> {code}





[jira] [Comment Edited] (FLINK-17789) DelegatingConfiguration should remove prefix instead of add prefix in toMap

2020-05-18 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109957#comment-17109957
 ] 

Chesnay Schepler edited comment on FLINK-17789 at 5/18/20, 7:09 AM:


I disagree. The current behavior is simple: take an option, apply the prefix, look 
it up in the backing config.

If you now start trimming keys that _happen_ to match the configured 
prefix, you just introduce additional edge cases.

Following your example, an option `k0.prefix.prefix.v1` would never work since 
you're removing the second `prefix`.


was (Author: zentol):
I disagree. The current behavior is simple: take an option, apply the prefix, look 
it up in the backing config.

If you now start trimming keys that _happen_ to match the configured 
prefix, you just introduce additional edge cases.

Following your example, an option `k0.prefix.prefix.v1` will never work since 
you're removing the second `prefix`.

> DelegatingConfiguration should remove prefix instead of add prefix in toMap
> ---
>
> Key: FLINK-17789
> URL: https://issues.apache.org/jira/browse/FLINK-17789
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.11.0
>
>
> {code:java}
> Configuration conf = new Configuration();
> conf.setString("k0", "v0");
> conf.setString("prefix.k1", "v1");
> DelegatingConfiguration dc = new DelegatingConfiguration(conf, "prefix.");
> System.out.println(dc.getString("k0", "empty")); // empty
> System.out.println(dc.getString("k1", "empty")); // v1
> System.out.println(dc.toMap().get("k1")); // should be v1, but null
> System.out.println(dc.toMap().get("prefix.prefix.k1")); // should be null, 
> but v1
> {code}





[jira] [Created] (FLINK-17792) Failing to invoke jstack on TM processes should not fail Jepsen Tests

2020-05-18 Thread Gary Yao (Jira)
Gary Yao created FLINK-17792:


 Summary: Failing to invoke jstack on TM processes should not 
fail Jepsen Tests
 Key: FLINK-17792
 URL: https://issues.apache.org/jira/browse/FLINK-17792
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.10.1, 1.11.0
Reporter: Gary Yao
Assignee: Gary Yao
 Fix For: 1.11.0


{{jstack}} can fail if the JVM process exits prematurely while or before we 
invoke it. If {{jstack}} fails, the exception propagates and causes the 
Jepsen tests to exit prematurely.






[GitHub] [flink] pnowojski commented on a change in pull request #12120: [FLINK-17547] Support unaligned checkpoints for records spilled to files

2020-05-18 Thread GitBox


pnowojski commented on a change in pull request #12120:
URL: https://github.com/apache/flink/pull/12120#discussion_r426411404



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/FileBasedBufferIterator.java
##
@@ -85,6 +90,6 @@ private int read(byte[] buffer) {
 
@Override
public void close() throws Exception {
-   closeAll(stream, file::release);
+   closeAll(stream, file::release, () -> 
buffersToClose.forEach(Buffer::recycleBuffer));

Review comment:
   > The client doesn't know whether the iterator is lazy or not. So if 
failure happens before iteration, it would have to read the whole file just to 
recycle the buffers.
   
   Why? As I wrote before, the buffers that were not yet returned should be 
recycled by their current owner - in that case the `CloseableIterator`. If the 
buffers were not even created yet, because it's a lazy iterator, even better - 
nothing to do in `close()`. 
   
   > In the beginning of FileBasedBufferIterator.next().
   
   If buffers were returned from the iterator, the iterator is no longer their 
owner and they shouldn't be recycled by it. 
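   The ownership rule being argued for - the iterator recycles only buffers it has not yet handed out - can be sketched with stand-in types (a hypothetical `Buffer` and iterator, not Flink's actual `FileBasedBufferIterator`):

   ```java
   import java.util.ArrayDeque;
   import java.util.Deque;
   import java.util.List;

   public class BufferOwnershipDemo {

       // Hypothetical stand-in for a recyclable buffer.
       static final class Buffer {
           boolean recycled;
           void recycle() { recycled = true; }
       }

       // The iterator owns buffers only until they are handed out by next();
       // close() recycles just those it still owns.
       static final class BufferIterator implements AutoCloseable {
           private final Deque<Buffer> owned;

           BufferIterator(List<Buffer> buffers) { this.owned = new ArrayDeque<>(buffers); }

           boolean hasNext() { return !owned.isEmpty(); }

           Buffer next() { return owned.poll(); } // ownership transfers to the caller

           @Override
           public void close() { owned.forEach(Buffer::recycle); }
       }

       public static void main(String[] args) throws Exception {
           Buffer a = new Buffer();
           Buffer b = new Buffer();
           try (BufferIterator it = new BufferIterator(List.of(a, b))) {
               it.next(); // caller now owns 'a'; close() must not recycle it
           }
           System.out.println(a.recycled + " " + b.recycled); // prints false true
       }
   }
   ```

   A lazy variant of such an iterator simply has nothing left to recycle in `close()` for buffers it never materialized, which matches the "even better - nothing to do" point above.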









[GitHub] [flink] rmetzger merged pull request #12175: [FLINK-17692] Keep yarn.classpath in target folder to not pollute local builds

2020-05-18 Thread GitBox


rmetzger merged pull request #12175:
URL: https://github.com/apache/flink/pull/12175


   







[jira] [Closed] (FLINK-17692) "flink-end-to-end-tests/test-scripts/hadoop/yarn.classpath" present after building Flink

2020-05-18 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-17692.
--
Fix Version/s: 1.11.0
   Resolution: Fixed

Fixed in 
https://github.com/apache/flink/commit/1b42120ef62277d30210e4273903bd6020492a9f

> "flink-end-to-end-tests/test-scripts/hadoop/yarn.classpath" present after 
> building Flink
> 
>
> Key: FLINK-17692
> URL: https://issues.apache.org/jira/browse/FLINK-17692
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Some changes introduced in FLINK-11086 cause the 
> "flink-end-to-end-tests/test-scripts/hadoop/yarn.classpath" file to be 
> generated and present in the source tree after building Flink.





[jira] [Updated] (FLINK-16383) KafkaProducerExactlyOnceITCase.testExactlyOnceRegularSink fails with "The producer has already been closed"

2020-05-18 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-16383:
---
Summary: KafkaProducerExactlyOnceITCase.testExactlyOnceRegularSink fails 
with "The producer has already been closed"  (was: 
KafkaProducerExactlyOnceITCase. testExactlyOnceRegularSink fails with "The 
producer has already been closed")

> KafkaProducerExactlyOnceITCase.testExactlyOnceRegularSink fails with "The 
> producer has already been closed"
> ---
>
> Key: FLINK-16383
> URL: https://issues.apache.org/jira/browse/FLINK-16383
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Reporter: Robert Metzger
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> Logs: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5779&view=logs&j=a54de925-e958-5e24-790a-3a6150eb72d8&t=24e561e9-4c8d-598d-a290-e6acce191345
> {code}
> 2020-03-01T01:06:57.4738418Z 01:06:57,473 [Source: Custom Source -> Map -> 
> Sink: Unnamed (1/1)] INFO  
> org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer
>  [] - Flushing new partitions
> 2020-03-01T01:06:57.4739960Z 01:06:57,473 [FailingIdentityMapper Status 
> Printer] INFO  
> org.apache.flink.streaming.connectors.kafka.testutils.FailingIdentityMapper 
> [] - > Failing mapper  0: count=680, 
> totalCount=1000
> 2020-03-01T01:06:57.4909074Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-03-01T01:06:57.4910001Z  at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-03-01T01:06:57.4911000Z  at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:648)
> 2020-03-01T01:06:57.4912078Z  at org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:77)
> 2020-03-01T01:06:57.4913039Z  at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1619)
> 2020-03-01T01:06:57.4914421Z  at org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:35)
> 2020-03-01T01:06:57.4915423Z  at org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testExactlyOnce(KafkaProducerTestBase.java:370)
> 2020-03-01T01:06:57.4916483Z  at org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testExactlyOnceRegularSink(KafkaProducerTestBase.java:309)
> 2020-03-01T01:06:57.4917305Z  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-03-01T01:06:57.4917982Z  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-03-01T01:06:57.4918769Z  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-03-01T01:06:57.4919477Z  at java.lang.reflect.Method.invoke(Method.java:498)
> 2020-03-01T01:06:57.4920156Z  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-03-01T01:06:57.4920995Z  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-03-01T01:06:57.4921927Z  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-03-01T01:06:57.4922728Z  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-03-01T01:06:57.4923428Z  at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-03-01T01:06:57.4924048Z  at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-03-01T01:06:57.4924779Z  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-03-01T01:06:57.4925528Z  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-03-01T01:06:57.4926318Z  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-03-01T01:06:57.4927214Z  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-03-01T01:06:57.4927872Z  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-03-01T01:06:57.4928587Z  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-03-01T01:06:57.4929289Z  at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-03-01T01:06:57.4929943Z  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-03-01T01:06:57.4930672Z  at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-03-01T01:06:57.4931512Z  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-03-01T01:06:57.4932255Z  at org.junit.rules.Ext

[GitHub] [flink] wanglijie95 commented on pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.

2020-05-18 Thread GitBox


wanglijie95 commented on pull request #12181:
URL: https://github.com/apache/flink/pull/12181#issuecomment-629992918


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-17793) Replace TableSchema with dedicated CatalogSchema

2020-05-18 Thread Timo Walther (Jira)
Timo Walther created FLINK-17793:


 Summary: Replace TableSchema with dedicated CatalogSchema
 Key: FLINK-17793
 URL: https://issues.apache.org/jira/browse/FLINK-17793
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Timo Walther
Assignee: Timo Walther


The {{TableSchema}} is used for representing the schema of catalog table and 
the schema of a {{Table}} object and operation. We should split those 
responsibilities both for a cleaner API and long-term separation of concerns. 
Connectors should work on a CatalogSchema instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] GJL opened a new pull request #12210: [FLINK-17792][tests] Catch and log exception if jstack fails

2020-05-18 Thread GitBox


GJL opened a new pull request #12210:
URL: https://github.com/apache/flink/pull/12210


   ## What is the purpose of the change
   
   *This catches and logs exceptions thrown when invoking jstack. jstack can 
fail if the JVM process that we want to sample exits while or before we invoke 
jstack. Since this is not unexpected behavior, we should not propagate the 
exception and fail the test prematurely.*
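   The guard can be sketched as follows. This is a hypothetical Java `tryJstack` helper for illustration only; the actual flink-jepsen harness is written in Clojure, and the method and logger names here are assumptions:
   
   ```java
   import java.io.BufferedReader;
   import java.io.InputStreamReader;
   import java.util.logging.Logger;
   
   public class JstackSampler {
       private static final Logger LOG = Logger.getLogger("jstack-sampler");
   
       // Invoke jstack on a PID, tolerating the sampled JVM having already
       // exited (or jstack being unavailable): log the failure, return null.
       static String tryJstack(long pid) {
           try {
               Process p = new ProcessBuilder("jstack", Long.toString(pid)).start();
               StringBuilder out = new StringBuilder();
               try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                   String line;
                   while ((line = r.readLine()) != null) {
                       out.append(line).append('\n');
                   }
               }
               if (p.waitFor() != 0) {
                   throw new RuntimeException("jstack exited with a non-zero code");
               }
               return out.toString();
           } catch (Exception e) {
               // Expected if the process exits before/while we invoke jstack:
               // do not propagate, so the surrounding test keeps running.
               LOG.warning("jstack failed for pid " + pid + ": " + e.getMessage());
               return null;
           }
       }
   
       public static void main(String[] args) {
           // An (almost certainly) nonexistent PID exercises the failure path.
           System.out.println(tryJstack(999999999L) == null ? "tolerated" : "sampled");
       }
   }
   ```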
   
   
   ## Brief change log
   
 - *See commit*
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as *flink-jepsen*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17792) Failing to invoke jstack on TM processes should not fail Jepsen Tests

2020-05-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17792:
---
Labels: pull-request-available  (was: )

> Failing to invoke jstack on TM processes should not fail Jepsen Tests
> ---
>
> Key: FLINK-17792
> URL: https://issues.apache.org/jira/browse/FLINK-17792
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.10.1, 1.11.0
>Reporter: Gary Yao
>Assignee: Gary Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> {{jstack}} can fail if the JVM process exits prematurely while or before we 
> invoke {{jstack}}. If {{jstack}} fails, the exception propagates and exits 
> the Jepsen Tests prematurely.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17789) DelegatingConfiguration should remove prefix instead of add prefix in toMap

2020-05-18 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109970#comment-17109970
 ] 

Jingsong Lee commented on FLINK-17789:
--

Hi [~chesnay], thanks for quick response.

> an option `k0.prefix.prefix.v1` would never work since you're removing the 
>second `prefix`.

What do you mean? `prefix.k0.prefix.v1` will be stripped to `k0.prefix.v1`, but 
`k0.prefix.prefix.v1` will be filtered out. The logic should match 
`DelegatingConfiguration.addAllToProperties`.

 

Actually, I think the `toMap` should be consistent with `addAllToProperties`, 
but now:
{code:java}
System.out.println(dc.toMap()); // {prefix.k0=v0, prefix.prefix.k1=v1}

Properties properties = new Properties();
dc.addAllToProperties(properties);
System.out.println(properties); // {k1=v1}
{code}
 

> DelegatingConfiguration should remove prefix instead of add prefix in toMap
> ---
>
> Key: FLINK-17789
> URL: https://issues.apache.org/jira/browse/FLINK-17789
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.11.0
>
>
> {code:java}
> Configuration conf = new Configuration();
> conf.setString("k0", "v0");
> conf.setString("prefix.k1", "v1");
> DelegatingConfiguration dc = new DelegatingConfiguration(conf, "prefix.");
> System.out.println(dc.getString("k0", "empty")); // empty
> System.out.println(dc.getString("k1", "empty")); // v1
> System.out.println(dc.toMap().get("k1")); // should be v1, but null
> System.out.println(dc.toMap().get("prefix.prefix.k1")); // should be null, 
> but v1
> {code}
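
A minimal, hypothetical sketch of the behavior the issue asks for — `toMap()` stripping the delegate's prefix and dropping non-matching keys, mirroring `addAllToProperties`. This uses a plain `Map` stand-in, not the real `DelegatingConfiguration` code:

```java
import java.util.HashMap;
import java.util.Map;

public class PrefixToMapDemo {
    // Return a view of 'backing' containing only keys under 'prefix',
    // with the prefix removed (the behavior requested in FLINK-17789).
    static Map<String, String> toMapStrippingPrefix(Map<String, String> backing, String prefix) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : backing.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                out.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("k0", "v0");           // no prefix: filtered out
        conf.put("prefix.k1", "v1");    // prefix stripped: becomes k1
        System.out.println(toMapStrippingPrefix(conf, "prefix.")); // {k1=v1}
    }
}
```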



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-16478) add restApi to modify loglevel

2020-05-18 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109971#comment-17109971
 ] 

Chesnay Schepler edited comment on FLINK-16478 at 5/18/20, 7:22 AM:


Creating a custom DSL and implementing it for several logging backends sounds 
like quite a maintenance burden. Extensions to the DSL, and to the supported 
backends, could become quite an effort.

Additionally, from personal experience (recently when switching to Log4j2), 
projects working against programmatic logging APIs are quite a pain to handle. 
IMO we should try to stay away from any API bar SLF4J.

Have you considered making the logging files modifiable via the REST API 
instead? This would be way more general, and there would only be 1 way of 
changing the logging configuration in all deployments: change the configuration 
files.
Both logback and log4j2 can pick up changes from the configuration files, which, 
let's not forget, was one of the main reasons we switched in the first place.



was (Author: zentol):
Creating as custom DSL per logging backend sounds like quite a maintenance 
burden. Extensions to the DSL, and supported backends, could become quite an 
effort.

Additionally, from personal experience (recently when switching to Log4j2), 
projects working against programmatic logging APIs are quite a pain to handle. 
IMO we should try to stay away from any API bar SLF4J.

Have you considered making the logging files modifiable via the REST API 
instead? This would be way more general, and there would only be 1 way of 
changing the logging configuration in all deployments: change the configuration 
files.
Both logback and log4j2 can pick up changes from the configuration files, which 
let's not forget was one of the main reasons we switched in the first place..


> add restApi to modify loglevel 
> ---
>
> Key: FLINK-16478
> URL: https://issues.apache.org/jira/browse/FLINK-16478
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: xiaodao
>Priority: Minor
>
> Sometimes we may need to change the log level to get more information to 
> resolve a bug. Currently we need to stop the job, modify conf/log4j.properties, 
> and resubmit it. I think it's better to add a REST API to modify the log level.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16478) add restApi to modify loglevel

2020-05-18 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109971#comment-17109971
 ] 

Chesnay Schepler commented on FLINK-16478:
--

Creating a custom DSL per logging backend sounds like quite a maintenance 
burden. Extensions to the DSL, and to the supported backends, could become quite 
an effort.

Additionally, from personal experience (recently when switching to Log4j2), 
projects working against programmatic logging APIs are quite a pain to handle. 
IMO we should try to stay away from any API bar SLF4J.

Have you considered making the logging files modifiable via the REST API 
instead? This would be way more general, and there would only be 1 way of 
changing the logging configuration in all deployments: change the configuration 
files.
Both logback and log4j2 can pick up changes from the configuration files, which, 
let's not forget, was one of the main reasons we switched in the first place.


> add restApi to modify loglevel 
> ---
>
> Key: FLINK-16478
> URL: https://issues.apache.org/jira/browse/FLINK-16478
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: xiaodao
>Priority: Minor
>
> Sometimes we may need to change the log level to get more information to 
> resolve a bug. Currently we need to stop the job, modify conf/log4j.properties, 
> and resubmit it. I think it's better to add a REST API to modify the log level.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] godfreyhe commented on a change in pull request #12202: [FLINK-17791][table][streaming] Support collecting query results under all execution and network environments

2020-05-18 Thread GitBox


godfreyhe commented on a change in pull request #12202:
URL: https://github.com/apache/flink/pull/12202#discussion_r426364305



##
File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/ExecutionConfigOptions.java
##
@@ -219,6 +219,22 @@
"NOTE: MiniBatch only works for non-windowed 
aggregations currently. If " + TABLE_EXEC_MINIBATCH_ENABLED.key() +
" is set true, its value must be positive.");
 
+   // 

+   //  Result Collect Options
+   // 

+
+   public static final ConfigOption TABLE_EXEC_COLLECT_BATCH_SIZE 
=
+   key("table.exec.collect.batch.size")
+   .defaultValue(1)
+   .withDescription("The maximum number of results 
transmitted from the sink function to the client each time. " +
+   "This option can be set to a larger value if 
both network bandwidth and task manager's memory are enough.");
+
+   public static final ConfigOption 
TABLE_EXEC_COLLECT_SOCKET_TIMEOUT =
+   key("table.exec.collect.socket.timeout")
+   .defaultValue(1)

Review comment:
   use `Duration`

##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/DataStreamUtils.java
##
@@ -47,44 +44,30 @@
 * Returns an iterator to iterate over the elements of the DataStream.
 * @return The iterator
 */
-   public static  Iterator collect(DataStream stream) 
throws IOException {
-
+   public static  Iterator collect(DataStream stream) {
TypeSerializer serializer = 
stream.getType().createSerializer(
-   stream.getExecutionEnvironment().getConfig());
+   stream.getExecutionEnvironment().getConfig());
+   String id = UUID.randomUUID().toString();
+   String accumulatorName = "dataStreamCollect_" + id;
 
-   SocketStreamIterator iter = new 
SocketStreamIterator(serializer);
+   CollectSinkOperatorFactory factory = new 
CollectSinkOperatorFactory<>(serializer, accumulatorName);
+   CollectSinkOperator operator = (CollectSinkOperator) 
factory.getOperator();
+   CollectResultIterator iterator = new 
CollectResultIterator<>(
+   operator.getOperatorIdFuture(), serializer, 
accumulatorName);
+   CollectStreamSink sink = new CollectStreamSink<>(stream, 
factory);
+   sink.name("Data stream collect sink");
 
-   //Find out what IP of us should be given to CollectSink, that 
it will be able to connect to
StreamExecutionEnvironment env = 
stream.getExecutionEnvironment();
-   InetAddress clientAddress;
-
-   if (env instanceof RemoteStreamEnvironment) {
-   String host = ((RemoteStreamEnvironment) env).getHost();
-   int port = ((RemoteStreamEnvironment) env).getPort();
-   try {
-   clientAddress = 
ConnectionUtils.findConnectingAddress(new InetSocketAddress(host, port), 2000, 
400);
-   }
-   catch (Exception e) {
-   throw new IOException("Could not determine an 
suitable network address to " +
-   "receive back data from the 
streaming program.", e);
-   }
-   } else if (env instanceof LocalStreamEnvironment) {
-   clientAddress = InetAddress.getLoopbackAddress();
-   } else {
-   try {
-   clientAddress = InetAddress.getLocalHost();
-   } catch (UnknownHostException e) {
-   throw new IOException("Could not determine this 
machines own local address to " +
-   "receive back data from the 
streaming program.", e);
-   }
-   }
-
-   DataStreamSink sink = stream.addSink(new 
CollectSink(clientAddress, iter.getPort(), serializer));
-   sink.setParallelism(1); // It would not work if multiple 
instances would connect to the same port
+   env.addOperator(sink.getTransformation());
 
-   (new CallExecute(env, iter)).start();
+   try {
+   JobClient jobClient = 
env.executeAsync("DataStreamCollect_" + id);

Review comment:
   `id` is meaningless here; use "data stream collect"?

##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/sinks/BatchSelectTableSink.java
##
@@ 

[GitHub] [flink] flinkbot edited a comment on pull request #12202: [FLINK-17791][table][streaming] Support collecting query results under all execution and network environments

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12202:
URL: https://github.com/apache/flink/pull/12202#issuecomment-629806443


   
   ## CI report:
   
   * 9385209a72fa1314e604a3998a149d93a12617d9 UNKNOWN
   * 1b46780c0bf016a524a379cec83d120563ddcb9d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1636)
 
   * a13b735e5ecbb1c90479056ebda8d738a1a43584 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1661)
 
   * dc4b338596d9d3dee5dae9b1dfa3bebe1a5d902d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1662)
 
   * 3a44fbe9824e576068a4f172ca5738a7dd5cf9d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1689)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12199: [FLINK-17774] [table] supports all kinds of changes for select result

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12199:
URL: https://github.com/apache/flink/pull/12199#issuecomment-629793563


   
   ## CI report:
   
   * f11005c6596ecf41efd898ba324374948b2eb8cb Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1678)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12186: [FLINK-16383][task] Do not relay notifyCheckpointComplete to closed operators

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12186:
URL: https://github.com/apache/flink/pull/12186#issuecomment-629492236


   
   ## CI report:
   
   * f512eeef60f86107d945f975b1ca8dead57db9c4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1639)
 
   * 26b98ea10d229f1a49fbbc232dc5cdb83572ac3b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1688)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12181:
URL: https://github.com/apache/flink/pull/12181#issuecomment-629344595


   
   ## CI report:
   
   * 9a73076f072352ba5539bf558f90a94572fb6c36 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1510)
 
   * b954ba073cba912b98c5992b05caec91e7657871 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1687)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12210: [FLINK-17792][tests] Catch and log exception if jstack fails

2020-05-18 Thread GitBox


flinkbot commented on pull request #12210:
URL: https://github.com/apache/flink/pull/12210#issuecomment-629996745


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 079380181830d2fdfa7399c8014bb86e7776d004 (Mon May 18 
07:26:21 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17626) Migrate format properties to new FLIP-122

2020-05-18 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-17626:
-
Priority: Critical  (was: Major)

> Migrate format properties to new FLIP-122
> -
>
> Key: FLINK-17626
> URL: https://issues.apache.org/jira/browse/FLINK-17626
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Critical
> Fix For: 1.11.0
>
>
> format.parquet.compression -> parquet.compression
> format.field-delimiter -> csv.field-delimiter



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17626) Migrate format properties to new FLIP-122

2020-05-18 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-17626:
-
Parent: (was: FLINK-14256)
Issue Type: Bug  (was: Sub-task)

> Migrate format properties to new FLIP-122
> -
>
> Key: FLINK-17626
> URL: https://issues.apache.org/jira/browse/FLINK-17626
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.11.0
>
>
> format.parquet.compression -> parquet.compression
> format.field-delimiter -> csv.field-delimiter



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17077) FLINK_CONF_DIR environment variable to locate flink-conf.yaml in Docker container

2020-05-18 Thread Andrey Zagrebin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109973#comment-17109973
 ] 

Andrey Zagrebin commented on FLINK-17077:
-

When you say copying flink-conf.yaml to another path, 
do you mean providing a custom Flink image with two flink-conf.yaml locations?

Did you consider mounting another type of volume (not configuration) to 
'/opt/flink/conf'?
The volume could contain your default flink-conf.yaml, but with write access.

> FLINK_CONF_DIR environment variable to locate flink-conf.yaml in Docker 
> container
> -
>
> Key: FLINK-17077
> URL: https://issues.apache.org/jira/browse/FLINK-17077
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Docker
>Reporter: Eui Heo
>Assignee: Eui Heo
>Priority: Major
>  Labels: Kubernetes, docker
>
> To use flink-conf.yaml outside the Flink home directory, we should use 
> FLINK_CONF_DIR.
> But even when FLINK_CONF_DIR is provided, docker-entrypoint.sh in the official 
> flink-docker image doesn't know about FLINK_CONF_DIR, and it is ignored when 
> appending additional Flink properties to flink-conf.yaml. It would be good to 
> use FLINK_CONF_DIR as the location of flink-conf.yaml if the user provides it.
> https://github.com/apache/flink-docker/blob/master/docker-entrypoint.sh#L23



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17626) Fs connector should use FLIP-122 format options style

2020-05-18 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-17626:
-
Summary: Fs connector should use FLIP-122 format options style  (was: 
Migrate format properties to new FLIP-122)

> Fs connector should use FLIP-122 format options style
> -
>
> Key: FLINK-17626
> URL: https://issues.apache.org/jira/browse/FLINK-17626
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Critical
> Fix For: 1.11.0
>
>
> format.parquet.compression -> parquet.compression
> format.field-delimiter -> csv.field-delimiter



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zentol opened a new pull request #12211: [FLINK-17790][kafka] Fix JDK 11 compile error

2020-05-18 Thread GitBox


zentol opened a new pull request #12211:
URL: https://github.com/apache/flink/pull/12211


   `initializePartitioner` returns a `FlinkKafkaPartitioner`, without any 
generic parameter, as a result of which the type of the Optional is not 
well-defined.
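
   The inference failure can be reproduced outside Flink. A minimal sketch — the `Partitioner`/`initialize*` names below are stand-ins for the Kafka connector's types, not the real API:

   ```java
   import java.util.Optional;
   import java.util.function.Function;

   public class RawOptionalDemo {
       interface Partitioner<T> {}                     // stand-in for FlinkKafkaPartitioner<T>
       static class RoundRobin<T> implements Partitioner<T> {}

       // Returning the raw type (no type parameter) makes the inferred
       // Optional element type raw as well, which some javac versions reject
       // inside a generic lambda.
       @SuppressWarnings("rawtypes")
       static Partitioner initializeRaw() { return new RoundRobin(); }

       // The fix: declare the type variable so Optional<Partitioner<T>> is well-defined.
       static <T> Partitioner<T> initializeTyped() { return new RoundRobin<>(); }

       static <T> Optional<Partitioner<T>> lookup(Function<String, Optional<Partitioner<T>>> factory) {
           return factory.apply("round-robin");
       }

       public static void main(String[] args) {
           // lookup(name -> Optional.of(initializeRaw()));  // may fail to compile:
           // Optional<Partitioner> cannot be converted to Optional<Partitioner<T>>
           Optional<Partitioner<String>> p = lookup(name -> Optional.of(initializeTyped()));
           System.out.println(p.isPresent()); // true
       }
   }
   ```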



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17790) flink-connector-kafka-base does not compile on Java11

2020-05-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17790:
---
Labels: pull-request-available  (was: )

> flink-connector-kafka-base does not compile on Java11
> -
>
> Key: FLINK-17790
> URL: https://issues.apache.org/jira/browse/FLINK-17790
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1657&view=logs&j=946871de-358d-5815-3994-8175615bc253&t=e0240c62-4570-5d1c-51af-dd63d2093da1
> [ERROR] 
> /__w/3/s/flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaOptions.java:[271,41]
>  incompatible types: cannot infer type-variable(s) U,T,T,T,T
> (argument mismatch; bad return type in lambda expression
>   
> java.util.Optional
>  cannot be converted to java.util.Optional org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner>)
> [INFO] 1 error



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi opened a new pull request #12212: [FLINK-17626][fs-connector] Fs connector should use FLIP-122 format options style

2020-05-18 Thread GitBox


JingsongLi opened a new pull request #12212:
URL: https://github.com/apache/flink/pull/12212


   
   ## What is the purpose of the change
   
   Fs connector should use FLIP-122 format options style. Like:
   ```
   create table t (...) with (
 'connector'='filesystem',
 'path'='...',
 'format'='csv',
 'csv.field-delimiter'=';'
   )
   ```
   
   ## Brief change log
   
   - FileSystemFormatFactory implements FLIP-95 Factory
   - Update formats
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17626) Fs connector should use FLIP-122 format options style

2020-05-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17626:
---
Labels: pull-request-available  (was: )

> Fs connector should use FLIP-122 format options style
> -
>
> Key: FLINK-17626
> URL: https://issues.apache.org/jira/browse/FLINK-17626
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> format.parquet.compression -> parquet.compression
> format.field-delimiter -> csv.field-delimiter



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12211: [FLINK-17790][kafka] Fix JDK 11 compile error

2020-05-18 Thread GitBox


flinkbot commented on pull request #12211:
URL: https://github.com/apache/flink/pull/12211#issuecomment-63482


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 439f4f136e73eebb6c424f7f706ab91655516ed3 (Mon May 18 
07:34:15 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-17763) No log files when starting scala-shell

2020-05-18 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-17763:


Assignee: Chesnay Schepler

> No log files when starting scala-shell
> --
>
> Key: FLINK-17763
> URL: https://issues.apache.org/jira/browse/FLINK-17763
> Project: Flink
>  Issue Type: Bug
>  Components: Scala Shell
>Affects Versions: 1.11.0
>Reporter: Jeff Zhang
>Assignee: Chesnay Schepler
>Priority: Minor
>
> I see the following error when starting scala shell.
>  
> {code:java}
> Starting Flink Shell:
> ERROR StatusLogger No Log4j 2 configuration file found. Using default 
> configuration (logging only errors to the console), or user programmatically 
> provided configurations. Set system property 'log4j2.debug' to show Log4j 2 
> internal initialization logging. See 
> https://logging.apache.org/log4j/2.x/manual/configuration.html for 
> instructions on how to configure Log4j 2 {code}





[GitHub] [flink] flinkbot commented on pull request #12212: [FLINK-17626][fs-connector] Fs connector should use FLIP-122 format options style

2020-05-18 Thread GitBox


flinkbot commented on pull request #12212:
URL: https://github.com/apache/flink/pull/12212#issuecomment-630001771


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 0ec02542ed4721376e60ea71090cbb335885e6b0 (Mon May 18 
07:36:56 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   







[GitHub] [flink] GJL commented on a change in pull request #12204: [FLINK-17777][tests] Set HADOOP_CLASSPATH for Mesos TaskManagers

2020-05-18 Thread GitBox


GJL commented on a change in pull request #12204:
URL: https://github.com/apache/flink/pull/12204#discussion_r426424405



##
File path: flink-jepsen/src/jepsen/flink/db.clj
##
@@ -327,9 +327,14 @@
   :delay 4000)]
   (info "Submitted Flink Application via Marathon" marathon-response
 
+(defn- flink-mesos-configuration!
+  []
+  {:containerized.taskmanager.env.HADOOP_CLASSPATH (hadoop/hadoop-classpath!)})

Review comment:
   Actually, an easier fix would be to add
   ```
   "-Dcontainerized.taskmanager.env.HADOOP_CLASSPATH=$(/opt/hadoop/bin/hadoop 
classpath)"
   ```
   
   to the mesos app master command.
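   The suggestion above amounts to computing the Hadoop classpath on the app
   master host and forwarding it as a dynamic property. A minimal shell sketch
   of that idea (the `/opt/hadoop/bin/hadoop` path comes from the suggestion
   above; the fallback value is an assumption so the sketch runs without
   Hadoop installed):

   ```shell
   # Compute the Hadoop classpath; fall back to a placeholder when the
   # hadoop binary is not present (e.g. on a developer machine).
   hadoop_cp="$(/opt/hadoop/bin/hadoop classpath 2>/dev/null || echo '/opt/hadoop/etc/hadoop')"

   # Dynamic property to append to the Mesos app master command so that
   # TaskManager containers inherit HADOOP_CLASSPATH in their environment.
   dynamic_prop="-Dcontainerized.taskmanager.env.HADOOP_CLASSPATH=${hadoop_cp}"

   echo "$dynamic_prop"
   ```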









[jira] [Updated] (FLINK-17763) No log files when starting scala-shell

2020-05-18 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-17763:
-
Affects Version/s: (was: 1.11.0)
   1.9.2
   1.10.0

> No log files when starting scala-shell
> --
>
> Key: FLINK-17763
> URL: https://issues.apache.org/jira/browse/FLINK-17763
> Project: Flink
>  Issue Type: Bug
>  Components: Scala Shell
>Affects Versions: 1.9.2, 1.10.0
>Reporter: Jeff Zhang
>Assignee: Chesnay Schepler
>Priority: Minor
>
> I see the following error when starting scala shell.
>  
> {code:java}
> Starting Flink Shell:
> ERROR StatusLogger No Log4j 2 configuration file found. Using default 
> configuration (logging only errors to the console), or user programmatically 
> provided configurations. Set system property 'log4j2.debug' to show Log4j 2 
> internal initialization logging. See 
> https://logging.apache.org/log4j/2.x/manual/configuration.html for 
> instructions on how to configure Log4j 2 {code}





[jira] [Updated] (FLINK-17763) No log files when starting scala-shell

2020-05-18 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-17763:
-
Fix Version/s: 1.9.4
   1.10.2
   1.11.0

> No log files when starting scala-shell
> --
>
> Key: FLINK-17763
> URL: https://issues.apache.org/jira/browse/FLINK-17763
> Project: Flink
>  Issue Type: Bug
>  Components: Scala Shell
>Affects Versions: 1.9.2, 1.10.0
>Reporter: Jeff Zhang
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 1.11.0, 1.10.2, 1.9.4
>
>
> I see the following error when starting scala shell.
>  
> {code:java}
> Starting Flink Shell:
> ERROR StatusLogger No Log4j 2 configuration file found. Using default 
> configuration (logging only errors to the console), or user programmatically 
> provided configurations. Set system property 'log4j2.debug' to show Log4j 2 
> internal initialization logging. See 
> https://logging.apache.org/log4j/2.x/manual/configuration.html for 
> instructions on how to configure Log4j 2 {code}





[GitHub] [flink] zentol opened a new pull request #12213: Revert "[FLINK-13827][script] shell variable should be escaped"

2020-05-18 Thread GitBox


zentol opened a new pull request #12213:
URL: https://github.com/apache/flink/pull/12213


   This reverts commit 865cc4c7a39f7aa610a02cc4a0f41424edcd6279.
   
   The addition of quotes means that the log settings are passed as a single 
argument, rendering them ineffective.
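   A minimal reproduction (not Flink's actual script) of why the quoting broke
   things: with word splitting, the two `-D` settings arrive as two arguments,
   while the quoted expansion collapses them into one.

   ```shell
   # Two JVM log settings stored in one variable, as the scripts do.
   log_settings="-Dlog.file=/tmp/flink.log -Dlog4j.configuration=file:conf/log4j.properties"

   # Helper that reports how many arguments it received.
   count_args() { echo "$#"; }

   unquoted=$(count_args $log_settings)   # word splitting: two -D options
   quoted=$(count_args "$log_settings")   # quoted: one malformed option
   echo "$unquoted $quoted"               # 2 1
   ```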







[jira] [Commented] (FLINK-17730) HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart times out

2020-05-18 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109980#comment-17109980
 ] 

Robert Metzger commented on FLINK-17730:


In FLINK-17336, we introduced a timeout of 5 minutes. Based on the error 
reports there, we saw the following times w/o maven output:
- 6:30 minutes
- 6 m
- 5:07 m
- 11 m
- 25 m
- 8m
- 7m
- 6m
- 7m

Based on this analysis, I propose a timeout of 15 minutes. This would cause 
timeouts only for the single severely delayed case of 25 minutes.
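A quick back-of-the-envelope check of that proposal (durations above approximated in minutes; illustrative only):

```shell
# Observed durations of the test, in minutes (6:30 -> 6.5, 5:07 -> 5.12).
durations="6.5 6 5.12 11 25 8 7 6 7"
proposed_timeout=15

# Count how many observed runs would still exceed the proposed timeout.
exceeding=$(printf '%s\n' $durations | awk -v t="$proposed_timeout" '$1 > t { c++ } END { print c + 0 }')
echo "$exceeding"   # 1 -- only the 25-minute outlier
```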

> HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart
>  times out
> 
>
> Key: FLINK-17730
> URL: https://issues.apache.org/jira/browse/FLINK-17730
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, FileSystems, Tests
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1374&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8
> After 5 minutes 
> {code}
> 2020-05-15T06:56:38.1688341Z "main" #1 prio=5 os_prio=0 
> tid=0x7fa10800b800 nid=0x1161 runnable [0x7fa110959000]
> 2020-05-15T06:56:38.1688709Zjava.lang.Thread.State: RUNNABLE
> 2020-05-15T06:56:38.1689028Z  at 
> java.net.SocketInputStream.socketRead0(Native Method)
> 2020-05-15T06:56:38.1689496Z  at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> 2020-05-15T06:56:38.1689921Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:171)
> 2020-05-15T06:56:38.1690316Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:141)
> 2020-05-15T06:56:38.1690723Z  at 
> sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
> 2020-05-15T06:56:38.1691196Z  at 
> sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593)
> 2020-05-15T06:56:38.1691608Z  at 
> sun.security.ssl.InputRecord.read(InputRecord.java:532)
> 2020-05-15T06:56:38.1692023Z  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
> 2020-05-15T06:56:38.1692558Z  - locked <0xb94644f8> (a 
> java.lang.Object)
> 2020-05-15T06:56:38.1692946Z  at 
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
> 2020-05-15T06:56:38.1693371Z  at 
> sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> 2020-05-15T06:56:38.1694151Z  - locked <0xb9464d20> (a 
> sun.security.ssl.AppInputStream)
> 2020-05-15T06:56:38.1694908Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> 2020-05-15T06:56:38.1695475Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> 2020-05-15T06:56:38.1696007Z  at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> 2020-05-15T06:56:38.1696509Z  at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> 2020-05-15T06:56:38.1696993Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1697466Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1698069Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1698567Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699041Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699624Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1700090Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1700584Z  at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> 2020-05-15T06:56:38.1701282Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1701800Z  at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> 2020-05-15T06:56:38.1702328Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1702804Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:445)
> 2020-05-15T06:56:38.1703270Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$42/1204178174.execute(Unknown 
> Source)
> 2020-05-15T06:56:38.1703677Z  at 
> org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
> 2020-05-15T06:56:38.1704090Z  at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
> 2020-05-15T06:56:38

[GitHub] [flink] godfreyhe commented on a change in pull request #12188: [FLINK-17728] [sql-client] sql client supports parser statements via sql parser

2020-05-18 Thread GitBox


godfreyhe commented on a change in pull request #12188:
URL: https://github.com/apache/flink/pull/12188#discussion_r426426306



##
File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliClientTest.java
##
@@ -90,9 +92,9 @@ public void testFailedUpdateSubmission() throws Exception {
 
@Test
public void testSqlCompletion() throws IOException {
-   verifySqlCompletion("", 0, Arrays.asList("SELECT", "QUIT;", 
"RESET;"), Collections.emptyList());
-   verifySqlCompletion("SELEC", 5, 
Collections.singletonList("SELECT"), Collections.singletonList("QUIT;"));
-   verifySqlCompletion("SELE", 0, 
Collections.singletonList("SELECT"), Collections.singletonList("QUIT;"));
+   verifySqlCompletion("", 0, Arrays.asList("SOURCE", "QUIT;", 
"RESET;"), Collections.emptyList());

Review comment:
   Before this PR, all commands were hint candidates. After this refactor, 
only the commands that have a regex pattern are hint candidates; the rest fall 
back to Table API hinting (which delegates to `tableEnv.getCompletionHints` in 
LocalExecutor). In `MockExecutor`, the `completeStatement` method only returns 
`HintA` and `Hint B`.









[jira] [Updated] (FLINK-17167) Extend entry point script and docs with history server mode

2020-05-18 Thread Andrey Zagrebin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Zagrebin updated FLINK-17167:

Parent: (was: FLINK-17160)
Issue Type: Improvement  (was: Sub-task)

> Extend entry point script and docs with history server mode
> ---
>
> Key: FLINK-17167
> URL: https://issues.apache.org/jira/browse/FLINK-17167
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Docker
>Reporter: Andrey Zagrebin
>Assignee: Sebastian J.
>Priority: Major
> Fix For: 1.11.0
>
>






[GitHub] [flink] godfreyhe commented on a change in pull request #12188: [FLINK-17728] [sql-client] sql client supports parser statements via sql parser

2020-05-18 Thread GitBox


godfreyhe commented on a change in pull request #12188:
URL: https://github.com/apache/flink/pull/12188#discussion_r426428163



##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/SqlCommandParser.java
##
@@ -34,24 +61,130 @@ private SqlCommandParser() {
// private
}
 
-   public static Optional parse(String stmt) {
+   public static Optional parse(Parser sqlParser, String 
stmt) {
// normalize
stmt = stmt.trim();
// remove ';' at the end
if (stmt.endsWith(";")) {
stmt = stmt.substring(0, stmt.length() - 1).trim();
}
 
-   // parse
+   // parse statement via sql parser first
+   Optional callOpt = parseBySqlParser(sqlParser, 
stmt);
+   if (callOpt.isPresent()) {
+   return callOpt;
+   } else {
+   return parseByRegexMatching(stmt);
+   }
+   }
+
+   private static Optional parseBySqlParser(Parser 
sqlParser, String stmt) {
+   List operations;
+   try {
+   operations = sqlParser.parse(stmt);
+   } catch (Throwable e) {
+   if (e instanceof ValidationException) {
+   // can be parsed via sql parser, but is not 
validated.
+   // throw exception directly
+   throw new SqlExecutionException("Invalidate SQL 
statement.", e);
+   }
+   return Optional.empty();
+   }
+   if (operations.size() != 1) {
+   throw new SqlExecutionException("Only single statement 
is supported now.");
+   }
+
+   final SqlCommand cmd;
+   String[] operands = new String[0];
+   Operation operation = operations.get(0);
+   if (operation instanceof CatalogSinkModifyOperation) {
+   boolean overwrite = ((CatalogSinkModifyOperation) 
operation).isOverwrite();
+   cmd = overwrite ? SqlCommand.INSERT_OVERWRITE : 
SqlCommand.INSERT_INTO;
+   operands = new String[] { stmt };
+   } else if (operation instanceof CreateTableOperation) {
+   cmd = SqlCommand.CREATE_TABLE;
+   operands = new String[] { stmt };
+   } else if (operation instanceof DropTableOperation) {
+   cmd = SqlCommand.DROP_TABLE;
+   operands = new String[] { stmt };
+   } else if (operation instanceof AlterTableOperation) {
+   cmd = SqlCommand.ALTER_TABLE;
+   operands = new String[] { stmt };
+   } else if (operation instanceof CreateViewOperation) {
+   cmd = SqlCommand.CREATE_VIEW;
+   CreateViewOperation op = (CreateViewOperation) 
operation;
+   operands = new String[] { 
op.getViewIdentifier().asSerializableString(),
+   op.getCatalogView().getOriginalQuery() 
};
+   } else if (operation instanceof DropViewOperation) {
+   cmd = SqlCommand.DROP_VIEW;
+   operands = new String[] { ((DropViewOperation) 
operation).getViewIdentifier().asSerializableString() };
+   } else if (operation instanceof CreateDatabaseOperation) {
+   cmd = SqlCommand.CREATE_DATABASE;
+   operands = new String[] { stmt };
+   } else if (operation instanceof DropDatabaseOperation) {
+   cmd = SqlCommand.DROP_DATABASE;
+   operands = new String[] { stmt };
+   } else if (operation instanceof AlterDatabaseOperation) {
+   cmd = SqlCommand.ALTER_DATABASE;
+   operands = new String[] { stmt };
+   } else if (operation instanceof CreateCatalogOperation) {
+   cmd = SqlCommand.CREATE_CATALOG;
+   operands = new String[] { stmt };
+   } else if (operation instanceof UseCatalogOperation) {
+   cmd = SqlCommand.USE_CATALOG;
+   operands = new String[] { String.format("`%s`", 
((UseCatalogOperation) operation).getCatalogName()) };
+   } else if (operation instanceof UseDatabaseOperation) {
+   cmd = SqlCommand.USE;
+   UseDatabaseOperation op = ((UseDatabaseOperation) 
operation);
+   operands = new String[] { String.format("`%s`.`%s`", 
op.getCatalogName(), op.getDatabaseName()) };
+   } else if (operation instanceof ShowCatalogsOperation) {
+   cmd = SqlCommand.SHOW_CATA

[GitHub] [flink] wxplovecc commented on pull request #12035: [FLINK-17569][FileSystemsems]support ViewFileSystem when wait lease revoke of hadoop filesystem

2020-05-18 Thread GitBox


wxplovecc commented on pull request #12035:
URL: https://github.com/apache/flink/pull/12035#issuecomment-630006157


   @StephanEwen Ok, I will try to add a test







[GitHub] [flink] flinkbot commented on pull request #12213: Revert "[FLINK-13827][script] shell variable should be escaped"

2020-05-18 Thread GitBox


flinkbot commented on pull request #12213:
URL: https://github.com/apache/flink/pull/12213#issuecomment-630006554


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 74c9e440c684f255faa2075c12e4438590631dae (Mon May 18 
07:46:53 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   







[jira] [Updated] (FLINK-17071) Kubernetes session CLI logging output is either misleading or concerning

2020-05-18 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-17071:
-
Parent: FLINK-15856
Issue Type: Sub-task  (was: Bug)

> Kubernetes session CLI logging output is either misleading or concerning
> 
>
> Key: FLINK-17071
> URL: https://issues.apache.org/jira/browse/FLINK-17071
> Project: Flink
>  Issue Type: Sub-task
>  Components: Command Line Client, Deployment / Kubernetes
>Affects Versions: 1.11.0
>Reporter: Chesnay Schepler
>Priority: Major
>
> When running any command against the KubernetesSessionCLI it prints a log 
> message about having created a session cluster.
> This should certainly not appear when running a stop/help command.





[GitHub] [flink] tillrohrmann commented on a change in pull request #11987: [hotfix] Show hostname in failure error message

2020-05-18 Thread GitBox


tillrohrmann commented on a change in pull request #11987:
URL: https://github.com/apache/flink/pull/11987#discussion_r426428629



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java
##
@@ -1568,7 +1568,7 @@ private boolean transitionState(ExecutionState 
currentState, ExecutionState targ
if (error == null) {
LOG.info("{} ({}) switched from {} to {}.", 
getVertex().getTaskNameWithSubtaskIndex(), getAttemptId(), currentState, 
targetState);
} else {
-   LOG.info("{} ({}) switched from {} to {}.", 
getVertex().getTaskNameWithSubtaskIndex(), getAttemptId(), currentState, 
targetState, error);
+   LOG.info("{} ({}) switched from {} to {} on 
{}.", getVertex().getTaskNameWithSubtaskIndex(), getAttemptId(), currentState, 
targetState, getVertex().getCurrentAssignedResourceLocation(), error);

Review comment:
   ```suggestion
LOG.info("{} ({}) switched from {} to {} on 
{}.", getVertex().getTaskNameWithSubtaskIndex(), getAttemptId(), currentState, 
targetState, getAssignedResource(), error);
   ```









[jira] [Assigned] (FLINK-17768) UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel is instable

2020-05-18 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-17768:


Assignee: Zhijiang  (was: Piotr Nowojski)

> UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel
>  is instable
> -
>
> Key: FLINK-17768
> URL: https://issues.apache.org/jira/browse/FLINK-17768
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Assignee: Zhijiang
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel
>  and shouldPerformUnalignedCheckpointOnParallelRemoteChannel failed in azure:
> {code}
> 2020-05-16T12:41:32.3546620Z [ERROR] 
> shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)
>   Time elapsed: 18.865 s  <<< ERROR!
> 2020-05-16T12:41:32.3548739Z java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-16T12:41:32.3550177Z  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2020-05-16T12:41:32.3551416Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2020-05-16T12:41:32.3552959Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1665)
> 2020-05-16T12:41:32.3554979Z  at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74)
> 2020-05-16T12:41:32.3556584Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1645)
> 2020-05-16T12:41:32.3558068Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1627)
> 2020-05-16T12:41:32.3559431Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:158)
> 2020-05-16T12:41:32.3560954Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.shouldPerformUnalignedCheckpointOnLocalAndRemoteChannel(UnalignedCheckpointITCase.java:145)
> 2020-05-16T12:41:32.3562203Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-16T12:41:32.3563433Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-16T12:41:32.3564846Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-16T12:41:32.3565894Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-16T12:41:32.3566870Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-16T12:41:32.3568064Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-16T12:41:32.3569727Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-16T12:41:32.3570818Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-16T12:41:32.3571840Z  at 
> org.junit.rules.Verifier$1.evaluate(Verifier.java:35)
> 2020-05-16T12:41:32.3572771Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-05-16T12:41:32.3574008Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-05-16T12:41:32.3575406Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-05-16T12:41:32.3576476Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-05-16T12:41:32.3577253Z  at java.lang.Thread.run(Thread.java:748)
> 2020-05-16T12:41:32.3578228Z Caused by: 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-16T12:41:32.3579520Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-05-16T12:41:32.3580935Z  at 
> org.apache.flink.client.program.PerJobMiniClusterFactory$PerJobMiniClusterJobClient.lambda$getJobExecutionResult$2(PerJobMiniClusterFactory.java:186)
> 2020-05-16T12:41:32.3582361Z  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2020-05-16T12:41:32.3583456Z  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2020-05-16T12:41:32.3584816Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2020-05-16T12:41:32.3585874Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2

[jira] [Commented] (FLINK-17789) DelegatingConfiguration should remove prefix instead of add prefix in toMap

2020-05-18 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1711#comment-1711
 ] 

Chesnay Schepler commented on FLINK-17789:
--

I'd argue that the behavior of {{DelegatingConfiguration.addAllToProperties}} 
is wrong.

But fair enough, my example was bad.
Let's say "prefix.prefix.v1" instead; we set "prefix" as the prefix (duh), and 
add "prefix.v1" as an option.
You are then removing "prefix", assuming it to be an error, but then the 
lookup will fail since you check "prefix.v1" instead of the supposed 
"prefix.prefix.v1".

> DelegatingConfiguration should remove prefix instead of add prefix in toMap
> ---
>
> Key: FLINK-17789
> URL: https://issues.apache.org/jira/browse/FLINK-17789
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.11.0
>
>
> {code:java}
> Configuration conf = new Configuration();
> conf.setString("k0", "v0");
> conf.setString("prefix.k1", "v1");
> DelegatingConfiguration dc = new DelegatingConfiguration(conf, "prefix.");
> System.out.println(dc.getString("k0", "empty")); // empty
> System.out.println(dc.getString("k1", "empty")); // v1
> System.out.println(dc.toMap().get("k1")); // should be v1, but null
> System.out.println(dc.toMap().get("prefix.prefix.k1")); // should be null, 
> but v1
> {code}
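The semantics the ticket asks for — `toMap` on a delegating view should strip the configured prefix rather than add it again — can be sketched outside Flink (illustrative shell, not the actual Java fix) as:

```shell
prefix="prefix."

# Backing configuration, one key=value pair per line.
backing="k0=v0
prefix.k1=v1"

# Flatten the delegating view: keep only keys under the prefix,
# emitting them with the prefix stripped.
view=$(printf '%s\n' "$backing" | while IFS='=' read -r k v; do
  case "$k" in
    "$prefix"*) printf '%s=%s\n' "${k#"$prefix"}" "$v" ;;
  esac
done)

echo "$view"   # k1=v1 -- so dc.toMap().get("k1") would return v1
```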





[GitHub] [flink] rmetzger commented on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-18 Thread GitBox


rmetzger commented on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-630013296


   Note: changes merged through this PR broke master: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1686&view=results
   







[GitHub] [flink] wuchong commented on pull request #11837: [FLINK-16160][table-planner-blink] Fix proctime()/rowtime() doesn't w…

2020-05-18 Thread GitBox


wuchong commented on pull request #11837:
URL: https://github.com/apache/flink/pull/11837#issuecomment-630014046


   Passed: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1680&view=results
   Merging...







[GitHub] [flink] wuchong merged pull request #11837: [FLINK-16160][table-planner-blink] Fix proctime()/rowtime() doesn't w…

2020-05-18 Thread GitBox


wuchong merged pull request #11837:
URL: https://github.com/apache/flink/pull/11837


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #8693: [FLINK-8871] Support to cancel checkpoing via notification

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #8693:
URL: https://github.com/apache/flink/pull/8693#issuecomment-542518065


   
   ## CI report:
   
   * 40f0f8f733b268c3ddcf2864313b3ec67fe3757c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1637)
 
   * 01016af81f4aca6c28525ef1fe896986bf60592c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-17703) Default execution command fails due to 'benchmark' profile being inactive

2020-05-18 Thread Nico Kruber (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber closed FLINK-17703.
---
Resolution: Fixed

Fixed in [https://github.com/dataArtisans/flink-benchmarks] (master) via 
8de66edad

> Default execution command fails due to 'benchmark' profile being inactive
> --
>
> Key: FLINK-17703
> URL: https://issues.apache.org/jira/browse/FLINK-17703
> Project: Flink
>  Issue Type: Bug
>  Components: Benchmarks
>Affects Versions: 1.11.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
> Fix For: 1.11.0
>
>
> FLINK-17057 had some unfortunate side effects: by having the 
> {{include-netty-tcnative-dynamic}} profile active by default, the 
> {{benchmark}} profile was not active any more. Thus the following command 
> that was typically used for running the benchmarks failed unless the 
> {{benchmark}} profile was activated manually like this:
> {code:java}
> mvn -Dflink.version=1.11-SNAPSHOT clean package exec:exec -P benchmark{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16160) Schema#proctime and Schema#rowtime don't work in TableEnvironment#connect code path

2020-05-18 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17110012#comment-17110012
 ] 

Jark Wu commented on FLINK-16160:
-

Fixed in 
 - master (1.11.0): 0d9c46ea97e337acfcef932f86b73f3ff779c272
 - 1.10.2: TODO

> Schema#proctime and Schema#rowtime don't work in TableEnvironment#connect 
> code path
> ---
>
> Key: FLINK-16160
> URL: https://issues.apache.org/jira/browse/FLINK-16160
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In ConnectTableDescriptor#createTemporaryTable, the proctime/rowtime 
> properties are ignored, so the generated catalog table is not correct. We 
> should fix this to let TableEnvironment#connect() support watermarks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12103: [FLINK-16998][core] Add a changeflag to Row

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12103:
URL: https://github.com/apache/flink/pull/12103#issuecomment-627456433


   
   ## CI report:
   
   * 05ab513e7a7aed7481001668eecddf26b8fd05cb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1621)
 
   * b20298d51eda267f008430478e375804ffa0f9df Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1677)
 
   * 460711a0fe014e079ea2eb9c6e98da11e1946b48 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17077) FLINK_CONF_DIR environment variable to locate flink-conf.yaml in Docker container

2020-05-18 Thread Eui Heo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17110019#comment-17110019
 ] 

Eui Heo commented on FLINK-17077:
-

In either case, flink-conf.yaml is mounted only via the ConfigMap. If runtime 
properties need to be added, a pod initContainer copies the read-only 
flink-conf.yaml mounted from the ConfigMap to another location, and the 
docker-entrypoint script then appends to it.

> FLINK_CONF_DIR environment variable to locate flink-conf.yaml in Docker 
> container
> -
>
> Key: FLINK-17077
> URL: https://issues.apache.org/jira/browse/FLINK-17077
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Docker
>Reporter: Eui Heo
>Assignee: Eui Heo
>Priority: Major
>  Labels: Kubernetes, docker
>
> To use flink-conf.yaml outside the Flink home directory, we should use 
> FLINK_CONF_DIR.
> But even when FLINK_CONF_DIR is provided, docker-entrypoint.sh in the official 
> flink-docker image does not recognize FLINK_CONF_DIR, and it is ignored when 
> appending additional Flink properties to flink-conf.yaml. It would be good to 
> use FLINK_CONF_DIR as the location of flink-conf.yaml if the user provides it.
> https://github.com/apache/flink-docker/blob/master/docker-entrypoint.sh#L23



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17789) DelegatingConfiguration should remove prefix instead of add prefix in toMap

2020-05-18 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17110018#comment-17110018
 ] 

Jingsong Lee commented on FLINK-17789:
--

If there is an option = ConfigOptions.key("prefix.v1"), 
DelegatingConfiguration.get(option) should contain this key.

What I propose is to modify {{toMap}}, so that 
DelegatingConfiguration.toMap().get(option) also contains this key.

> DelegatingConfiguration should remove prefix instead of add prefix in toMap
> ---
>
> Key: FLINK-17789
> URL: https://issues.apache.org/jira/browse/FLINK-17789
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.11.0
>
>
> {code:java}
> Configuration conf = new Configuration();
> conf.setString("k0", "v0");
> conf.setString("prefix.k1", "v1");
> DelegatingConfiguration dc = new DelegatingConfiguration(conf, "prefix.");
> System.out.println(dc.getString("k0", "empty")); // empty
> System.out.println(dc.getString("k1", "empty")); // v1
> System.out.println(dc.toMap().get("k1")); // should be v1, but null
> System.out.println(dc.toMap().get("prefix.prefix.k1")); // should be null, 
> but v1
> {code}
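To illustrate the behavior the reporter expects, here is a minimal self-contained sketch using plain `java.util` maps (hypothetical class and variable names, not Flink's actual `DelegatingConfiguration` implementation): lookups prepend the prefix, so `toMap()` should strip it again, making the returned keys line up with what `get()` accepts.

```java
import java.util.HashMap;
import java.util.Map;

public class PrefixConfigDemo {
    public static void main(String[] args) {
        // Backing configuration, as in the bug report
        Map<String, String> backing = new HashMap<>();
        backing.put("k0", "v0");
        backing.put("prefix.k1", "v1");
        String prefix = "prefix.";

        // get()-style lookup: the prefix is prepended before reading the backing map
        String k1 = backing.getOrDefault(prefix + "k1", "empty");

        // Proposed toMap(): strip the prefix instead of adding it, so keys
        // match what get() accepts
        Map<String, String> view = new HashMap<>();
        for (Map.Entry<String, String> e : backing.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                view.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }

        System.out.println(k1);                           // v1
        System.out.println(view.get("k1"));               // v1
        System.out.println(view.get("prefix.prefix.k1")); // null
    }
}
```

With prefix stripping, `toMap().get("k1")` returns "v1" and the doubled key `"prefix.prefix.k1"` no longer appears, matching the expected output in the code sample above.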



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12211: [FLINK-17790][kafka] Fix JDK 11 compile error

2020-05-18 Thread GitBox


flinkbot commented on pull request #12211:
URL: https://github.com/apache/flink/pull/12211#issuecomment-630018919


   
   ## CI report:
   
   * 439f4f136e73eebb6c424f7f706ab91655516ed3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] tillrohrmann closed pull request #11987: [hotfix] Show hostname in failure error message

2020-05-18 Thread GitBox


tillrohrmann closed pull request #11987:
URL: https://github.com/apache/flink/pull/11987


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12210: [FLINK-17792][tests] Catch and log exception if jstack fails

2020-05-18 Thread GitBox


flinkbot commented on pull request #12210:
URL: https://github.com/apache/flink/pull/12210#issuecomment-630018809


   
   ## CI report:
   
   * 35a0b961d4d1a1dbb7485ff847c7b5e2a5068c80 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12202: [FLINK-17791][table][streaming] Support collecting query results under all execution and network environments

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12202:
URL: https://github.com/apache/flink/pull/12202#issuecomment-629806443


   
   ## CI report:
   
   * 9385209a72fa1314e604a3998a149d93a12617d9 UNKNOWN
   * 1b46780c0bf016a524a379cec83d120563ddcb9d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1636)
 
   * a13b735e5ecbb1c90479056ebda8d738a1a43584 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1661)
 
   * dc4b338596d9d3dee5dae9b1dfa3bebe1a5d902d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1662)
 
   * 3a44fbe9824e576068a4f172ca5738a7dd5cf9d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1689)
 
   * 3d70dafb893db6a61dcbc1b614349e9164aafeab UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12213: Revert "[FLINK-13827][script] shell variable should be escaped"

2020-05-18 Thread GitBox


flinkbot commented on pull request #12213:
URL: https://github.com/apache/flink/pull/12213#issuecomment-630019128


   
   ## CI report:
   
   * 74c9e440c684f255faa2075c12e4438590631dae UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12212: [FLINK-17626][fs-connector] Fs connector should use FLIP-122 format options style

2020-05-18 Thread GitBox


flinkbot commented on pull request #12212:
URL: https://github.com/apache/flink/pull/12212#issuecomment-630019032


   
   ## CI report:
   
   * 0ec02542ed4721376e60ea71090cbb335885e6b0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-17348) Expose metric group to ascendingTimestampExtractor

2020-05-18 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-17348.

Fix Version/s: 1.11.0
   Resolution: Fixed

This was added in the new interfaces of FLIP-126: 
https://issues.apache.org/jira/browse/FLINK-17653. You can now specify 
suppliers for both {{TimestampAssigner}} and {{WatermarkGenerator}}, and these 
get access to the metric group.

Could you check if this works for you? If not, please re-open this issue.

> Expose metric group to ascendingTimestampExtractor
> --
>
> Key: FLINK-17348
> URL: https://issues.apache.org/jira/browse/FLINK-17348
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Theo Diefenthal
>Priority: Major
> Fix For: 1.11.0
>
>
> A common use case in Flink + Kafka is that one has lots of Kafka partitions, 
> each with ascending timestamps.
> In my scenario, due to various operational reasons, we put log files from the 
> filesystem into Kafka, one server per partition, and then consume those in 
> Flink.
> Sometimes it can happen that we collect the files in the wrong order into 
> Kafka, which leads to ascending-timestamp violations. If that happens and we 
> have the default logging violation handler enabled, we produce several GB of 
> logs in a very short amount of time, which we would like to circumvent. 
> What we really want: track the number of violations in a metric and define 
> an alarm on that in our monitoring dashboard.
> Currently, there is sadly no way to reference the metric group from the 
> ascending timestamp extractor. I wish there could be something similar to 
> the open method on other rich functions. 
> My current workaround is to add a custom map task after the source. For 
> that task I need to pass on the Kafka partition from the source, which I 
> usually don't care about, and I need to keep track of each partition's current 
> timestamp manually, exactly the same way as the extractor does -> a 
> workaround that "pollutes" my pipeline quite a bit just for a single metric. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-17348) Expose metric group to ascendingTimestampExtractor

2020-05-18 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-17348:


Assignee: Aljoscha Krettek

> Expose metric group to ascendingTimestampExtractor
> --
>
> Key: FLINK-17348
> URL: https://issues.apache.org/jira/browse/FLINK-17348
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Theo Diefenthal
>Assignee: Aljoscha Krettek
>Priority: Major
> Fix For: 1.11.0
>
>
> A common use case in Flink + Kafka is that one has lots of Kafka partitions, 
> each with ascending timestamps.
> In my scenario, due to various operational reasons, we put log files from the 
> filesystem into Kafka, one server per partition, and then consume those in 
> Flink.
> Sometimes it can happen that we collect the files in the wrong order into 
> Kafka, which leads to ascending-timestamp violations. If that happens and we 
> have the default logging violation handler enabled, we produce several GB of 
> logs in a very short amount of time, which we would like to circumvent. 
> What we really want: track the number of violations in a metric and define 
> an alarm on that in our monitoring dashboard.
> Currently, there is sadly no way to reference the metric group from the 
> ascending timestamp extractor. I wish there could be something similar to 
> the open method on other rich functions. 
> My current workaround is to add a custom map task after the source. For 
> that task I need to pass on the Kafka partition from the source, which I 
> usually don't care about, and I need to keep track of each partition's current 
> timestamp manually, exactly the same way as the extractor does -> a 
> workaround that "pollutes" my pipeline quite a bit just for a single metric. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] rmetzger commented on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-18 Thread GitBox


rmetzger commented on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-630021791


   Change has been reverted 
https://github.com/apache/flink/commit/bf58725e7ad80f276a458007a2d5b890d8ffc4f5



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Reopened] (FLINK-15670) Provide a Kafka Source/Sink pair that aligns Kafka's Partitions and Flink's KeyGroups

2020-05-18 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reopened FLINK-15670:


I reverted the change in 
https://github.com/apache/flink/commit/bf58725e7ad80f276a458007a2d5b890d8ffc4f5 
because it didn't compile.

> Provide a Kafka Source/Sink pair that aligns Kafka's Partitions and Flink's 
> KeyGroups
> -
>
> Key: FLINK-15670
> URL: https://issues.apache.org/jira/browse/FLINK-15670
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream, Connectors / Kafka
>Reporter: Stephan Ewen
>Assignee: Yuan Mei
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Source/Sink pair would serve two purposes:
> 1. You can read topics that are already partitioned by key and process them 
> without partitioning them again (avoid shuffles)
> 2. You can use this to shuffle through Kafka, thereby decomposing the job 
> into smaller jobs and independent pipelined regions that fail over 
> independently.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] aljoscha commented on pull request #12132: [FLINK-17593][Connectors/FileSystem] Support arbitrary recovery mechanism for PartFileWriter

2020-05-18 Thread GitBox


aljoscha commented on pull request #12132:
URL: https://github.com/apache/flink/pull/12132#issuecomment-630022206


   No hard feelings, I hope! 😃
   
   I managed to make my version of the test also work, by allowing relative 
paths for the local filesystem: 
https://github.com/aljoscha/flink/commits/pr-12132-file-sink.
   
   @guoweiM I agree with you that actually testing the file moving might be a 
better approach (the only thing I didn't like there was the manual file 
moving/cleanup, but maybe that's ok). Could you rename the commit that changes 
the test to `[FLINK-17593] Turn BucketStateSerializerTest into an upgrade 
test`?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-docker] wangxiyuan opened a new pull request #23: [FLINK-14241][test]Add arm64 support for docker e2e test

2020-05-18 Thread GitBox


wangxiyuan opened a new pull request #23:
URL: https://github.com/apache/flink-docker/pull/23


   The docker image `openjdk:8-jre` only works for amd64.
   
   When running the test on arm64, use `arm64v8/openjdk:8-jre` instead.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on pull request #11725: [FLINK-15670][API] Provide a Kafka Source/Sink pair as KafkaShuffle

2020-05-18 Thread GitBox


curcur commented on pull request #11725:
URL: https://github.com/apache/flink/pull/11725#issuecomment-630025655


   It is because of the commit unifying the watermark strategy that was merged 
last night...



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangyang0918 opened a new pull request #12214: [FLINK-17618][flink-dist] Update the outdated comments in log4j2 configuration files

2020-05-18 Thread GitBox


wangyang0918 opened a new pull request #12214:
URL: https://github.com/apache/flink/pull/12214


   
   
   When we upgraded log4j to log4j2, some residual log4j logger configurations 
were left behind in the comments, like the following:
   
   log4j.properties and log4j-console.properties
   ```
   # Uncomment this if you want to _only_ change Flink's logging
   #log4j.logger.org.apache.flink=INFO
   ```
   
   We should update them to the log4j2 format.
   ```
   # Uncomment this if you want to _only_ change Flink's logging
   logger.flink.name = org.apache.flink
   logger.flink.level = INFO
   ```
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17618) Update the outdated comments in the log4j properties files

2020-05-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17618:
---
Labels: pull-request-available  (was: )

> Update the outdated comments in the log4j properties files
> --
>
> Key: FLINK-17618
> URL: https://issues.apache.org/jira/browse/FLINK-17618
> Project: Flink
>  Issue Type: Bug
>Reporter: Yang Wang
>Priority: Major
>  Labels: pull-request-available
>
> When we upgraded log4j to log4j2, some residual log4j logger configurations 
> were left behind in the comments, like the following:
> log4j.properties and log4j-console.properties
> {code:java}
> # Uncomment this if you want to _only_ change Flink's logging
> #log4j.logger.org.apache.flink=INFO
> {code}
> We should update them to the log4j2 format.
>  
> {code:java}
> # Uncomment this if you want to _only_ change Flink's logging
> logger.flink.name = org.apache.flink
> logger.flink.level = INFO
> {code}
>  
> cc [~chesnay]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] guoweiM commented on pull request #12132: [FLINK-17593][Connectors/FileSystem] Support arbitrary recovery mechanism for PartFileWriter

2020-05-18 Thread GitBox


guoweiM commented on pull request #12132:
URL: https://github.com/apache/flink/pull/12132#issuecomment-630028998


   NP



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangyang0918 commented on pull request #12214: [FLINK-17618][flink-dist] Update the outdated comments in log4j2 configuration files

2020-05-18 Thread GitBox


wangyang0918 commented on pull request #12214:
URL: https://github.com/apache/flink/pull/12214#issuecomment-630029338


   cc @zentol Could you have a look? When I updated `flink-console.sh`, I 
found this issue.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #8693: [FLINK-8871] Support to cancel checkpoing via notification

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #8693:
URL: https://github.com/apache/flink/pull/8693#issuecomment-542518065


   
   ## CI report:
   
   * 40f0f8f733b268c3ddcf2864313b3ec67fe3757c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1637)
 
   * 01016af81f4aca6c28525ef1fe896986bf60592c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1691)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12214: [FLINK-17618][flink-dist] Update the outdated comments in log4j2 configuration files

2020-05-18 Thread GitBox


flinkbot commented on pull request #12214:
URL: https://github.com/apache/flink/pull/12214#issuecomment-630030778


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 78d9cd9f70ac69a78f6d045d998a004492ce4cfe (Mon May 18 
08:33:12 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-17618).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17565) Bump fabric8 version from 4.5.2 to 4.9.1

2020-05-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17565:
---
Labels: pull-request-available  (was: )

> Bump fabric8 version from 4.5.2 to 4.9.1
> 
>
> Key: FLINK-17565
> URL: https://issues.apache.org/jira/browse/FLINK-17565
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Currently we are using version 4.5.2; it would be better to upgrade to 
> 4.9.1. Some of the reasons are as follows:
> # It removed the reapers that manually performed cascade deletion of 
> resources, leaving this up to the Kubernetes API server, which solves 
> FLINK-17566; more info: https://github.com/fabric8io/kubernetes-client/issues/1880
> # It fixed a regression in building Quantity values introduced in 4.7.0; 
> release note: https://github.com/fabric8io/kubernetes-client/issues/1953.
> # It provided better support for K8s 1.17, release note: 
> https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0.
> For more release notes, please refer to [fabric8 
> releases|https://github.com/fabric8io/kubernetes-client/releases].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhengcanbin opened a new pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.1

2020-05-18 Thread GitBox


zhengcanbin opened a new pull request #12215:
URL: https://github.com/apache/flink/pull/12215


   ## What is the purpose of the change
   
   Bump fabric8 kubernetes-client from 4.5.2 to 4.9.1.
   
   It fixes the issue of  
[FLINK-17566](https://issues.apache.org/jira/browse/FLINK-17566).
   
   ## Brief change log
   
 - Update pom.xml about the `kubernetes.client.version` property.
 - Update the NOTICE file.
 - Update all the test cases that previously expected the method 
`Quantity#getAmount` to return **${memoryAmount}${memoryFormat}**.
   
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12103: [FLINK-16998][core] Add a changeflag to Row

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12103:
URL: https://github.com/apache/flink/pull/12103#issuecomment-627456433


   
   ## CI report:
   
   * 05ab513e7a7aed7481001668eecddf26b8fd05cb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1621)
 
   * b20298d51eda267f008430478e375804ffa0f9df Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1677)
 
   * 460711a0fe014e079ea2eb9c6e98da11e1946b48 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1693)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12132: [FLINK-17593][Connectors/FileSystem] Support arbitrary recovery mechanism for PartFileWriter

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12132:
URL: https://github.com/apache/flink/pull/12132#issuecomment-628151415


   
   ## CI report:
   
   * 449b8494248924ab0c9a4a5187458933902a13a3 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1626)
 
   * 10ef0c696350fcd84866fde27f19ed2a0312ee4b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1683)
 
   * 79f3bb064a15bfde312932e603ae2a65e67545fd UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12197: [FLINK-17357][table-planner-blink] add 'DROP catalog' DDL to blink pl…

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12197:
URL: https://github.com/apache/flink/pull/12197#issuecomment-629768433


   
   ## CI report:
   
   * 15cb31fea013766318c482cd266aa294b9df225b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1603)
 
   * 1dd4c872d42e9b26907e350aeacb5f278b2b74c1 UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12139: [FLINK-16076] Translate "Queryable State" page into Chinese

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12139:
URL: https://github.com/apache/flink/pull/12139#issuecomment-628360660


   
   ## CI report:
   
   * b02e502fa3bb3ae57d4678d23868cad20d51caca Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1230)
 
   * 158aea29d67643ce7c7e140f32c32e4c8fc177be UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12210: [FLINK-17792][tests] Catch and log exception if jstack fails

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12210:
URL: https://github.com/apache/flink/pull/12210#issuecomment-630018809


   
   ## CI report:
   
   * 35a0b961d4d1a1dbb7485ff847c7b5e2a5068c80 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1695)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12204: [FLINK-17777][tests] Set HADOOP_CLASSPATH for Mesos TaskManagers

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12204:
URL: https://github.com/apache/flink/pull/12204#issuecomment-629847515


   
   ## CI report:
   
   * 097ac2b06f10900c435115c24e699c5328ee3227 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1649)
 
   * 1b0f95eb45256569484cff22599d080d968841f0 UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12211: [FLINK-17790][kafka] Fix JDK 11 compile error

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12211:
URL: https://github.com/apache/flink/pull/12211#issuecomment-630018919


   
   ## CI report:
   
   * 439f4f136e73eebb6c424f7f706ab91655516ed3 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1696)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12202: [FLINK-17791][table][streaming] Support collecting query results under all execution and network environments

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12202:
URL: https://github.com/apache/flink/pull/12202#issuecomment-629806443


   
   ## CI report:
   
   * 9385209a72fa1314e604a3998a149d93a12617d9 UNKNOWN
   * a13b735e5ecbb1c90479056ebda8d738a1a43584 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1661)
 
   * dc4b338596d9d3dee5dae9b1dfa3bebe1a5d902d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1662)
 
   * 3a44fbe9824e576068a4f172ca5738a7dd5cf9d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1689)
 
   * 3d70dafb893db6a61dcbc1b614349e9164aafeab Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1694)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12212: [FLINK-17626][fs-connector] Fs connector should use FLIP-122 format options style

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12212:
URL: https://github.com/apache/flink/pull/12212#issuecomment-630019032


   
   ## CI report:
   
   * 0ec02542ed4721376e60ea71090cbb335885e6b0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1698)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12213: Revert "[FLINK-13827][script] shell variable should be escaped"

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12213:
URL: https://github.com/apache/flink/pull/12213#issuecomment-630019128


   
   ## CI report:
   
   * 74c9e440c684f255faa2075c12e4438590631dae Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1699)
 
   
   







[jira] [Issue Comment Deleted] (FLINK-17384) support read hbase conf dir from flink.conf just like hadoop_conf

2020-05-18 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-17384:
-
Comment: was deleted

(was: Hi [~liyu], I have committed my code, but the log below is not 
related to my code:

2020-05-14T13:46:32.9352627Z [ERROR] Failures: 
 2020-05-14T13:46:32.9361371Z [ERROR] 
KafkaProducerExactlyOnceITCase>KafkaProducerTestBase.testExactlyOnceRegularSink:309->KafkaProducerTestBase.testExactlyOnce:370
 Test failed: Job execution failed

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (end-to-end-tests) 
on project flink-metrics-availability-test: Unable to generate classpath: 
org.apache.maven.artifact.resolver.ArtifactResolutionException: Could not 
transfer artifact org.apache.maven.surefire:surefire-grouper:jar:2.22.1 from/to 
alicloud-mvn-mirror 
([http://mavenmirror.alicloud.dak8s.net:/repository/maven-central/):] Entry 
[id:18][route:{}->http://mavenmirror.alicloud.dak8s.net:][state:null] has 
not been leased from this pool.

 

How can this be solved, and why does it happen? And how can I make the 
[flinkbot|https://github.com/flinkbot] rerun the Azure build?)

> support read hbase conf dir from flink.conf just like hadoop_conf
> -
>
> Key: FLINK-17384
> URL: https://issues.apache.org/jira/browse/FLINK-17384
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Deployment / Scripts
>Affects Versions: 1.10.0
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> hi all:
> when a user interacts with HBase via SQL, they currently need to do 2 things:
>  # export HBASE_CONF_DIR
>  # add the HBase libs to flink_lib (because the HBase connector doesn't ship 
> the client's (and other) jars)
> i think this needs to be optimized.
> for 1) we should support reading the HBase conf dir from flink.conf, just like 
> hadoop_conf in config.sh
> for 2) we should support HBASE_CLASSPATH in config.sh. In case of jar 
> conflicts such as guava, we should also support flink-hbase-shaded just like 
> hadoop does
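
A minimal sketch of what (1) could look like in `config.sh`, assuming a hypothetical `env.hbase.conf.dir` key in `flink-conf.yaml` — that key name is invented for illustration and is not an existing Flink option:

```shell
#!/usr/bin/env bash
# Sketch only: resolve HBASE_CONF_DIR the way config.sh resolves the Hadoop
# conf dir. The key "env.hbase.conf.dir" is a hypothetical config option.

FLINK_CONF_DIR="${FLINK_CONF_DIR:-./conf}"
HBASE_CONF_DIR="${HBASE_CONF_DIR:-}"

# 1) If the environment variable is unset, read it from flink-conf.yaml.
if [ -z "$HBASE_CONF_DIR" ] && [ -f "$FLINK_CONF_DIR/flink-conf.yaml" ]; then
    HBASE_CONF_DIR=$(grep -E '^env\.hbase\.conf\.dir:' \
        "$FLINK_CONF_DIR/flink-conf.yaml" | head -n1 | cut -d' ' -f2)
fi

# 2) Last resort: the conventional system location.
if [ -z "$HBASE_CONF_DIR" ] && [ -d "/etc/hbase/conf" ]; then
    HBASE_CONF_DIR="/etc/hbase/conf"
fi

export HBASE_CONF_DIR
echo "HBASE_CONF_DIR=$HBASE_CONF_DIR"
```

The environment variable still wins when set, mirroring how existing tooling lets an explicit export override configuration.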





[GitHub] [flink] aljoscha commented on a change in pull request #12132: [FLINK-17593][Connectors/FileSystem] Support arbitrary recovery mechanism for PartFileWriter

2020-05-18 Thread GitBox


aljoscha commented on a change in pull request #12132:
URL: https://github.com/apache/flink/pull/12132#discussion_r426448856



##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/TestUtils.java
##
@@ -414,4 +419,50 @@ public void clear() {
backingList.clear();
}
}
+
+   static class LocalRecoverableWriterForBucketStateMigrationTest extends 
LocalRecoverableWriter {

Review comment:
   I think we should move this directly to `BucketStateSerializerTest` and 
rename it to `AlwaysRelativeLocalRecoverableWriter`, and maybe give the prefix 
as a parameter. Otherwise `"src/test/resources/"` will appear in more than one 
place and has to be kept in sync.

##
File path: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/filesystem/TestUtils.java
##
@@ -414,4 +419,50 @@ public void clear() {
backingList.clear();
}
}
+
+   static class LocalRecoverableWriterForBucketStateMigrationTest extends 
LocalRecoverableWriter {
+
+   final String prefix = "src/test/resources/";
+
+   LocalRecoverableWriterForBucketStateMigrationTest() {
+   super(new LocalFileSystem());
+   }
+
+   public RecoverableFsDataOutputStream open(Path filePath) throws 
IOException {

Review comment:
   I think maybe we don't need this if we take the minor change from my 
branch that allows relative files: 
https://github.com/aljoscha/flink/commit/99810151e4dd37a5b61b89c8056a1ad1202d75a4
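
The suggestion above — pass the prefix in once instead of hard-coding `"src/test/resources/"` in several places — can be sketched without any Flink dependencies. This is a hypothetical illustration in plain `java.nio`; `PrefixedResourceResolver` and `resolve` are invented names, not part of Flink's `RecoverableWriter` API:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical stand-in for the suggested AlwaysRelativeLocalRecoverableWriter:
// the test-resource prefix is supplied once, via the constructor, so
// "src/test/resources/" no longer has to be kept in sync across test classes.
class PrefixedResourceResolver {

    private final Path prefix;

    PrefixedResourceResolver(String prefix) {
        this.prefix = Paths.get(prefix);
    }

    // Resolve a path relative to the configured prefix, mirroring how the
    // writer in the diff above rewrites paths before opening them.
    Path resolve(String relative) {
        return prefix.resolve(relative);
    }
}

public class PrefixDemo {
    public static void main(String[] args) {
        PrefixedResourceResolver resolver =
                new PrefixedResourceResolver("src/test/resources/");
        // On Unix-style paths this prints "src/test/resources/bucket-state/v1".
        System.out.println(resolver.resolve("bucket-state/v1"));
    }
}
```

The same shape would apply to the real writer: the one test that knows the prefix constructs the writer with it, and `BucketStateSerializerTest` stays the single owner of the `"src/test/resources/"` constant.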








