[GitHub] zhijiangW commented on a change in pull request #6698: [FLINK-8581][network] Move flushing remote subpartitions from OutputFlusher to netty

2018-10-15 Thread GitBox
zhijiangW commented on a change in pull request #6698: [FLINK-8581][network] 
Move flushing remote subpartitions from OutputFlusher to netty
URL: https://github.com/apache/flink/pull/6698#discussion_r225059097
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/ResultPartition.java
 ##
 @@ -268,6 +291,25 @@ public void flushAll() {
}
}
 
+   @Override
+   public void flushAllLocal() {
+   for (ResultSubpartition localSubpartition : localSubpartitions) {
+   localSubpartition.flush();
+   }
+   }
+
+   @Override
+   public void setFlushTimeout(long flushTimeout) {
+   checkState(!this.flushTimeout.isPresent(), "Flush timeout can not be set twice");
+   for (ResultSubpartition subpartition: remoteSubpartitionsMissingPeriodicFlushes) {
+   checkState(subpartition.isLocal().isPresent());
+   checkState(!subpartition.isLocal().get());
+   subpartition.registerPeriodicFlush(flushTimeout);
+   }
+   remoteSubpartitionsMissingPeriodicFlushes.clear();
 
 Review comment:
   Yes, we may need to add synchronization in the above three methods to protect 
`localSubpartitions`, `remoteSubpartitionsMissingPeriodicFlushes`, and 
`flushTimeout`, which are accessed by the task thread, the netty thread, and the 
flusher thread.
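   For illustration, a minimal sketch of what such synchronization could look 
like, using one lock around the shared collections. The class, field, and method 
names below only mirror the snippet above and are assumptions for the sketch, not 
the actual change in this PR:

```java
// Sketch: guard the state shared by the task, netty, and flusher threads with a
// single lock. All names are assumptions mirroring the snippet above.
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

class ResultPartitionSketch {

	interface SubpartitionSketch {
		void flush();
		void registerPeriodicFlush(long timeout);
	}

	private final Object lock = new Object();
	private final List<SubpartitionSketch> localSubpartitions = new ArrayList<>();
	private final List<SubpartitionSketch> remoteSubpartitionsMissingPeriodicFlushes = new ArrayList<>();
	private Optional<Long> flushTimeout = Optional.empty();

	/** Called by the flusher thread. */
	void flushAllLocal() {
		synchronized (lock) {
			for (SubpartitionSketch subpartition : localSubpartitions) {
				subpartition.flush();
			}
		}
	}

	/** Called by the netty thread once the timeout is known; must only happen once. */
	void setFlushTimeout(long timeout) {
		synchronized (lock) {
			if (flushTimeout.isPresent()) {
				throw new IllegalStateException("Flush timeout can not be set twice");
			}
			flushTimeout = Optional.of(timeout);
			for (SubpartitionSketch subpartition : remoteSubpartitionsMissingPeriodicFlushes) {
				subpartition.registerPeriodicFlush(timeout);
			}
			remoteSubpartitionsMissingPeriodicFlushes.clear();
		}
	}

	/** Called by the task thread when a new local subpartition is registered. */
	void addLocalSubpartition(SubpartitionSketch subpartition) {
		synchronized (lock) {
			localSubpartitions.add(subpartition);
		}
	}
}
```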


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-8581) Improve performance for low latency network

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649831#comment-16649831
 ] 

ASF GitHub Bot commented on FLINK-8581:
---

zhijiangW commented on a change in pull request #6698: [FLINK-8581][network] 
Move flushing remote subpartitions from OutputFlusher to netty
URL: https://github.com/apache/flink/pull/6698#discussion_r225059097
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/ResultPartition.java
 ##
 @@ -268,6 +291,25 @@ public void flushAll() {
}
}
 
+   @Override
+   public void flushAllLocal() {
+   for (ResultSubpartition localSubpartition : localSubpartitions) {
+   localSubpartition.flush();
+   }
+   }
+
+   @Override
+   public void setFlushTimeout(long flushTimeout) {
+   checkState(!this.flushTimeout.isPresent(), "Flush timeout can not be set twice");
+   for (ResultSubpartition subpartition: remoteSubpartitionsMissingPeriodicFlushes) {
+   checkState(subpartition.isLocal().isPresent());
+   checkState(!subpartition.isLocal().get());
+   subpartition.registerPeriodicFlush(flushTimeout);
+   }
+   remoteSubpartitionsMissingPeriodicFlushes.clear();
 
 Review comment:
   Yes, we may need to add synchronization in the above three methods to protect 
`localSubpartitions`, `remoteSubpartitionsMissingPeriodicFlushes`, and 
`flushTimeout`, which are accessed by the task thread, the netty thread, and the 
flusher thread.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improve performance for low latency network
> ---
>
> Key: FLINK-8581
> URL: https://issues.apache.org/jira/browse/FLINK-8581
> Project: Flink
>  Issue Type: Improvement
>  Components: Network
>Affects Versions: 1.5.0
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10546) Remove StandaloneMiniCluster

2018-10-15 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649835#comment-16649835
 ] 

Till Rohrmann commented on FLINK-10546:
---

The {{StandaloneMiniCluster}} should no longer be in use. It got replaced by 
the {{MiniCluster}}.

> Remove StandaloneMiniCluster
> 
>
> Key: FLINK-10546
> URL: https://issues.apache.org/jira/browse/FLINK-10546
> Project: Flink
>  Issue Type: Task
>  Components: JobManager
>Affects Versions: 1.7.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
> Fix For: 1.7.0
>
>
> Before doing as the title says, I want to check: is the 
> {{StandaloneMiniCluster}} still in use?
>  IIRC there once was a deprecation of {{start-local.sh}}, but I don’t know if 
> it is relevant.
>  Further, this class seems unused in all other places. Since it depends on 
> legacy mode, I wonder whether we can *JUST* remove it.
>  
> cc [~till.rohrmann] and [~Zentol]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10547) Remove LegacyCLI

2018-10-15 Thread vinoyang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649842#comment-16649842
 ] 

vinoyang commented on FLINK-10547:
--

[~till.rohrmann] Can this class be removed?

> Remove LegacyCLI
> 
>
> Key: FLINK-10547
> URL: https://issues.apache.org/jira/browse/FLINK-10547
> Project: Flink
>  Issue Type: Sub-task
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] florianschmidt1994 opened a new pull request #6843: [FLINK-8482][DataStream] Allow users to choose from different timestamp strategies for interval join

2018-10-15 Thread GitBox
florianschmidt1994 opened a new pull request #6843: [FLINK-8482][DataStream] 
Allow users to choose from different timestamp strategies for interval join
URL: https://github.com/apache/flink/pull/6843
 
 
   ## What is the purpose of the change
   
   This change will allow users to choose from different timestamp strategies 
when using the IntervalJoin in the DataStream API. A timestamp strategy defines 
which timestamp gets assigned to two elements that are joined together. 
   
   The usage in the API looks as follows
   
   ```
   leftKeyedStream.intervalJoin(rightKeyedStream)
   .between(, )
   .assignLeftTimestamp()
   .process()
   ``` 
   
   The possible options to pick from are
   - `assignLeftTimestamp()`
   - `assignRightTimestamp()`
   - `assignMinTimestamp()`
   - `assignMaxTimestamp()`
   
   In certain scenarios the watermark emitted by the IntervalJoinOperator needs 
to be delayed, in order to not produce any late data. This is only necessary 
when choosing `assignMinTimestamp()` or for certain combinations of upper / 
lower bound and `assignRightTimestamp()` / `assignLeftTimestamp()`. It is never 
necessary when using `assignMaxTimestamp()`.
   
   Delaying watermarks is implemented by subtracting the necessary delay from 
each incoming watermark's timestamp before emitting it. Note that this only 
takes effect with respect to downstream operators. The internal timer service 
will still be advanced by the original watermark, so that timers (including 
those defined by the user) will still be fired with respect to the original 
watermark.
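   
   As an illustration of the delay mechanism described above, here is a minimal 
plain-Java sketch of a helper that forwards delayed watermarks while still 
tracking the original time for timers. It is not the actual IntervalJoinOperator 
change; all names below are made up for the sketch.
   
```java
// Plain-Java sketch of the delay idea: the watermark forwarded downstream is
// reduced by the required delay, while the "internal" time used for timers still
// advances with the original watermark.
import java.util.function.LongConsumer;

final class DelayedWatermarkForwarder {

	private final long watermarkDelay;      // e.g. derived from the join bounds
	private final LongConsumer downstream;  // receives the delayed watermark
	private long internalTime = Long.MIN_VALUE;

	DelayedWatermarkForwarder(long watermarkDelay, LongConsumer downstream) {
		this.watermarkDelay = watermarkDelay;
		this.downstream = downstream;
	}

	void onWatermark(long watermarkTimestamp) {
		// Timers (including user-defined ones) still fire against the original time.
		internalTime = Math.max(internalTime, watermarkTimestamp);
		// Downstream operators only see the delayed watermark, so joined records
		// whose assigned timestamp lags by up to the delay are not late.
		downstream.accept(watermarkTimestamp - watermarkDelay);
	}

	long currentInternalTime() {
		return internalTime;
	}
}
```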
   
   ## Brief change log
 - Implement timestamp strategies in IntervalJoinOperator
 - Add to Scala DataStream API
 - Add to Java DataStream API
 - Add test cases to IntervalJoinITCase (both java & scala)
   
   ## Verifying this change
 - Extends integration tests in IntervalJoinITCase
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: yes
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): yes
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? JavaDocs
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-8482) Implement and expose option to use min / max / left / right timestamp for joined streamrecords

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649849#comment-16649849
 ] 

ASF GitHub Bot commented on FLINK-8482:
---

florianschmidt1994 opened a new pull request #6843: [FLINK-8482][DataStream] 
Allow users to choose from different timestamp strategies for interval join
URL: https://github.com/apache/flink/pull/6843
 
 
   ## What is the purpose of the change
   
   This change will allow users to choose from different timestamp strategies 
when using the IntervalJoin in the DataStream API. A timestamp strategy defines 
which timestamp gets assigned to two elements that are joined together. 
   
   The usage in the API looks as follows
   
   ```
   leftKeyedStream.intervalJoin(rightKeyedStream)
   .between(, )
   .assignLeftTimestamp()
   .process()
   ``` 
   
   The possible options to pick from are
   - `assignLeftTimestamp()`
   - `assignRightTimestamp()`
   - `assignMinTimestamp()`
   - `assignMaxTimestamp()`
   
   In certain scenarios the watermark emitted by the IntervalJoinOperator needs 
to be delayed, in order to not produce any late data. This is only necessary 
when choosing `assignMinTimestamp()` or for certain combinations of upper / 
lower bound and `assignRightTimestamp()` / `assignLeftTimestamp()`. It is never 
necessary when using `assignMaxTimestamp()`.
   
   Delaying watermarks is implemented by subtracting the necessary delay from 
each incoming watermark's timestamp before emitting it. Note that this only 
takes effect with respect to downstream operators. The internal timer service 
will still be advanced by the original watermark, so that timers (including 
those defined by the user) will still be fired with respect to the original 
watermark.
   
   ## Brief change log
 - Implement timestamp strategies in IntervalJoinOperator
 - Add to Scala DataStream API
 - Add to Java DataStream API
 - Add test cases to IntervalJoinITCase (both java & scala)
   
   ## Verifying this change
 - Extends integration tests in IntervalJoinITCase
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: yes
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): yes
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? JavaDocs
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Implement and expose option to use min / max / left / right timestamp for 
> joined streamrecords
> --
>
> Key: FLINK-8482
> URL: https://issues.apache.org/jira/browse/FLINK-8482
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Florian Schmidt
>Assignee: Florian Schmidt
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> The idea: Expose the option of which timestamp to use for the result of a 
> join. The idea currently floating around includes the following options:
>  * _left_: Use the timestamp of the element in a join that came from the left 
> stream
>  * _right_: Use the timestamp of the element in a join that came from the 
> right stream
>  * _max_: Use the max timestamp of both elements in a join
>  * _min_: Use the min timestamp of both elements in a join
> All options but _max_ require introducing delayed watermarks in the 
> operator, which is something that we were hesitant to do until now. This 
> should probably undergo discussion once more in order to see if / how we 
> want to add this now. We could even think of exposing this in a more general 
> way by adding a base operator that allows delayed watermarks.
> This will also be groundwork for supporting outer joins (FLINK-8483), for 
> which we need watermark delays in any case to provide correctness. 
> Also the API for this needs some feedback in order to expose this in a 
> powerful, yet clear way. In my PoC at [1] I used the naming convention left / 
> right to refer to specific streams, which currently is not something the API 
> exposes to the user; we should probably use something more clever here.
> Example
> {code:java}
> keyedStreamOne.
>

[jira] [Updated] (FLINK-8482) Implement and expose option to use min / max / left / right timestamp for joined streamrecords

2018-10-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-8482:
--
Labels: pull-request-available  (was: )

> Implement and expose option to use min / max / left / right timestamp for 
> joined streamrecords
> --
>
> Key: FLINK-8482
> URL: https://issues.apache.org/jira/browse/FLINK-8482
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Florian Schmidt
>Assignee: Florian Schmidt
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> The idea: Expose the option of which timestamp to use for the result of a 
> join. The idea currently floating around includes the following options:
>  * _left_: Use the timestamp of the element in a join that came from the left 
> stream
>  * _right_: Use the timestamp of the element in a join that came from the 
> right stream
>  * _max_: Use the max timestamp of both elements in a join
>  * _min_: Use the min timestamp of both elements in a join
> All options but _max_ require introducing delayed watermarks in the 
> operator, which is something that we were hesitant to do until now. This 
> should probably undergo discussion once more in order to see if / how we 
> want to add this now. We could even think of exposing this in a more general 
> way by adding a base operator that allows delayed watermarks.
> This will also be groundwork for supporting outer joins (FLINK-8483), for 
> which we need watermark delays in any case to provide correctness. 
> Also the API for this needs some feedback in order to expose this in a 
> powerful, yet clear way. In my PoC at [1] I used the naming convention left / 
> right to refer to specific streams, which currently is not something the API 
> exposes to the user; we should probably use something more clever here.
> Example
> {code:java}
> keyedStreamOne.
>.intervalJoin(keyedStreamTwo)
>.between(Time.milliseconds(0), Time.milliseconds(2))
>.assignMinTimestamp() // alternative .assignMaxTimestamp() .assignLeftTimestamp() .assignRightTimestamp()
>.process(new ProcessJoinFunction() { /* impl */ })
> {code}
>  
> Any feedback is highly appreciated!
> [1] 
> https://github.com/florianschmidt1994/flink/tree/flink-8482-add-option-for-different-timestamp-strategies-to-interval-join-operator
> cc [~StephanEwen] [~kkl0u]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10551) Remove legacy REST handlers

2018-10-15 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-10551:


 Summary: Remove legacy REST handlers
 Key: FLINK-10551
 URL: https://issues.apache.org/jira/browse/FLINK-10551
 Project: Flink
  Issue Type: Sub-task
  Components: REST
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.7.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zentol closed pull request #6842: [FLINK-10549] [test] Remove Legacy* Tests based on legacy mode

2018-10-15 Thread GitBox
zentol closed pull request #6842: [FLINK-10549] [test] Remove Legacy* Tests 
based on legacy mode
URL: https://github.com/apache/flink/pull/6842
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-formats/flink-avro/src/test/java/org/apache/flink/formats/avro/LegacyAvroExternalJarProgramITCase.java
 
b/flink-formats/flink-avro/src/test/java/org/apache/flink/formats/avro/LegacyAvroExternalJarProgramITCase.java
deleted file mode 100644
index 1dd56a775aa..000
--- 
a/flink-formats/flink-avro/src/test/java/org/apache/flink/formats/avro/LegacyAvroExternalJarProgramITCase.java
+++ /dev/null
@@ -1,92 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.formats.avro;
-
-import org.apache.flink.client.program.PackagedProgram;
-import org.apache.flink.configuration.ConfigConstants;
-import org.apache.flink.configuration.Configuration;
-import org.apache.flink.configuration.JobManagerOptions;
-import org.apache.flink.core.fs.Path;
-import org.apache.flink.formats.avro.testjar.AvroExternalJarProgram;
-import org.apache.flink.runtime.minicluster.LocalFlinkMiniCluster;
-import org.apache.flink.test.util.TestEnvironment;
-import org.apache.flink.util.TestLogger;
-
-import org.junit.Assert;
-import org.junit.Test;
-
-import java.io.File;
-import java.net.URL;
-import java.util.Collections;
-
-/**
- * IT case for the {@link AvroExternalJarProgram}.
- */
-public class LegacyAvroExternalJarProgramITCase extends TestLogger {
-
-   private static final String JAR_FILE = "maven-test-jar.jar";
-
-   private static final String TEST_DATA_FILE = "/testdata.avro";
-
-   @Test
-   public void testExternalProgram() {
-
-   LocalFlinkMiniCluster testMiniCluster = null;
-
-   try {
-   int parallelism = 4;
-   Configuration config = new Configuration();
-   config.setInteger(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS, parallelism);
-   testMiniCluster = new LocalFlinkMiniCluster(config, false);
-   testMiniCluster.start();
-
-   String jarFile = JAR_FILE;
-   String testData = getClass().getResource(TEST_DATA_FILE).toString();
-
-   PackagedProgram program = new PackagedProgram(new File(jarFile), new String[] { testData });
-
-   TestEnvironment.setAsContext(
-   testMiniCluster,
-   parallelism,
-   Collections.singleton(new Path(jarFile)),
-   Collections.emptyList());
-
-   config.setString(JobManagerOptions.ADDRESS, "localhost");
-   config.setInteger(JobManagerOptions.PORT, testMiniCluster.getLeaderRPCPort());
-
-   program.invokeInteractiveModeForExecution();
-   }
-   catch (Throwable t) {
-   System.err.println(t.getMessage());
-   t.printStackTrace();
-   Assert.fail("Error during the packaged program execution: " + t.getMessage());
-   }
-   finally {
-   TestEnvironment.unsetAsContext();
-
-   if (testMiniCluster != null) {
-   try {
-   testMiniCluster.stop();
-   } catch (Throwable t) {
-   // ignore
-   }
-   }
-   }
-   }
-}
diff --git 
a/flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/LegacyPartialConsumePipelinedResultTest.java
 
b/flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/LegacyPartialConsumePipelinedResultTest.java
deleted file mode 100644
index b83067c11f4..000
--- 
a/flink-runtime/src/te

[jira] [Updated] (FLINK-10549) Remove Legacy* Tests based on legacy mode

2018-10-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10549:
---
Labels: pull-request-available  (was: )

> Remove Legacy* Tests based on legacy mode
> -
>
> Key: FLINK-10549
> URL: https://issues.apache.org/jira/browse/FLINK-10549
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> This Jira tracks the removal of tests that are based on legacy mode, start with 
> "LegacyXXX", and are covered by a test named "XXX".
>  # We already have {{JobRetrievalITCase}} and all test cases of 
> {{LegacyJobRetrievalITCase}} are covered. Just simply remove it.
>  # We already have {{AccumulatorLiveITCase}} and all test cases of 
> {{LegacyAccumulatorLiveITCase}} are covered. Just simply remove it.
>  # We already have {{TaskCancelAsyncProducerConsumerITCase}} and all test 
> cases of {{LegacyTaskCancelAsyncProducerConsumerITCase}} are covered. Just 
> simply remove it.
>  # We already have {{ClassLoaderITCase}} and all test cases of 
> {{LegacyClassLoaderITCase}} are covered. Just simply remove it.
>  # We already have {{SlotCountExceedingParallelismTest}} and all test cases 
> of {{LegacySlotCountExceedingParallelismTest}} are covered. Just simply 
> remove it.
>  # We already have {{ScheduleOrUpdateConsumersTest}} and all test cases of 
> {{LegacyScheduleOrUpdateConsumersTest}} are covered. Just simply remove it.
>  # We already have {{PartialConsumePipelinedResultTest}} and all test cases 
> of {{LegacyPartialConsumePipelinedResultTest}} are covered. Just simply 
> remove it.
>  # We already have {{AvroExternalJarProgramITCase}} and all test cases of 
> {{LegacyAvroExternalJarProgramITCase}} are covered. Just simply remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (FLINK-10549) Remove Legacy* Tests based on legacy mode

2018-10-15 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-10549:
--

> Remove Legacy* Tests based on legacy mode
> -
>
> Key: FLINK-10549
> URL: https://issues.apache.org/jira/browse/FLINK-10549
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> This Jira tracks the removal of tests that are based on legacy mode, start with 
> "LegacyXXX", and are covered by a test named "XXX".
>  # We already have {{JobRetrievalITCase}} and all test cases of 
> {{LegacyJobRetrievalITCase}} are covered. Just simply remove it.
>  # We already have {{AccumulatorLiveITCase}} and all test cases of 
> {{LegacyAccumulatorLiveITCase}} are covered. Just simply remove it.
>  # We already have {{TaskCancelAsyncProducerConsumerITCase}} and all test 
> cases of {{LegacyTaskCancelAsyncProducerConsumerITCase}} are covered. Just 
> simply remove it.
>  # We already have {{ClassLoaderITCase}} and all test cases of 
> {{LegacyClassLoaderITCase}} are covered. Just simply remove it.
>  # We already have {{SlotCountExceedingParallelismTest}} and all test cases 
> of {{LegacySlotCountExceedingParallelismTest}} are covered. Just simply 
> remove it.
>  # We already have {{ScheduleOrUpdateConsumersTest}} and all test cases of 
> {{LegacyScheduleOrUpdateConsumersTest}} are covered. Just simply remove it.
>  # We already have {{PartialConsumePipelinedResultTest}} and all test cases 
> of {{LegacyPartialConsumePipelinedResultTest}} are covered. Just simply 
> remove it.
>  # We already have {{AvroExternalJarProgramITCase}} and all test cases of 
> {{LegacyAvroExternalJarProgramITCase}} are covered. Just simply remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10549) Remove Legacy* Tests based on legacy mode

2018-10-15 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-10549.

Resolution: Fixed

master: 50b1a5287a315ee117cae480c68a8db131f13b25

> Remove Legacy* Tests based on legacy mode
> -
>
> Key: FLINK-10549
> URL: https://issues.apache.org/jira/browse/FLINK-10549
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> This Jira tracks the removal of tests that are based on legacy mode, start with 
> "LegacyXXX", and are covered by a test named "XXX".
>  # We already have {{JobRetrievalITCase}} and all test cases of 
> {{LegacyJobRetrievalITCase}} are covered. Just simply remove it.
>  # We already have {{AccumulatorLiveITCase}} and all test cases of 
> {{LegacyAccumulatorLiveITCase}} are covered. Just simply remove it.
>  # We already have {{TaskCancelAsyncProducerConsumerITCase}} and all test 
> cases of {{LegacyTaskCancelAsyncProducerConsumerITCase}} are covered. Just 
> simply remove it.
>  # We already have {{ClassLoaderITCase}} and all test cases of 
> {{LegacyClassLoaderITCase}} are covered. Just simply remove it.
>  # We already have {{SlotCountExceedingParallelismTest}} and all test cases 
> of {{LegacySlotCountExceedingParallelismTest}} are covered. Just simply 
> remove it.
>  # We already have {{ScheduleOrUpdateConsumersTest}} and all test cases of 
> {{LegacyScheduleOrUpdateConsumersTest}} are covered. Just simply remove it.
>  # We already have {{PartialConsumePipelinedResultTest}} and all test cases 
> of {{LegacyPartialConsumePipelinedResultTest}} are covered. Just simply 
> remove it.
>  # We already have {{AvroExternalJarProgramITCase}} and all test cases of 
> {{LegacyAvroExternalJarProgramITCase}} are covered. Just simply remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10549) Remove Legacy* Tests based on legacy mode

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649851#comment-16649851
 ] 

ASF GitHub Bot commented on FLINK-10549:


zentol closed pull request #6842: [FLINK-10549] [test] Remove Legacy* Tests 
based on legacy mode
URL: https://github.com/apache/flink/pull/6842
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-formats/flink-avro/src/test/java/org/apache/flink/formats/avro/LegacyAvroExternalJarProgramITCase.java
 
b/flink-formats/flink-avro/src/test/java/org/apache/flink/formats/avro/LegacyAvroExternalJarProgramITCase.java
deleted file mode 100644
index 1dd56a775aa..000
--- 
a/flink-formats/flink-avro/src/test/java/org/apache/flink/formats/avro/LegacyAvroExternalJarProgramITCase.java
+++ /dev/null
@@ -1,92 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.formats.avro;
-
-import org.apache.flink.client.program.PackagedProgram;
-import org.apache.flink.configuration.ConfigConstants;
-import org.apache.flink.configuration.Configuration;
-import org.apache.flink.configuration.JobManagerOptions;
-import org.apache.flink.core.fs.Path;
-import org.apache.flink.formats.avro.testjar.AvroExternalJarProgram;
-import org.apache.flink.runtime.minicluster.LocalFlinkMiniCluster;
-import org.apache.flink.test.util.TestEnvironment;
-import org.apache.flink.util.TestLogger;
-
-import org.junit.Assert;
-import org.junit.Test;
-
-import java.io.File;
-import java.net.URL;
-import java.util.Collections;
-
-/**
- * IT case for the {@link AvroExternalJarProgram}.
- */
-public class LegacyAvroExternalJarProgramITCase extends TestLogger {
-
-   private static final String JAR_FILE = "maven-test-jar.jar";
-
-   private static final String TEST_DATA_FILE = "/testdata.avro";
-
-   @Test
-   public void testExternalProgram() {
-
-   LocalFlinkMiniCluster testMiniCluster = null;
-
-   try {
-   int parallelism = 4;
-   Configuration config = new Configuration();
-   config.setInteger(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS, parallelism);
-   testMiniCluster = new LocalFlinkMiniCluster(config, false);
-   testMiniCluster.start();
-
-   String jarFile = JAR_FILE;
-   String testData = getClass().getResource(TEST_DATA_FILE).toString();
-
-   PackagedProgram program = new PackagedProgram(new File(jarFile), new String[] { testData });
-
-   TestEnvironment.setAsContext(
-   testMiniCluster,
-   parallelism,
-   Collections.singleton(new Path(jarFile)),
-   Collections.emptyList());
-
-   config.setString(JobManagerOptions.ADDRESS, "localhost");
-   config.setInteger(JobManagerOptions.PORT, testMiniCluster.getLeaderRPCPort());
-
-   program.invokeInteractiveModeForExecution();
-   }
-   catch (Throwable t) {
-   System.err.println(t.getMessage());
-   t.printStackTrace();
-   Assert.fail("Error during the packaged program execution: " + t.getMessage());
-   }
-   finally {
-   TestEnvironment.unsetAsContext();
-
-   if (testMiniCluster != null) {
-   try {
-   testMiniCluster.stop();
-   } catch (Throwable t) {
-   // ignore
-   }
-   }
-   }
-   }
-}
diff --git 
a/flink-runtime/src/test/java/org/apache/flink/runtime/io/network/p

[GitHub] florianschmidt1994 commented on issue #6843: [FLINK-8482][DataStream] Allow users to choose from different timestamp strategies for interval join

2018-10-15 Thread GitBox
florianschmidt1994 commented on issue #6843: [FLINK-8482][DataStream] Allow 
users to choose from different timestamp strategies for interval join
URL: https://github.com/apache/flink/pull/6843#issuecomment-429741544
 
 
   @kl0u 👋 Maybe you can have a look at this?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-10549) Remove Legacy* tests

2018-10-15 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-10549:
-
Summary: Remove Legacy* tests  (was: Remove Legacy* Tests based on legacy 
mode)

> Remove Legacy* tests
> 
>
> Key: FLINK-10549
> URL: https://issues.apache.org/jira/browse/FLINK-10549
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> This Jira tracks the removal of tests that are based on legacy mode, start with 
> "LegacyXXX", and are covered by a test named "XXX".
>  # We already have {{JobRetrievalITCase}} and all test cases of 
> {{LegacyJobRetrievalITCase}} are covered. Just simply remove it.
>  # We already have {{AccumulatorLiveITCase}} and all test cases of 
> {{LegacyAccumulatorLiveITCase}} are covered. Just simply remove it.
>  # We already have {{TaskCancelAsyncProducerConsumerITCase}} and all test 
> cases of {{LegacyTaskCancelAsyncProducerConsumerITCase}} are covered. Just 
> simply remove it.
>  # We already have {{ClassLoaderITCase}} and all test cases of 
> {{LegacyClassLoaderITCase}} are covered. Just simply remove it.
>  # We already have {{SlotCountExceedingParallelismTest}} and all test cases 
> of {{LegacySlotCountExceedingParallelismTest}} are covered. Just simply 
> remove it.
>  # We already have {{ScheduleOrUpdateConsumersTest}} and all test cases of 
> {{LegacyScheduleOrUpdateConsumersTest}} are covered. Just simply remove it.
>  # We already have {{PartialConsumePipelinedResultTest}} and all test cases 
> of {{LegacyPartialConsumePipelinedResultTest}} are covered. Just simply 
> remove it.
>  # We already have {{AvroExternalJarProgramITCase}} and all test cases of 
> {{LegacyAvroExternalJarProgramITCase}} are covered. Just simply remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8482) Implement and expose option to use min / max / left / right timestamp for joined streamrecords

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649853#comment-16649853
 ] 

ASF GitHub Bot commented on FLINK-8482:
---

florianschmidt1994 commented on issue #6843: [FLINK-8482][DataStream] Allow 
users to choose from different timestamp strategies for interval join
URL: https://github.com/apache/flink/pull/6843#issuecomment-429741544
 
 
   @kl0u 👋 Maybe you can have a look at this?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Implement and expose option to use min / max / left / right timestamp for 
> joined streamrecords
> --
>
> Key: FLINK-8482
> URL: https://issues.apache.org/jira/browse/FLINK-8482
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Florian Schmidt
>Assignee: Florian Schmidt
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> The idea: Expose the option of which timestamp to use for the result of a 
> join. The idea currently floating around includes the following options:
>  * _left_: Use the timestamp of the element in a join that came from the left 
> stream
>  * _right_: Use the timestamp of the element in a join that came from the 
> right stream
>  * _max_: Use the max timestamp of both elements in a join
>  * _min_: Use the min timestamp of both elements in a join
> All options but _max_ require introducing delayed watermarks in the 
> operator, which is something that we were hesitant to do until now. This 
> should probably undergo discussion once more in order to see if / how we 
> want to add this now. We could even think of exposing this in a more general 
> way by adding a base operator that allows delayed watermarks.
> This will also be groundwork for supporting outer joins (FLINK-8483), for 
> which we need watermark delays in any case to provide correctness. 
> Also the API for this needs some feedback in order to expose this in a 
> powerful, yet clear way. In my PoC at [1] I used the naming convention left / 
> right to refer to specific streams, which currently is not something the API 
> exposes to the user; we should probably use something more clever here.
> Example
> {code:java}
> keyedStreamOne.
>.intervalJoin(keyedStreamTwo)
>.between(Time.milliseconds(0), Time.milliseconds(2))
>.assignMinTimestamp() // alternative .assignMaxTimestamp() .assignLeftTimestamp() .assignRightTimestamp()
>.process(new ProcessJoinFunction() { /* impl */ })
> {code}
>  
> Any feedback is highly appreciated!
> [1] 
> https://github.com/florianschmidt1994/flink/tree/flink-8482-add-option-for-different-timestamp-strategies-to-interval-join-operator
> cc [~StephanEwen] [~kkl0u]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10549) Remove Legacy* tests

2018-10-15 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-10549.

Resolution: Fixed

> Remove Legacy* tests
> 
>
> Key: FLINK-10549
> URL: https://issues.apache.org/jira/browse/FLINK-10549
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> This Jira tracks the removal of tests that are based on legacy mode, start with 
> "LegacyXXX", and are covered by a test named "XXX".
>  # We already have {{JobRetrievalITCase}} and all test cases of 
> {{LegacyJobRetrievalITCase}} are covered. Just simply remove it.
>  # We already have {{AccumulatorLiveITCase}} and all test cases of 
> {{LegacyAccumulatorLiveITCase}} are covered. Just simply remove it.
>  # We already have {{TaskCancelAsyncProducerConsumerITCase}} and all test 
> cases of {{LegacyTaskCancelAsyncProducerConsumerITCase}} are covered. Just 
> simply remove it.
>  # We already have {{ClassLoaderITCase}} and all test cases of 
> {{LegacyClassLoaderITCase}} are covered. Just simply remove it.
>  # We already have {{SlotCountExceedingParallelismTest}} and all test cases 
> of {{LegacySlotCountExceedingParallelismTest}} are covered. Just simply 
> remove it.
>  # We already have {{ScheduleOrUpdateConsumersTest}} and all test cases of 
> {{LegacyScheduleOrUpdateConsumersTest}} are covered. Just simply remove it.
>  # We already have {{PartialConsumePipelinedResultTest}} and all test cases 
> of {{LegacyPartialConsumePipelinedResultTest}} are covered. Just simply 
> remove it.
>  # We already have {{AvroExternalJarProgramITCase}} and all test cases of 
> {{LegacyAvroExternalJarProgramITCase}} are covered. Just simply remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10168) support filtering files by modified/created time in StreamExecutionEnvironment.readFile()

2018-10-15 Thread Fabian Hueske (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649863#comment-16649863
 ] 

Fabian Hueske commented on FLINK-10168:
---

I like the idea of extending filters for FileInputFormat.

The FileInputFormat already has some filters based on the file name. If we 
extend the logic, the existing code and API need to be taken into account.

Not sure if I'd change the API of StreamExecutionEnvironment. I think we should 
limit the changes to FileInputFormat.
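
To make the proposal quoted below concrete, here is a hypothetical sketch of a 
generic filter that sees more file attributes than just the path (the ticket's 
second option). The {{FileFilter}} interface, its method, and the attribute set 
are illustrations only, not an existing Flink API.

{code:java}
// Hypothetical sketch of the ticket's second option: one filter over several
// file attributes instead of only the path. Not an existing Flink API.
import java.time.Instant;

@FunctionalInterface
interface FileFilter {

	/** Returns true if the file should be skipped by the input format. */
	boolean filterOut(String path, Instant modificationTime, long sizeInBytes);

	/** Example rule: skip every file modified before the given cut-off time. */
	static FileFilter modifiedBefore(Instant cutOff) {
		return (path, modificationTime, size) -> modificationTime.isBefore(cutOff);
	}
}
{code}

Combining rules (a path pattern plus a modification-time cut-off, for example) 
then amounts to composing such predicates, which keeps the change local to 
FileInputFormat as suggested above.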

> support filtering files by modified/created time in 
> StreamExecutionEnvironment.readFile()
> -
>
> Key: FLINK-10168
> URL: https://issues.apache.org/jira/browse/FLINK-10168
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.6.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.7.0
>
>
> support filtering files by modified/created time in 
> {{StreamExecutionEnvironment.readFile()}}
> for example, in a source dir with lots of files, we only want to read files 
> that are created or modified after a specific time.
> This API can expose a generic filter function of files, and let users define 
> filtering rules. Currently Flink only supports filtering files by path. What 
> this means is that, currently, the API is 
> {{FileInputFormat.setFilesFilters(PathFilter)}}, which takes only one file path 
> filter. A more generic API that can take more filters could look like this: 1) 
> {{FileInputFormat.setFilesFilters(List (PathFilter, ModifiedTimeFilter, ... 
> ))}}
> 2) or {{FileInputFormat.setFilesFilters(FileFilter)}}, where {{FileFilter}} 
> exposes all file attributes that Flink's file system can provide, like path 
> and modified time.
> I lean towards the 2nd option, because it gives users more flexibility to 
> define complex filtering rules based on combinations of file attributes.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] aljoscha commented on a change in pull request #6703: [FLINK-9697] Provide connector for Kafka 2.0.0

2018-10-15 Thread GitBox
aljoscha commented on a change in pull request #6703: [FLINK-9697] Provide 
connector for Kafka 2.0.0
URL: https://github.com/apache/flink/pull/6703#discussion_r225070780
 
 

 ##
 File path: 
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/table/descriptors/KafkaValidator.java
 ##
 @@ -36,6 +36,7 @@
public static final String CONNECTOR_VERSION_VALUE_09 = "0.9";
public static final String CONNECTOR_VERSION_VALUE_010 = "0.10";
public static final String CONNECTOR_VERSION_VALUE_011 = "0.11";
+   public static final String CONNECTOR_VERSION_VALUE_20 = "2.0";
 
 Review comment:
   I think it's okay to leave it in for now because I don't want to mess with 
how the table API source/sink validation works for now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-9697) Provide connector for Kafka 2.0.0

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649868#comment-16649868
 ] 

ASF GitHub Bot commented on FLINK-9697:
---

aljoscha commented on a change in pull request #6703: [FLINK-9697] Provide 
connector for Kafka 2.0.0
URL: https://github.com/apache/flink/pull/6703#discussion_r225070780
 
 

 ##
 File path: 
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/table/descriptors/KafkaValidator.java
 ##
 @@ -36,6 +36,7 @@
public static final String CONNECTOR_VERSION_VALUE_09 = "0.9";
public static final String CONNECTOR_VERSION_VALUE_010 = "0.10";
public static final String CONNECTOR_VERSION_VALUE_011 = "0.11";
+   public static final String CONNECTOR_VERSION_VALUE_20 = "2.0";
 
 Review comment:
   I think it's okay to leave it in for now because I don't want to mess with 
how the table API source/sink validation works for now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Provide connector for Kafka 2.0.0
> -
>
> Key: FLINK-9697
> URL: https://issues.apache.org/jira/browse/FLINK-9697
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> Kafka 2.0.0 would be released soon.
> Here is vote thread:
> [http://search-hadoop.com/m/Kafka/uyzND1vxnEd23QLxb?subj=+VOTE+2+0+0+RC1]
> We should provide connector for Kafka 2.0.0 once it is released.
> Upgrade to 2.0 documentation : 
> http://kafka.apache.org/20/documentation.html#upgrade_2_0_0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10552) Provide RichAsyncFunction for scala API

2018-10-15 Thread Shimin Yang (JIRA)
Shimin Yang created FLINK-10552:
---

 Summary: Provide RichAsyncFunction for scala API
 Key: FLINK-10552
 URL: https://issues.apache.org/jira/browse/FLINK-10552
 Project: Flink
  Issue Type: Bug
  Components: DataStream API
Reporter: Shimin Yang
Assignee: Shimin Yang


Currently, only the Java API provides a RichAsyncFunction abstract class, while 
the Scala API does not. It would be nice to provide the same functionality for 
the Scala API. 
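
For reference, a sketch of how the existing Java class is typically extended; the 
class and method names below are taken from the Java DataStream API around Flink 
1.6, and the exact signatures should be treated as an assumption of this sketch.

{code:java}
// Sketch: extending the Java RichAsyncFunction, which gives access to lifecycle
// hooks and the runtime context. Signatures assumed from the Flink 1.6 Java API.
import java.util.Collections;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class UpperCaseAsyncFunction extends RichAsyncFunction<String, String> {

	@Override
	public void open(Configuration parameters) throws Exception {
		// Rich variant: getRuntimeContext() and open()/close() are available here,
		// e.g. to create an async client once per subtask.
	}

	@Override
	public void asyncInvoke(String input, ResultFuture<String> resultFuture) {
		// A real implementation would issue a non-blocking request here.
		resultFuture.complete(Collections.singletonList(input.toUpperCase()));
	}
}
{code}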



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua commented on a change in pull request #6703: [FLINK-9697] Provide connector for Kafka 2.0.0

2018-10-15 Thread GitBox
yanghua commented on a change in pull request #6703: [FLINK-9697] Provide 
connector for Kafka 2.0.0
URL: https://github.com/apache/flink/pull/6703#discussion_r225071935
 
 

 ##
 File path: 
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/table/descriptors/KafkaValidator.java
 ##
 @@ -36,6 +36,7 @@
public static final String CONNECTOR_VERSION_VALUE_09 = "0.9";
public static final String CONNECTOR_VERSION_VALUE_010 = "0.10";
public static final String CONNECTOR_VERSION_VALUE_011 = "0.11";
+   public static final String CONNECTOR_VERSION_VALUE_20 = "2.0";
 
 Review comment:
   OK, agree.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-9697) Provide connector for Kafka 2.0.0

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649874#comment-16649874
 ] 

ASF GitHub Bot commented on FLINK-9697:
---

yanghua commented on a change in pull request #6703: [FLINK-9697] Provide 
connector for Kafka 2.0.0
URL: https://github.com/apache/flink/pull/6703#discussion_r225071935
 
 

 ##
 File path: 
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/table/descriptors/KafkaValidator.java
 ##
 @@ -36,6 +36,7 @@
public static final String CONNECTOR_VERSION_VALUE_09 = "0.9";
public static final String CONNECTOR_VERSION_VALUE_010 = "0.10";
public static final String CONNECTOR_VERSION_VALUE_011 = "0.11";
+   public static final String CONNECTOR_VERSION_VALUE_20 = "2.0";
 
 Review comment:
   OK, agree.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Provide connector for Kafka 2.0.0
> -
>
> Key: FLINK-9697
> URL: https://issues.apache.org/jira/browse/FLINK-9697
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> Kafka 2.0.0 would be released soon.
> Here is vote thread:
> [http://search-hadoop.com/m/Kafka/uyzND1vxnEd23QLxb?subj=+VOTE+2+0+0+RC1]
> We should provide connector for Kafka 2.0.0 once it is released.
> Upgrade to 2.0 documentation : 
> http://kafka.apache.org/20/documentation.html#upgrade_2_0_0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10516) YarnApplicationMasterRunner fail to initialize FileSystem with correct Flink Configuration during setup

2018-10-15 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649875#comment-16649875
 ] 

Till Rohrmann commented on FLINK-10516:
---

Yes, I think this would be a good idea since in {{release-1.5}} and 
{{release-1.6}} we still offer the legacy mode. I will do the cherry-picking.

> YarnApplicationMasterRunner fail to initialize FileSystem with correct Flink 
> Configuration during setup
> ---
>
> Key: FLINK-10516
> URL: https://issues.apache.org/jira/browse/FLINK-10516
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.4.0, 1.5.0, 1.6.0, 1.7.0
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0, 1.6.2, 1.5.5
>
>
> Will add a fix, and refactor YarnApplicationMasterRunner to add a unittest to 
> prevent future regression.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] TisonKun opened a new pull request #6844: [FLINK-10546] Remove StandaloneMiniCluster

2018-10-15 Thread GitBox
TisonKun opened a new pull request #6844: [FLINK-10546] Remove 
StandaloneMiniCluster
URL: https://github.com/apache/flink/pull/6844
 
 
   
   
   ## What is the purpose of the change
   
   Remove `StandaloneMiniCluster` which is based on legacy mode. As 
@tillrohrmann states at [the corresponding 
JIRA](https://issues.apache.org/jira/browse/FLINK-10546):
   
   > The StandaloneMiniCluster should no longer be in use. It got replaced by 
the MiniCluster.
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (**no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (**no**)
 - The serializers: (**no**)
 - The runtime per-record code paths (performance sensitive): (**no**)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (**no**)
 - The S3 file system connector: (**no**)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**no**)
 - If yes, how is the feature documented? (**not applicable**)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10546) Remove StandaloneMiniCluster

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649876#comment-16649876
 ] 

ASF GitHub Bot commented on FLINK-10546:


TisonKun opened a new pull request #6844: [FLINK-10546] Remove 
StandaloneMiniCluster
URL: https://github.com/apache/flink/pull/6844
 
 
   
   
   ## What is the purpose of the change
   
   Remove `StandaloneMiniCluster` which is based on legacy mode. As 
@tillrohrmann states at [the corresponding 
JIRA](https://issues.apache.org/jira/browse/FLINK-10546):
   
   > The StandaloneMiniCluster should no longer be in use. It got replaced by 
the MiniCluster.
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (**no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (**no**)
 - The serializers: (**no**)
 - The runtime per-record code paths (performance sensitive): (**no**)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (**no**)
 - The S3 file system connector: (**no**)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**no**)
 - If yes, how is the feature documented? (**not applicable**)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Remove StandaloneMiniCluster
> 
>
> Key: FLINK-10546
> URL: https://issues.apache.org/jira/browse/FLINK-10546
> Project: Flink
>  Issue Type: Task
>  Components: JobManager
>Affects Versions: 1.7.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Before doing as the title says, I want to check: is the 
> {{StandaloneMiniCluster}} still in use?
>  IIRC there once was a deprecation of {{start-local.sh}}, but I don’t know if 
> it is relevant.
>  Further, this class seems unused in all other places. Since it depends on 
> legacy mode, I wonder whether we can *JUST* remove it.
>  
> cc [~till.rohrmann] and [~Zentol]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10546) Remove StandaloneMiniCluster

2018-10-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10546:
---
Labels: pull-request-available  (was: )

> Remove StandaloneMiniCluster
> 
>
> Key: FLINK-10546
> URL: https://issues.apache.org/jira/browse/FLINK-10546
> Project: Flink
>  Issue Type: Task
>  Components: JobManager
>Affects Versions: 1.7.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Before doing as the title says, I want to check: is the 
> {{StandaloneMiniCluster}} still in use?
>  IIRC there once was a deprecation of {{start-local.sh}}, but I don’t know if 
> it is relevant.
>  Further, this class seems unused in all other places. Since it depends on 
> legacy mode, I wonder whether we can *JUST* remove it.
>  
> cc [~till.rohrmann] and [~Zentol]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10533) job parallelism equals task slots number but not use the same number of the task slots as the parallelism

2018-10-15 Thread Shimin Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649879#comment-16649879
 ] 

Shimin Yang commented on FLINK-10533:
-

Flink uses slot sharing groups to optimize resource usage and allow subtasks of 
the same job to share a slot. Sharing is based on tasks, not on the job graph. If 
you want to use four slots, you can set the parallelism to 4. Please refer to 
[https://ci.apache.org/projects/flink/flink-docs-stable/concepts/runtime.html#task-slots-and-resources]
 for more info. If this resolves your confusion, I will close this issue.
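
A minimal sketch of the parallelism setting mentioned above, shown with the 
DataStream execution environment (the reporter's job uses the Table API, but the 
parallelism configuration is the same execution-level setting):

{code:java}
// Sketch: set the job parallelism to 4 so that all four task slots are used.
// With slot sharing, subtasks of different pipeline stages share these 4 slots.
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ParallelismExample {

	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		env.setParallelism(4);
		// ... build the pipeline and call env.execute() ...
	}
}
{code}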

> job parallelism equals task slots number but not use the same number of the 
> task slots as the parallelism
> -
>
> Key: FLINK-10533
> URL: https://issues.apache.org/jira/browse/FLINK-10533
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.5.3, 1.6.0, 1.6.1
>Reporter: sean.miao
>Priority: Major
> Attachments: image-2018-10-12-10-35-57-443.png, 
> image-2018-10-12-10-36-13-503.png
>
>
> I use the Table API and do not use the DataStream API.
>  
> My job has two graphs and each has a parallelism of two, so the total 
> parallelism is four.
> But if I give my job four slots, it only uses two slots.
>  
> !image-2018-10-12-10-36-13-503.png!
> !image-2018-10-12-10-35-57-443.png!
>  
> thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10534) Add idle timeout for a flink session cluster

2018-10-15 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649880#comment-16649880
 ] 

Till Rohrmann commented on FLINK-10534:
---

You're right [~yeshan] that we start a session cluster in the attached per-job 
mode, which is a legacy artifact. In the future we should rework this and let 
every job part start its own per-job cluster.

In the meantime an idleness timeout would solve the problem, I agree. Alright, 
let's add it, but it should only be used by the "per-job" mode where we launch a 
session cluster.

> Add idle timeout for a flink session cluster
> 
>
> Key: FLINK-10534
> URL: https://issues.apache.org/jira/browse/FLINK-10534
> Project: Flink
>  Issue Type: New Feature
>  Components: Cluster Management
>Affects Versions: 1.7.0
>Reporter: ouyangzhe
>Assignee: ouyangzhe
>Priority: Major
> Attachments: 屏幕快照 2018-10-12 上午10.24.08.png
>
>
> A Flink session cluster on YARN will keep running even when it has no jobs 
> running at all, occupying YARN resources for no use.
> TaskManagers are released after an idle timeout, but the JobManager keeps 
> running.
> I propose to add a configuration option that applies an idle timeout to the 
> JobManager too: if no job has been running for the specified timeout, the Flink 
> cluster shuts itself down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10546) Remove StandaloneMiniCluster

2018-10-15 Thread TisonKun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TisonKun updated FLINK-10546:
-
Issue Type: Sub-task  (was: Task)
Parent: FLINK-10392

> Remove StandaloneMiniCluster
> 
>
> Key: FLINK-10546
> URL: https://issues.apache.org/jira/browse/FLINK-10546
> Project: Flink
>  Issue Type: Sub-task
>  Components: JobManager
>Affects Versions: 1.7.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Before doing what the title says, I want to check whether {{StandaloneMiniCluster}} 
> is still in use.
>  IIRC {{start-local.sh}} was deprecated at some point, but I don't know whether 
> that is relevant here.
>  Further, this class seems to be unused everywhere else. Since it depends on the 
> legacy mode, I wonder whether we can *JUST* remove it.
>  
> cc [~till.rohrmann] and [~Zentol]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10479) Add Firebase Firestore Connector

2018-10-15 Thread Niels Basjes (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649885#comment-16649885
 ] 

Niels Basjes commented on FLINK-10479:
--

Have a look at the pull request we put up for FLINK-9311 (Flink support for 
Google PubSub).

There we use the Google-provided emulators in a Docker image, so you are testing 
against something "pretty close to the real thing".

I suspect you can reuse some of those ideas for testing against 
Firestore/Firebase.

 

> Add Firebase Firestore Connector
> 
>
> Key: FLINK-10479
> URL: https://issues.apache.org/jira/browse/FLINK-10479
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Reporter: Niklas Gögge
>Priority: Major
>  Labels: proposal
>
> This feature would add a new connector (sink/source) for Cloud Firestore, which 
> is part of Firebase.
> Firebase is Google's backend-as-a-service platform for building apps. You can 
> read more about it [here|https://firebase.google.com/].
> Firestore is a NoSQL document database service which could act as a data sink 
> and source because it provides real-time listeners for data changes.
> I would like to implement this myself if this proposal is accepted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-8578) Implement rowtime DataStream to Table upsert conversion.

2018-10-15 Thread Hequn Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng updated FLINK-8578:
---
Description: 
FLINK-8577 implements upsert from a stream under proctime. This task aims to solve 
the ordering problem introduced by proctime. As proposed by Fabian in 
FLINK-8545, it would be good to be able to declare a time attribute that 
decides whether an upsert is performed or not.
{code:java}
Table table = tEnv.upsertFromStream(input, 'a, 'b.rowtime.upsertOrder, 'c.key)
{code}
This is a good way to solve the ordering problem using rowtime. An idea that comes 
to mind is that we could even remove `.upsertOrder`, because the rowtime 
attribute can only be defined once in a table schema. Removing `.upsertOrder` 
also makes it easier to design the API for TableSource and SQL, i.e., we don't need 
to add another new feature to the API.

Any suggestions are welcome!

  was:Add rowtime support for DataStream to Table upsert conversion. We can 
ignore unordered messages according to the rowtime.
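
For illustration, a minimal sketch of the proposed conversion without {{.upsertOrder}} (a hypothetical API mirroring the proposal above, not yet part of Flink); the single rowtime attribute would then implicitly decide whether an upsert is applied:
{code:java}
// Hypothetical API sketch, the proposal above with `.upsertOrder` removed:
Table table = tEnv.upsertFromStream(input, 'a, 'b.rowtime, 'c.key)
{code}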


> Implement rowtime DataStream to Table upsert conversion.
> 
>
> Key: FLINK-8578
> URL: https://issues.apache.org/jira/browse/FLINK-8578
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>
> FLINK-8577 implements upsert from a stream under proctime. This task aims 
> to solve the ordering problem introduced by proctime. As proposed by Fabian in 
> FLINK-8545, it would be good to be able to declare a time attribute that 
> decides whether an upsert is performed or not.
> {code:java}
> Table table = tEnv.upsertFromStream(input, 'a, 'b.rowtime.upsertOrder, 'c.key)
> {code}
> This is a good way to solve the ordering problem using rowtime. An idea 
> that comes to mind is that we could even remove `.upsertOrder`, because the 
> rowtime attribute can only be defined once in a table schema. Removing 
> `.upsertOrder` also makes it easier to design the API for TableSource and SQL, 
> i.e., we don't need to add another new feature to the API.
> Any suggestions are welcome!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10436) Example config uses deprecated key jobmanager.rpc.address

2018-10-15 Thread TisonKun (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649891#comment-16649891
 ] 

TisonKun commented on FLINK-10436:
--

[~till.rohrmann] Sounds reasonable. It is a bit strange that jobmanager can 
somehow be replaced by rest. I did a bisect and the commit is 
https://github.com/apache/flink/commit/efd7336fa693a9f82b9ecfb5d81c0ef747ab7801 
which was authored by [~gjy] and reviewed by [~till.rohrmann]. So I am involving 
[~gjy] here to hear the original intent. If we go in the same direction as 
Till's thought above, we can revert the 
{{.withDeprecatedKeys(JobManagerOptions.ADDRESS.key())}} line (sketched below).
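
For reference, a simplified sketch of the option definition in question (see {{RestOptions}} in the code base for the actual declaration); dropping the last line is the revert discussed above:
{code:java}
// Simplified sketch; types come from org.apache.flink.configuration.
// Removing withDeprecatedKeys(...) stops "jobmanager.rpc.address" from being
// reported as a deprecated key of "rest.address".
public static final ConfigOption<String> ADDRESS =
    ConfigOptions.key("rest.address")
        .noDefaultValue()
        .withDeprecatedKeys(JobManagerOptions.ADDRESS.key());
{code}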

> Example config uses deprecated key jobmanager.rpc.address
> -
>
> Key: FLINK-10436
> URL: https://issues.apache.org/jira/browse/FLINK-10436
> Project: Flink
>  Issue Type: Sub-task
>  Components: Startup Shell Scripts
>Affects Versions: 1.7.0
>Reporter: Ufuk Celebi
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> The example {{flink-conf.yaml}} shipped as part of the Flink distribution 
> (https://github.com/apache/flink/blob/master/flink-dist/src/main/resources/flink-conf.yaml)
>  has the following entry:
> {code}
> jobmanager.rpc.address: localhost
> {code}
> When using this key, the following deprecation warning is logged.
> {code}
> 2018-09-26 12:01:46,608 WARN  org.apache.flink.configuration.Configuration
>   - Config uses deprecated configuration key 
> 'jobmanager.rpc.address' instead of proper key 'rest.address'
> {code}
> The example config should not use deprecated config options.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10547) Remove LegacyCLI

2018-10-15 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649890#comment-16649890
 ] 

Till Rohrmann commented on FLINK-10547:
---

[~yanghua] yes, but we need to check which of the 
{{CliFrontendPackageProgramTest}} cases we need to port to use the {{DefaultCLI}}.

> Remove LegacyCLI
> 
>
> Key: FLINK-10547
> URL: https://issues.apache.org/jira/browse/FLINK-10547
> Project: Flink
>  Issue Type: Sub-task
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] pnowojski commented on a change in pull request #6736: [FLINK-10398][table] Add Tanh math function supported in Table API and SQL

2018-10-15 Thread GitBox
pnowojski commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225080465
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -1219,6 +1219,18 @@ TAN(numeric)
   
 
 
+
+  
+{% highlight text %}
+TANH(numeric)
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of NUMERIC. 
+Return value type is DOUBLE.
 
 Review comment:
   In my review I didn't focus as much on the style of the documentation, since 
I'm not familiar with it myself; especially once @xccui jumped in on this part, 
I wanted to defer to his judgement there. Somehow it must have slipped 
our (mine and @xccui's) attention that in the end this wasn't properly addressed in 
the `cosh` PR. Before merging I had the impression that @xccui's comments were 
fully addressed; sorry if that wasn't the case.
   
   btw, @yanghua I think you skipped two occurrences of `NUMERIC`. Here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] pnowojski commented on a change in pull request #6736: [FLINK-10398][table] Add Tanh math function supported in Table API and SQL

2018-10-15 Thread GitBox
pnowojski commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225080465
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -1219,6 +1219,18 @@ TAN(numeric)
   
 
 
+
+  
+{% highlight text %}
+TANH(numeric)
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of NUMERIC. 
+Return value type is DOUBLE.
 
 Review comment:
   In my review I didn't focus as much on the style of the documentation, since 
I'm not familiar with it myself; especially once @xccui jumped in on this part, 
I wanted to defer to his judgement there. Somehow it must have slipped 
our (mine and @xccui's) attention that in the end this wasn't properly addressed in 
the `cosh` PR. Before merging I had the impression that @xccui's comments were 
fully addressed; sorry if that wasn't the case.
   
   btw, @yanghua I think you skipped two occurrences of `NUMERIC`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] pnowojski commented on a change in pull request #6736: [FLINK-10398][table] Add Tanh math function supported in Table API and SQL

2018-10-15 Thread GitBox
pnowojski commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225080553
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -1666,6 +1678,18 @@ NUMERIC.tan()
   
 
 
+
+  
+{% highlight java %}
+NUMERIC.tanh()
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of NUMERIC. 
 
 Review comment:
   here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] pnowojski commented on a change in pull request #6736: [FLINK-10398][table] Add Tanh math function supported in Table API and SQL

2018-10-15 Thread GitBox
pnowojski commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225080591
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -2114,6 +2138,18 @@ NUMERIC.tan()
   
 
 
+
+  
+{% highlight scala %}
+NUMERIC.tanh()
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of NUMERIC. 
 
 Review comment:
   and here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10398) Add Tanh math function supported in Table API and SQL

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649892#comment-16649892
 ] 

ASF GitHub Bot commented on FLINK-10398:


pnowojski commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225080465
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -1219,6 +1219,18 @@ TAN(numeric)
   
 
 
+
+  
+{% highlight text %}
+TANH(numeric)
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of NUMERIC. 
+Return value type is DOUBLE.
 
 Review comment:
   In my review I didn't focus as much on the style of the documentation, since 
I'm not familiar with it myself; especially once @xccui jumped in on this part, 
I wanted to defer to his judgement there. Somehow it must have slipped 
our (mine and @xccui's) attention that in the end this wasn't properly addressed in 
the `cosh` PR. Before merging I had the impression that @xccui's comments were 
fully addressed; sorry if that wasn't the case.
   
   btw, @yanghua I think you skipped two occurrences of `NUMERIC`. Here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Tanh math function supported in Table API and SQL
> -
>
> Key: FLINK-10398
> URL: https://issues.apache.org/jira/browse/FLINK-10398
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : https://www.techonthenet.com/oracle/functions/tanh.php



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10398) Add Tanh math function supported in Table API and SQL

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649895#comment-16649895
 ] 

ASF GitHub Bot commented on FLINK-10398:


pnowojski commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225080553
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -1666,6 +1678,18 @@ NUMERIC.tan()
   
 
 
+
+  
+{% highlight java %}
+NUMERIC.tanh()
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of NUMERIC. 
 
 Review comment:
   here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Tanh math function supported in Table API and SQL
> -
>
> Key: FLINK-10398
> URL: https://issues.apache.org/jira/browse/FLINK-10398
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : https://www.techonthenet.com/oracle/functions/tanh.php



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10398) Add Tanh math function supported in Table API and SQL

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649894#comment-16649894
 ] 

ASF GitHub Bot commented on FLINK-10398:


pnowojski commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225080591
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -2114,6 +2138,18 @@ NUMERIC.tan()
   
 
 
+
+  
+{% highlight scala %}
+NUMERIC.tanh()
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of NUMERIC. 
 
 Review comment:
   and here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Tanh math function supported in Table API and SQL
> -
>
> Key: FLINK-10398
> URL: https://issues.apache.org/jira/browse/FLINK-10398
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : https://www.techonthenet.com/oracle/functions/tanh.php



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10398) Add Tanh math function supported in Table API and SQL

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649893#comment-16649893
 ] 

ASF GitHub Bot commented on FLINK-10398:


pnowojski commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225080465
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -1219,6 +1219,18 @@ TAN(numeric)
   
 
 
+
+  
+{% highlight text %}
+TANH(numeric)
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of NUMERIC. 
+Return value type is DOUBLE.
 
 Review comment:
   In my review I didn't focus as much on the style of the documentation, since 
I'm not familiar with it myself; especially once @xccui jumped in on this part, 
I wanted to defer to his judgement there. Somehow it must have slipped 
our (mine and @xccui's) attention that in the end this wasn't properly addressed in 
the `cosh` PR. Before merging I had the impression that @xccui's comments were 
fully addressed; sorry if that wasn't the case.
   
   btw, @yanghua I think you skipped two occurrences of `NUMERIC`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Tanh math function supported in Table API and SQL
> -
>
> Key: FLINK-10398
> URL: https://issues.apache.org/jira/browse/FLINK-10398
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : https://www.techonthenet.com/oracle/functions/tanh.php



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] tillrohrmann commented on a change in pull request #6839: [FLINK-10253] Run MetricQueryService with lower priority

2018-10-15 Thread GitBox
tillrohrmann commented on a change in pull request #6839: [FLINK-10253] Run 
MetricQueryService with lower priority
URL: https://github.com/apache/flink/pull/6839#discussion_r225083752
 
 

 ##
 File path: 
flink-runtime/src/main/scala/akka/dispatch/PriorityThreadFactory.scala
 ##
 @@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package akka.dispatch
 
 Review comment:
   can this go into the `org.apache.flink.runtime.akka.dispatch` package?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tillrohrmann commented on a change in pull request #6839: [FLINK-10253] Run MetricQueryService with lower priority

2018-10-15 Thread GitBox
tillrohrmann commented on a change in pull request #6839: [FLINK-10253] Run 
MetricQueryService with lower priority
URL: https://github.com/apache/flink/pull/6839#discussion_r225082915
 
 

 ##
 File path: 
flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala
 ##
 @@ -291,12 +291,22 @@ object AkkaUtils {
 ConfigFactory.parseString(config)
   }
 
-  def getThreadPoolExecutorConfig: Config = {
+  def getThreadPoolExecutorConfig(configuration: Configuration): Config = {
+val priority = 
configuration.getInteger(MetricOptions.QUERY_SERVICE_THREAD_PRIORITY)
 
 Review comment:
   I would prefer to pass in the thread priority as a parameter of 
`getThreadPoolExecutorConfig`, because otherwise we couple all 
`ThreadPoolExecutors` to `MetricOptions.QUERY_SERVICE_THREAD_PRIORITY`.
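   
   For illustration, a minimal sketch of a priority-setting thread factory (names assumed, not necessarily the PR's actual code):
   ```java
   import java.util.concurrent.ThreadFactory;
   
   // Hands out daemon threads with a configurable priority,
   // e.g. Thread.MIN_PRIORITY for the metric query service dispatcher.
   public final class LowPriorityThreadFactory implements ThreadFactory {
   
       private final int priority;
   
       public LowPriorityThreadFactory(int priority) {
           this.priority = priority;
       }
   
       @Override
       public Thread newThread(Runnable runnable) {
           Thread thread = new Thread(runnable);
           thread.setPriority(priority);
           thread.setDaemon(true); // do not keep the JVM alive just for metrics
           return thread;
       }
   }
   ```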


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10253) Run MetricQueryService with lower priority

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649904#comment-16649904
 ] 

ASF GitHub Bot commented on FLINK-10253:


tillrohrmann commented on a change in pull request #6839: [FLINK-10253] Run 
MetricQueryService with lower priority
URL: https://github.com/apache/flink/pull/6839#discussion_r225083752
 
 

 ##
 File path: 
flink-runtime/src/main/scala/akka/dispatch/PriorityThreadFactory.scala
 ##
 @@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package akka.dispatch
 
 Review comment:
   can this go into the `org.apache.flink.runtime.akka.dispatch` package?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Run MetricQueryService with lower priority
> --
>
> Key: FLINK-10253
> URL: https://issues.apache.org/jira/browse/FLINK-10253
> Project: Flink
>  Issue Type: Sub-task
>  Components: Metrics
>Affects Versions: 1.5.3, 1.6.0, 1.7.0
>Reporter: Till Rohrmann
>Assignee: vinoyang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.7.0, 1.6.2, 1.5.5
>
>
> We should run the {{MetricQueryService}} with a lower priority than the main 
> Flink components. An idea would be to start the underlying threads with a 
> lower priority.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10253) Run MetricQueryService with lower priority

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649905#comment-16649905
 ] 

ASF GitHub Bot commented on FLINK-10253:


tillrohrmann commented on a change in pull request #6839: [FLINK-10253] Run 
MetricQueryService with lower priority
URL: https://github.com/apache/flink/pull/6839#discussion_r225082915
 
 

 ##
 File path: 
flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala
 ##
 @@ -291,12 +291,22 @@ object AkkaUtils {
 ConfigFactory.parseString(config)
   }
 
-  def getThreadPoolExecutorConfig: Config = {
+  def getThreadPoolExecutorConfig(configuration: Configuration): Config = {
+val priority = 
configuration.getInteger(MetricOptions.QUERY_SERVICE_THREAD_PRIORITY)
 
 Review comment:
   I would prefer to pass in the thread priority as a parameter of 
`getThreadPoolExecutorConfig`, because otherwise we couple all 
`ThreadPoolExecutors` to `MetricOptions.QUERY_SERVICE_THREAD_PRIORITY`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Run MetricQueryService with lower priority
> --
>
> Key: FLINK-10253
> URL: https://issues.apache.org/jira/browse/FLINK-10253
> Project: Flink
>  Issue Type: Sub-task
>  Components: Metrics
>Affects Versions: 1.5.3, 1.6.0, 1.7.0
>Reporter: Till Rohrmann
>Assignee: vinoyang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.7.0, 1.6.2, 1.5.5
>
>
> We should run the {{MetricQueryService}} with a lower priority than the main 
> Flink components. An idea would be to start the underlying threads with a 
> lower priority.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] tillrohrmann commented on issue #6839: [FLINK-10253] Run MetricQueryService with lower priority

2018-10-15 Thread GitBox
tillrohrmann commented on issue #6839: [FLINK-10253] Run MetricQueryService 
with lower priority
URL: https://github.com/apache/flink/pull/6839#issuecomment-429760767
 
 
   Concerning the configuration option, I'm ok with adding it. This gives more 
flexibility even though I wouldn't expect any user to change it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10253) Run MetricQueryService with lower priority

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649906#comment-16649906
 ] 

ASF GitHub Bot commented on FLINK-10253:


tillrohrmann commented on issue #6839: [FLINK-10253] Run MetricQueryService 
with lower priority
URL: https://github.com/apache/flink/pull/6839#issuecomment-429760767
 
 
   Concerning the configuration option, I'm ok with adding it. This gives more 
flexibility even though I wouldn't expect any user to change it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Run MetricQueryService with lower priority
> --
>
> Key: FLINK-10253
> URL: https://issues.apache.org/jira/browse/FLINK-10253
> Project: Flink
>  Issue Type: Sub-task
>  Components: Metrics
>Affects Versions: 1.5.3, 1.6.0, 1.7.0
>Reporter: Till Rohrmann
>Assignee: vinoyang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.7.0, 1.6.2, 1.5.5
>
>
> We should run the {{MetricQueryService}} with a lower priority than the main 
> Flink components. An idea would be to start the underlying threads with a 
> lower priority.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] tillrohrmann closed pull request #6840: [FLINK-10545] [tests] Remove JobManagerLeaderSessionIDITCase

2018-10-15 Thread GitBox
tillrohrmann closed pull request #6840: [FLINK-10545] [tests] Remove 
JobManagerLeaderSessionIDITCase
URL: https://github.com/apache/flink/pull/6840
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-tests/src/test/scala/org/apache/flink/api/scala/runtime/jobmanager/JobManagerLeaderSessionIDITCase.scala
 
b/flink-tests/src/test/scala/org/apache/flink/api/scala/runtime/jobmanager/JobManagerLeaderSessionIDITCase.scala
deleted file mode 100644
index c0ccd53c875..000
--- 
a/flink-tests/src/test/scala/org/apache/flink/api/scala/runtime/jobmanager/JobManagerLeaderSessionIDITCase.scala
+++ /dev/null
@@ -1,103 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.api.scala.runtime.jobmanager
-
-import java.util.UUID
-import java.util.concurrent.CountDownLatch
-
-import akka.actor.ActorSystem
-import akka.testkit.{ImplicitSender, TestKit}
-import org.apache.flink.core.testutils.OneShotLatch
-import org.apache.flink.runtime.akka.{AkkaUtils, ListeningBehaviour}
-import org.apache.flink.runtime.execution.Environment
-import org.apache.flink.runtime.jobgraph.tasks.AbstractInvokable
-import org.apache.flink.runtime.jobgraph.{JobGraph, JobVertex}
-import org.apache.flink.runtime.messages.JobManagerMessages._
-import org.apache.flink.runtime.testingUtils.{ScalaTestingUtils, TestingUtils}
-import org.junit.runner.RunWith
-import org.scalatest.junit.JUnitRunner
-import org.scalatest.{BeforeAndAfterAll, FunSuiteLike, Matchers}
-
-@RunWith(classOf[JUnitRunner])
-class JobManagerLeaderSessionIDITCase(_system: ActorSystem)
-  extends TestKit(_system)
-  with ImplicitSender
-  with FunSuiteLike
-  with Matchers
-  with BeforeAndAfterAll
-  with ScalaTestingUtils {
-
-  import BlockingUntilSignalNoOpInvokable._
-
-  val cluster = TestingUtils.startTestingCluster(
-taskManagerNumSlots,
-numTaskManagers,
-TestingUtils.DEFAULT_AKKA_ASK_TIMEOUT);
-
-  def this() = this(ActorSystem("TestingActorSystem", 
AkkaUtils.getDefaultAkkaConfig))
-
-  override def afterAll(): Unit = {
-cluster.stop()
-TestKit.shutdownActorSystem(system)
-  }
-
-  test("A JobManager should not process CancelJob messages with the wrong 
leader session ID") {
-val sender = new JobVertex("BlockingSender");
-sender.setParallelism(numSlots)
-sender.setInvokableClass(classOf[BlockingUntilSignalNoOpInvokable])
-val jobGraph = new JobGraph("TestJob", sender)
-
-val oldSessionID = UUID.randomUUID()
-
-val jmGateway = cluster.getLeaderGateway(TestingUtils.TESTING_DURATION)
-val jm = jmGateway.actor()
-
-within(TestingUtils.TESTING_DURATION) {
-  jmGateway.tell(SubmitJob(jobGraph, ListeningBehaviour.EXECUTION_RESULT), 
self)
-
-  expectMsg(JobSubmitSuccess(jobGraph.getJobID))
-
-  BlockingUntilSignalNoOpInvokable.countDownLatch.await()
-
-  jm ! LeaderSessionMessage(oldSessionID, CancelJob(jobGraph.getJobID))
-
-  BlockingUntilSignalNoOpInvokable.oneShotLatch.trigger()
-
-  expectMsgType[JobResultSuccess]
-}
-  }
-}
-
-class BlockingUntilSignalNoOpInvokable(env: Environment) extends 
AbstractInvokable(env) {
-
-  override def invoke(): Unit = {
-BlockingUntilSignalNoOpInvokable.countDownLatch.countDown()
-BlockingUntilSignalNoOpInvokable.oneShotLatch.await()
-  }
-}
-
-object BlockingUntilSignalNoOpInvokable {
-  val numTaskManagers = 2
-  val taskManagerNumSlots = 2
-  val numSlots = numTaskManagers * taskManagerNumSlots
-
-  val countDownLatch = new CountDownLatch(numSlots)
-
-  val oneShotLatch = new OneShotLatch()
-}


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tillrohrmann commented on issue #6840: [FLINK-10545] [tests] Remove JobManagerLeaderSessionIDITCase

2018-10-15 Thread GitBox
tillrohrmann commented on issue #6840: [FLINK-10545] [tests] Remove 
JobManagerLeaderSessionIDITCase
URL: https://github.com/apache/flink/pull/6840#issuecomment-429761872
 
 
   Thanks for your contribution @TisonKun. The fencing is covered by 
`FencedRpcEndpointTest`. Merging this PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10545) Remove JobManagerLeaderSessionIDITCase

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649908#comment-16649908
 ] 

ASF GitHub Bot commented on FLINK-10545:


tillrohrmann closed pull request #6840: [FLINK-10545] [tests] Remove 
JobManagerLeaderSessionIDITCase
URL: https://github.com/apache/flink/pull/6840
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-tests/src/test/scala/org/apache/flink/api/scala/runtime/jobmanager/JobManagerLeaderSessionIDITCase.scala
 
b/flink-tests/src/test/scala/org/apache/flink/api/scala/runtime/jobmanager/JobManagerLeaderSessionIDITCase.scala
deleted file mode 100644
index c0ccd53c875..000
--- 
a/flink-tests/src/test/scala/org/apache/flink/api/scala/runtime/jobmanager/JobManagerLeaderSessionIDITCase.scala
+++ /dev/null
@@ -1,103 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.api.scala.runtime.jobmanager
-
-import java.util.UUID
-import java.util.concurrent.CountDownLatch
-
-import akka.actor.ActorSystem
-import akka.testkit.{ImplicitSender, TestKit}
-import org.apache.flink.core.testutils.OneShotLatch
-import org.apache.flink.runtime.akka.{AkkaUtils, ListeningBehaviour}
-import org.apache.flink.runtime.execution.Environment
-import org.apache.flink.runtime.jobgraph.tasks.AbstractInvokable
-import org.apache.flink.runtime.jobgraph.{JobGraph, JobVertex}
-import org.apache.flink.runtime.messages.JobManagerMessages._
-import org.apache.flink.runtime.testingUtils.{ScalaTestingUtils, TestingUtils}
-import org.junit.runner.RunWith
-import org.scalatest.junit.JUnitRunner
-import org.scalatest.{BeforeAndAfterAll, FunSuiteLike, Matchers}
-
-@RunWith(classOf[JUnitRunner])
-class JobManagerLeaderSessionIDITCase(_system: ActorSystem)
-  extends TestKit(_system)
-  with ImplicitSender
-  with FunSuiteLike
-  with Matchers
-  with BeforeAndAfterAll
-  with ScalaTestingUtils {
-
-  import BlockingUntilSignalNoOpInvokable._
-
-  val cluster = TestingUtils.startTestingCluster(
-taskManagerNumSlots,
-numTaskManagers,
-TestingUtils.DEFAULT_AKKA_ASK_TIMEOUT);
-
-  def this() = this(ActorSystem("TestingActorSystem", 
AkkaUtils.getDefaultAkkaConfig))
-
-  override def afterAll(): Unit = {
-cluster.stop()
-TestKit.shutdownActorSystem(system)
-  }
-
-  test("A JobManager should not process CancelJob messages with the wrong 
leader session ID") {
-val sender = new JobVertex("BlockingSender");
-sender.setParallelism(numSlots)
-sender.setInvokableClass(classOf[BlockingUntilSignalNoOpInvokable])
-val jobGraph = new JobGraph("TestJob", sender)
-
-val oldSessionID = UUID.randomUUID()
-
-val jmGateway = cluster.getLeaderGateway(TestingUtils.TESTING_DURATION)
-val jm = jmGateway.actor()
-
-within(TestingUtils.TESTING_DURATION) {
-  jmGateway.tell(SubmitJob(jobGraph, ListeningBehaviour.EXECUTION_RESULT), 
self)
-
-  expectMsg(JobSubmitSuccess(jobGraph.getJobID))
-
-  BlockingUntilSignalNoOpInvokable.countDownLatch.await()
-
-  jm ! LeaderSessionMessage(oldSessionID, CancelJob(jobGraph.getJobID))
-
-  BlockingUntilSignalNoOpInvokable.oneShotLatch.trigger()
-
-  expectMsgType[JobResultSuccess]
-}
-  }
-}
-
-class BlockingUntilSignalNoOpInvokable(env: Environment) extends 
AbstractInvokable(env) {
-
-  override def invoke(): Unit = {
-BlockingUntilSignalNoOpInvokable.countDownLatch.countDown()
-BlockingUntilSignalNoOpInvokable.oneShotLatch.await()
-  }
-}
-
-object BlockingUntilSignalNoOpInvokable {
-  val numTaskManagers = 2
-  val taskManagerNumSlots = 2
-  val numSlots = numTaskManagers * taskManagerNumSlots
-
-  val countDownLatch = new CountDownLatch(numSlots)
-
-  val oneShotLatch = new OneShotLatch()
-}


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub

[jira] [Commented] (FLINK-10545) Remove JobManagerLeaderSessionIDITCase

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649907#comment-16649907
 ] 

ASF GitHub Bot commented on FLINK-10545:


tillrohrmann commented on issue #6840: [FLINK-10545] [tests] Remove 
JobManagerLeaderSessionIDITCase
URL: https://github.com/apache/flink/pull/6840#issuecomment-429761872
 
 
   Thanks for your contribution @TisonKun. The fencing is covered by 
`FencedRpcEndpointTest`. Merging this PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Remove JobManagerLeaderSessionIDITCase
> --
>
> Key: FLINK-10545
> URL: https://issues.apache.org/jira/browse/FLINK-10545
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> {{JobManagerLeaderSessionIDITCase.scala}} is based on the legacy mode, and since 
> we now have {{FencingToken}}, we need not maintain or port this test. Just 
> remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (FLINK-10545) Remove JobManagerLeaderSessionIDITCase

2018-10-15 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-10545.
---
Resolution: Fixed

Fixed via 
https://github.com/apache/flink/commit/9770e78edcb96a2210de92955b6010b5b7a6892a

> Remove JobManagerLeaderSessionIDITCase
> --
>
> Key: FLINK-10545
> URL: https://issues.apache.org/jira/browse/FLINK-10545
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> {{JobManagerLeaderSessionIDITCase.scala}} is based on the legacy mode, and since 
> we now have {{FencingToken}}, we need not maintain or port this test. Just 
> remove it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10251) Handle oversized response messages in AkkaRpcActor

2018-10-15 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649910#comment-16649910
 ] 

Till Rohrmann commented on FLINK-10251:
---

Great pickup [~yanghua] :-)

> Handle oversized response messages in AkkaRpcActor
> --
>
> Key: FLINK-10251
> URL: https://issues.apache.org/jira/browse/FLINK-10251
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.5.3, 1.6.0, 1.7.0
>Reporter: Till Rohrmann
>Assignee: vinoyang
>Priority: Major
> Fix For: 1.7.0, 1.6.2, 1.5.5
>
>
> The {{AkkaRpcActor}} should check whether an RPC response which is sent to a 
> remote sender does not exceed the maximum framesize of the underlying 
> {{ActorSystem}}. If this is the case we should fail fast instead. We can 
> achieve this by serializing the response and sending the serialized byte 
> array.
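
For illustration, a minimal sketch of the described check (names assumed; not the actual {{AkkaRpcActor}} code):
{code:java}
// Serialize the response and fail fast if it would exceed the configured maximum frame size.
static byte[] serializeWithinFrameSize(java.io.Serializable response, long maximumFramesize)
        throws java.io.IOException {
    byte[] bytes = org.apache.flink.util.InstantiationUtil.serializeObject(response);
    if (bytes.length > maximumFramesize) {
        throw new java.io.IOException("The RPC response of " + bytes.length
            + " bytes exceeds the maximum frame size of " + maximumFramesize + " bytes.");
    }
    return bytes;
}
{code}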



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10220) StreamSQL E2E test fails on travis

2018-10-15 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649913#comment-16649913
 ] 

Till Rohrmann commented on FLINK-10220:
---

Have you tried to run this test in a loop on your local machine or on some cloud 
instance? That is usually how I could provoke test failures like this before, 
[~hequn8128].

> StreamSQL E2E test fails on travis
> --
>
> Key: FLINK-10220
> URL: https://issues.apache.org/jira/browse/FLINK-10220
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL, Tests
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Hequn Cheng
>Priority: Critical
> Fix For: 1.7.0
>
>
> https://travis-ci.org/zentol/flink-ci/jobs/420972344
> {code}
> [FAIL] 'Streaming SQL end-to-end test' failed after 1 minutes and 49 seconds! 
> Test exited with exit code 0 but the logs contained errors, exceptions or 
> non-empty .out files
> 2018-08-27 07:34:36,311 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph- window: 
> (TumblingGroupWindow('w$, 'rowtime, 2.millis)), select: ($SUM0(correct) 
> AS correct, start('w$) AS w$start, end('w$) AS w$end, rowtime('w$) AS 
> w$rowtime, proctime('w$) AS w$proctime) -> select: (correct, w$start AS 
> rowtime) -> to: Row -> Map -> Sink: Unnamed (1/1) 
> (97d055e4661ff3361a504626257d406d) switched from RUNNING to FAILED.
> java.lang.RuntimeException: Exception occurred while processing valve output 
> watermark: 
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor$ForwardingValveOutputHandler.handleWatermark(StreamInputProcessor.java:265)
>   at 
> org.apache.flink.streaming.runtime.streamstatus.StatusWatermarkValve.findAndOutputNewMinWatermarkAcrossAlignedChannels(StatusWatermarkValve.java:189)
>   at 
> org.apache.flink.streaming.runtime.streamstatus.StatusWatermarkValve.inputWatermark(StatusWatermarkValve.java:111)
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:184)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: 
> org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: 
> Could not forward element to next operator
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:596)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
>   at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
>   at 
> org.apache.flink.table.runtime.aggregate.TimeWindowPropertyCollector.collect(TimeWindowPropertyCollector.scala:65)
>   at 
> org.apache.flink.table.runtime.aggregate.IncrementalAggregateAllWindowFunction.apply(IncrementalAggregateAllWindowFunction.scala:62)
>   at 
> org.apache.flink.table.runtime.aggregate.IncrementalAggregateAllTimeWindowFunction.apply(IncrementalAggregateAllTimeWindowFunction.scala:65)
>   at 
> org.apache.flink.table.runtime.aggregate.IncrementalAggregateAllTimeWindowFunction.apply(IncrementalAggregateAllTimeWindowFunction.scala:37)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.functions.InternalSingleValueAllWindowFunction.process(InternalSingleValueAllWindowFunction.java:46)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.functions.InternalSingleValueAllWindowFunction.process(InternalSingleValueAllWindowFunction.java:34)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.emitWindowContents(WindowOperator.java:546)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onEventTime(WindowOperator.java:454)
>   at 
> org.apache.flink.streaming.api.operators.InternalTimerServiceImpl.advanceWatermark(InternalTimerServiceImpl.java:251)
>   at 
> org.apache.flink.streaming.api.operators.InternalTimeServiceManager.advanceWatermark(InternalTimeServiceManager.java:128)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.processWatermark(AbstractStre

[jira] [Commented] (FLINK-8578) Implement rowtime DataStream to Table upsert conversion.

2018-10-15 Thread Fabian Hueske (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649915#comment-16649915
 ] 

Fabian Hueske commented on FLINK-8578:
--

I proposed {{.upsertOrder}} to distinguish cases where a table has two 
different time attributes. We could of course restrict upsert tables to 
support only one time attribute.

However, I'm not sure whether such a restriction would prohibit legitimate use 
cases. For example, a data stream could then not be converted into a proc-time 
upsert table while extracting the event-time attribute from the StreamRecord.

Btw, did we discuss what should happen with time attributes after the upsert 
conversion? Do they lose their time-attribute properties, i.e., is the 
proc-time attribute materialized?

> Implement rowtime DataStream to Table upsert conversion.
> 
>
> Key: FLINK-8578
> URL: https://issues.apache.org/jira/browse/FLINK-8578
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>
> FLINK-8577 implements upsert from a stream under proctime. This task aims 
> to solve the ordering problem introduced by proctime. As proposed by Fabian in 
> FLINK-8545, it would be good to be able to declare a time attribute that 
> decides whether an upsert is performed or not.
> {code:java}
> Table table = tEnv.upsertFromStream(input, 'a, 'b.rowtime.upsertOrder, 'c.key)
> {code}
> This is a good way to solve the ordering problem using rowtime. An idea 
> that comes to mind is that we could even remove `.upsertOrder`, because the 
> rowtime attribute can only be defined once in a table schema. Removing 
> `.upsertOrder` also makes it easier to design the API for TableSource and SQL, 
> i.e., we don't need to add another new feature to the API.
> Any suggestions are welcome!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add Tanh math function supported in Table API and SQL

2018-10-15 Thread GitBox
yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225086571
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -1666,6 +1678,18 @@ NUMERIC.tan()
   
 
 
+
+  
+{% highlight java %}
+NUMERIC.tanh()
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of NUMERIC. 
 
 Review comment:
   According to the earlier agreement with @xccui, it should be no problem; it is 
based on this line: 
   
   ```
   NUMERIC.tanh()
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10398) Add Tanh math function supported in Table API and SQL

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649916#comment-16649916
 ] 

ASF GitHub Bot commented on FLINK-10398:


yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225086571
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -1666,6 +1678,18 @@ NUMERIC.tan()
   
 
 
+
+  
+{% highlight java %}
+NUMERIC.tanh()
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of NUMERIC. 
 
 Review comment:
   According to the earlier agreement with @xccui, it should be no problem; it is 
based on this line: 
   
   ```
   NUMERIC.tanh()
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Tanh math function supported in Table API and SQL
> -
>
> Key: FLINK-10398
> URL: https://issues.apache.org/jira/browse/FLINK-10398
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : https://www.techonthenet.com/oracle/functions/tanh.php



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add Tanh math function supported in Table API and SQL

2018-10-15 Thread GitBox
yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225086722
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -2114,6 +2138,18 @@ NUMERIC.tan()
   
 
 
+
+  
+{% highlight scala %}
+NUMERIC.tanh()
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of NUMERIC. 
 
 Review comment:
   The same here; it is based on: 
   
   ```
   NUMERIC.tanh()
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10398) Add Tanh math function supported in Table API and SQL

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649918#comment-16649918
 ] 

ASF GitHub Bot commented on FLINK-10398:


yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225086722
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -2114,6 +2138,18 @@ NUMERIC.tan()
   
 
 
+
+  
+{% highlight scala %}
+NUMERIC.tanh()
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of NUMERIC. 
 
 Review comment:
   The same here; it is based on: 
   
   ```
   NUMERIC.tanh()
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Tanh math function supported in Table API and SQL
> -
>
> Key: FLINK-10398
> URL: https://issues.apache.org/jira/browse/FLINK-10398
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : https://www.techonthenet.com/oracle/functions/tanh.php



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] TisonKun commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode in YarnTestBase

2018-10-15 Thread GitBox
TisonKun commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode in 
YarnTestBase
URL: https://github.com/apache/flink/pull/6816#issuecomment-429763575
 
 
   Hi @yanghua, I see your last commit and think it still uses the legacy 
mode (since you start an actor system and use `ActorGateway` instead of 
FLIP-6 components).
   
   To keep this pull request focused on its title, "Cleanup constant isNewMode in 
YarnTestBase", I propose to keep the cleanup in `YARNSessionFIFOITCase`, simply 
remove `isNewMode`, and add `@Ignore` to `YARNHighAvailabilityITCase`, 
`YARNSessionCapacitySchedulerITCase#testClientStartup` and 
`YARNSessionCapacitySchedulerITCase#testTaskManagerFailure`, while raising 
a JIRA about porting those tests. What do you think?
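   
   For clarity, a sketch of the stop-gap I have in mind (simplified; the follow-up JIRA id intentionally left open):
   ```java
   import org.junit.Ignore;
   
   // Skip the legacy-mode test until it has been ported to the FLIP-6 code paths.
   @Ignore("Legacy-mode test, to be ported to FLIP-6 in a follow-up JIRA")
   public class YARNHighAvailabilityITCase extends YarnTestBase {
       // existing test body unchanged
   }
   ```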


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10527) Cleanup constant isNewMode in YarnTestBase

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649920#comment-16649920
 ] 

ASF GitHub Bot commented on FLINK-10527:


TisonKun commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode in 
YarnTestBase
URL: https://github.com/apache/flink/pull/6816#issuecomment-429763575
 
 
   Hi @yanghua , I see your last commit and think it still uses the legacy mode (since you start an actor system and use `ActorGateway` instead of FLIP-6 components).
   
   To keep this pull request minimal, as its title "Cleanup constant isNewMode in YarnTestBase" suggests, I propose to retain the clean up in `YARNSessionFIFOITCase`, simply remove `isNewMode`, and add `@Ignore` to `YARNHighAvailabilityITCase`, `YARNSessionCapacitySchedulerITCase#testClientStartup` and `YARNSessionCapacitySchedulerITCase#testTaskManagerFailure`. Meanwhile, we can raise a JIRA about porting those tests. What do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Cleanup constant isNewMode in YarnTestBase
> --
>
> Key: FLINK-10527
> URL: https://issues.apache.org/jira/browse/FLINK-10527
> Project: Flink
>  Issue Type: Sub-task
>  Components: YARN
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> This seems to be a residual problem with FLINK-10396. It is set to true in 
> that PR. Currently it has three usage scenarios:
> 1. in an assertion, which causes an error
> {code:java}
> assumeTrue("The new mode does not start TMs upfront.", !isNewMode);
> {code}
> 2. if (!isNewMode): the logic in the block would never be invoked, so the if block can be removed
> 3. if (isNewMode): the block is always invoked, so the if statement can be removed.
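
(Editorial illustration, not code from the PR: with {{isNewMode}} fixed to true, scenarios 2 and 3 reduce to a dead branch and an always-taken branch. A hypothetical sketch with illustrative names that do not come from YarnTestBase:)
{code:java}
// Hypothetical sketch only; names are illustrative, not from YarnTestBase.
public class IsNewModeCleanupSketch {

	private static final boolean isNewMode = true; // fixed to true since FLINK-10396

	void setup() {
		if (!isNewMode) {
			legacySetup(); // scenario 2: dead code, the whole block can be deleted
		}
		if (isNewMode) {
			flip6Setup();  // scenario 3: always taken, drop the guard and keep the body
		}
	}

	private void legacySetup() { /* legacy-mode setup would go here */ }

	private void flip6Setup() { /* FLIP-6 setup would go here */ }
}
{code}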



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] TisonKun edited a comment on issue #6816: [FLINK-10527] Cleanup constant isNewMode in YarnTestBase

2018-10-15 Thread GitBox
TisonKun edited a comment on issue #6816: [FLINK-10527] Cleanup constant 
isNewMode in YarnTestBase
URL: https://github.com/apache/flink/pull/6816#issuecomment-429763575
 
 
   Hi @yanghua , I see your last commit and think it still uses the legacy mode (since you start an actor system and use `ActorGateway` instead of FLIP-6 components).
   
   To keep this pull request minimal, as its title "Cleanup constant isNewMode in YarnTestBase" suggests, I propose to retain the clean up in `YARNSessionFIFOITCase`, simply remove `isNewMode`, and add `@Ignore` to `YARNHighAvailabilityITCase`, `YARNSessionCapacitySchedulerITCase#testClientStartup` and `YARNSessionCapacitySchedulerITCase#testTaskManagerFailure`. Meanwhile, we can raise a JIRA about porting those tests. What do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10527) Cleanup constant isNewMode in YarnTestBase

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649921#comment-16649921
 ] 

ASF GitHub Bot commented on FLINK-10527:


TisonKun edited a comment on issue #6816: [FLINK-10527] Cleanup constant 
isNewMode in YarnTestBase
URL: https://github.com/apache/flink/pull/6816#issuecomment-429763575
 
 
   Hi @yanghua , I see your last commit and think it still uses the legacy mode (since you start an actor system and use `ActorGateway` instead of FLIP-6 components).
   
   To keep this pull request minimal, as its title "Cleanup constant isNewMode in YarnTestBase" suggests, I propose to retain the clean up in `YARNSessionFIFOITCase`, simply remove `isNewMode`, and add `@Ignore` to `YARNHighAvailabilityITCase`, `YARNSessionCapacitySchedulerITCase#testClientStartup` and `YARNSessionCapacitySchedulerITCase#testTaskManagerFailure`. Meanwhile, we can raise a JIRA about porting those tests. What do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Cleanup constant isNewMode in YarnTestBase
> --
>
> Key: FLINK-10527
> URL: https://issues.apache.org/jira/browse/FLINK-10527
> Project: Flink
>  Issue Type: Sub-task
>  Components: YARN
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> This seems to be a residual problem with FLINK-10396. It is set to true in 
> that PR. Currently it has three usage scenarios:
> 1. in an assertion, which causes an error
> {code:java}
> assumeTrue("The new mode does not start TMs upfront.", !isNewMode);
> {code}
> 2. if (!isNewMode): the logic in the block would never be invoked, so the if block can be removed
> 3. if (isNewMode): the block is always invoked, so the if statement can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] tillrohrmann commented on a change in pull request #6841: [FLINK-10405] [tests] Port JobManagerFailsITCase to new code base

2018-10-15 Thread GitBox
tillrohrmann commented on a change in pull request #6841: [FLINK-10405] [tests] 
Port JobManagerFailsITCase to new code base
URL: https://github.com/apache/flink/pull/6841#discussion_r225087527
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureRecoveryITCase.java
 ##
 @@ -325,6 +325,21 @@ public void run() {
dispatcherProcesses[1] = new DispatcherProcess(1, 
config);
dispatcherProcesses[1].startProcess();
 
+   // get new dispatcher gateway
+   
leaderListener.waitForNewLeader(deadline.timeLeft().toMillis());
+
+   leaderAddress = leaderListener.getAddress();
+   leaderId = leaderListener.getLeaderSessionID();
+
+   final CompletableFuture<DispatcherGateway> newDispatcherGatewayFuture = rpcService.connect(
+   leaderAddress,
+   DispatcherId.fromUuid(leaderId),
+   DispatcherGateway.class);
+   final DispatcherGateway newDispatcherGateway = 
newDispatcherGatewayFuture.get();
+
+   // Wait for all task managers to connect to the new 
leading job manager
+   waitForTaskManagers(numberOfTaskManagers, 
newDispatcherGateway, deadline.timeLeft());
 
 Review comment:
   Why do we have to add this code block? Shouldn't this implicitly be asserted 
if the job finishes successfully?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10405) Port JobManagerFailsITCase to new code base

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649925#comment-16649925
 ] 

ASF GitHub Bot commented on FLINK-10405:


tillrohrmann commented on a change in pull request #6841: [FLINK-10405] [tests] 
Port JobManagerFailsITCase to new code base
URL: https://github.com/apache/flink/pull/6841#discussion_r225087527
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureRecoveryITCase.java
 ##
 @@ -325,6 +325,21 @@ public void run() {
dispatcherProcesses[1] = new DispatcherProcess(1, 
config);
dispatcherProcesses[1].startProcess();
 
+   // get new dispatcher gateway
+   
leaderListener.waitForNewLeader(deadline.timeLeft().toMillis());
+
+   leaderAddress = leaderListener.getAddress();
+   leaderId = leaderListener.getLeaderSessionID();
+
+   final CompletableFuture<DispatcherGateway> newDispatcherGatewayFuture = rpcService.connect(
+   leaderAddress,
+   DispatcherId.fromUuid(leaderId),
+   DispatcherGateway.class);
+   final DispatcherGateway newDispatcherGateway = 
newDispatcherGatewayFuture.get();
+
+   // Wait for all task managers to connect to the new 
leading job manager
+   waitForTaskManagers(numberOfTaskManagers, 
newDispatcherGateway, deadline.timeLeft());
 
 Review comment:
   Why do we have to add this code block? Shouldn't this implicitly be asserted 
if the job finishes successfully?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Port JobManagerFailsITCase to new code base
> ---
>
> Key: FLINK-10405
> URL: https://issues.apache.org/jira/browse/FLINK-10405
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Port {{JobManagerFailsITCase}} to new code base.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] TisonKun commented on a change in pull request #6841: [FLINK-10405] [tests] Port JobManagerFailsITCase to new code base

2018-10-15 Thread GitBox
TisonKun commented on a change in pull request #6841: [FLINK-10405] [tests] 
Port JobManagerFailsITCase to new code base
URL: https://github.com/apache/flink/pull/6841#discussion_r225089055
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureRecoveryITCase.java
 ##
 @@ -325,6 +325,21 @@ public void run() {
dispatcherProcesses[1] = new DispatcherProcess(1, 
config);
dispatcherProcesses[1].startProcess();
 
+   // get new dispatcher gateway
+   
leaderListener.waitForNewLeader(deadline.timeLeft().toMillis());
+
+   leaderAddress = leaderListener.getAddress();
+   leaderId = leaderListener.getLeaderSessionID();
+
+   final CompletableFuture<DispatcherGateway> newDispatcherGatewayFuture = rpcService.connect(
+   leaderAddress,
+   DispatcherId.fromUuid(leaderId),
+   DispatcherGateway.class);
+   final DispatcherGateway newDispatcherGateway = 
newDispatcherGatewayFuture.get();
+
+   // Wait for all task managers to connect to the new 
leading job manager
+   waitForTaskManagers(numberOfTaskManagers, 
newDispatcherGateway, deadline.timeLeft());
 
 Review comment:
   Yes, of course. I saw that the removed test "A TaskManager detect a lost connection to the JobManager and try to reconnect to it" explicitly asserts this, and thus added this block. I am not opposed to reverting it, since it is implicitly asserted if the job finishes successfully.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10405) Port JobManagerFailsITCase to new code base

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649931#comment-16649931
 ] 

ASF GitHub Bot commented on FLINK-10405:


TisonKun commented on a change in pull request #6841: [FLINK-10405] [tests] 
Port JobManagerFailsITCase to new code base
URL: https://github.com/apache/flink/pull/6841#discussion_r225089055
 
 

 ##
 File path: 
flink-tests/src/test/java/org/apache/flink/test/recovery/JobManagerHAProcessFailureRecoveryITCase.java
 ##
 @@ -325,6 +325,21 @@ public void run() {
dispatcherProcesses[1] = new DispatcherProcess(1, 
config);
dispatcherProcesses[1].startProcess();
 
+   // get new dispatcher gateway
+   
leaderListener.waitForNewLeader(deadline.timeLeft().toMillis());
+
+   leaderAddress = leaderListener.getAddress();
+   leaderId = leaderListener.getLeaderSessionID();
+
+   final CompletableFuture<DispatcherGateway> newDispatcherGatewayFuture = rpcService.connect(
+   leaderAddress,
+   DispatcherId.fromUuid(leaderId),
+   DispatcherGateway.class);
+   final DispatcherGateway newDispatcherGateway = 
newDispatcherGatewayFuture.get();
+
+   // Wait for all task managers to connect to the new 
leading job manager
+   waitForTaskManagers(numberOfTaskManagers, 
newDispatcherGateway, deadline.timeLeft());
 
 Review comment:
   Yes, of course. I saw that the removed test "A TaskManager detect a lost connection to the JobManager and try to reconnect to it" explicitly asserts this, and thus added this block. I am not opposed to reverting it, since it is implicitly asserted if the job finishes successfully.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Port JobManagerFailsITCase to new code base
> ---
>
> Key: FLINK-10405
> URL: https://issues.apache.org/jira/browse/FLINK-10405
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Port {{JobManagerFailsITCase}} to new code base.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] tillrohrmann closed pull request #6827: [FLINK-10530][tests] Harden ProcessFailureCancelingITCase and AbstractTaskManagerProcessFailureRecovery

2018-10-15 Thread GitBox
tillrohrmann closed pull request #6827: [FLINK-10530][tests] Harden 
ProcessFailureCancelingITCase and AbstractTaskManagerProcessFailureRecovery
URL: https://github.com/apache/flink/pull/6827
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-tests/src/test/java/org/apache/flink/test/recovery/AbstractTaskManagerProcessFailureRecoveryTest.java
 
b/flink-tests/src/test/java/org/apache/flink/test/recovery/AbstractTaskManagerProcessFailureRecoveryTest.java
index 5d7f26bb886..83298aa78ec 100644
--- 
a/flink-tests/src/test/java/org/apache/flink/test/recovery/AbstractTaskManagerProcessFailureRecoveryTest.java
+++ 
b/flink-tests/src/test/java/org/apache/flink/test/recovery/AbstractTaskManagerProcessFailureRecoveryTest.java
@@ -18,18 +18,19 @@
 
 package org.apache.flink.test.recovery;
 
+import org.apache.flink.api.java.utils.ParameterTool;
 import org.apache.flink.configuration.AkkaOptions;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.HeartbeatManagerOptions;
+import org.apache.flink.configuration.HighAvailabilityOptions;
 import org.apache.flink.configuration.JobManagerOptions;
-import org.apache.flink.configuration.RestOptions;
 import org.apache.flink.configuration.TaskManagerOptions;
 import org.apache.flink.runtime.clusterframework.types.ResourceID;
 import org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint;
 import org.apache.flink.runtime.taskexecutor.TaskManagerRunner;
 import org.apache.flink.runtime.testutils.CommonTestUtils;
 import org.apache.flink.runtime.util.BlobServerResource;
-import org.apache.flink.util.NetUtils;
+import org.apache.flink.runtime.zookeeper.ZooKeeperResource;
 import org.apache.flink.util.TestLogger;
 
 import org.junit.Rule;
@@ -42,6 +43,8 @@
 import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.StringWriter;
+import java.util.ArrayList;
+import java.util.Map;
 import java.util.concurrent.atomic.AtomicReference;
 
 import static 
org.apache.flink.runtime.testutils.CommonTestUtils.getCurrentClasspath;
@@ -76,6 +79,9 @@
@Rule
public final BlobServerResource blobServerResource = new 
BlobServerResource();
 
+   @Rule
+   public final ZooKeeperResource zooKeeperResource = new 
ZooKeeperResource();
+
@Test
public void testTaskManagerProcessFailure() throws Exception {
 
@@ -89,18 +95,19 @@ public void testTaskManagerProcessFailure() throws 
Exception {
 
File coordinateTempDir = null;
 
-   final int jobManagerPort = NetUtils.getAvailablePort();
-   final int restPort = NetUtils.getAvailablePort();
-
-   Configuration jmConfig = new Configuration();
-   jmConfig.setString(AkkaOptions.ASK_TIMEOUT, "100 s");
-   jmConfig.setString(JobManagerOptions.ADDRESS, "localhost");
-   jmConfig.setInteger(JobManagerOptions.PORT, jobManagerPort);
-   jmConfig.setLong(HeartbeatManagerOptions.HEARTBEAT_INTERVAL, 
500L);
-   jmConfig.setLong(HeartbeatManagerOptions.HEARTBEAT_TIMEOUT, 
1L);
-   jmConfig.setInteger(RestOptions.PORT, restPort);
-
-   try (final StandaloneSessionClusterEntrypoint clusterEntrypoint 
= new StandaloneSessionClusterEntrypoint(jmConfig)) {
+   Configuration config = new Configuration();
+   config.setString(AkkaOptions.ASK_TIMEOUT, "100 s");
+   config.setString(JobManagerOptions.ADDRESS, "localhost");
+   config.setLong(HeartbeatManagerOptions.HEARTBEAT_INTERVAL, 
500L);
+   config.setLong(HeartbeatManagerOptions.HEARTBEAT_TIMEOUT, 
1L);
+   config.setString(HighAvailabilityOptions.HA_MODE, "zookeeper");
+   config.setString(HighAvailabilityOptions.HA_ZOOKEEPER_QUORUM, 
zooKeeperResource.getConnectString());
+   config.setString(HighAvailabilityOptions.HA_STORAGE_PATH, 
temporaryFolder.newFolder().getAbsolutePath());
+   config.setInteger(TaskManagerOptions.NUM_TASK_SLOTS, 2);
+   config.setString(TaskManagerOptions.MANAGED_MEMORY_SIZE, "4m");
+   config.setInteger(TaskManagerOptions.NETWORK_NUM_BUFFERS, 100);
+
+   try (final StandaloneSessionClusterEntrypoint clusterEntrypoint 
= new StandaloneSessionClusterEntrypoint(config)) {
// check that we run this test only if the java command
// is available on this machine
String javaCommand = getJavaCommandPath();
@@ -119,21 +126,28 @@ public void testTaskManagerProcessFailure() throws 
Exception {
 
clusterEntrypoint.startCluster();
 
+   final Map keyValues = 

[jira] [Resolved] (FLINK-10530) ProcessFailureCancelingITCase.testCancelingOnProcessFailure failed on Travis.

2018-10-15 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-10530.
---
Resolution: Fixed

Fixed via 
https://github.com/apache/flink/commit/bbee77ab33f8ac2595ca3608d9d0d875c1715863

> ProcessFailureCancelingITCase.testCancelingOnProcessFailure failed on Travis.
> -
>
> Key: FLINK-10530
> URL: https://issues.apache.org/jira/browse/FLINK-10530
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: Kostas Kloudas
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.7.0
>
>
> The logs from Travis: https://api.travis-ci.org/v3/job/440109944/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10530) ProcessFailureCancelingITCase.testCancelingOnProcessFailure failed on Travis.

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649932#comment-16649932
 ] 

ASF GitHub Bot commented on FLINK-10530:


tillrohrmann closed pull request #6827: [FLINK-10530][tests] Harden 
ProcessFailureCancelingITCase and AbstractTaskManagerProcessFailureRecovery
URL: https://github.com/apache/flink/pull/6827
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-tests/src/test/java/org/apache/flink/test/recovery/AbstractTaskManagerProcessFailureRecoveryTest.java
 
b/flink-tests/src/test/java/org/apache/flink/test/recovery/AbstractTaskManagerProcessFailureRecoveryTest.java
index 5d7f26bb886..83298aa78ec 100644
--- 
a/flink-tests/src/test/java/org/apache/flink/test/recovery/AbstractTaskManagerProcessFailureRecoveryTest.java
+++ 
b/flink-tests/src/test/java/org/apache/flink/test/recovery/AbstractTaskManagerProcessFailureRecoveryTest.java
@@ -18,18 +18,19 @@
 
 package org.apache.flink.test.recovery;
 
+import org.apache.flink.api.java.utils.ParameterTool;
 import org.apache.flink.configuration.AkkaOptions;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.HeartbeatManagerOptions;
+import org.apache.flink.configuration.HighAvailabilityOptions;
 import org.apache.flink.configuration.JobManagerOptions;
-import org.apache.flink.configuration.RestOptions;
 import org.apache.flink.configuration.TaskManagerOptions;
 import org.apache.flink.runtime.clusterframework.types.ResourceID;
 import org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint;
 import org.apache.flink.runtime.taskexecutor.TaskManagerRunner;
 import org.apache.flink.runtime.testutils.CommonTestUtils;
 import org.apache.flink.runtime.util.BlobServerResource;
-import org.apache.flink.util.NetUtils;
+import org.apache.flink.runtime.zookeeper.ZooKeeperResource;
 import org.apache.flink.util.TestLogger;
 
 import org.junit.Rule;
@@ -42,6 +43,8 @@
 import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.StringWriter;
+import java.util.ArrayList;
+import java.util.Map;
 import java.util.concurrent.atomic.AtomicReference;
 
 import static 
org.apache.flink.runtime.testutils.CommonTestUtils.getCurrentClasspath;
@@ -76,6 +79,9 @@
@Rule
public final BlobServerResource blobServerResource = new 
BlobServerResource();
 
+   @Rule
+   public final ZooKeeperResource zooKeeperResource = new 
ZooKeeperResource();
+
@Test
public void testTaskManagerProcessFailure() throws Exception {
 
@@ -89,18 +95,19 @@ public void testTaskManagerProcessFailure() throws 
Exception {
 
File coordinateTempDir = null;
 
-   final int jobManagerPort = NetUtils.getAvailablePort();
-   final int restPort = NetUtils.getAvailablePort();
-
-   Configuration jmConfig = new Configuration();
-   jmConfig.setString(AkkaOptions.ASK_TIMEOUT, "100 s");
-   jmConfig.setString(JobManagerOptions.ADDRESS, "localhost");
-   jmConfig.setInteger(JobManagerOptions.PORT, jobManagerPort);
-   jmConfig.setLong(HeartbeatManagerOptions.HEARTBEAT_INTERVAL, 
500L);
-   jmConfig.setLong(HeartbeatManagerOptions.HEARTBEAT_TIMEOUT, 
1L);
-   jmConfig.setInteger(RestOptions.PORT, restPort);
-
-   try (final StandaloneSessionClusterEntrypoint clusterEntrypoint 
= new StandaloneSessionClusterEntrypoint(jmConfig)) {
+   Configuration config = new Configuration();
+   config.setString(AkkaOptions.ASK_TIMEOUT, "100 s");
+   config.setString(JobManagerOptions.ADDRESS, "localhost");
+   config.setLong(HeartbeatManagerOptions.HEARTBEAT_INTERVAL, 
500L);
+   config.setLong(HeartbeatManagerOptions.HEARTBEAT_TIMEOUT, 
1L);
+   config.setString(HighAvailabilityOptions.HA_MODE, "zookeeper");
+   config.setString(HighAvailabilityOptions.HA_ZOOKEEPER_QUORUM, 
zooKeeperResource.getConnectString());
+   config.setString(HighAvailabilityOptions.HA_STORAGE_PATH, 
temporaryFolder.newFolder().getAbsolutePath());
+   config.setInteger(TaskManagerOptions.NUM_TASK_SLOTS, 2);
+   config.setString(TaskManagerOptions.MANAGED_MEMORY_SIZE, "4m");
+   config.setInteger(TaskManagerOptions.NETWORK_NUM_BUFFERS, 100);
+
+   try (final StandaloneSessionClusterEntrypoint clusterEntrypoint 
= new StandaloneSessionClusterEntrypoint(config)) {
// check that we run this test only if the java command
// is available on this machine
 

[GitHub] yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add Tanh math function supported in Table API and SQL

2018-10-15 Thread GitBox
yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225090888
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -1219,6 +1219,18 @@ TAN(numeric)
   
 
 
+
+  
+{% highlight text %}
+TANH(numeric)
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of NUMERIC. 
+Return value type is DOUBLE.
 
 Review comment:
   @pnowojski Yes, in the cosh PR, @xccui had already given this review suggestion. But because I submitted a lot of PRs, I forgot it, and you probably didn't notice it either (this is why I discussed with xccui whether we could review more efficiently; that would make it easier to avoid this happening). In the end, I used that PR as a template.
   
   I think this question is now 100% clear. I respect your and @xccui's review suggestions. Thank you again for your time and effort.
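   
   (Editorial note: a hedged illustration of the semantics being documented here, assuming the SQL function behaves like the JDK's `java.lang.Math.tanh` and returns a DOUBLE:)
   
   ```java
   // Illustrative only: the expected behaviour of TANH(numeric), which returns DOUBLE.
   public class TanhSemanticsExample {
       public static void main(String[] args) {
           double x = 0.5;
           // definition of the hyperbolic tangent: (e^x - e^-x) / (e^x + e^-x)
           double viaDefinition = (Math.exp(x) - Math.exp(-x)) / (Math.exp(x) + Math.exp(-x));
           double viaJdk = Math.tanh(x); // ~0.4621171572600098
           System.out.println(viaJdk + " vs " + viaDefinition);
           // the SQL counterpart would be: SELECT TANH(0.5)
       }
   }
   ```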


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tillrohrmann commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode in YarnTestBase

2018-10-15 Thread GitBox
tillrohrmann commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode 
in YarnTestBase
URL: https://github.com/apache/flink/pull/6816#issuecomment-429767716
 
 
   I agree with @TisonKun to do the porting step by step. I would even say to remove `isNewMode` last, after all the porting has been done.
   
   Concerning `YARNSessionCapacitySchedulerITCase#testTaskManagerFailure`, I 
think you're right @yanghua and we have to submit a blocking job to test the 
behaviour.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10398) Add Tanh math function supported in Table API and SQL

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649942#comment-16649942
 ] 

ASF GitHub Bot commented on FLINK-10398:


yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225090888
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -1219,6 +1219,18 @@ TAN(numeric)
   
 
 
+
+  
+{% highlight text %}
+TANH(numeric)
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of NUMERIC. 
+Return value type is DOUBLE.
 
 Review comment:
   @pnowojski Yes, in the cosh PR, @xccui had already given this review suggestion. But because I submitted a lot of PRs, I forgot it, and you probably didn't notice it either (this is why I discussed with xccui whether we could review more efficiently; that would make it easier to avoid this happening). In the end, I used that PR as a template.
   
   I think this question is now 100% clear. I respect your and @xccui's review suggestions. Thank you again for your time and effort.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Tanh math function supported in Table API and SQL
> -
>
> Key: FLINK-10398
> URL: https://issues.apache.org/jira/browse/FLINK-10398
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : https://www.techonthenet.com/oracle/functions/tanh.php



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add Tanh math function supported in Table API and SQL

2018-10-15 Thread GitBox
yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225091618
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -1219,6 +1219,18 @@ TAN(numeric)
   
 
 
+
+  
+{% highlight text %}
+TANH(numeric)
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of numeric. 
 
 Review comment:
   `numeric` is used because of:
   
   ```
   TANH(numeric)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10527) Cleanup constant isNewMode in YarnTestBase

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649943#comment-16649943
 ] 

ASF GitHub Bot commented on FLINK-10527:


tillrohrmann commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode 
in YarnTestBase
URL: https://github.com/apache/flink/pull/6816#issuecomment-429767716
 
 
   I agree with @TisonKun to do the porting step by step. I would even say to remove `isNewMode` last, after all the porting has been done.
   
   Concerning `YARNSessionCapacitySchedulerITCase#testTaskManagerFailure`, I 
think you're right @yanghua and we have to submit a blocking job to test the 
behaviour.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Cleanup constant isNewMode in YarnTestBase
> --
>
> Key: FLINK-10527
> URL: https://issues.apache.org/jira/browse/FLINK-10527
> Project: Flink
>  Issue Type: Sub-task
>  Components: YARN
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> This seems to be a residual problem with FLINK-10396. It is set to true in 
> that PR. Currently it has three usage scenarios:
> 1. in an assertion, which causes an error
> {code:java}
> assumeTrue("The new mode does not start TMs upfront.", !isNewMode);
> {code}
> 2. if (!isNewMode): the logic in the block would never be invoked, so the if block can be removed
> 3. if (isNewMode): the block is always invoked, so the if statement can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add Tanh math function supported in Table API and SQL

2018-10-15 Thread GitBox
yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225091618
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -1219,6 +1219,18 @@ TAN(numeric)
   
 
 
+
+  
+{% highlight text %}
+TANH(numeric)
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of numeric. 
 
 Review comment:
   @pnowojski  `numeric` is used because of:
   
   ```
   TANH(numeric)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10398) Add Tanh math function supported in Table API and SQL

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649946#comment-16649946
 ] 

ASF GitHub Bot commented on FLINK-10398:


yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225091618
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -1219,6 +1219,18 @@ TAN(numeric)
   
 
 
+
+  
+{% highlight text %}
+TANH(numeric)
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of numeric. 
 
 Review comment:
   @pnowojski  `numeric` is used because of:
   
   ```
   TANH(numeric)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Tanh math function supported in Table API and SQL
> -
>
> Key: FLINK-10398
> URL: https://issues.apache.org/jira/browse/FLINK-10398
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : https://www.techonthenet.com/oracle/functions/tanh.php



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10398) Add Tanh math function supported in Table API and SQL

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649944#comment-16649944
 ] 

ASF GitHub Bot commented on FLINK-10398:


yanghua commented on a change in pull request #6736: [FLINK-10398][table] Add 
Tanh math function supported in Table API and SQL
URL: https://github.com/apache/flink/pull/6736#discussion_r225091618
 
 

 ##
 File path: docs/dev/table/functions.md
 ##
 @@ -1219,6 +1219,18 @@ TAN(numeric)
   
 
 
+
+  
+{% highlight text %}
+TANH(numeric)
+{% endhighlight %}
+  
+  
+Returns the hyperbolic tangent of numeric. 
 
 Review comment:
   `numeric` is used because of:
   
   ```
   TANH(numeric)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Tanh math function supported in Table API and SQL
> -
>
> Key: FLINK-10398
> URL: https://issues.apache.org/jira/browse/FLINK-10398
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> refer to : https://www.techonthenet.com/oracle/functions/tanh.php



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode in YarnTestBase

2018-10-15 Thread GitBox
yanghua commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode in 
YarnTestBase
URL: https://github.com/apache/flink/pull/6816#issuecomment-429769018
 
 
   > Hi @yanghua , I see your last commit and think it still uses the legacy mode (since you start an actor system and use `ActorGateway` instead of FLIP-6 components).
   
   @TisonKun  As mentioned in my last comment, I am currently focusing on the failure of YARNSessionCapacitySchedulerITCase#testTaskManagerFailure.
   
   > To keep this pull request minimal, as its title "Cleanup constant isNewMode in YarnTestBase" suggests, I propose to retain the clean up in `YARNSessionFIFOITCase`, simply remove `isNewMode`, and add `@Ignore` to `YARNHighAvailabilityITCase`, `YARNSessionCapacitySchedulerITCase#testClientStartup` and `YARNSessionCapacitySchedulerITCase#testTaskManagerFailure`. Meanwhile, we can raise a JIRA about porting those tests. What do you think?
   
   I agree with your opinion.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10527) Cleanup constant isNewMode in YarnTestBase

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649950#comment-16649950
 ] 

ASF GitHub Bot commented on FLINK-10527:


yanghua commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode in 
YarnTestBase
URL: https://github.com/apache/flink/pull/6816#issuecomment-429769018
 
 
   > Hi @yanghua , I see your last commit and think it still uses the legacy mode (since you start an actor system and use `ActorGateway` instead of FLIP-6 components).
   
   @TisonKun  As mentioned in my last comment, I am currently focusing on the failure of YARNSessionCapacitySchedulerITCase#testTaskManagerFailure.
   
   > To keep this pull request minimal, as its title "Cleanup constant isNewMode in YarnTestBase" suggests, I propose to retain the clean up in `YARNSessionFIFOITCase`, simply remove `isNewMode`, and add `@Ignore` to `YARNHighAvailabilityITCase`, `YARNSessionCapacitySchedulerITCase#testClientStartup` and `YARNSessionCapacitySchedulerITCase#testTaskManagerFailure`. Meanwhile, we can raise a JIRA about porting those tests. What do you think?
   
   I agree with your opinion.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Cleanup constant isNewMode in YarnTestBase
> --
>
> Key: FLINK-10527
> URL: https://issues.apache.org/jira/browse/FLINK-10527
> Project: Flink
>  Issue Type: Sub-task
>  Components: YARN
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> This seems to be a residual problem with FLINK-10396. It is set to true in 
> that PR. Currently it has three usage scenarios:
> 1. in an assertion, which causes an error
> {code:java}
> assumeTrue("The new mode does not start TMs upfront.", !isNewMode);
> {code}
> 2. if (!isNewMode): the logic in the block would never be invoked, so the if block can be removed
> 3. if (isNewMode): the block is always invoked, so the if statement can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode in YarnTestBase

2018-10-15 Thread GitBox
yanghua commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode in 
YarnTestBase
URL: https://github.com/apache/flink/pull/6816#issuecomment-429770481
 
 
   > Concerning `YARNSessionCapacitySchedulerITCase#testTaskManagerFailure`, I 
think you're right @yanghua and we have to submit a blocking job to test the 
behaviour.
   
   For now, submitting jobs will cause them to share stdout and stderr, which would invalidate the original test logic. So it seems difficult at the moment to port it to the FLIP-6 mode.
   
   @tillrohrmann  I think we have reached a consensus: follow the 
recommendations of @TisonKun .


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10527) Cleanup constant isNewMode in YarnTestBase

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649956#comment-16649956
 ] 

ASF GitHub Bot commented on FLINK-10527:


yanghua commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode in 
YarnTestBase
URL: https://github.com/apache/flink/pull/6816#issuecomment-429770481
 
 
   > Concerning `YARNSessionCapacitySchedulerITCase#testTaskManagerFailure`, I 
think you're right @yanghua and we have to submit a blocking job to test the 
behaviour.
   
   For now, submitting jobs will cause them to share stdout and stderr, which would invalidate the original test logic. So it seems difficult at the moment to port it to the FLIP-6 mode.
   
   @tillrohrmann  I think we have reached a consensus: follow the 
recommendations of @TisonKun .


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Cleanup constant isNewMode in YarnTestBase
> --
>
> Key: FLINK-10527
> URL: https://issues.apache.org/jira/browse/FLINK-10527
> Project: Flink
>  Issue Type: Sub-task
>  Components: YARN
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> This seems to be a residual problem with FLINK-10396. It is set to true in 
> that PR. Currently it has three usage scenarios:
> 1. in an assertion, which causes an error
> {code:java}
> assumeTrue("The new mode does not start TMs upfront.", !isNewMode);
> {code}
> 2. if (!isNewMode): the logic in the block would never be invoked, so the if block can be removed
> 3. if (isNewMode): the block is always invoked, so the if statement can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10511) Code duplication of creating RPC service in ClusterEntrypoint and AkkaRpcServiceUtils

2018-10-15 Thread Shimin Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shimin Yang updated FLINK-10511:

Issue Type: Improvement  (was: Bug)

> Code duplication of creating RPC service in ClusterEntrypoint and 
> AkkaRpcServiceUtils
> -
>
> Key: FLINK-10511
> URL: https://issues.apache.org/jira/browse/FLINK-10511
> Project: Flink
>  Issue Type: Improvement
>  Components: Cluster Management
>Reporter: Shimin Yang
>Assignee: Shimin Yang
>Priority: Minor
>
> The TaskManagerRunner is using AkkaRpcServiceUtils to create its RPC service, but 
> the ClusterEntrypoint uses a protected method to do the same job. I think it's 
> better to use the same method in AkkaRpcServiceUtils for code reuse.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] yanghua commented on a change in pull request #6839: [FLINK-10253] Run MetricQueryService with lower priority

2018-10-15 Thread GitBox
yanghua commented on a change in pull request #6839: [FLINK-10253] Run 
MetricQueryService with lower priority
URL: https://github.com/apache/flink/pull/6839#discussion_r225098847
 
 

 ##
 File path: 
flink-runtime/src/main/scala/akka/dispatch/PriorityThreadFactory.scala
 ##
 @@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package akka.dispatch
 
 Review comment:
   I tried this, but we cannot do it, because `DefaultDispatcherPrerequisites` cannot be accessed outside the `akka.dispatch` package.
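   
   (Editorial note: independent of the Akka-specific placement discussed here, the underlying idea is a `ThreadFactory` that hands out lower-priority threads. A minimal JDK-only sketch with an assumed class name, not the PR's actual Scala class:)
   
   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.ThreadFactory;
   
   // Illustrative sketch only; the PR's real factory lives in the akka.dispatch package (Scala)
   // so that it can plug into Akka's dispatcher prerequisites.
   public class LowPriorityThreadFactorySketch implements ThreadFactory {
   
       private final int priority; // 1 = Thread.MIN_PRIORITY ... 10 = Thread.MAX_PRIORITY
   
       public LowPriorityThreadFactorySketch(int priority) {
           this.priority = priority;
       }
   
       @Override
       public Thread newThread(Runnable runnable) {
           Thread thread = new Thread(runnable);
           thread.setPriority(priority); // lower priority than the main components
           thread.setDaemon(true);
           return thread;
       }
   
       public static void main(String[] args) {
           // e.g. run metric-query-style work on a low-priority thread
           ExecutorService executor = Executors.newSingleThreadExecutor(
               new LowPriorityThreadFactorySketch(Thread.MIN_PRIORITY));
           executor.submit(() -> System.out.println(Thread.currentThread().getPriority()));
           executor.shutdown();
       }
   }
   ```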


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10253) Run MetricQueryService with lower priority

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649973#comment-16649973
 ] 

ASF GitHub Bot commented on FLINK-10253:


yanghua commented on a change in pull request #6839: [FLINK-10253] Run 
MetricQueryService with lower priority
URL: https://github.com/apache/flink/pull/6839#discussion_r225098847
 
 

 ##
 File path: 
flink-runtime/src/main/scala/akka/dispatch/PriorityThreadFactory.scala
 ##
 @@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package akka.dispatch
 
 Review comment:
   I tried this, but we cannot do it, because `DefaultDispatcherPrerequisites` cannot be accessed outside the `akka.dispatch` package.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Run MetricQueryService with lower priority
> --
>
> Key: FLINK-10253
> URL: https://issues.apache.org/jira/browse/FLINK-10253
> Project: Flink
>  Issue Type: Sub-task
>  Components: Metrics
>Affects Versions: 1.5.3, 1.6.0, 1.7.0
>Reporter: Till Rohrmann
>Assignee: vinoyang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.7.0, 1.6.2, 1.5.5
>
>
> We should run the {{MetricQueryService}} with a lower priority than the main 
> Flink components. An idea would be to start the underlying threads with a 
> lower priority.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] Clarkkkkk opened a new pull request #6845: [FLINK-10511][Cluster Management] Reuse the port selection and RPC se…

2018-10-15 Thread GitBox
Clarkkkkk opened a new pull request #6845: [FLINK-10511][Cluster Management] 
Reuse the port selection and RPC se…
URL: https://github.com/apache/flink/pull/6845
 
 
   ## What is the purpose of the change
   
   Reuse the port selection and RPC service creation logic in JM and TM
   
   ## Brief change log
 - Add support for creating RPC service for a port range in 
AkkaRpcServiceUtils
 - Get rid of port selection of TaskManager and use AkkaRpcServiceUtils 
instead
 - Move the creation of RpcService to AkkaRpcServiceUtils in 
ClusterEntrypoint
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
TaskManagerRunnerTest and all kinds of integration tests.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-10511) Code duplication of creating RPC service in ClusterEntrypoint and AkkaRpcServiceUtils

2018-10-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10511:
---
Labels: pull-request-available  (was: )

> Code duplication of creating RPC service in ClusterEntrypoint and 
> AkkaRpcServiceUtils
> -
>
> Key: FLINK-10511
> URL: https://issues.apache.org/jira/browse/FLINK-10511
> Project: Flink
>  Issue Type: Improvement
>  Components: Cluster Management
>Reporter: Shimin Yang
>Assignee: Shimin Yang
>Priority: Minor
>  Labels: pull-request-available
>
> The TaskManagerRunner is using AkkaRpcServiceUtils to create its RPC service, but 
> the ClusterEntrypoint uses a protected method to do the same job. I think it's 
> better to use the same method in AkkaRpcServiceUtils for code reuse.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10511) Code duplication of creating RPC service in ClusterEntrypoint and AkkaRpcServiceUtils

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649976#comment-16649976
 ] 

ASF GitHub Bot commented on FLINK-10511:


Clarkkkkk opened a new pull request #6845: [FLINK-10511][Cluster Management] 
Reuse the port selection and RPC se…
URL: https://github.com/apache/flink/pull/6845
 
 
   ## What is the purpose of the change
   
   Reuse the port selection and RPC service creation logic in JM and TM
   
   ## Brief change log
 - Add support for creating RPC service for a port range in 
AkkaRpcServiceUtils
 - Get rid of port selection of TaskManager and use AkkaRpcServiceUtils 
instead
 - Move the creation of RpcService to AkkaRpcServiceUtils in 
ClusterEntrypoint
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
TaskManagerRunnerTest and all kinds of integration tests.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Code duplication of creating RPC service in ClusterEntrypoint and 
> AkkaRpcServiceUtils
> -
>
> Key: FLINK-10511
> URL: https://issues.apache.org/jira/browse/FLINK-10511
> Project: Flink
>  Issue Type: Improvement
>  Components: Cluster Management
>Reporter: Shimin Yang
>Assignee: Shimin Yang
>Priority: Minor
>  Labels: pull-request-available
>
> The TaskManagerRunner is using AkkaRpcServiceUtils to create its RPC service, but 
> the ClusterEntrypoint uses a protected method to do the same job. I think it's 
> better to use the same method in AkkaRpcServiceUtils for code reuse.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] TisonKun commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode in YarnTestBase

2018-10-15 Thread GitBox
TisonKun commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode in 
YarnTestBase
URL: https://github.com/apache/flink/pull/6816#issuecomment-429777905
 
 
   > Concerning YARNSessionCapacitySchedulerITCase#testTaskManagerFailure, I 
think you're right @yanghua and we have to submit a blocking job to test the 
behaviour.
   
   Agreed, like in `JobManagerHAProcessFailureBatchRecoveryITCase`, but this time we have a YARN-based AM and TMs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10527) Cleanup constant isNewMode in YarnTestBase

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649980#comment-16649980
 ] 

ASF GitHub Bot commented on FLINK-10527:


TisonKun commented on issue #6816: [FLINK-10527] Cleanup constant isNewMode in 
YarnTestBase
URL: https://github.com/apache/flink/pull/6816#issuecomment-429777905
 
 
   > Concerning YARNSessionCapacitySchedulerITCase#testTaskManagerFailure, I 
think you're right @yanghua and we have to submit a blocking job to test the 
behaviour.
   
   Agreed, like in `JobManagerHAProcessFailureBatchRecoveryITCase`, but this time we have a YARN-based AM and TMs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Cleanup constant isNewMode in YarnTestBase
> --
>
> Key: FLINK-10527
> URL: https://issues.apache.org/jira/browse/FLINK-10527
> Project: Flink
>  Issue Type: Sub-task
>  Components: YARN
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> This seems to be a residual problem with FLINK-10396. It is set to true in 
> that PR. Currently it has three usage scenarios:
> 1. in an assertion, which causes an error
> {code:java}
> assumeTrue("The new mode does not start TMs upfront.", !isNewMode);
> {code}
> 2. if (!isNewMode): the logic in the block would never be invoked, so the if block can be removed
> 3. if (isNewMode): the block is always invoked, so the if statement can be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] Clarkkkkk commented on a change in pull request #6839: [FLINK-10253] Run MetricQueryService with lower priority

2018-10-15 Thread GitBox
Clarkkkkk commented on a change in pull request #6839: [FLINK-10253] Run 
MetricQueryService with lower priority
URL: https://github.com/apache/flink/pull/6839#discussion_r225103089
 
 

 ##
 File path: 
flink-core/src/main/java/org/apache/flink/configuration/MetricOptions.java
 ##
 @@ -156,6 +156,17 @@
"ports to avoid collisions when multiple Flink 
components are running on the same machine. Per default " +
"Flink will pick a random port.");
 
+   /**
+* The thread priority for Flink's internal metric query service. The 
{@code 1} means the min priority and the
+* {@code 10} means the max priority.
+*/
+   public static final ConfigOption<Integer> QUERY_SERVICE_THREAD_PRIORITY =
+   key("metrics.internal.query-service.thread-priority")
+   .defaultValue(1)
+   .withDescription("The thread priority used for Flink's internal 
metric query service. The thread is created" +
+   " by Akka's thread pool executor. " +
+   "The range of the priority is from 1 (MIN_PRIORITY) to 
10 (MAX_PRIORITY).");
+
 
 Review comment:
   It would be nicer to add a reminder that increasing this might affect the main components.
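   
   (Editorial note: for context, a user would presumably set the option introduced in the diff above via its string key; a minimal sketch, assuming only the key and default shown there:)
   
   ```java
   import org.apache.flink.configuration.Configuration;
   
   // Sketch only: sets the proposed option by the key shown in the diff above.
   public class MetricQueryServicePrioritySketch {
       public static void main(String[] args) {
           Configuration config = new Configuration();
           // 1 is the lowest priority (the proposed default); raising it may affect the main components.
           config.setInteger("metrics.internal.query-service.thread-priority", 1);
           System.out.println(config);
       }
   }
   ```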


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10253) Run MetricQueryService with lower priority

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649984#comment-16649984
 ] 

ASF GitHub Bot commented on FLINK-10253:


Clarkkkkk commented on a change in pull request #6839: [FLINK-10253] Run 
MetricQueryService with lower priority
URL: https://github.com/apache/flink/pull/6839#discussion_r225103089
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/configuration/MetricOptions.java
 ##
 @@ -156,6 +156,17 @@
 			"ports to avoid collisions when multiple Flink components are running on the same machine. Per default " +
 			"Flink will pick a random port.");
 
+	/**
+	 * The thread priority for Flink's internal metric query service. The {@code 1} means the min priority and the
+	 * {@code 10} means the max priority.
+	 */
+	public static final ConfigOption<Integer> QUERY_SERVICE_THREAD_PRIORITY =
+		key("metrics.internal.query-service.thread-priority")
+			.defaultValue(1)
+			.withDescription("The thread priority used for Flink's internal metric query service. The thread is created" +
+				" by Akka's thread pool executor. " +
+				"The range of the priority is from 1 (MIN_PRIORITY) to 10 (MAX_PRIORITY).");
+
 
 Review comment:
   Adding a reminder that increasing this might affect the main components 
would be nicer.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Run MetricQueryService with lower priority
> --
>
> Key: FLINK-10253
> URL: https://issues.apache.org/jira/browse/FLINK-10253
> Project: Flink
>  Issue Type: Sub-task
>  Components: Metrics
>Affects Versions: 1.5.3, 1.6.0, 1.7.0
>Reporter: Till Rohrmann
>Assignee: vinoyang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.7.0, 1.6.2, 1.5.5
>
>
> We should run the {{MetricQueryService}} with a lower priority than the main 
> Flink components. An idea would be to start the underlying threads with a 
> lower priority.
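>
> A minimal sketch of the idea, using plain JDK facilities rather than the 
> actual Akka dispatcher wiring (the executor name below is illustrative):
> {code:java}
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.ThreadFactory;
>
> public class LowPriorityThreadFactorySketch {
>
> 	// Hands out daemon threads with the lowest scheduling priority so that metric
> 	// query work yields to the main Flink components under CPU contention.
> 	static ThreadFactory lowPriorityThreads(String name) {
> 		return runnable -> {
> 			Thread t = new Thread(runnable, name);
> 			t.setPriority(Thread.MIN_PRIORITY); // priority 1
> 			t.setDaemon(true);
> 			return t;
> 		};
> 	}
>
> 	public static void main(String[] args) {
> 		ExecutorService executor =
> 			Executors.newSingleThreadExecutor(lowPriorityThreads("metric-query-service"));
> 		executor.submit(() ->
> 			System.out.println("running with priority " + Thread.currentThread().getPriority()));
> 		executor.shutdown();
> 	}
> }
> {code}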



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] aljoscha commented on issue #6703: [FLINK-9697] Provide connector for Kafka 2.0.0

2018-10-15 Thread GitBox
aljoscha commented on issue #6703: [FLINK-9697] Provide connector for Kafka 
2.0.0
URL: https://github.com/apache/flink/pull/6703#issuecomment-429781892
 
 
   Thanks, @tzulitai!
   
   @eliaslevy Are you happy with the state of the PR or is there anything else 
that needs changing?
   
   @yanghua I'd like to make some changes while merging, if that's ok for you:
   
   - rename `FlinkKafkaInnerProducer` to `FlinkKafkaInternalProducer`
   - rename `KafkaMetricMuttableWrapper` to `KafkaMetricMutableWrapper`


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-9697) Provide connector for Kafka 2.0.0

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649987#comment-16649987
 ] 

ASF GitHub Bot commented on FLINK-9697:
---

aljoscha commented on issue #6703: [FLINK-9697] Provide connector for Kafka 
2.0.0
URL: https://github.com/apache/flink/pull/6703#issuecomment-429781892
 
 
   Thanks, @tzulitai!
   
   @eliaslevy Are you happy with the state of the PR or is there anything else 
that needs changing?
   
   @yanghua I'd like to make some changes while merging, if that's ok for you:
   
   - rename `FlinkKafkaInnerProducer` to `FlinkKafkaInternalProducer`
   - rename `KafkaMetricMuttableWrapper` to `KafkaMetricMutableWrapper`


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Provide connector for Kafka 2.0.0
> -
>
> Key: FLINK-9697
> URL: https://issues.apache.org/jira/browse/FLINK-9697
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> Kafka 2.0.0 would be released soon.
> Here is vote thread:
> [http://search-hadoop.com/m/Kafka/uyzND1vxnEd23QLxb?subj=+VOTE+2+0+0+RC1]
> We should provide connector for Kafka 2.0.0 once it is released.
> Upgrade to 2.0 documentation : 
> http://kafka.apache.org/20/documentation.html#upgrade_2_0_0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] aljoscha commented on issue #6703: [FLINK-9697] Provide connector for Kafka 2.0.0

2018-10-15 Thread GitBox
aljoscha commented on issue #6703: [FLINK-9697] Provide connector for Kafka 
2.0.0
URL: https://github.com/apache/flink/pull/6703#issuecomment-429782199
 
 
   As a follow-up, we should change or add end-to-end tests that use the new 
Kafka consumer. We also need to do this for Scala 2.12 support, because the 
existing connector hierarchy doesn't work with Scala 2.12.
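   
   For context, a minimal sketch of what such an end-to-end test job might look 
like. The class name FlinkKafkaConsumer, the topic, and the broker address are 
illustrative assumptions, not taken from this PR.
   ```java
   import java.util.Properties;
   
   import org.apache.flink.api.common.serialization.SimpleStringSchema;
   import org.apache.flink.streaming.api.datastream.DataStream;
   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
   import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
   
   public class KafkaConnectorSmokeTest {
   
   	public static void main(String[] args) throws Exception {
   		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
   
   		Properties props = new Properties();
   		props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
   		props.setProperty("group.id", "kafka-e2e-test");          // placeholder group id
   
   		// Read from a test topic with the version-agnostic consumer and print the records.
   		DataStream<String> records = env.addSource(
   			new FlinkKafkaConsumer<>("test-topic", new SimpleStringSchema(), props));
   		records.print();
   
   		env.execute("Kafka connector smoke test");
   	}
   }
   ```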


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-9697) Provide connector for Kafka 2.0.0

2018-10-15 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649989#comment-16649989
 ] 

ASF GitHub Bot commented on FLINK-9697:
---

aljoscha commented on issue #6703: [FLINK-9697] Provide connector for Kafka 
2.0.0
URL: https://github.com/apache/flink/pull/6703#issuecomment-429782199
 
 
   As a follow-up, we should change or add end-to-end tests that use the new 
Kafka consumer. We also need to do this for Scala 2.12 support, because the 
existing connector hierarchy doesn't work with Scala 2.12.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Provide connector for Kafka 2.0.0
> -
>
> Key: FLINK-9697
> URL: https://issues.apache.org/jira/browse/FLINK-9697
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> Kafka 2.0.0 would be released soon.
> Here is vote thread:
> [http://search-hadoop.com/m/Kafka/uyzND1vxnEd23QLxb?subj=+VOTE+2+0+0+RC1]
> We should provide connector for Kafka 2.0.0 once it is released.
> Upgrade to 2.0 documentation : 
> http://kafka.apache.org/20/documentation.html#upgrade_2_0_0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10519) flink-parent:1.6.1 artifact can't be found on maven central

2018-10-15 Thread Piotr Nowojski (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-10519:
---
Priority: Blocker  (was: Critical)

> flink-parent:1.6.1 artifact can't be found on maven central
> ---
>
> Key: FLINK-10519
> URL: https://issues.apache.org/jira/browse/FLINK-10519
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.6.1, 1.5.4
>Reporter: Florian Schmidt
>Priority: Blocker
>
> The flink-parent:1.6.1 artifact can't be found on maven central:
> *Stacktrace from maven*
> {code:java}
> ...
> Caused by: org.eclipse.aether.transfer.ArtifactNotFoundException: Could not 
> find artifact org.apache.flink:flink-parent:pom:1.6.1 in central 
> (https://repo.maven.apache.org/maven2)
> ...
> {code}
>  
> Also when browsing the repository in the browser 
> ([https://repo.maven.apache.org/maven2/org/apache/flink/flink-parent/1.6.1/]) 
> it will show the flink-parent artifact in the list, but return 404 when 
> trying to download it. This only seems to happen from some networks, as I was 
> able to successfully run the following on a server that I ssh'd into, but not 
> on my local device.
> {code:java}
> curl 
> https://repo.maven.apache.org/maven2/org/apache/flink/flink-parent/1.6.1/flink-parent-1.6.1.pom{code}
> The artifact can't be found locally, where repo.maven.apache.org resolves to
> {code}
> > host repo.maven.apache.org
> repo.maven.apache.org is an alias for repo.apache.maven.org.
> repo.apache.maven.org is an alias for maven.map.fastly.net.
> maven.map.fastly.net has address 151.101.112.215
> {code}
> On my server repo.maven.apache.org resolves to 151.101.132.215 where the 
> artifact is present.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10519) flink-parent:1.6.1 artifact can't be found on maven central

2018-10-15 Thread Piotr Nowojski (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-10519:
---
Fix Version/s: 1.5.5
   1.6.2
   1.7.0

> flink-parent:1.6.1 artifact can't be found on maven central
> ---
>
> Key: FLINK-10519
> URL: https://issues.apache.org/jira/browse/FLINK-10519
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.6.1, 1.5.4, 1.6.2, 1.5.5
>Reporter: Florian Schmidt
>Priority: Blocker
> Fix For: 1.7.0, 1.6.2, 1.5.5
>
>
> The flink-parent:1.6.1 artifact can't be found on maven central:
> *Stacktrace from maven*
> {code:java}
> ...
> Caused by: org.eclipse.aether.transfer.ArtifactNotFoundException: Could not 
> find artifact org.apache.flink:flink-parent:pom:1.6.1 in central 
> (https://repo.maven.apache.org/maven2)
> ...
> {code}
>  
> Also when browsing the repository in the browser 
> ([https://repo.maven.apache.org/maven2/org/apache/flink/flink-parent/1.6.1/]) 
> it will show the flink-parent artifact in the list, but return 404 when 
> trying to download it. This only seems to happen from some networks, as I was 
> able to successfully run the following on a server that I ssh'd into, but not 
> on my local device.
> {code:java}
> curl 
> https://repo.maven.apache.org/maven2/org/apache/flink/flink-parent/1.6.1/flink-parent-1.6.1.pom{code}
> The artifact can't be found locally, where repo.maven.apache.org resolves to
> {code}
> > host repo.maven.apache.org
> repo.maven.apache.org is an alias for repo.apache.maven.org.
> repo.apache.maven.org is an alias for maven.map.fastly.net.
> maven.map.fastly.net has address 151.101.112.215
> {code}
> On my server repo.maven.apache.org resolves to 151.101.132.215 where the 
> artifact is present.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10519) flink-parent:1.6.1 artifact can't be found on maven central

2018-10-15 Thread Piotr Nowojski (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-10519:
---
Affects Version/s: 1.5.5
   1.6.2

> flink-parent:1.6.1 artifact can't be found on maven central
> ---
>
> Key: FLINK-10519
> URL: https://issues.apache.org/jira/browse/FLINK-10519
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.6.1, 1.5.4, 1.6.2, 1.5.5
>Reporter: Florian Schmidt
>Priority: Blocker
> Fix For: 1.7.0, 1.6.2, 1.5.5
>
>
> The flink-parent:1.6.1 artifact can't be found on maven central:
> *Stacktrace from maven*
> {code:java}
> ...
> Caused by: org.eclipse.aether.transfer.ArtifactNotFoundException: Could not 
> find artifact org.apache.flink:flink-parent:pom:1.6.1 in central 
> (https://repo.maven.apache.org/maven2)
> ...
> {code}
>  
> Also when browsing the repository in the browser 
> ([https://repo.maven.apache.org/maven2/org/apache/flink/flink-parent/1.6.1/]) 
> it will show the flink-parent artifact in the list, but return 404 when 
> trying to download it. This only seems to happen from some networks, as I was 
> able to successfully run the following on a server that I ssh'd into, but not 
> on my local device.
> {code:java}
> curl 
> https://repo.maven.apache.org/maven2/org/apache/flink/flink-parent/1.6.1/flink-parent-1.6.1.pom{code}
> The artifact can't be found locally, where repo.maven.apache.org resolves to
> {code}
> > host repo.maven.apache.org
> repo.maven.apache.org is an alias for repo.apache.maven.org.
> repo.apache.maven.org is an alias for maven.map.fastly.net.
> maven.map.fastly.net has address 151.101.112.215
> {code}
> On my server repo.maven.apache.org resolves to 151.101.132.215 where the 
> artifact is present.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

