[jira] [Commented] (FLINK-7367) Parameterize more configs for FlinkKinesisProducer (RecordMaxBufferedTime, MaxConnections, RequestTimeout, etc)

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121171#comment-16121171
 ] 

ASF GitHub Bot commented on FLINK-7367:
---

Github user bowenli86 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4473#discussion_r132376988
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/config/ProducerConfigConstants.java
 ---
@@ -24,10 +24,28 @@
  */
 public class ProducerConfigConstants extends AWSConfigConstants {
 
+	/** Deprecated key. **/
+	public static final String DEPRECATED_COLLECTION_MAX_COUNT = "aws.producer.collectionMaxCount";
+
+	/** Deprecated key. **/
+	public static final String DEPRECATED_AGGREGATION_MAX_COUNT = "aws.producer.aggregationMaxCount";
+
 	/** Maximum number of items to pack into a PutRecords request. **/
-	public static final String COLLECTION_MAX_COUNT = "aws.producer.collectionMaxCount";
+	public static final String COLLECTION_MAX_COUNT = "CollectionMaxCount";
--- End diff --

`CollectionMaxCount` and `AggregationMaxCount` will be `protected`, since 
there's a unit test using them.
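
For context, the deprecation pattern above usually needs a small translation step so that the old `aws.producer.*` keys keep working. A minimal sketch of such a helper (hypothetical; not the PR's actual code):

{code:java}
import java.util.Properties;

public final class DeprecatedKeyMigration {

	private DeprecatedKeyMigration() {}

	/** Copy a value from a deprecated key to its replacement, if only the old key is set. */
	public static void replaceDeprecatedKey(Properties config, String deprecatedKey, String newKey) {
		if (config.containsKey(deprecatedKey) && !config.containsKey(newKey)) {
			config.put(newKey, config.remove(deprecatedKey));
		}
	}
}
{code}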


> Parameterize more configs for FlinkKinesisProducer (RecordMaxBufferedTime, 
> MaxConnections, RequestTimeout, etc)
> ---
>
> Key: FLINK-7367
> URL: https://issues.apache.org/jira/browse/FLINK-7367
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.3.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.3.3
>
>
> Right now, FlinkKinesisProducer only exposes two configs for the underlying 
> KinesisProducer:
> - AGGREGATION_MAX_COUNT
> - COLLECTION_MAX_COUNT
> However, according to the [AWS 
> doc|http://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-config.html] 
> and [their sample on 
> github|https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/default_config.properties],
>  developers can set many more options to make the most of the KinesisProducer 
> and to make it fault-tolerant (e.g., by increasing timeouts).
> I selected a few more configs that we need when using Flink with Kinesis:
> - MAX_CONNECTIONS
> - RATE_LIMIT
> - RECORD_MAX_BUFFERED_TIME
> - RECORD_TIME_TO_LIVE
> - REQUEST_TIMEOUT
> We need to parameterize FlinkKinesisProducer to pass in the above params in 
> order to cater to our needs.
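
To illustrate what the parameterization could look like from a user's perspective, here is a minimal sketch; the property keys mirror the KPL's native config names (an assumed mapping), and the stream name and values are placeholders:

{code:java}
import java.util.Properties;

import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KinesisProducerConfigSketch {

	public static FlinkKinesisProducer<String> createProducer() {
		Properties config = new Properties();
		config.setProperty("aws.region", "us-east-1");
		config.setProperty("aws.credentials.provider", "AUTO"); // pick up credentials from the environment
		// KPL-native keys, assumed to be passed through once this issue is done:
		config.setProperty("RecordMaxBufferedTime", "5000"); // ms to buffer records before flushing
		config.setProperty("MaxConnections", "48");          // parallel connections to the Kinesis API
		config.setProperty("RequestTimeout", "10000");       // ms before an in-flight request fails
		config.setProperty("RecordTtl", "60000");            // ms before a buffered record is dropped
		config.setProperty("RateLimit", "100");              // % of per-shard throughput to consume

		FlinkKinesisProducer<String> producer =
			new FlinkKinesisProducer<>(new SimpleStringSchema(), config);
		producer.setDefaultStream("my-stream"); // placeholder stream name
		producer.setDefaultPartition("0");
		return producer;
	}
}
{code}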



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4473: [FLINK-7367][kinesis connector] Parameterize more ...

2017-08-10 Thread bowenli86
Github user bowenli86 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4473#discussion_r132376988
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/config/ProducerConfigConstants.java
 ---
@@ -24,10 +24,28 @@
  */
 public class ProducerConfigConstants extends AWSConfigConstants {
 
+	/** Deprecated key. **/
+	public static final String DEPRECATED_COLLECTION_MAX_COUNT = "aws.producer.collectionMaxCount";
+
+	/** Deprecated key. **/
+	public static final String DEPRECATED_AGGREGATION_MAX_COUNT = "aws.producer.aggregationMaxCount";
+
 	/** Maximum number of items to pack into a PutRecords request. **/
-	public static final String COLLECTION_MAX_COUNT = "aws.producer.collectionMaxCount";
+	public static final String COLLECTION_MAX_COUNT = "CollectionMaxCount";
--- End diff --

`CollectionMaxCount` and `AggregationMaxCount` will be `protected`, since 
there's a unit test using them.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4508: [FLINK-7405][metrics] Reduce spamming warning logging fro...

2017-08-10 Thread bowenli86
Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/4508
  
@zentol Please let me know if this is good :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7405) Reduce spamming warning logging from DatadogHttpReporter

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121177#comment-16121177
 ] 

ASF GitHub Bot commented on FLINK-7405:
---

Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/4508
  
@zentol Please let me know if this is good :)


> Reduce spamming warning logging from DatadogHttpReporter
> 
>
> Key: FLINK-7405
> URL: https://issues.apache.org/jira/browse/FLINK-7405
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.3.2
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0, 1.3.3
>
>
> DatadogHttpReporter is logging too much when there's a connection timeout, 
> and we need to reduce the amount of logging noise.
> The excessive logging looks like:
> {code:java}
> 2017-08-07 19:30:54,408 WARN  
> org.apache.flink.metrics.datadog.DatadogHttpReporter  - Failed 
> reporting metrics to Datadog.
> java.net.SocketTimeoutException: timeout
>   at 
> org.apache.flink.shaded.okio.Okio$4.newTimeoutException(Okio.java:227)
>   at org.apache.flink.shaded.okio.AsyncTimeout.exit(AsyncTimeout.java:284)
>   at 
> org.apache.flink.shaded.okio.AsyncTimeout$2.read(AsyncTimeout.java:240)
>   at 
> org.apache.flink.shaded.okio.RealBufferedSource.indexOf(RealBufferedSource.java:344)
>   at 
> org.apache.flink.shaded.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:216)
>   at 
> org.apache.flink.shaded.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:210)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http1.Http1Codec.readResponseHeaders(Http1Codec.java:189)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:75)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
>   at 
> org.apache.flink.shaded.okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:120)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
>   at 
> org.apache.flink.shaded.okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:185)
>   at org.apache.flink.shaded.okhttp3.RealCall.execute(RealCall.java:69)
>   at 
> org.apache.flink.metrics.datadog.DatadogHttpClient.send(DatadogHttpClient.java:85)
>   at 
> org.apache.flink.metrics.datadog.DatadogHttpReporter.report(DatadogHttpReporter.java:142)
>   at 
> org.apache.flink.runtime.metrics.MetricRegistry$ReporterTask.run(MetricRegistry.java:381)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.SocketException: Socket closed
>   at java.net.SocketInputStream.read(SocketInputStream.java:204)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security

[GitHub] flink pull request #4331: [FLINK-7169][CEP] Support AFTER MATCH SKIP functio...

2017-08-10 Thread dawidwys
Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/4331#discussion_r132384194
  
--- Diff: 
flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/compiler/NFACompiler.java
 ---
@@ -150,6 +160,59 @@ long getWindowTime() {
 	}
 
 	/**
+	 * Check pattern after match skip strategy.
+	 */
+	private void checkPatternSkipStrategy() {
+		AfterMatchSkipStrategy afterMatchSkipStrategy = currentPattern.getAfterMatchSkipStrategy();
+		if (afterMatchSkipStrategy.getStrategy() == AfterMatchSkipStrategy.SkipStrategy.SKIP_TO_FIRST ||
+			afterMatchSkipStrategy.getStrategy() == AfterMatchSkipStrategy.SkipStrategy.SKIP_TO_LAST) {
+			Pattern pattern = currentPattern;
+			while (!pattern.getName().equals(afterMatchSkipStrategy.getPatternName())) {
+				pattern = pattern.getPrevious();
+			}
+			// pattern name match check.
+			if (pattern == null) {
+				throw new MalformedPatternException("the pattern name specified in AfterMatchSkipStrategy " +
+					"can not be found in the given Pattern");
+			} else {
+				// can not be used with optional states.
+				if (pattern.getQuantifier().hasProperty(Quantifier.QuantifierProperty.OPTIONAL)) {
+					throw new MalformedPatternException("the AfterMatchSkipStrategy "
+						+ afterMatchSkipStrategy.getStrategy() + " can not be used with optional pattern");
+				}
+			}
+
+			// start position check.
+			if (pattern.getPrevious() == null) {
--- End diff --

I personally see no reason for a semantic with RuntimeException; I can't 
think of any use case for it. Maybe let's finish this PR without the switch and 
exceptions, and open a JIRA for the switch, ideally with some use cases for 
that semantic, so we can discuss it further and see if anyone needs it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7169) Support AFTER MATCH SKIP function in CEP library API

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121203#comment-16121203
 ] 

ASF GitHub Bot commented on FLINK-7169:
---

Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/4331#discussion_r132384194
  
--- Diff: 
flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/compiler/NFACompiler.java
 ---
@@ -150,6 +160,59 @@ long getWindowTime() {
 	}
 
 	/**
+	 * Check pattern after match skip strategy.
+	 */
+	private void checkPatternSkipStrategy() {
+		AfterMatchSkipStrategy afterMatchSkipStrategy = currentPattern.getAfterMatchSkipStrategy();
+		if (afterMatchSkipStrategy.getStrategy() == AfterMatchSkipStrategy.SkipStrategy.SKIP_TO_FIRST ||
+			afterMatchSkipStrategy.getStrategy() == AfterMatchSkipStrategy.SkipStrategy.SKIP_TO_LAST) {
+			Pattern pattern = currentPattern;
+			while (!pattern.getName().equals(afterMatchSkipStrategy.getPatternName())) {
+				pattern = pattern.getPrevious();
+			}
+			// pattern name match check.
+			if (pattern == null) {
+				throw new MalformedPatternException("the pattern name specified in AfterMatchSkipStrategy " +
+					"can not be found in the given Pattern");
+			} else {
+				// can not be used with optional states.
+				if (pattern.getQuantifier().hasProperty(Quantifier.QuantifierProperty.OPTIONAL)) {
+					throw new MalformedPatternException("the AfterMatchSkipStrategy "
+						+ afterMatchSkipStrategy.getStrategy() + " can not be used with optional pattern");
+				}
+			}
+
+			// start position check.
+			if (pattern.getPrevious() == null) {
--- End diff --

I personally see no reason for a semantic with RuntimeException; I can't 
think of any use case for it. Maybe let's finish this PR without the switch and 
exceptions, and open a JIRA for the switch, ideally with some use cases for 
that semantic, so we can discuss it further and see if anyone needs it.


> Support AFTER MATCH SKIP function in CEP library API
> 
>
> Key: FLINK-7169
> URL: https://issues.apache.org/jira/browse/FLINK-7169
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Yueting Chen
>Assignee: Yueting Chen
> Fix For: 1.4.0
>
>
> In order to support Oracle's MATCH_RECOGNIZE on top of the CEP library, we 
> need to support the AFTER MATCH SKIP function in the CEP API.
> There are four options in AFTER MATCH SKIP, listed as follows:
> 1. AFTER MATCH SKIP TO NEXT ROW: resume pattern matching at the row after the 
> first row of the current match.
> 2. AFTER MATCH SKIP PAST LAST ROW: resume pattern matching at the next row 
> after the last row of the current match.
> 3. AFTER MATCH SKIP TO FIRST *RPV*: resume pattern matching at the first row 
> that is mapped to the row pattern variable RPV.
> 4. AFTER MATCH SKIP TO LAST *RPV*: resume pattern matching at the last row 
> that is mapped to the row pattern variable RPV.
> I think we can introduce a new function in the `CEP` class, which takes a new 
> parameter of type AfterMatchSkipStrategy.
> The new API may look like this:
> {code}
> public static <T> PatternStream<T> pattern(DataStream<T> input, Pattern<T, ?> pattern, AfterMatchSkipStrategy afterMatchSkipStrategy)
> {code}
> We can also make `SKIP TO NEXT ROW` the default option, because that's how 
> the CEP library currently behaves.
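
To make the proposed API concrete, here is a small usage sketch; the {{AfterMatchSkipStrategy}} factory method and its package are assumptions, and the PR's final naming may differ:

{code:java}
import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.nfa.AfterMatchSkipStrategy; // package assumed from the PR
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.streaming.api.datastream.DataStream;

public class AfterMatchSkipSketch {

	/** Match a pattern, resuming after each match at the first row mapped to "b". */
	public static <T> PatternStream<T> matchSkipToFirstB(DataStream<T> input, Pattern<T, ?> pattern) {
		// Hypothetical factory method for SKIP TO FIRST b:
		AfterMatchSkipStrategy skip = AfterMatchSkipStrategy.skipToFirst("b");
		return CEP.pattern(input, pattern, skip);
	}
}
{code}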



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7398) Table API operators/UDFs must not store Logger

2017-08-10 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121204#comment-16121204
 ] 

Fabian Hueske commented on FLINK-7398:
--

This issue would happen with Java as well. It is not related to Scala.


> Table API operators/UDFs must not store Logger
> --
>
> Key: FLINK-7398
> URL: https://issues.apache.org/jira/browse/FLINK-7398
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Aljoscha Krettek
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 1.4.0, 1.3.3
>
>
> Table API operators and UDFs store a reference to the (slf4j) {{Logger}} in 
> an instance field (c.f. 
> https://github.com/apache/flink/blob/f37988c19adc30d324cde83c54f2fa5d36efb9e7/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/FlatMapRunner.scala#L39).
>  This means that the {{Logger}} will be serialised with the UDF and sent to 
> the cluster. This in itself does not sound right and leads to problems when 
> the slf4j configuration on the Client is different from the cluster 
> environment.
> This is an example of a user running into that problem: 
> https://lists.apache.org/thread.html/01dd44007c0122d60c3fd2b2bb04fd2d6d2114bcff1e34d1d2079522@%3Cuser.flink.apache.org%3E.
>  Here, they have Logback on the client but the Logback classes are not 
> available on the cluster and so deserialisation of the UDFs fails with a 
> {{ClassNotFoundException}}.
> This is a rough list of the involved classes:
> {code}
> src/main/scala/org/apache/flink/table/catalog/ExternalCatalogSchema.scala:43: 
>  private val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/catalog/ExternalTableSourceUtil.scala:45:
>   private val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala:28:  
> val LOG10 = Types.lookupMethod(classOf[Math], "log10", classOf[Double])
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupAggregate.scala:62:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupWindowAggregate.scala:59:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamOverAggregate.scala:51:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/FlinkConventions.scala:28:  
> val LOGICAL: Convention = new Convention.Impl("LOGICAL", 
> classOf[FlinkLogicalRel])
> src/main/scala/org/apache/flink/table/plan/rules/FlinkRuleSets.scala:38:  val 
> LOGICAL_OPT_RULES: RuleSet = RuleSets.ofList(
> src/main/scala/org/apache/flink/table/runtime/aggregate/AggregateAggFunction.scala:36:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetAggFunction.scala:43:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetFinalAggFunction.scala:44:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetPreAggFunction.scala:44:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSessionWindowAggregatePreProcessor.scala:55:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSessionWindowAggReduceGroupFunction.scala:66:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSlideWindowAggReduceGroupFunction.scala:56:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSlideTimeWindowAggReduceGroupFunction.scala:64:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetTumbleCountWindowAggReduceGroupFunction.scala:46:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetWindowAggMapFunction.scala:52:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetTumbleTimeWindowAggReduceGroupFunction.scala:55:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/GroupAggProcessFunction.scala:48:
>   val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/ProcTimeBoundedRowsOver.scala:66:
>   val LOG = LoggerFactory.getLogger(this.getClass)
>

[jira] [Commented] (FLINK-7398) Table API operators/UDFs must not store Logger

2017-08-10 Thread Jark Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121219#comment-16121219
 ] 

Jark Wu commented on FLINK-7398:


[~fhueske] in Java, we would declare the Logger as static final, so it would 
not be serialized.
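
For reference, the Java idiom in question looks like this (a generic example, not a specific Flink class):

{code:java}
import org.apache.flink.api.common.functions.RichMapFunction;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyMapFunction extends RichMapFunction<String, String> {

	// static: belongs to the class, so it is never serialized with UDF instances
	// and is resolved against the cluster's slf4j binding, not the client's
	private static final Logger LOG = LoggerFactory.getLogger(MyMapFunction.class);

	@Override
	public String map(String value) {
		LOG.debug("mapping {}", value);
		return value;
	}
}
{code}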

> Table API operators/UDFs must not store Logger
> --
>
> Key: FLINK-7398
> URL: https://issues.apache.org/jira/browse/FLINK-7398
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Aljoscha Krettek
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 1.4.0, 1.3.3
>
>
> Table API operators and UDFs store a reference to the (slf4j) {{Logger}} in 
> an instance field (c.f. 
> https://github.com/apache/flink/blob/f37988c19adc30d324cde83c54f2fa5d36efb9e7/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/FlatMapRunner.scala#L39).
>  This means that the {{Logger}} will be serialised with the UDF and sent to 
> the cluster. This in itself does not sound right and leads to problems when 
> the slf4j configuration on the Client is different from the cluster 
> environment.
> This is an example of a user running into that problem: 
> https://lists.apache.org/thread.html/01dd44007c0122d60c3fd2b2bb04fd2d6d2114bcff1e34d1d2079522@%3Cuser.flink.apache.org%3E.
>  Here, they have Logback on the client but the Logback classes are not 
> available on the cluster and so deserialisation of the UDFs fails with a 
> {{ClassNotFoundException}}.
> This is a rough list of the involved classes:
> {code}
> src/main/scala/org/apache/flink/table/catalog/ExternalCatalogSchema.scala:43: 
>  private val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/catalog/ExternalTableSourceUtil.scala:45:
>   private val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala:28:  
> val LOG10 = Types.lookupMethod(classOf[Math], "log10", classOf[Double])
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupAggregate.scala:62:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupWindowAggregate.scala:59:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamOverAggregate.scala:51:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/FlinkConventions.scala:28:  
> val LOGICAL: Convention = new Convention.Impl("LOGICAL", 
> classOf[FlinkLogicalRel])
> src/main/scala/org/apache/flink/table/plan/rules/FlinkRuleSets.scala:38:  val 
> LOGICAL_OPT_RULES: RuleSet = RuleSets.ofList(
> src/main/scala/org/apache/flink/table/runtime/aggregate/AggregateAggFunction.scala:36:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetAggFunction.scala:43:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetFinalAggFunction.scala:44:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetPreAggFunction.scala:44:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSessionWindowAggregatePreProcessor.scala:55:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSessionWindowAggReduceGroupFunction.scala:66:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSlideWindowAggReduceGroupFunction.scala:56:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSlideTimeWindowAggReduceGroupFunction.scala:64:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetTumbleCountWindowAggReduceGroupFunction.scala:46:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetWindowAggMapFunction.scala:52:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetTumbleTimeWindowAggReduceGroupFunction.scala:55:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/GroupAggProcessFunction.scala:48:
>   val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/ProcTimeBoundedRowsOver.scala:66:
>   val LOG = LoggerFactory.getLogger(this.getCl

[GitHub] flink pull request #4331: [FLINK-7169][CEP] Support AFTER MATCH SKIP functio...

2017-08-10 Thread yestinchen
Github user yestinchen commented on a diff in the pull request:

https://github.com/apache/flink/pull/4331#discussion_r132386948
  
--- Diff: 
flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/compiler/NFACompiler.java
 ---
@@ -150,6 +160,59 @@ long getWindowTime() {
 	}
 
 	/**
+	 * Check pattern after match skip strategy.
+	 */
+	private void checkPatternSkipStrategy() {
+		AfterMatchSkipStrategy afterMatchSkipStrategy = currentPattern.getAfterMatchSkipStrategy();
+		if (afterMatchSkipStrategy.getStrategy() == AfterMatchSkipStrategy.SkipStrategy.SKIP_TO_FIRST ||
+			afterMatchSkipStrategy.getStrategy() == AfterMatchSkipStrategy.SkipStrategy.SKIP_TO_LAST) {
+			Pattern pattern = currentPattern;
+			while (!pattern.getName().equals(afterMatchSkipStrategy.getPatternName())) {
+				pattern = pattern.getPrevious();
+			}
+			// pattern name match check.
+			if (pattern == null) {
+				throw new MalformedPatternException("the pattern name specified in AfterMatchSkipStrategy " +
+					"can not be found in the given Pattern");
+			} else {
+				// can not be used with optional states.
+				if (pattern.getQuantifier().hasProperty(Quantifier.QuantifierProperty.OPTIONAL)) {
+					throw new MalformedPatternException("the AfterMatchSkipStrategy "
+						+ afterMatchSkipStrategy.getStrategy() + " can not be used with optional pattern");
+				}
+			}
+
+			// start position check.
+			if (pattern.getPrevious() == null) {
--- End diff --

Great, I'll just remove all those optional state checks.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7169) Support AFTER MATCH SKIP function in CEP library API

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121220#comment-16121220
 ] 

ASF GitHub Bot commented on FLINK-7169:
---

Github user yestinchen commented on a diff in the pull request:

https://github.com/apache/flink/pull/4331#discussion_r132386948
  
--- Diff: 
flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/compiler/NFACompiler.java
 ---
@@ -150,6 +160,59 @@ long getWindowTime() {
 	}
 
 	/**
+	 * Check pattern after match skip strategy.
+	 */
+	private void checkPatternSkipStrategy() {
+		AfterMatchSkipStrategy afterMatchSkipStrategy = currentPattern.getAfterMatchSkipStrategy();
+		if (afterMatchSkipStrategy.getStrategy() == AfterMatchSkipStrategy.SkipStrategy.SKIP_TO_FIRST ||
+			afterMatchSkipStrategy.getStrategy() == AfterMatchSkipStrategy.SkipStrategy.SKIP_TO_LAST) {
+			Pattern pattern = currentPattern;
+			while (!pattern.getName().equals(afterMatchSkipStrategy.getPatternName())) {
+				pattern = pattern.getPrevious();
+			}
+			// pattern name match check.
+			if (pattern == null) {
+				throw new MalformedPatternException("the pattern name specified in AfterMatchSkipStrategy " +
+					"can not be found in the given Pattern");
+			} else {
+				// can not be used with optional states.
+				if (pattern.getQuantifier().hasProperty(Quantifier.QuantifierProperty.OPTIONAL)) {
+					throw new MalformedPatternException("the AfterMatchSkipStrategy "
+						+ afterMatchSkipStrategy.getStrategy() + " can not be used with optional pattern");
+				}
+			}
+
+			// start position check.
+			if (pattern.getPrevious() == null) {
--- End diff --

Great, I'll just remove all those optional state checks.


> Support AFTER MATCH SKIP function in CEP library API
> 
>
> Key: FLINK-7169
> URL: https://issues.apache.org/jira/browse/FLINK-7169
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Yueting Chen
>Assignee: Yueting Chen
> Fix For: 1.4.0
>
>
> In order to support Oracle's MATCH_RECOGNIZE on top of the CEP library, we 
> need to support AFTER MATCH SKIP function in CEP API.
> There're four options in AFTER MATCH SKIP, listed as follows:
> 1. AFTER MATCH SKIP TO NEXT ROW: resume pattern matching at the row after the 
> first row of the current match.
> 2. AFTER MATCH SKIP PAST LAST ROW: resume pattern matching at the next row 
> after the last row of the current match.
> 3. AFTER MATCH SKIP TO FIST *RPV*: resume pattern matching at the first row 
> that is mapped to the row pattern variable RPV.
> 4. AFTER MATCH SKIP TO LAST *RPV*: resume pattern matching at the last row 
> that is mapped to the row pattern variable RPV.
> I think we can introduce a new function to `CEP` class, which takes a new 
> parameter as AfterMatchSKipStrategy.
> The new API may looks like this
> {code}
> public static  PatternStream pattern(DataStream input, Pattern 
> pattern, AfterMatchSkipStrategy afterMatchSkipStrategy) 
> {code}
> We can also make `SKIP TO NEXT ROW` as the default option, because that's 
> what CEP library behaves currently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4239: [FLINK-6988] flink-connector-kafka-0.11 with exactly-once...

2017-08-10 Thread pnowojski
Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4239
  
Indeed, it seems like you are right: `read_committed` doesn't play along 
with a long `max.transaction.timeout.ms`. I'm not sure about Beam, but in Flink 
we cannot use one single `transactional.id`, because our checkpoints are 
asynchronous - `notifyCheckpointComplete` (which triggers 
`KafkaProducer#commit`) can come long after `preCommit`. In that time we cannot 
use the same `transactional.id` for new transactions. 

We can work around this issue by implementing a pool of 
`transactional.id`s, which we can save in the state. This will allow us, on 
restoring state, not only to `recoverAndCommit` all pending transactions, but 
also to abort all other unknown "lingering" transactions.
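
A conceptual sketch of such a pool (illustrative only; not the actual connector code):

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Pool of transactional.ids. A fresh id is acquired for each new transaction, so
 * the id of a pending-but-uncommitted transaction is never reused; ids return to
 * the pool once their transaction has been committed or aborted. Persisting the
 * pool in state lets a restore abort lingering transactions for all known ids.
 */
public class TransactionalIdPool {

	private final Deque<String> available = new ArrayDeque<>();

	public TransactionalIdPool(String prefix, int size) {
		for (int i = 0; i < size; i++) {
			available.add(prefix + "-" + i);
		}
	}

	/** Borrow an id for the next transaction (e.g. when a checkpoint starts). */
	public String acquire() {
		if (available.isEmpty()) {
			throw new IllegalStateException("transactional.id pool exhausted");
		}
		return available.poll();
	}

	/** Return an id once its transaction has been committed or aborted. */
	public void release(String transactionalId) {
		available.add(transactionalId);
	}
}
{code}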


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6988) Add Apache Kafka 0.11 connector

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121233#comment-16121233
 ] 

ASF GitHub Bot commented on FLINK-6988:
---

Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4239
  
Indeed, it seems like you are right: `read_committed` doesn't play along 
with a long `max.transaction.timeout.ms`. I'm not sure about Beam, but in Flink 
we cannot use one single `transactional.id`, because our checkpoints are 
asynchronous - `notifyCheckpointComplete` (which triggers 
`KafkaProducer#commit`) can come long after `preCommit`. In that time we cannot 
use the same `transactional.id` for new transactions. 

We can work around this issue by implementing a pool of 
`transactional.id`s, which we can save in the state. This will allow us, on 
restoring state, not only to `recoverAndCommit` all pending transactions, but 
also to abort all other unknown "lingering" transactions.


> Add Apache Kafka 0.11 connector
> ---
>
> Key: FLINK-6988
> URL: https://issues.apache.org/jira/browse/FLINK-6988
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.3.1
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> Kafka 0.11 (to be released very soon) adds support for transactions. 
> Thanks to that, Flink might be able to implement a Kafka sink supporting 
> "exactly-once" semantics. The API changes and the overall transaction support 
> are described in 
> [KIP-98|https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging].
> The goal is to mimic the implementation of the existing BucketingSink. The new 
> FlinkKafkaProducer011 would 
> * upon creation begin a transaction, store the transaction identifiers in the 
> state, and write all incoming data to an output Kafka topic using that 
> transaction
> * on `snapshotState`, flush the data and record in the state that the 
> current transaction is pending to be committed
> * on `notifyCheckpointComplete`, commit this pending transaction
> * in case of a crash between `snapshotState` and `notifyCheckpointComplete`, 
> either abort this pending transaction (if not every participant successfully 
> saved the snapshot) or restore and commit it. 
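
As a schematic illustration of the lifecycle just described (heavily simplified; the real connector additionally has to juggle several producers/transactional.ids, because records keep arriving between `snapshotState` and `notifyCheckpointComplete`, as discussed earlier in this thread):

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TwoPhaseKafkaSinkSketch {

	private final KafkaProducer<byte[], byte[]> producer;

	public TwoPhaseKafkaSinkSketch(Properties props) {
		// props must contain a transactional.id for transactions to be enabled
		this.producer = new KafkaProducer<>(props);
		producer.initTransactions();
		producer.beginTransaction(); // open a transaction for incoming data
	}

	public void invoke(byte[] value) {
		producer.send(new ProducerRecord<>("output-topic", value)); // placeholder topic
	}

	/** Pre-commit: flush and mark the transaction as pending in Flink state. */
	public void snapshotState() {
		producer.flush();
		// ... record "transaction pending" in checkpointed state here ...
	}

	/** Called once the checkpoint is globally complete: commit and start anew. */
	public void notifyCheckpointComplete() {
		producer.commitTransaction();
		producer.beginTransaction();
	}
}
{code}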



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7398) Table API operators/UDFs must not store Logger

2017-08-10 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121234#comment-16121234
 ] 

Fabian Hueske commented on FLINK-7398:
--

In Scala you can make the field {{transient}} and initialize it in {{open()}}, 
or, if you want a static field, you can add a companion object. 
So there are many ways to address this issue. All I am saying is that this is 
not a Scala problem per se. There are certainly benefits (and drawbacks) to 
porting the runtime code to Java, but I don't see that solving this issue would 
force us to do so. 
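
The {{transient}}-plus-{{open()}} variant mentioned above, shown here in Java for reference (a generic example, not one of the listed classes):

{code:java}
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingFlatMap extends RichFlatMapFunction<String, String> {

	// transient: excluded from serialization, so no Logger instance travels
	// from the client to the cluster with the UDF
	private transient Logger log;

	@Override
	public void open(Configuration parameters) {
		// (re-)initialized on the task manager, after deserialization
		log = LoggerFactory.getLogger(LoggingFlatMap.class);
	}

	@Override
	public void flatMap(String value, Collector<String> out) {
		log.debug("processing {}", value);
		out.collect(value);
	}
}
{code}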

> Table API operators/UDFs must not store Logger
> --
>
> Key: FLINK-7398
> URL: https://issues.apache.org/jira/browse/FLINK-7398
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Aljoscha Krettek
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 1.4.0, 1.3.3
>
>
> Table API operators and UDFs store a reference to the (slf4j) {{Logger}} in 
> an instance field (c.f. 
> https://github.com/apache/flink/blob/f37988c19adc30d324cde83c54f2fa5d36efb9e7/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/FlatMapRunner.scala#L39).
>  This means that the {{Logger}} will be serialised with the UDF and sent to 
> the cluster. This in itself does not sound right and leads to problems when 
> the slf4j configuration on the Client is different from the cluster 
> environment.
> This is an example of a user running into that problem: 
> https://lists.apache.org/thread.html/01dd44007c0122d60c3fd2b2bb04fd2d6d2114bcff1e34d1d2079522@%3Cuser.flink.apache.org%3E.
>  Here, they have Logback on the client but the Logback classes are not 
> available on the cluster and so deserialisation of the UDFs fails with a 
> {{ClassNotFoundException}}.
> This is a rough list of the involved classes:
> {code}
> src/main/scala/org/apache/flink/table/catalog/ExternalCatalogSchema.scala:43: 
>  private val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/catalog/ExternalTableSourceUtil.scala:45:
>   private val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala:28:  
> val LOG10 = Types.lookupMethod(classOf[Math], "log10", classOf[Double])
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupAggregate.scala:62:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupWindowAggregate.scala:59:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamOverAggregate.scala:51:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/FlinkConventions.scala:28:  
> val LOGICAL: Convention = new Convention.Impl("LOGICAL", 
> classOf[FlinkLogicalRel])
> src/main/scala/org/apache/flink/table/plan/rules/FlinkRuleSets.scala:38:  val 
> LOGICAL_OPT_RULES: RuleSet = RuleSets.ofList(
> src/main/scala/org/apache/flink/table/runtime/aggregate/AggregateAggFunction.scala:36:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetAggFunction.scala:43:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetFinalAggFunction.scala:44:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetPreAggFunction.scala:44:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSessionWindowAggregatePreProcessor.scala:55:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSessionWindowAggReduceGroupFunction.scala:66:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSlideWindowAggReduceGroupFunction.scala:56:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSlideTimeWindowAggReduceGroupFunction.scala:64:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetTumbleCountWindowAggReduceGroupFunction.scala:46:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetWindowAggMapFunction.scala:52:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetTumbleTimeWindowAggReduceGroupFunction.scala:55:
>   val LOG = LoggerFactory.

[jira] [Commented] (FLINK-7398) Table API operators/UDFs must not store Logger

2017-08-10 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121259#comment-16121259
 ] 

Aljoscha Krettek commented on FLINK-7398:
-

I agree with both [~fhueske] and [~jark]. In Java, this has never been a 
problem because everyone is accustomed to writing {{static final Logger 
LOG ...}}; if the code used Java, a deviation from this would have been 
immediately obvious. Scala does not force making that mistake either, but it is 
easier for it to go unnoticed. I think Fabian's suggestion of putting it in a 
companion object is the way to go, but I'm not sure since I have no experience 
with this.

> Table API operators/UDFs must not store Logger
> --
>
> Key: FLINK-7398
> URL: https://issues.apache.org/jira/browse/FLINK-7398
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Aljoscha Krettek
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 1.4.0, 1.3.3
>
>
> Table API operators and UDFs store a reference to the (slf4j) {{Logger}} in 
> an instance field (c.f. 
> https://github.com/apache/flink/blob/f37988c19adc30d324cde83c54f2fa5d36efb9e7/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/FlatMapRunner.scala#L39).
>  This means that the {{Logger}} will be serialised with the UDF and sent to 
> the cluster. This in itself does not sound right and leads to problems when 
> the slf4j configuration on the Client is different from the cluster 
> environment.
> This is an example of a user running into that problem: 
> https://lists.apache.org/thread.html/01dd44007c0122d60c3fd2b2bb04fd2d6d2114bcff1e34d1d2079522@%3Cuser.flink.apache.org%3E.
>  Here, they have Logback on the client but the Logback classes are not 
> available on the cluster and so deserialisation of the UDFs fails with a 
> {{ClassNotFoundException}}.
> This is a rough list of the involved classes:
> {code}
> src/main/scala/org/apache/flink/table/catalog/ExternalCatalogSchema.scala:43: 
>  private val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/catalog/ExternalTableSourceUtil.scala:45:
>   private val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala:28:  
> val LOG10 = Types.lookupMethod(classOf[Math], "log10", classOf[Double])
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupAggregate.scala:62:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupWindowAggregate.scala:59:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamOverAggregate.scala:51:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/FlinkConventions.scala:28:  
> val LOGICAL: Convention = new Convention.Impl("LOGICAL", 
> classOf[FlinkLogicalRel])
> src/main/scala/org/apache/flink/table/plan/rules/FlinkRuleSets.scala:38:  val 
> LOGICAL_OPT_RULES: RuleSet = RuleSets.ofList(
> src/main/scala/org/apache/flink/table/runtime/aggregate/AggregateAggFunction.scala:36:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetAggFunction.scala:43:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetFinalAggFunction.scala:44:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetPreAggFunction.scala:44:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSessionWindowAggregatePreProcessor.scala:55:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSessionWindowAggReduceGroupFunction.scala:66:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSlideWindowAggReduceGroupFunction.scala:56:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSlideTimeWindowAggReduceGroupFunction.scala:64:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetTumbleCountWindowAggReduceGroupFunction.scala:46:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetWindowAggMapFunction.scala:52:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetTum

[jira] [Comment Edited] (FLINK-7398) Table API operators/UDFs must not store Logger

2017-08-10 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121259#comment-16121259
 ] 

Aljoscha Krettek edited comment on FLINK-7398 at 8/10/17 8:32 AM:
--

I agree with both [~fhueske] and [~jark]. In Java, this has never been a 
problem because everyone is accustomed to writing {{static final Logger 
LOG ...}}; if the code used Java, a deviation from this would have been 
immediately obvious. Scala does not force making that mistake either, but it is 
easier for it to go unnoticed. I think Fabian's suggestion of putting it in a 
companion object is the way to go, but I'm not sure since I have no experience 
with this.

[~till.rohrmann] Do you know how this is usually done in Scala for objects that 
are meant to be serialised?


was (Author: aljoscha):
I agree with both [~fhueske] and [~jark]. In Java, this has never been a 
problem because everyone is accustomed to writing {{static final Logger 
LOG ...}}; if the code used Java, a deviation from this would have been 
immediately obvious. Scala does not force making that mistake either, but it is 
easier for it to go unnoticed. I think Fabian's suggestion of putting it in a 
companion object is the way to go, but I'm not sure since I have no experience 
with this.

> Table API operators/UDFs must not store Logger
> --
>
> Key: FLINK-7398
> URL: https://issues.apache.org/jira/browse/FLINK-7398
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Aljoscha Krettek
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 1.4.0, 1.3.3
>
>
> Table API operators and UDFs store a reference to the (slf4j) {{Logger}} in 
> an instance field (c.f. 
> https://github.com/apache/flink/blob/f37988c19adc30d324cde83c54f2fa5d36efb9e7/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/FlatMapRunner.scala#L39).
>  This means that the {{Logger}} will be serialised with the UDF and sent to 
> the cluster. This in itself does not sound right and leads to problems when 
> the slf4j configuration on the Client is different from the cluster 
> environment.
> This is an example of a user running into that problem: 
> https://lists.apache.org/thread.html/01dd44007c0122d60c3fd2b2bb04fd2d6d2114bcff1e34d1d2079522@%3Cuser.flink.apache.org%3E.
>  Here, they have Logback on the client but the Logback classes are not 
> available on the cluster and so deserialisation of the UDFs fails with a 
> {{ClassNotFoundException}}.
> This is a rough list of the involved classes:
> {code}
> src/main/scala/org/apache/flink/table/catalog/ExternalCatalogSchema.scala:43: 
>  private val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/catalog/ExternalTableSourceUtil.scala:45:
>   private val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala:28:  
> val LOG10 = Types.lookupMethod(classOf[Math], "log10", classOf[Double])
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupAggregate.scala:62:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupWindowAggregate.scala:59:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamOverAggregate.scala:51:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/FlinkConventions.scala:28:  
> val LOGICAL: Convention = new Convention.Impl("LOGICAL", 
> classOf[FlinkLogicalRel])
> src/main/scala/org/apache/flink/table/plan/rules/FlinkRuleSets.scala:38:  val 
> LOGICAL_OPT_RULES: RuleSet = RuleSets.ofList(
> src/main/scala/org/apache/flink/table/runtime/aggregate/AggregateAggFunction.scala:36:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetAggFunction.scala:43:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetFinalAggFunction.scala:44:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetPreAggFunction.scala:44:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSessionWindowAggregatePreProcessor.scala:55:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSessionWindowAggReduceGroupFunction.scala:66:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSe

[jira] [Commented] (FLINK-7398) Table API operators/UDFs must not store Logger

2017-08-10 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121265#comment-16121265
 ] 

Haohui Mai commented on FLINK-7398:
---

Putting it in a companion object is a viable option. My worry, however, is that 
the bug will come back again, as it is nontrivial to spot these usages reliably.

Rewriting a lot of code just to fix this issue does not seem very productive. 
[~jark], are there any additional benefits of reimplementing the runtime in Java 
that we might not be aware of?

> Table API operators/UDFs must not store Logger
> --
>
> Key: FLINK-7398
> URL: https://issues.apache.org/jira/browse/FLINK-7398
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Aljoscha Krettek
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 1.4.0, 1.3.3
>
>
> Table API operators and UDFs store a reference to the (slf4j) {{Logger}} in 
> an instance field (c.f. 
> https://github.com/apache/flink/blob/f37988c19adc30d324cde83c54f2fa5d36efb9e7/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/FlatMapRunner.scala#L39).
>  This means that the {{Logger}} will be serialised with the UDF and sent to 
> the cluster. This in itself does not sound right and leads to problems when 
> the slf4j configuration on the Client is different from the cluster 
> environment.
> This is an example of a user running into that problem: 
> https://lists.apache.org/thread.html/01dd44007c0122d60c3fd2b2bb04fd2d6d2114bcff1e34d1d2079522@%3Cuser.flink.apache.org%3E.
>  Here, they have Logback on the client but the Logback classes are not 
> available on the cluster and so deserialisation of the UDFs fails with a 
> {{ClassNotFoundException}}.
> This is a rough list of the involved classes:
> {code}
> src/main/scala/org/apache/flink/table/catalog/ExternalCatalogSchema.scala:43: 
>  private val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/catalog/ExternalTableSourceUtil.scala:45:
>   private val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala:28:  
> val LOG10 = Types.lookupMethod(classOf[Math], "log10", classOf[Double])
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupAggregate.scala:62:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupWindowAggregate.scala:59:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamOverAggregate.scala:51:
>   private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/FlinkConventions.scala:28:  
> val LOGICAL: Convention = new Convention.Impl("LOGICAL", 
> classOf[FlinkLogicalRel])
> src/main/scala/org/apache/flink/table/plan/rules/FlinkRuleSets.scala:38:  val 
> LOGICAL_OPT_RULES: RuleSet = RuleSets.ofList(
> src/main/scala/org/apache/flink/table/runtime/aggregate/AggregateAggFunction.scala:36:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetAggFunction.scala:43:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetFinalAggFunction.scala:44:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetPreAggFunction.scala:44:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSessionWindowAggregatePreProcessor.scala:55:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSessionWindowAggReduceGroupFunction.scala:66:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSlideWindowAggReduceGroupFunction.scala:56:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSlideTimeWindowAggReduceGroupFunction.scala:64:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetTumbleCountWindowAggReduceGroupFunction.scala:46:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetWindowAggMapFunction.scala:52:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetTumbleTimeWindowAggReduceGroupFunction.scala:55:
>   val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apach

[GitHub] flink issue #4502: [FLINK-7062] [table, cep] Support the basic functionality...

2017-08-10 Thread dianfu
Github user dianfu commented on the issue:

https://github.com/apache/flink/pull/4502
  
@kl0u @dawidwys @wuchong It would be great if you could take a look at this 
PR. It adds basic support for CEP on SQL. Thanks in advance. :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4483: [FLINK-7372] [JobGraph] Remove ActorGateway from JobGraph

2017-08-10 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4483
  
Thanks for the review @zentol. Merging this PR.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7062) Support the basic functionality of MATCH_RECOGNIZE

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121283#comment-16121283
 ] 

ASF GitHub Bot commented on FLINK-7062:
---

Github user dianfu commented on the issue:

https://github.com/apache/flink/pull/4502
  
@kl0u @dawidwys @wuchong It would be great if you could take a look at this 
PR. It adds basic support for CEP on SQL. Thanks in advance. :)


> Support the basic functionality of MATCH_RECOGNIZE
> --
>
> Key: FLINK-7062
> URL: https://issues.apache.org/jira/browse/FLINK-7062
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP, Table API & SQL
>Reporter: Dian Fu
>Assignee: Dian Fu
>
> In this JIRA, we will support the basic functionality of {{MATCH_RECOGNIZE}} 
> in the Flink SQL API, which includes support for the {{MEASURES}}, 
> {{PATTERN}} and {{DEFINE}} clauses. This would allow users to write basic CEP 
> use cases with SQL, like the following example:
> {code}
> SELECT T.aid, T.bid, T.cid
> FROM MyTable
> MATCH_RECOGNIZE (
>   MEASURES
> A.id AS aid,
> B.id AS bid,
> C.id AS cid
>   PATTERN (A B C)
>   DEFINE
> A AS A.name = 'a',
> B AS B.name = 'b',
> C AS C.name = 'c'
> ) AS T
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7372) Remove ActorGateway from JobGraph

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121287#comment-16121287
 ] 

ASF GitHub Bot commented on FLINK-7372:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4483
  
Thanks for the review @zentol. Merging this PR.


> Remove ActorGateway from JobGraph
> -
>
> Key: FLINK-7372
> URL: https://issues.apache.org/jira/browse/FLINK-7372
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>
> As a preliminary step for easier Flip-6 integration we should try to decouple 
> as many components from the underlying RPC abstraction as possible. One of 
> these components is the {{JobGraph}} which has a dependency on 
> {{ActorGateway}} via its {{JobGraph#uploadUserJars}} method.
> I propose to get rid of the {{ActorGateway}} parameter and to pass instead 
> the BlobServer's address as an {{InetSocketAddress}} instance.
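
Roughly, the proposed change amounts to the following shape (an illustrative
sketch; names and signatures are assumptions, not the actual patch):

{code}
import java.io.IOException;
import java.net.InetSocketAddress;

// Before: the JobGraph needed an ActorGateway (plus a timeout) just to find
// out where the BlobServer lives.
//
// After: the caller resolves the BlobServer's address up front and passes it
// in, so the JobGraph no longer depends on the RPC abstraction.
public void uploadUserJars(InetSocketAddress blobServerAddress) throws IOException {
    // connect to the blob server at the given address and upload each user jar
}
{code}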





[GitHub] flink issue #4486: [FLINK-7375] Replace ActorGateway with JobManagerGateway ...

2017-08-10 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4486
  
Travis passed. Merging this PR.




[jira] [Commented] (FLINK-7375) Remove ActorGateway from JobClient

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121291#comment-16121291
 ] 

ASF GitHub Bot commented on FLINK-7375:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4486
  
Travis passed. Merging this PR.


> Remove ActorGateway from JobClient
> --
>
> Key: FLINK-7375
> URL: https://issues.apache.org/jira/browse/FLINK-7375
> Project: Flink
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>
> Remove {{ActorGateway}} dependency from {{JobClient}}. This will ease the 
> transition to the Flip-6 code base because we can reuse the {{JobClient}} 
> code.
> I propose to replace the {{ActorGateway}} with a more strictly typed 
> {{JobManagerGateway}}, which will be extended by the Flip-6 
> {{JobMasterGateway}}. This will also allow us to decouple the 
> {{JarRunHandler}} from the {{ActorGateway}}.
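
The gist of such a strictly typed gateway, sketched (the method set and
signatures are illustrative assumptions, not the final interface):

{code}
import java.util.concurrent.CompletableFuture;

import org.apache.flink.api.common.JobID;
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.runtime.jobgraph.JobStatus;

// Typed methods replace untyped ActorGateway.ask(Object, Timeout) calls, so
// callers such as JobClient and the JarRunHandler get compile-time checking.
public interface JobManagerGateway {
    CompletableFuture<Void> submitJob(JobGraph jobGraph);
    CompletableFuture<JobStatus> requestJobStatus(JobID jobId);
}
{code}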





[GitHub] flink issue #4447: [FLINK-7312][checkstyle] activate checkstyle for flink/co...

2017-08-10 Thread NicoK
Github user NicoK commented on the issue:

https://github.com/apache/flink/pull/4447
  
:) In here, however, I had to add an exception for these `final` keywords, 
which may be removed once #4458 is merged.




[jira] [Commented] (FLINK-7312) activate checkstyle for flink/core/memory/*

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121293#comment-16121293
 ] 

ASF GitHub Bot commented on FLINK-7312:
---

Github user NicoK commented on the issue:

https://github.com/apache/flink/pull/4447
  
:) In here, however, I had to add an exception for these `final` keywords, 
which may be removed once #4458 is merged.


> activate checkstyle for flink/core/memory/*
> ---
>
> Key: FLINK-7312
> URL: https://issues.apache.org/jira/browse/FLINK-7312
> Project: Flink
>  Issue Type: Improvement
>  Components: Checkstyle, Core
>Affects Versions: 1.4.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Minor
>






[GitHub] flink issue #4497: [FLINK-7240] [tests] Stabilize ExternalizedCheckpointITCa...

2017-08-10 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4497
  
Thanks for the review @StefanRRichter. Travis passes now. Merging this PR.




[GitHub] flink issue #4501: [FLINK-7352] [tests] Stabilize ExecutionGraphRestartTest

2017-08-10 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4501
  
Travis passes. Merging this PR.




[jira] [Commented] (FLINK-7352) ExecutionGraphRestartTest timeouts

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121296#comment-16121296
 ] 

ASF GitHub Bot commented on FLINK-7352:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4501
  
Travis passes. Merging this PR.


> ExecutionGraphRestartTest timeouts
> --
>
> Key: FLINK-7352
> URL: https://issues.apache.org/jira/browse/FLINK-7352
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, Tests
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Nico Kruber
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> Recently, I received timeouts from some tests in 
> {{ExecutionGraphRestartTest}}, like this:
> {code}
> Tests in error: 
>   ExecutionGraphRestartTest.testConcurrentLocalFailAndRestart:638 » Timeout
> {code}
> This particular instance is from 1.3.2 RC2 and got stuck in 
> {{ExecutionGraphTestUtils#waitUntilDeployedAndSwitchToRunning()}}, but I also 
> had instances stuck in {{ExecutionGraphTestUtils#waitUntilJobStatus}}.
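
The wait helpers in question are, in essence, polling loops with a deadline,
along these lines (a generic sketch, not the actual test-utility code):

{code}
import java.util.concurrent.TimeoutException;

import org.apache.flink.runtime.executiongraph.ExecutionGraph;
import org.apache.flink.runtime.jobgraph.JobStatus;

// Polls the ExecutionGraph until it reaches the target status; a hang anywhere
// upstream (e.g. a deployment that never happens) surfaces as this timeout.
static void waitUntilJobStatus(ExecutionGraph eg, JobStatus target, long timeoutMillis)
        throws TimeoutException, InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (eg.getState() != target) {
        if (System.currentTimeMillis() > deadline) {
            throw new TimeoutException("Job did not reach status " + target);
        }
        Thread.sleep(10); // brief back-off between polls
    }
}
{code}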





[jira] [Commented] (FLINK-7240) Externalized RocksDB can fail with stackoverflow

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121295#comment-16121295
 ] 

ASF GitHub Bot commented on FLINK-7240:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4497
  
Thanks for the review @StefanRRichter. Travis passes now. Merging this PR.


> Externalized RocksDB can fail with stackoverflow
> 
>
> Key: FLINK-7240
> URL: https://issues.apache.org/jira/browse/FLINK-7240
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing, Tests
>Affects Versions: 1.3.1, 1.4.0
> Environment: https://travis-ci.org/zentol/flink/jobs/255760513
>Reporter: Chesnay Schepler
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> {code}
> testExternalizedFullRocksDBCheckpointsStandalone(org.apache.flink.test.checkpointing.ExternalizedCheckpointITCase)
>   Time elapsed: 146.894 sec  <<< ERROR!
> java.lang.StackOverflowError: null
>   at java.util.Hashtable.get(Hashtable.java:363)
>   at java.util.Properties.getProperty(Properties.java:969)
>   at java.lang.System.getProperty(System.java:720)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:84)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:49)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.io.PrintWriter.<init>(PrintWriter.java:116)
>   at java.io.PrintWriter.<init>(PrintWriter.java:100)
>   at 
> org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:58)
>   at 
> org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
>   at 
> org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:381)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:392)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
> ...
> {code}
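
The repeating requestCheckpoint(TestingCluster.scala:394) frames show the
tell-tale shape of a retry implemented as unbounded recursion: each attempt
that is not yet ready re-enters the method and adds a stack frame until the
stack overflows. In Java terms (an illustrative sketch, not the Scala code in
question; tryTriggerCheckpoint is a hypothetical helper):

{code}
// Recursive retry: every not-yet-ready attempt adds a stack frame.
String requestCheckpoint() {
    String path = tryTriggerCheckpoint(); // hypothetical helper, null until ready
    if (path == null) {
        return requestCheckpoint();       // unbounded recursion -> StackOverflowError
    }
    return path;
}

// The usual fix: retry in a bounded loop instead.
String requestCheckpointLooped(int maxAttempts) throws InterruptedException {
    for (int i = 0; i < maxAttempts; i++) {
        String path = tryTriggerCheckpoint();
        if (path != null) {
            return path;
        }
        Thread.sleep(50); // give the checkpoint a chance to complete
    }
    throw new IllegalStateException("Checkpoint not ready after " + maxAttempts + " attempts");
}
{code}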





[GitHub] flink issue #4492: [FLINK-7381] [web] Decouple WebRuntimeMonitor from ActorG...

2017-08-10 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4492
  
Alright, I'll port the web monitor options and rebase this PR onto that.




[jira] [Commented] (FLINK-7381) Decouple WebRuntimeMonitor from ActorGateway

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121300#comment-16121300
 ] 

ASF GitHub Bot commented on FLINK-7381:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4492
  
Alright, I'll port the web monitor options and rebase this PR onto that.


> Decouple WebRuntimeMonitor from ActorGateway
> 
>
> Key: FLINK-7381
> URL: https://issues.apache.org/jira/browse/FLINK-7381
> Project: Flink
>  Issue Type: Sub-task
>  Components: Webfrontend
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
>
> The {{WebRuntimeMonitor}} has a hard-wired dependency on the {{ActorGateway}} 
> in order to communicate with the {{JobManager}}. In order to make it work 
> with the {{JobMaster}} (Flip-6), we have to abstract this dependency away. I 
> propose to add a {{JobManagerGateway}} interface which can be implemented 
> using Akka for the old {{JobManager}} code. The Flip-6 {{JobMasterGateway}} 
> can then directly inherit from this interface.
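
Structurally, the abstraction could look like this (names assumed from the
ticket; a sketch, not the final design):

{code}
// Common, RPC-agnostic interface the WebRuntimeMonitor codes against.
public interface JobManagerGateway {
    // job-status queries, cancellation, etc.
}

// Legacy path: an adapter backed by the Akka-based ActorGateway.
class AkkaJobManagerGateway implements JobManagerGateway {
    // delegates each typed call to actorGateway.ask(...)
}

// Flip-6 path: the JobMaster's gateway extends the common interface directly.
interface JobMasterGateway extends JobManagerGateway {
    // Flip-6-specific additions
}
{code}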





[GitHub] flink pull request #4483: [FLINK-7372] [JobGraph] Remove ActorGateway from J...

2017-08-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4483




[GitHub] flink pull request #4486: [FLINK-7375] Replace ActorGateway with JobManagerG...

2017-08-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4486




[jira] [Commented] (FLINK-7372) Remove ActorGateway from JobGraph

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121307#comment-16121307
 ] 

ASF GitHub Bot commented on FLINK-7372:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4483


> Remove ActorGateway from JobGraph
> -
>
> Key: FLINK-7372
> URL: https://issues.apache.org/jira/browse/FLINK-7372
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>
> As a preliminary step for easier Flip-6 integration we should try to decouple 
> as many components from the underlying RPC abstraction as possible. One of 
> these components is the {{JobGraph}} which has a dependency on 
> {{ActorGateway}} via its {{JobGraph#uploadUserJars}} method.
> I propose to get rid of the {{ActorGateway}} parameter and to pass instead 
> the BlobServer's address as an {{InetSocketAddress}} instance.





[jira] [Commented] (FLINK-7375) Remove ActorGateway from JobClient

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121308#comment-16121308
 ] 

ASF GitHub Bot commented on FLINK-7375:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4486


> Remove ActorGateway from JobClient
> --
>
> Key: FLINK-7375
> URL: https://issues.apache.org/jira/browse/FLINK-7375
> Project: Flink
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>
> Remove {{ActorGateway}} dependency from {{JobClient}}. This will ease the 
> transition to the Flip-6 code base because we can reuse the {{JobClient}} 
> code.
> I propose to replace the {{ActorGateway}} with a more strictly typed 
> {{JobManagerGateway}}, which will be extended by the Flip-6 
> {{JobMasterGateway}}. This will also allow us to decouple the 
> {{JarRunHandler}} from the {{ActorGateway}}.





[jira] [Commented] (FLINK-7352) ExecutionGraphRestartTest timeouts

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121318#comment-16121318
 ] 

ASF GitHub Bot commented on FLINK-7352:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4501


> ExecutionGraphRestartTest timeouts
> --
>
> Key: FLINK-7352
> URL: https://issues.apache.org/jira/browse/FLINK-7352
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, Tests
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Nico Kruber
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.4.0
>
>
> Recently, I received timeouts from some tests in 
> {{ExecutionGraphRestartTest}}, like this:
> {code}
> Tests in error: 
>   ExecutionGraphRestartTest.testConcurrentLocalFailAndRestart:638 » Timeout
> {code}
> This particular instance is from 1.3.2 RC2 and got stuck in 
> {{ExecutionGraphTestUtils#waitUntilDeployedAndSwitchToRunning()}}, but I also 
> had instances stuck in {{ExecutionGraphTestUtils#waitUntilJobStatus}}.





[jira] [Commented] (FLINK-7240) Externalized RocksDB can fail with stackoverflow

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121319#comment-16121319
 ] 

ASF GitHub Bot commented on FLINK-7240:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4497


> Externalized RocksDB can fail with stackoverflow
> 
>
> Key: FLINK-7240
> URL: https://issues.apache.org/jira/browse/FLINK-7240
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing, Tests
>Affects Versions: 1.3.1, 1.4.0
> Environment: https://travis-ci.org/zentol/flink/jobs/255760513
>Reporter: Chesnay Schepler
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> {code}
> testExternalizedFullRocksDBCheckpointsStandalone(org.apache.flink.test.checkpointing.ExternalizedCheckpointITCase)
>   Time elapsed: 146.894 sec  <<< ERROR!
> java.lang.StackOverflowError: null
>   at java.util.Hashtable.get(Hashtable.java:363)
>   at java.util.Properties.getProperty(Properties.java:969)
>   at java.lang.System.getProperty(System.java:720)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:84)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:49)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.io.PrintWriter.<init>(PrintWriter.java:116)
>   at java.io.PrintWriter.<init>(PrintWriter.java:100)
>   at 
> org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:58)
>   at 
> org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
>   at 
> org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:381)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:392)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
> ...
> {code}





[GitHub] flink pull request #4501: [FLINK-7352] [tests] Stabilize ExecutionGraphResta...

2017-08-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4501




[GitHub] flink pull request #4497: [FLINK-7240] [tests] Stabilize ExternalizedCheckpo...

2017-08-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4497




[jira] [Resolved] (FLINK-7352) ExecutionGraphRestartTest timeouts

2017-08-10 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-7352.
--
   Resolution: Fixed
Fix Version/s: 1.4.0

Fixed via f59de67d9bd440b40352fdea5ede7c709f991a9e

> ExecutionGraphRestartTest timeouts
> --
>
> Key: FLINK-7352
> URL: https://issues.apache.org/jira/browse/FLINK-7352
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, Tests
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Nico Kruber
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.4.0
>
>
> Recently, I received timeouts from some tests in 
> {{ExecutionGraphRestartTest}}, like this:
> {code}
> Tests in error: 
>   ExecutionGraphRestartTest.testConcurrentLocalFailAndRestart:638 » Timeout
> {code}
> This particular instance is from 1.3.2 RC2 and got stuck in 
> {{ExecutionGraphTestUtils#waitUntilDeployedAndSwitchToRunning()}}, but I also 
> had instances stuck in {{ExecutionGraphTestUtils#waitUntilJobStatus}}.





[jira] [Closed] (FLINK-7372) Remove ActorGateway from JobGraph

2017-08-10 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-7372.

   Resolution: Fixed
Fix Version/s: 1.4.0

Fixed via d52ccd2941ff25c3c61146b25c52df1ddc09d8da

> Remove ActorGateway from JobGraph
> -
>
> Key: FLINK-7372
> URL: https://issues.apache.org/jira/browse/FLINK-7372
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
> Fix For: 1.4.0
>
>
> As a preliminary step for easier Flip-6 integration we should try to decouple 
> as many components from the underlying RPC abstraction as possible. One of 
> these components is the {{JobGraph}} which has a dependency on 
> {{ActorGateway}} via its {{JobGraph#uploadUserJars}} method.
> I propose to get rid of the {{ActorGateway}} parameter and to pass instead 
> the BlobServer's address as an {{InetSocketAddress}} instance.





[jira] [Closed] (FLINK-7375) Remove ActorGateway from JobClient

2017-08-10 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-7375.

   Resolution: Fixed
Fix Version/s: 1.4.0

Fixed via dfaec337059cc59800d6c708d03a8194db487872

> Remove ActorGateway from JobClient
> --
>
> Key: FLINK-7375
> URL: https://issues.apache.org/jira/browse/FLINK-7375
> Project: Flink
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
> Fix For: 1.4.0
>
>
> Remove {{ActorGateway}} dependency from {{JobClient}}. This will ease the 
> transition to the Flip-6 code base because we can reuse the {{JobClient}} 
> code.
> I propose to replace the {{ActorGateway}} with a more strictly typed 
> {{JobManagerGateway}}, which will be extended by the Flip-6 
> {{JobMasterGateway}}. This will also allow us to decouple the 
> {{JarRunHandler}} from the {{ActorGateway}}.





[jira] [Resolved] (FLINK-7240) Externalized RocksDB can fail with stackoverflow

2017-08-10 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-7240.
--
   Resolution: Fixed
Fix Version/s: 1.4.0

Fixed via f9db6fe1dd0c82209db9c064c8f6f1aa99b9590c

> Externalized RocksDB can fail with stackoverflow
> 
>
> Key: FLINK-7240
> URL: https://issues.apache.org/jira/browse/FLINK-7240
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing, Tests
>Affects Versions: 1.3.1, 1.4.0
> Environment: https://travis-ci.org/zentol/flink/jobs/255760513
>Reporter: Chesnay Schepler
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.4.0
>
>
> {code}
> testExternalizedFullRocksDBCheckpointsStandalone(org.apache.flink.test.checkpointing.ExternalizedCheckpointITCase)
>   Time elapsed: 146.894 sec  <<< ERROR!
> java.lang.StackOverflowError: null
>   at java.util.Hashtable.get(Hashtable.java:363)
>   at java.util.Properties.getProperty(Properties.java:969)
>   at java.lang.System.getProperty(System.java:720)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:84)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:49)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.io.PrintWriter.<init>(PrintWriter.java:116)
>   at java.io.PrintWriter.<init>(PrintWriter.java:100)
>   at 
> org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:58)
>   at 
> org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
>   at 
> org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:381)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:392)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
> ...
> {code}





[GitHub] flink pull request #4346: [FLINK-7199] [gelly] Graph simplification does not...

2017-08-10 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132400522
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
--- End diff --

typo, should be `getCirculantGraphParameters`




[GitHub] flink pull request #4346: [FLINK-7199] [gelly] Graph simplification does not...

2017-08-10 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132400996
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
+   return parameters("CirculantGraph", output, "--vertex_count", 
"42", "--range0", "13:4");
+   }
+
@Test
public void testHashWithCirculantGraph() throws Exception {
-   expectedChecksum(
-   parameters("CirculantGraph", "hash", "--vertex_count", 
"42", "--range0", "13:4"),
-   168, 0x0001ae80);
+   expectedChecksum(getCirculantGraphParamters("hash"), 168, 
0x0001ae80);
}
 
@Test
public void testPrintWithCirculantGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CirculantGraph", "print", "--vertex_count", 
"42", "--range0", "13:4"),
-   new Checksum(168, 0x004bdcc52cbcL));
+   expectedOutputChecksum(getCirculantGraphParamters("print"), new 
Checksum(168, 0x004bdcc52cbcL));
+   }
+
+   @Test
+   public void testParallelismWithCirculantGraph() throws Exception {
+   
TestUtils.verifyParallelism(getCirculantGraphParamters("print"));
+   }
+
+   // CompleteGraph
+
+   private String[] getCompleteGraphParamters(String output) {
+   return parameters("CompleteGraph", output, "--vertex_count", 
"42");
}
 
@Test
public void testHashWithCompleteGraph() throws Exception {
-   expectedChecksum(
-   parameters("CompleteGraph", "hash", "--vertex_count", 
"42"),
-   1722, 0x00113ca0L);
+   expectedChecksum(getCompleteGraphParamters("hash"), 1722, 
0x00113ca0L);
}
 
@Test
public void testPrintWithCompleteGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CompleteGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(1722, 0x031109a0c398L));
+   expectedOutputChecksum(getCompleteGraphParamters("print"), new 
Checksum(1722, 0x031109a0c398L));
+   }
+
+   @Test
+   public void testParallelismWithCompleteGraph() throws Exception {
+   TestUtils.verifyParallelism(getCompleteGraphParamters("print"));
+   }
+
+   // CycleGraph
+
+   private String[] getCycleGraphParamters(String output) {
+   return parameters("CycleGraph", output, "--vertex_count", "42");
}
 
@Test
public void testHashWithCycleGraph() throws Exception {
-   expectedChecksum(
-   parameters("CycleGraph", "hash", "--vertex_count", 
"42"),
-   84, 0xd740L);
+   expectedChecksum(getCycleGraphParamters("hash"), 84, 
0xd740L);
}
 
@Test
public void testPrintWithCycleGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CycleGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(84, 0x00272a136fcaL));
+   expectedOutputChecksum(getCycleGraphParamters("print"), new 
Checksum(84, 0x00272a136fcaL));
+   }
+
+   @Test
+   public void testParallelismWithCycleGraph() throws Exception {
+   TestUtils.verifyParallelism(getCycleGraphParamters("print"));
+   }
+
+   // EchoGraph
+
+   private String[] getEchoGraphParamters(String output) {
+   return parameters("EchoGraph", output, "--vertex_count", "42", 
"--vertex_degree", "13");
}
 
@Test
public void testHashWithEchoGraph() throws Exception {
-   expectedChecksum(
-   parameters("EchoGraph", "hash", "--vertex_count", "42", 
"--vertex_degree", "13"),
-   546, 0x00057720L);
+   expectedChecksum(getEchoGraphParamters("hash"), 546, 
0x00057720L);
}
 
 





[GitHub] flink pull request #4346: [FLINK-7199] [gelly] Graph simplification does not...

2017-08-10 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132402963
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/drivers/parameter/LongParameter.java
 ---
@@ -52,16 +52,6 @@ public LongParameter(ParameterizedBase owner, String 
name) {
public LongParameter setDefaultValue(long defaultValue) {
super.setDefaultValue(defaultValue);
 
-   if (hasMinimumValue) {
-   Util.checkParameter(defaultValue >= minimumValue,
-   "Default value (" + defaultValue + ") must be 
greater than or equal to minimum (" + minimumValue + ")");
-   }
-
-   if (hasMaximumValue) {
-   Util.checkParameter(defaultValue <= maximumValue,
-   "Default value (" + defaultValue + ") must be 
less than or equal to maximum (" + maximumValue + ")");
-   }
-
--- End diff --

I'm no expert here, but can you elaborate on why you removed these checks 
that keep the default value in line with the min and max?

I can only guess that this must be that way for the case where the default 
parallelism is not within the bounds of the configured one. I'm wondering, 
though, whether `LongParameter` is used somewhere else where these bounds are 
expected to hold.
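
For context, the removed code validated the default eagerly, at the moment
setDefaultValue was called; the alternative reading of the change is that the
range check moves to the point where the actual value is resolved, e.g.
(an illustrative sketch only, using the fields visible in the diff above):

{code}
// Deferred check: validate whichever value ends up being used, default or
// user-supplied, once at configuration time rather than when the default is set.
private long resolveValue(Long userValue) {
    long value = (userValue != null) ? userValue : defaultValue;
    if (hasMinimumValue && value < minimumValue) {
        throw new IllegalArgumentException(
            "Value (" + value + ") must be greater than or equal to minimum (" + minimumValue + ")");
    }
    if (hasMaximumValue && value > maximumValue) {
        throw new IllegalArgumentException(
            "Value (" + value + ") must be less than or equal to maximum (" + maximumValue + ")");
    }
    return value;
}
{code}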





[GitHub] flink pull request #4346: [FLINK-7199] [gelly] Graph simplification does not...

2017-08-10 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132400879
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
+   return parameters("CirculantGraph", output, "--vertex_count", 
"42", "--range0", "13:4");
+   }
+
@Test
public void testHashWithCirculantGraph() throws Exception {
-   expectedChecksum(
-   parameters("CirculantGraph", "hash", "--vertex_count", 
"42", "--range0", "13:4"),
-   168, 0x0001ae80);
+   expectedChecksum(getCirculantGraphParamters("hash"), 168, 
0x0001ae80);
}
 
@Test
public void testPrintWithCirculantGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CirculantGraph", "print", "--vertex_count", 
"42", "--range0", "13:4"),
-   new Checksum(168, 0x004bdcc52cbcL));
+   expectedOutputChecksum(getCirculantGraphParamters("print"), new 
Checksum(168, 0x004bdcc52cbcL));
+   }
+
+   @Test
+   public void testParallelismWithCirculantGraph() throws Exception {
+   
TestUtils.verifyParallelism(getCirculantGraphParamters("print"));
+   }
+
+   // CompleteGraph
+
+   private String[] getCompleteGraphParamters(String output) {
+   return parameters("CompleteGraph", output, "--vertex_count", 
"42");
}
 
@Test
public void testHashWithCompleteGraph() throws Exception {
-   expectedChecksum(
-   parameters("CompleteGraph", "hash", "--vertex_count", 
"42"),
-   1722, 0x00113ca0L);
+   expectedChecksum(getCompleteGraphParamters("hash"), 1722, 
0x00113ca0L);
}
 
@Test
public void testPrintWithCompleteGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CompleteGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(1722, 0x031109a0c398L));
+   expectedOutputChecksum(getCompleteGraphParamters("print"), new 
Checksum(1722, 0x031109a0c398L));
+   }
+
+   @Test
+   public void testParallelismWithCompleteGraph() throws Exception {
+   TestUtils.verifyParallelism(getCompleteGraphParamters("print"));
+   }
+
+   // CycleGraph
+
+   private String[] getCycleGraphParamters(String output) {
+   return parameters("CycleGraph", output, "--vertex_count", "42");
}
 
@Test
public void testHashWithCycleGraph() throws Exception {
-   expectedChecksum(
-   parameters("CycleGraph", "hash", "--vertex_count", 
"42"),
-   84, 0xd740L);
+   expectedChecksum(getCycleGraphParamters("hash"), 84, 
0xd740L);
}
 
@Test
public void testPrintWithCycleGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CycleGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(84, 0x00272a136fcaL));
+   expectedOutputChecksum(getCycleGraphParamters("print"), new 
Checksum(84, 0x00272a136fcaL));
+   }
+
+   @Test
+   public void testParallelismWithCycleGraph() throws Exception {
+   TestUtils.verifyParallelism(getCycleGraphParamters("print"));
+   }
+
+   // EchoGraph
+
+   private String[] getEchoGraphParamters(String output) {
--- End diff --

same here: `getEchoGraphParameters`
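
For clarity, a minimal sketch of the suggested rename; the parallelism test is
assumed to follow the same pattern as the other generators in this diff:

{code}
// Hypothetical corrected helper: only the method name changes
// ("Paramters" -> "Parameters"); the body is exactly as in the diff.
private String[] getEchoGraphParameters(String output) {
	return parameters("EchoGraph", output, "--vertex_count", "42", "--vertex_degree", "13");
}

// Assumed companion test, mirroring testParallelismWithCycleGraph above.
@Test
public void testParallelismWithEchoGraph() throws Exception {
	TestUtils.verifyParallelism(getEchoGraphParameters("print"));
}
{code}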


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4346: [FLINK-7199] [gelly] Graph simplification does not...

2017-08-10 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132401371
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
+   return parameters("CirculantGraph", output, "--vertex_count", 
"42", "--range0", "13:4");
+   }
+
@Test
public void testHashWithCirculantGraph() throws Exception {
-   expectedChecksum(
-   parameters("CirculantGraph", "hash", "--vertex_count", 
"42", "--range0", "13:4"),
-   168, 0x0001ae80);
+   expectedChecksum(getCirculantGraphParamters("hash"), 168, 
0x0001ae80);
}
 
@Test
public void testPrintWithCirculantGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CirculantGraph", "print", "--vertex_count", 
"42", "--range0", "13:4"),
-   new Checksum(168, 0x004bdcc52cbcL));
+   expectedOutputChecksum(getCirculantGraphParamters("print"), new 
Checksum(168, 0x004bdcc52cbcL));
+   }
+
+   @Test
+   public void testParallelismWithCirculantGraph() throws Exception {
+   
TestUtils.verifyParallelism(getCirculantGraphParamters("print"));
+   }
+
+   // CompleteGraph
+
+   private String[] getCompleteGraphParamters(String output) {
+   return parameters("CompleteGraph", output, "--vertex_count", 
"42");
}
 
@Test
public void testHashWithCompleteGraph() throws Exception {
-   expectedChecksum(
-   parameters("CompleteGraph", "hash", "--vertex_count", 
"42"),
-   1722, 0x00113ca0L);
+   expectedChecksum(getCompleteGraphParamters("hash"), 1722, 
0x00113ca0L);
}
 
@Test
public void testPrintWithCompleteGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CompleteGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(1722, 0x031109a0c398L));
+   expectedOutputChecksum(getCompleteGraphParamters("print"), new 
Checksum(1722, 0x031109a0c398L));
+   }
+
+   @Test
+   public void testParallelismWithCompleteGraph() throws Exception {
+   TestUtils.verifyParallelism(getCompleteGraphParamters("print"));
+   }
+
+   // CycleGraph
+
+   private String[] getCycleGraphParamters(String output) {
+   return parameters("CycleGraph", output, "--vertex_count", "42");
}
 
@Test
public void testHashWithCycleGraph() throws Exception {
-   expectedChecksum(
-   parameters("CycleGraph", "hash", "--vertex_count", 
"42"),
-   84, 0xd740L);
+   expectedChecksum(getCycleGraphParamters("hash"), 84, 
0xd740L);
}
 
@Test
public void testPrintWithCycleGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CycleGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(84, 0x00272a136fcaL));
+   expectedOutputChecksum(getCycleGraphParamters("print"), new 
Checksum(84, 0x00272a136fcaL));
+   }
+
+   @Test
+   public void testParallelismWithCycleGraph() throws Exception {
+   TestUtils.verifyParallelism(getCycleGraphParamters("print"));
+   }
+
+   // EchoGraph
+
+   private String[] getEchoGraphParamters(String output) {
+   return parameters("EchoGraph", output, "--vertex_count", "42", 
"--vertex_degree", "13");
}
 
@Test
public void testHashWithEchoGraph() throws Exception {
-   expectedChecksum(
-   parameters("EchoGraph", "hash", "--vertex_count", "42", 
"--vertex_degree", "13"),
-   546, 0x00057720L);
+   expectedChecksum(getEchoGraphParamters("hash"), 546, 
0x00057720L);
}
 
 

[GitHub] flink pull request #4346: [FLINK-7199] [gelly] Graph simplification does not...

2017-08-10 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132400768
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
+   return parameters("CirculantGraph", output, "--vertex_count", 
"42", "--range0", "13:4");
+   }
+
@Test
public void testHashWithCirculantGraph() throws Exception {
-   expectedChecksum(
-   parameters("CirculantGraph", "hash", "--vertex_count", 
"42", "--range0", "13:4"),
-   168, 0x0001ae80);
+   expectedChecksum(getCirculantGraphParamters("hash"), 168, 
0x0001ae80);
}
 
@Test
public void testPrintWithCirculantGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CirculantGraph", "print", "--vertex_count", 
"42", "--range0", "13:4"),
-   new Checksum(168, 0x004bdcc52cbcL));
+   expectedOutputChecksum(getCirculantGraphParamters("print"), new 
Checksum(168, 0x004bdcc52cbcL));
+   }
+
+   @Test
+   public void testParallelismWithCirculantGraph() throws Exception {
+   
TestUtils.verifyParallelism(getCirculantGraphParamters("print"));
+   }
+
+   // CompleteGraph
+
+   private String[] getCompleteGraphParamters(String output) {
--- End diff --

typo, should be: `getCompleteGraphParameters`




[jira] [Commented] (FLINK-7199) Graph simplification does not set parallelism

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121343#comment-16121343
 ] 

ASF GitHub Bot commented on FLINK-7199:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132401256
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
+   return parameters("CirculantGraph", output, "--vertex_count", 
"42", "--range0", "13:4");
+   }
+
@Test
public void testHashWithCirculantGraph() throws Exception {
-   expectedChecksum(
-   parameters("CirculantGraph", "hash", "--vertex_count", 
"42", "--range0", "13:4"),
-   168, 0x0001ae80);
+   expectedChecksum(getCirculantGraphParamters("hash"), 168, 
0x0001ae80);
}
 
@Test
public void testPrintWithCirculantGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CirculantGraph", "print", "--vertex_count", 
"42", "--range0", "13:4"),
-   new Checksum(168, 0x004bdcc52cbcL));
+   expectedOutputChecksum(getCirculantGraphParamters("print"), new 
Checksum(168, 0x004bdcc52cbcL));
+   }
+
+   @Test
+   public void testParallelismWithCirculantGraph() throws Exception {
+   
TestUtils.verifyParallelism(getCirculantGraphParamters("print"));
+   }
+
+   // CompleteGraph
+
+   private String[] getCompleteGraphParamters(String output) {
+   return parameters("CompleteGraph", output, "--vertex_count", 
"42");
}
 
@Test
public void testHashWithCompleteGraph() throws Exception {
-   expectedChecksum(
-   parameters("CompleteGraph", "hash", "--vertex_count", 
"42"),
-   1722, 0x00113ca0L);
+   expectedChecksum(getCompleteGraphParamters("hash"), 1722, 
0x00113ca0L);
}
 
@Test
public void testPrintWithCompleteGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CompleteGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(1722, 0x031109a0c398L));
+   expectedOutputChecksum(getCompleteGraphParamters("print"), new 
Checksum(1722, 0x031109a0c398L));
+   }
+
+   @Test
+   public void testParallelismWithCompleteGraph() throws Exception {
+   TestUtils.verifyParallelism(getCompleteGraphParamters("print"));
+   }
+
+   // CycleGraph
+
+   private String[] getCycleGraphParamters(String output) {
+   return parameters("CycleGraph", output, "--vertex_count", "42");
}
 
@Test
public void testHashWithCycleGraph() throws Exception {
-   expectedChecksum(
-   parameters("CycleGraph", "hash", "--vertex_count", 
"42"),
-   84, 0xd740L);
+   expectedChecksum(getCycleGraphParamters("hash"), 84, 
0xd740L);
}
 
@Test
public void testPrintWithCycleGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CycleGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(84, 0x00272a136fcaL));
+   expectedOutputChecksum(getCycleGraphParamters("print"), new 
Checksum(84, 0x00272a136fcaL));
+   }
+
+   @Test
+   public void testParallelismWithCycleGraph() throws Exception {
+   TestUtils.verifyParallelism(getCycleGraphParamters("print"));
+   }
+
+   // EchoGraph
+
+   private String[] getEchoGraphParamters(String output) {
+   return parameters("EchoGraph", output, "--vertex_count", "42", 
"--vertex_degree", "13");
}
 
@Test
public void testHashWithEchoGraph() throws Exception {
-   expectedChecksum(
-  

[jira] [Commented] (FLINK-7199) Graph simplification does not set parallelism

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121350#comment-16121350
 ] 

ASF GitHub Bot commented on FLINK-7199:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132400879
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
+   return parameters("CirculantGraph", output, "--vertex_count", 
"42", "--range0", "13:4");
+   }
+
@Test
public void testHashWithCirculantGraph() throws Exception {
-   expectedChecksum(
-   parameters("CirculantGraph", "hash", "--vertex_count", 
"42", "--range0", "13:4"),
-   168, 0x0001ae80);
+   expectedChecksum(getCirculantGraphParamters("hash"), 168, 
0x0001ae80);
}
 
@Test
public void testPrintWithCirculantGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CirculantGraph", "print", "--vertex_count", 
"42", "--range0", "13:4"),
-   new Checksum(168, 0x004bdcc52cbcL));
+   expectedOutputChecksum(getCirculantGraphParamters("print"), new 
Checksum(168, 0x004bdcc52cbcL));
+   }
+
+   @Test
+   public void testParallelismWithCirculantGraph() throws Exception {
+   
TestUtils.verifyParallelism(getCirculantGraphParamters("print"));
+   }
+
+   // CompleteGraph
+
+   private String[] getCompleteGraphParamters(String output) {
+   return parameters("CompleteGraph", output, "--vertex_count", 
"42");
}
 
@Test
public void testHashWithCompleteGraph() throws Exception {
-   expectedChecksum(
-   parameters("CompleteGraph", "hash", "--vertex_count", 
"42"),
-   1722, 0x00113ca0L);
+   expectedChecksum(getCompleteGraphParamters("hash"), 1722, 
0x00113ca0L);
}
 
@Test
public void testPrintWithCompleteGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CompleteGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(1722, 0x031109a0c398L));
+   expectedOutputChecksum(getCompleteGraphParamters("print"), new 
Checksum(1722, 0x031109a0c398L));
+   }
+
+   @Test
+   public void testParallelismWithCompleteGraph() throws Exception {
+   TestUtils.verifyParallelism(getCompleteGraphParamters("print"));
+   }
+
+   // CycleGraph
+
+   private String[] getCycleGraphParamters(String output) {
+   return parameters("CycleGraph", output, "--vertex_count", "42");
}
 
@Test
public void testHashWithCycleGraph() throws Exception {
-   expectedChecksum(
-   parameters("CycleGraph", "hash", "--vertex_count", 
"42"),
-   84, 0xd740L);
+   expectedChecksum(getCycleGraphParamters("hash"), 84, 
0xd740L);
}
 
@Test
public void testPrintWithCycleGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CycleGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(84, 0x00272a136fcaL));
+   expectedOutputChecksum(getCycleGraphParamters("print"), new 
Checksum(84, 0x00272a136fcaL));
+   }
+
+   @Test
+   public void testParallelismWithCycleGraph() throws Exception {
+   TestUtils.verifyParallelism(getCycleGraphParamters("print"));
+   }
+
+   // EchoGraph
+
+   private String[] getEchoGraphParamters(String output) {
--- End diff --

same here: `getEchoGraphParameters`


> Graph simplification does not set parallelism
> -
>
> Key: FLINK-7199
> URL: https://issues.apache.

[jira] [Commented] (FLINK-7199) Graph simplification does not set parallelism

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121345#comment-16121345
 ] 

ASF GitHub Bot commented on FLINK-7199:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132400522
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
--- End diff --

typo, should be `getCirculantGraphParameters`


> Graph simplification does not set parallelism
> -
>
> Key: FLINK-7199
> URL: https://issues.apache.org/jira/browse/FLINK-7199
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Affects Versions: 1.3.1, 1.4.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> The {{Simplify}} parameter should accept and set the parallelism when calling 
> the {{Simplify}} algorithms.
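
For context, a minimal sketch of the fix the issue asks for, assuming the
{{asm}} Simplify wrappers expose {{setParallelism}} like Gelly's other
wrapping algorithms (names and types are illustrative, not the actual patch):

{code}
import org.apache.flink.graph.Graph;
import org.apache.flink.graph.asm.simple.directed.Simplify;
import org.apache.flink.types.LongValue;
import org.apache.flink.types.NullValue;

public class SimplifyWithParallelism {

	// The Simplify driver parameter would accept the driver's parallelism
	// and pass it through instead of leaving the algorithm at the default.
	static Graph<LongValue, NullValue, NullValue> simplify(
			Graph<LongValue, NullValue, NullValue> graph, int parallelism) throws Exception {
		return graph.run(new Simplify<LongValue, NullValue, NullValue>()
			.setParallelism(parallelism));
	}
}
{code}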



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7199) Graph simplification does not set parallelism

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121349#comment-16121349
 ] 

ASF GitHub Bot commented on FLINK-7199:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132400768
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
+   return parameters("CirculantGraph", output, "--vertex_count", 
"42", "--range0", "13:4");
+   }
+
@Test
public void testHashWithCirculantGraph() throws Exception {
-   expectedChecksum(
-   parameters("CirculantGraph", "hash", "--vertex_count", 
"42", "--range0", "13:4"),
-   168, 0x0001ae80);
+   expectedChecksum(getCirculantGraphParamters("hash"), 168, 
0x0001ae80);
}
 
@Test
public void testPrintWithCirculantGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CirculantGraph", "print", "--vertex_count", 
"42", "--range0", "13:4"),
-   new Checksum(168, 0x004bdcc52cbcL));
+   expectedOutputChecksum(getCirculantGraphParamters("print"), new 
Checksum(168, 0x004bdcc52cbcL));
+   }
+
+   @Test
+   public void testParallelismWithCirculantGraph() throws Exception {
+   
TestUtils.verifyParallelism(getCirculantGraphParamters("print"));
+   }
+
+   // CompleteGraph
+
+   private String[] getCompleteGraphParamters(String output) {
--- End diff --

typo, should be: `getCompleteGraphParameters`


> Graph simplification does not set parallelism
> -
>
> Key: FLINK-7199
> URL: https://issues.apache.org/jira/browse/FLINK-7199
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Affects Versions: 1.3.1, 1.4.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> The {{Simplify}} parameter should accept and set the parallelism when calling 
> the {{Simplify}} algorithms.





[jira] [Commented] (FLINK-7199) Graph simplification does not set parallelism

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121347#comment-16121347
 ] 

ASF GitHub Bot commented on FLINK-7199:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132401371
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
+   return parameters("CirculantGraph", output, "--vertex_count", 
"42", "--range0", "13:4");
+   }
+
@Test
public void testHashWithCirculantGraph() throws Exception {
-   expectedChecksum(
-   parameters("CirculantGraph", "hash", "--vertex_count", 
"42", "--range0", "13:4"),
-   168, 0x0001ae80);
+   expectedChecksum(getCirculantGraphParamters("hash"), 168, 
0x0001ae80);
}
 
@Test
public void testPrintWithCirculantGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CirculantGraph", "print", "--vertex_count", 
"42", "--range0", "13:4"),
-   new Checksum(168, 0x004bdcc52cbcL));
+   expectedOutputChecksum(getCirculantGraphParamters("print"), new 
Checksum(168, 0x004bdcc52cbcL));
+   }
+
+   @Test
+   public void testParallelismWithCirculantGraph() throws Exception {
+   
TestUtils.verifyParallelism(getCirculantGraphParamters("print"));
+   }
+
+   // CompleteGraph
+
+   private String[] getCompleteGraphParamters(String output) {
+   return parameters("CompleteGraph", output, "--vertex_count", 
"42");
}
 
@Test
public void testHashWithCompleteGraph() throws Exception {
-   expectedChecksum(
-   parameters("CompleteGraph", "hash", "--vertex_count", 
"42"),
-   1722, 0x00113ca0L);
+   expectedChecksum(getCompleteGraphParamters("hash"), 1722, 
0x00113ca0L);
}
 
@Test
public void testPrintWithCompleteGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CompleteGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(1722, 0x031109a0c398L));
+   expectedOutputChecksum(getCompleteGraphParamters("print"), new 
Checksum(1722, 0x031109a0c398L));
+   }
+
+   @Test
+   public void testParallelismWithCompleteGraph() throws Exception {
+   TestUtils.verifyParallelism(getCompleteGraphParamters("print"));
+   }
+
+   // CycleGraph
+
+   private String[] getCycleGraphParamters(String output) {
+   return parameters("CycleGraph", output, "--vertex_count", "42");
}
 
@Test
public void testHashWithCycleGraph() throws Exception {
-   expectedChecksum(
-   parameters("CycleGraph", "hash", "--vertex_count", 
"42"),
-   84, 0xd740L);
+   expectedChecksum(getCycleGraphParamters("hash"), 84, 
0xd740L);
}
 
@Test
public void testPrintWithCycleGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CycleGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(84, 0x00272a136fcaL));
+   expectedOutputChecksum(getCycleGraphParamters("print"), new 
Checksum(84, 0x00272a136fcaL));
+   }
+
+   @Test
+   public void testParallelismWithCycleGraph() throws Exception {
+   TestUtils.verifyParallelism(getCycleGraphParamters("print"));
+   }
+
+   // EchoGraph
+
+   private String[] getEchoGraphParamters(String output) {
+   return parameters("EchoGraph", output, "--vertex_count", "42", 
"--vertex_degree", "13");
}
 
@Test
public void testHashWithEchoGraph() throws Exception {
-   expectedChecksum(
-  

[jira] [Commented] (FLINK-7199) Graph simplification does not set parallelism

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121348#comment-16121348
 ] 

ASF GitHub Bot commented on FLINK-7199:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132401061
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
+   return parameters("CirculantGraph", output, "--vertex_count", 
"42", "--range0", "13:4");
+   }
+
@Test
public void testHashWithCirculantGraph() throws Exception {
-   expectedChecksum(
-   parameters("CirculantGraph", "hash", "--vertex_count", 
"42", "--range0", "13:4"),
-   168, 0x0001ae80);
+   expectedChecksum(getCirculantGraphParamters("hash"), 168, 
0x0001ae80);
}
 
@Test
public void testPrintWithCirculantGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CirculantGraph", "print", "--vertex_count", 
"42", "--range0", "13:4"),
-   new Checksum(168, 0x004bdcc52cbcL));
+   expectedOutputChecksum(getCirculantGraphParamters("print"), new 
Checksum(168, 0x004bdcc52cbcL));
+   }
+
+   @Test
+   public void testParallelismWithCirculantGraph() throws Exception {
+   
TestUtils.verifyParallelism(getCirculantGraphParamters("print"));
+   }
+
+   // CompleteGraph
+
+   private String[] getCompleteGraphParamters(String output) {
+   return parameters("CompleteGraph", output, "--vertex_count", 
"42");
}
 
@Test
public void testHashWithCompleteGraph() throws Exception {
-   expectedChecksum(
-   parameters("CompleteGraph", "hash", "--vertex_count", 
"42"),
-   1722, 0x00113ca0L);
+   expectedChecksum(getCompleteGraphParamters("hash"), 1722, 
0x00113ca0L);
}
 
@Test
public void testPrintWithCompleteGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CompleteGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(1722, 0x031109a0c398L));
+   expectedOutputChecksum(getCompleteGraphParamters("print"), new 
Checksum(1722, 0x031109a0c398L));
+   }
+
+   @Test
+   public void testParallelismWithCompleteGraph() throws Exception {
+   TestUtils.verifyParallelism(getCompleteGraphParamters("print"));
+   }
+
+   // CycleGraph
+
+   private String[] getCycleGraphParamters(String output) {
+   return parameters("CycleGraph", output, "--vertex_count", "42");
}
 
@Test
public void testHashWithCycleGraph() throws Exception {
-   expectedChecksum(
-   parameters("CycleGraph", "hash", "--vertex_count", 
"42"),
-   84, 0xd740L);
+   expectedChecksum(getCycleGraphParamters("hash"), 84, 
0xd740L);
}
 
@Test
public void testPrintWithCycleGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CycleGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(84, 0x00272a136fcaL));
+   expectedOutputChecksum(getCycleGraphParamters("print"), new 
Checksum(84, 0x00272a136fcaL));
+   }
+
+   @Test
+   public void testParallelismWithCycleGraph() throws Exception {
+   TestUtils.verifyParallelism(getCycleGraphParamters("print"));
+   }
+
+   // EchoGraph
+
+   private String[] getEchoGraphParamters(String output) {
+   return parameters("EchoGraph", output, "--vertex_count", "42", 
"--vertex_degree", "13");
}
 
@Test
public void testHashWithEchoGraph() throws Exception {
-   expectedChecksum(
-  

[jira] [Commented] (FLINK-7199) Graph simplification does not set parallelism

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121346#comment-16121346
 ] 

ASF GitHub Bot commented on FLINK-7199:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132402963
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/drivers/parameter/LongParameter.java
 ---
@@ -52,16 +52,6 @@ public LongParameter(ParameterizedBase owner, String 
name) {
public LongParameter setDefaultValue(long defaultValue) {
super.setDefaultValue(defaultValue);
 
-   if (hasMinimumValue) {
-   Util.checkParameter(defaultValue >= minimumValue,
-   "Default value (" + defaultValue + ") must be 
greater than or equal to minimum (" + minimumValue + ")");
-   }
-
-   if (hasMaximumValue) {
-   Util.checkParameter(defaultValue <= maximumValue,
-   "Default value (" + defaultValue + ") must be 
less than or equal to maximum (" + maximumValue + ")");
-   }
-
--- End diff --

I'm no expert here, but can you elaborate on why you removed these checks 
that keep the default value within the configured min and max?

I can only guess that this is needed for cases where the default 
parallelism lies outside the configured bounds. I'm wondering, though, 
whether `LongParameter` is used somewhere else where these bounds are 
expected to hold.
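
For context, a hedged sketch of one way the validation could move: check the
final value once, after all setters have run, e.g. in the parameter's
{{configure}} hook. Field and helper names ({{hasMinimumValue}},
{{Util.checkParameter}}, ...) are from the surrounding diff; {{configure}},
{{name}} and {{value}} are assumptions about the parameter framework:

{code}
// Validate only when the parameter is configured from the command line, so
// setDefaultValue/setMinimumValue/setMaximumValue can be called in any order
// without tripping an ordering-dependent check.
@Override
public void configure(ParameterTool parameterTool) {
	value = parameterTool.getLong(name, defaultValue);

	if (hasMinimumValue) {
		Util.checkParameter(value >= minimumValue,
			name + " must be greater than or equal to " + minimumValue);
	}
	if (hasMaximumValue) {
		Util.checkParameter(value <= maximumValue,
			name + " must be less than or equal to " + maximumValue);
	}
}
{code}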


> Graph simplification does not set parallelism
> -
>
> Key: FLINK-7199
> URL: https://issues.apache.org/jira/browse/FLINK-7199
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Affects Versions: 1.3.1, 1.4.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> The {{Simplify}} parameter should accept and set the parallelism when calling 
> the {{Simplify}} algorithms.





[jira] [Commented] (FLINK-7199) Graph simplification does not set parallelism

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121344#comment-16121344
 ] 

ASF GitHub Bot commented on FLINK-7199:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132401190
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
+   return parameters("CirculantGraph", output, "--vertex_count", 
"42", "--range0", "13:4");
+   }
+
@Test
public void testHashWithCirculantGraph() throws Exception {
-   expectedChecksum(
-   parameters("CirculantGraph", "hash", "--vertex_count", 
"42", "--range0", "13:4"),
-   168, 0x0001ae80);
+   expectedChecksum(getCirculantGraphParamters("hash"), 168, 
0x0001ae80);
}
 
@Test
public void testPrintWithCirculantGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CirculantGraph", "print", "--vertex_count", 
"42", "--range0", "13:4"),
-   new Checksum(168, 0x004bdcc52cbcL));
+   expectedOutputChecksum(getCirculantGraphParamters("print"), new 
Checksum(168, 0x004bdcc52cbcL));
+   }
+
+   @Test
+   public void testParallelismWithCirculantGraph() throws Exception {
+   
TestUtils.verifyParallelism(getCirculantGraphParamters("print"));
+   }
+
+   // CompleteGraph
+
+   private String[] getCompleteGraphParamters(String output) {
+   return parameters("CompleteGraph", output, "--vertex_count", 
"42");
}
 
@Test
public void testHashWithCompleteGraph() throws Exception {
-   expectedChecksum(
-   parameters("CompleteGraph", "hash", "--vertex_count", 
"42"),
-   1722, 0x00113ca0L);
+   expectedChecksum(getCompleteGraphParamters("hash"), 1722, 
0x00113ca0L);
}
 
@Test
public void testPrintWithCompleteGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CompleteGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(1722, 0x031109a0c398L));
+   expectedOutputChecksum(getCompleteGraphParamters("print"), new 
Checksum(1722, 0x031109a0c398L));
+   }
+
+   @Test
+   public void testParallelismWithCompleteGraph() throws Exception {
+   TestUtils.verifyParallelism(getCompleteGraphParamters("print"));
+   }
+
+   // CycleGraph
+
+   private String[] getCycleGraphParamters(String output) {
+   return parameters("CycleGraph", output, "--vertex_count", "42");
}
 
@Test
public void testHashWithCycleGraph() throws Exception {
-   expectedChecksum(
-   parameters("CycleGraph", "hash", "--vertex_count", 
"42"),
-   84, 0xd740L);
+   expectedChecksum(getCycleGraphParamters("hash"), 84, 
0xd740L);
}
 
@Test
public void testPrintWithCycleGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CycleGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(84, 0x00272a136fcaL));
+   expectedOutputChecksum(getCycleGraphParamters("print"), new 
Checksum(84, 0x00272a136fcaL));
+   }
+
+   @Test
+   public void testParallelismWithCycleGraph() throws Exception {
+   TestUtils.verifyParallelism(getCycleGraphParamters("print"));
+   }
+
+   // EchoGraph
+
+   private String[] getEchoGraphParamters(String output) {
+   return parameters("EchoGraph", output, "--vertex_count", "42", 
"--vertex_degree", "13");
}
 
@Test
public void testHashWithEchoGraph() throws Exception {
-   expectedChecksum(
-  

[jira] [Commented] (FLINK-7199) Graph simplification does not set parallelism

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121340#comment-16121340
 ] 

ASF GitHub Bot commented on FLINK-7199:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132400996
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
+   return parameters("CirculantGraph", output, "--vertex_count", 
"42", "--range0", "13:4");
+   }
+
@Test
public void testHashWithCirculantGraph() throws Exception {
-   expectedChecksum(
-   parameters("CirculantGraph", "hash", "--vertex_count", 
"42", "--range0", "13:4"),
-   168, 0x0001ae80);
+   expectedChecksum(getCirculantGraphParamters("hash"), 168, 
0x0001ae80);
}
 
@Test
public void testPrintWithCirculantGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CirculantGraph", "print", "--vertex_count", 
"42", "--range0", "13:4"),
-   new Checksum(168, 0x004bdcc52cbcL));
+   expectedOutputChecksum(getCirculantGraphParamters("print"), new 
Checksum(168, 0x004bdcc52cbcL));
+   }
+
+   @Test
+   public void testParallelismWithCirculantGraph() throws Exception {
+   
TestUtils.verifyParallelism(getCirculantGraphParamters("print"));
+   }
+
+   // CompleteGraph
+
+   private String[] getCompleteGraphParamters(String output) {
+   return parameters("CompleteGraph", output, "--vertex_count", 
"42");
}
 
@Test
public void testHashWithCompleteGraph() throws Exception {
-   expectedChecksum(
-   parameters("CompleteGraph", "hash", "--vertex_count", 
"42"),
-   1722, 0x00113ca0L);
+   expectedChecksum(getCompleteGraphParamters("hash"), 1722, 
0x00113ca0L);
}
 
@Test
public void testPrintWithCompleteGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CompleteGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(1722, 0x031109a0c398L));
+   expectedOutputChecksum(getCompleteGraphParamters("print"), new 
Checksum(1722, 0x031109a0c398L));
+   }
+
+   @Test
+   public void testParallelismWithCompleteGraph() throws Exception {
+   TestUtils.verifyParallelism(getCompleteGraphParamters("print"));
+   }
+
+   // CycleGraph
+
+   private String[] getCycleGraphParamters(String output) {
+   return parameters("CycleGraph", output, "--vertex_count", "42");
}
 
@Test
public void testHashWithCycleGraph() throws Exception {
-   expectedChecksum(
-   parameters("CycleGraph", "hash", "--vertex_count", 
"42"),
-   84, 0xd740L);
+   expectedChecksum(getCycleGraphParamters("hash"), 84, 
0xd740L);
}
 
@Test
public void testPrintWithCycleGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CycleGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(84, 0x00272a136fcaL));
+   expectedOutputChecksum(getCycleGraphParamters("print"), new 
Checksum(84, 0x00272a136fcaL));
+   }
+
+   @Test
+   public void testParallelismWithCycleGraph() throws Exception {
+   TestUtils.verifyParallelism(getCycleGraphParamters("print"));
+   }
+
+   // EchoGraph
+
+   private String[] getEchoGraphParamters(String output) {
+   return parameters("EchoGraph", output, "--vertex_count", "42", 
"--vertex_degree", "13");
}
 
@Test
public void testHashWithEchoGraph() throws Exception {
-   expectedChecksum(
-  

[jira] [Commented] (FLINK-7199) Graph simplification does not set parallelism

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121341#comment-16121341
 ] 

ASF GitHub Bot commented on FLINK-7199:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132401144
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
+   return parameters("CirculantGraph", output, "--vertex_count", 
"42", "--range0", "13:4");
+   }
+
@Test
public void testHashWithCirculantGraph() throws Exception {
-   expectedChecksum(
-   parameters("CirculantGraph", "hash", "--vertex_count", 
"42", "--range0", "13:4"),
-   168, 0x0001ae80);
+   expectedChecksum(getCirculantGraphParamters("hash"), 168, 
0x0001ae80);
}
 
@Test
public void testPrintWithCirculantGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CirculantGraph", "print", "--vertex_count", 
"42", "--range0", "13:4"),
-   new Checksum(168, 0x004bdcc52cbcL));
+   expectedOutputChecksum(getCirculantGraphParamters("print"), new 
Checksum(168, 0x004bdcc52cbcL));
+   }
+
+   @Test
+   public void testParallelismWithCirculantGraph() throws Exception {
+   
TestUtils.verifyParallelism(getCirculantGraphParamters("print"));
+   }
+
+   // CompleteGraph
+
+   private String[] getCompleteGraphParamters(String output) {
+   return parameters("CompleteGraph", output, "--vertex_count", 
"42");
}
 
@Test
public void testHashWithCompleteGraph() throws Exception {
-   expectedChecksum(
-   parameters("CompleteGraph", "hash", "--vertex_count", 
"42"),
-   1722, 0x00113ca0L);
+   expectedChecksum(getCompleteGraphParamters("hash"), 1722, 
0x00113ca0L);
}
 
@Test
public void testPrintWithCompleteGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CompleteGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(1722, 0x031109a0c398L));
+   expectedOutputChecksum(getCompleteGraphParamters("print"), new 
Checksum(1722, 0x031109a0c398L));
+   }
+
+   @Test
+   public void testParallelismWithCompleteGraph() throws Exception {
+   TestUtils.verifyParallelism(getCompleteGraphParamters("print"));
+   }
+
+   // CycleGraph
+
+   private String[] getCycleGraphParamters(String output) {
+   return parameters("CycleGraph", output, "--vertex_count", "42");
}
 
@Test
public void testHashWithCycleGraph() throws Exception {
-   expectedChecksum(
-   parameters("CycleGraph", "hash", "--vertex_count", 
"42"),
-   84, 0xd740L);
+   expectedChecksum(getCycleGraphParamters("hash"), 84, 
0xd740L);
}
 
@Test
public void testPrintWithCycleGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CycleGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(84, 0x00272a136fcaL));
+   expectedOutputChecksum(getCycleGraphParamters("print"), new 
Checksum(84, 0x00272a136fcaL));
+   }
+
+   @Test
+   public void testParallelismWithCycleGraph() throws Exception {
+   TestUtils.verifyParallelism(getCycleGraphParamters("print"));
+   }
+
+   // EchoGraph
+
+   private String[] getEchoGraphParamters(String output) {
+   return parameters("EchoGraph", output, "--vertex_count", "42", 
"--vertex_degree", "13");
}
 
@Test
public void testHashWithEchoGraph() throws Exception {
-   expectedChecksum(
-  

[jira] [Commented] (FLINK-7199) Graph simplification does not set parallelism

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121342#comment-16121342
 ] 

ASF GitHub Bot commented on FLINK-7199:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132401474
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/test/java/org/apache/flink/graph/drivers/EdgeListITCase.java
 ---
@@ -56,214 +56,304 @@ public void testLongDescription() throws Exception {
ProgramParametrizationException.class);
}
 
+   // CirculantGraph
+
+   private String[] getCirculantGraphParamters(String output) {
+   return parameters("CirculantGraph", output, "--vertex_count", 
"42", "--range0", "13:4");
+   }
+
@Test
public void testHashWithCirculantGraph() throws Exception {
-   expectedChecksum(
-   parameters("CirculantGraph", "hash", "--vertex_count", 
"42", "--range0", "13:4"),
-   168, 0x0001ae80);
+   expectedChecksum(getCirculantGraphParamters("hash"), 168, 
0x0001ae80);
}
 
@Test
public void testPrintWithCirculantGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CirculantGraph", "print", "--vertex_count", 
"42", "--range0", "13:4"),
-   new Checksum(168, 0x004bdcc52cbcL));
+   expectedOutputChecksum(getCirculantGraphParamters("print"), new 
Checksum(168, 0x004bdcc52cbcL));
+   }
+
+   @Test
+   public void testParallelismWithCirculantGraph() throws Exception {
+   
TestUtils.verifyParallelism(getCirculantGraphParamters("print"));
+   }
+
+   // CompleteGraph
+
+   private String[] getCompleteGraphParamters(String output) {
+   return parameters("CompleteGraph", output, "--vertex_count", 
"42");
}
 
@Test
public void testHashWithCompleteGraph() throws Exception {
-   expectedChecksum(
-   parameters("CompleteGraph", "hash", "--vertex_count", 
"42"),
-   1722, 0x00113ca0L);
+   expectedChecksum(getCompleteGraphParamters("hash"), 1722, 
0x00113ca0L);
}
 
@Test
public void testPrintWithCompleteGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CompleteGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(1722, 0x031109a0c398L));
+   expectedOutputChecksum(getCompleteGraphParamters("print"), new 
Checksum(1722, 0x031109a0c398L));
+   }
+
+   @Test
+   public void testParallelismWithCompleteGraph() throws Exception {
+   TestUtils.verifyParallelism(getCompleteGraphParamters("print"));
+   }
+
+   // CycleGraph
+
+   private String[] getCycleGraphParamters(String output) {
+   return parameters("CycleGraph", output, "--vertex_count", "42");
}
 
@Test
public void testHashWithCycleGraph() throws Exception {
-   expectedChecksum(
-   parameters("CycleGraph", "hash", "--vertex_count", 
"42"),
-   84, 0xd740L);
+   expectedChecksum(getCycleGraphParamters("hash"), 84, 
0xd740L);
}
 
@Test
public void testPrintWithCycleGraph() throws Exception {
// skip 'char' since it is not printed as a number
Assume.assumeFalse(idType.equals("char") || 
idType.equals("nativeChar"));
 
-   expectedOutputChecksum(
-   parameters("CycleGraph", "print", "--vertex_count", 
"42"),
-   new Checksum(84, 0x00272a136fcaL));
+   expectedOutputChecksum(getCycleGraphParamters("print"), new 
Checksum(84, 0x00272a136fcaL));
+   }
+
+   @Test
+   public void testParallelismWithCycleGraph() throws Exception {
+   TestUtils.verifyParallelism(getCycleGraphParamters("print"));
+   }
+
+   // EchoGraph
+
+   private String[] getEchoGraphParamters(String output) {
+   return parameters("EchoGraph", output, "--vertex_count", "42", 
"--vertex_degree", "13");
}
 
@Test
public void testHashWithEchoGraph() throws Exception {
-   expectedChecksum(
-  

[GitHub] flink pull request #4511: [FLINK-7396] Don't put multiple directories in HAD...

2017-08-10 Thread zjureel
GitHub user zjureel opened a pull request:

https://github.com/apache/flink/pull/4511

[FLINK-7396] Don't put multiple directories in HADOOP_CONF_DIR in config.sh

## What is the purpose of the change

Fix config.sh so that it does not put multiple directories in HADOOP_CONF_DIR.


## Brief change log

  - *Check whether HADOOP_CONF_DIR is already set before appending a directory to it (see the sketch below)*
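
For reference, a minimal sketch of the behavior described above, reusing the
layout checks from the config.sh snippet quoted in the JIRA issue (a sketch
of the intent, not necessarily the exact patch):

{code}
# Derive HADOOP_CONF_DIR from HADOOP_HOME only when it is not already set,
# so the variable always ends up holding a single directory.
if [ -z "$HADOOP_CONF_DIR" ] && [ -n "$HADOOP_HOME" ]; then
    if [ -d "$HADOOP_HOME/conf" ]; then
        # Hadoop 1.x layout
        HADOOP_CONF_DIR="$HADOOP_HOME/conf"
    elif [ -d "$HADOOP_HOME/etc/hadoop" ]; then
        # Hadoop 2.2+ layout
        HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"
    fi
fi
{code}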


## Verifying this change

No test case

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable / docs / 
JavaDocs / not documented)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zjureel/flink FLINK-7396

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4511.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4511


commit 4d25ba8abad862a5cd04812311043de7f83cac89
Author: zjureel 
Date:   2017-08-10T08:52:43Z

[FLINK-7396] Fix don't put multiple directories in HADOOP_CONF_DIR in 
config.sh






[GitHub] flink issue #4511: [FLINK-7396] Don't put multiple directories in HADOOP_CON...

2017-08-10 Thread zjureel
Github user zjureel commented on the issue:

https://github.com/apache/flink/pull/4511
  
@aljoscha What do you think of this change for 
[FLINK-7396](https://issues.apache.org/jira/browse/FLINK-7396)? Thanks!




[jira] [Commented] (FLINK-7396) Don't put multiple directories in HADOOP_CONF_DIR in config.sh

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121357#comment-16121357
 ] 

ASF GitHub Bot commented on FLINK-7396:
---

GitHub user zjureel opened a pull request:

https://github.com/apache/flink/pull/4511

[FLINK-7396] Don't put multiple directories in HADOOP_CONF_DIR in config.sh

## What is the purpose of the change

Fix config.sh so that it does not put multiple directories in HADOOP_CONF_DIR.


## Brief change log

  - *Check whether HADOOP_CONF_DIR is already set before appending a directory to it*


## Verifying this change

No test case

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable / docs / 
JavaDocs / not documented)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zjureel/flink FLINK-7396

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4511.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4511


commit 4d25ba8abad862a5cd04812311043de7f83cac89
Author: zjureel 
Date:   2017-08-10T08:52:43Z

[FLINK-7396] Fix don't put multiple directories in HADOOP_CONF_DIR in 
config.sh




> Don't put multiple directories in HADOOP_CONF_DIR in config.sh
> --
>
> Key: FLINK-7396
> URL: https://issues.apache.org/jira/browse/FLINK-7396
> Project: Flink
>  Issue Type: Bug
>  Components: Startup Shell Scripts
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Aljoscha Krettek
>Priority: Blocker
> Fix For: 1.4.0, 1.3.3
>
>
> In config.sh we do this:
> {code}
> # Check if deprecated HADOOP_HOME is set.
> if [ -n "$HADOOP_HOME" ]; then
> # HADOOP_HOME is set. Check if its a Hadoop 1.x or 2.x HADOOP_HOME path
> if [ -d "$HADOOP_HOME/conf" ]; then
> # its a Hadoop 1.x
> HADOOP_CONF_DIR="$HADOOP_CONF_DIR:$HADOOP_HOME/conf"
> fi
> if [ -d "$HADOOP_HOME/etc/hadoop" ]; then
> # Its Hadoop 2.2+
> HADOOP_CONF_DIR="$HADOOP_CONF_DIR:$HADOOP_HOME/etc/hadoop"
> fi
> fi
> {code}
> while our {{HadoopFileSystem}} actually treats these paths as a single 
> path, not as a colon-separated list: 
> https://github.com/apache/flink/blob/854b05376a459a6197e41e141bb28a9befe481ad/flink-runtime/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopFileSystem.java#L236
> I also think that other tools don't assume multiple paths in there and at 
> least one user ran into the problem on their setup.





[jira] [Commented] (FLINK-7396) Don't put multiple directories in HADOOP_CONF_DIR in config.sh

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121359#comment-16121359
 ] 

ASF GitHub Bot commented on FLINK-7396:
---

Github user zjureel commented on the issue:

https://github.com/apache/flink/pull/4511
  
@aljoscha What do you think of this change for 
[FLINK-7396](https://issues.apache.org/jira/browse/FLINK-7396)? Thanks!


> Don't put multiple directories in HADOOP_CONF_DIR in config.sh
> --
>
> Key: FLINK-7396
> URL: https://issues.apache.org/jira/browse/FLINK-7396
> Project: Flink
>  Issue Type: Bug
>  Components: Startup Shell Scripts
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Aljoscha Krettek
>Priority: Blocker
> Fix For: 1.4.0, 1.3.3
>
>
> In config.sh we do this:
> {code}
> # Check if deprecated HADOOP_HOME is set.
> if [ -n "$HADOOP_HOME" ]; then
> # HADOOP_HOME is set. Check if its a Hadoop 1.x or 2.x HADOOP_HOME path
> if [ -d "$HADOOP_HOME/conf" ]; then
> # its a Hadoop 1.x
> HADOOP_CONF_DIR="$HADOOP_CONF_DIR:$HADOOP_HOME/conf"
> fi
> if [ -d "$HADOOP_HOME/etc/hadoop" ]; then
> # Its Hadoop 2.2+
> HADOOP_CONF_DIR="$HADOOP_CONF_DIR:$HADOOP_HOME/etc/hadoop"
> fi
> fi
> {code}
> while our {{HadoopFileSystem}} actually treats these paths as a single 
> path, not as a colon-separated list: 
> https://github.com/apache/flink/blob/854b05376a459a6197e41e141bb28a9befe481ad/flink-runtime/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopFileSystem.java#L236
> I also think that other tools don't assume multiple paths in there and at 
> least one user ran into the problem on their setup.





[jira] [Assigned] (FLINK-7396) Don't put multiple directories in HADOOP_CONF_DIR in config.sh

2017-08-10 Thread Fang Yong (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fang Yong reassigned FLINK-7396:


Assignee: Fang Yong

> Don't put multiple directories in HADOOP_CONF_DIR in config.sh
> --
>
> Key: FLINK-7396
> URL: https://issues.apache.org/jira/browse/FLINK-7396
> Project: Flink
>  Issue Type: Bug
>  Components: Startup Shell Scripts
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Aljoscha Krettek
>Assignee: Fang Yong
>Priority: Blocker
> Fix For: 1.4.0, 1.3.3
>
>
> In config.sh we do this:
> {code}
> # Check if deprecated HADOOP_HOME is set.
> if [ -n "$HADOOP_HOME" ]; then
> # HADOOP_HOME is set. Check if its a Hadoop 1.x or 2.x HADOOP_HOME path
> if [ -d "$HADOOP_HOME/conf" ]; then
> # its a Hadoop 1.x
> HADOOP_CONF_DIR="$HADOOP_CONF_DIR:$HADOOP_HOME/conf"
> fi
> if [ -d "$HADOOP_HOME/etc/hadoop" ]; then
> # Its Hadoop 2.2+
> HADOOP_CONF_DIR="$HADOOP_CONF_DIR:$HADOOP_HOME/etc/hadoop"
> fi
> fi
> {code}
> while our {{HadoopFileSystem}} actually treats these paths as a single 
> path, not as a colon-separated list: 
> https://github.com/apache/flink/blob/854b05376a459a6197e41e141bb28a9befe481ad/flink-runtime/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopFileSystem.java#L236
> I also think that other tools don't assume multiple paths in there and at 
> least one user ran into the problem on their setup.





[jira] [Created] (FLINK-7408) Extract WebRuntimeMonitor options from JobManagerOptions

2017-08-10 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-7408:


 Summary: Extract WebRuntimeMonitor options from JobManagerOptions
 Key: FLINK-7408
 URL: https://issues.apache.org/jira/browse/FLINK-7408
 Project: Flink
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
Priority: Trivial


With the Flip-6 code changes, the {{WebRuntimeMonitor}} won't run exclusively 
next to the {{JobManager}}. Therefore, it makes sense to refactor the web 
monitor options by moving them from {{JobManagerOptions}} to {{WebOptions}} 
and removing the {{jobmanager}} prefix.

This is done as requested by 
https://github.com/apache/flink/pull/4492#issuecomment-321271819.
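
For context, a hedged sketch of the option style this refactoring implies
(the key and default below are illustrative; the authoritative values are in
the pull request):

{code}
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

public class WebOptions {

	// The option moves out of JobManagerOptions, drops the "jobmanager."
	// prefix, and keeps the old key as a deprecated fallback so existing
	// configurations keep working.
	public static final ConfigOption<Integer> PORT =
		ConfigOptions.key("web.port")
			.defaultValue(8081)
			.withDeprecatedKeys("jobmanager.web.port");
}
{code}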





[GitHub] flink pull request #4512: [FLINK-7408] [conf] Create WebOptions for WebRunti...

2017-08-10 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/4512

[FLINK-7408] [conf] Create WebOptions for WebRuntimeMonitor

## What is the purpose of the change

This commit moves the WebRuntimeMonitor related configuration options from
JobManagerOptions to WebOptions. Moreover, it removes the prefix jobmanager.

As you've requested, @zentol.

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink portWebMonitorOptions

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4512.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4512


commit 34c0fcb40f9704f6eab6a9cc520e9a27b91ff0d0
Author: Till Rohrmann 
Date:   2017-08-10T09:42:09Z

[FLINK-7408] [conf] Create WebOptions for WebRuntimeMonitor

This commit moves the WebRuntimeMonitor-related configuration options from
JobManagerOptions to WebOptions. Moreover, it removes the jobmanager prefix.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7408) Extract WebRuntimeMonitor options from JobManagerOptions

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121372#comment-16121372
 ] 

ASF GitHub Bot commented on FLINK-7408:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/4512

[FLINK-7408] [conf] Create WebOptions for WebRuntimeMonitor

## What is the purpose of the change

This commit moves the WebRuntimeMonitor-related configuration options from
JobManagerOptions to WebOptions. Moreover, it removes the jobmanager prefix.

As you've requested @zentol.

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink portWebMonitorOptions

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4512.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4512


commit 34c0fcb40f9704f6eab6a9cc520e9a27b91ff0d0
Author: Till Rohrmann 
Date:   2017-08-10T09:42:09Z

[FLINK-7408] [conf] Create WebOptions for WebRuntimeMonitor

This commit moves the WebRuntimeMonitor-related configuration options from
JobManagerOptions to WebOptions. Moreover, it removes the jobmanager prefix.




> Extract WebRuntimeMonitor options from JobManagerOptions
> 
>
> Key: FLINK-7408
> URL: https://issues.apache.org/jira/browse/FLINK-7408
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Trivial
>
> With the Flip-6 code changes, the {{WebRuntimeMonitor}} won't run exclusively 
> next to the {{JobManager}}. Therefore, it makes sense to refactor the web 
> monitor options by moving them from {{JobManagerOptions}} to {{WebOptions}} 
> and removing the {{jobmanager}} prefix. 
> This is done as requested by 
> https://github.com/apache/flink/pull/4492#issuecomment-321271819.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4492: [FLINK-7381] [web] Decouple WebRuntimeMonitor from ActorG...

2017-08-10 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4492
  
@zentol I've created the [WebOptions PR](#4512) and rebased onto that as 
you've requested.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7381) Decouple WebRuntimeMonitor from ActorGateway

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121377#comment-16121377
 ] 

ASF GitHub Bot commented on FLINK-7381:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4492
  
@zentol I've created the [WebOptions PR](#4512) and rebased onto that as 
you've requested.


> Decouple WebRuntimeMonitor from ActorGateway
> 
>
> Key: FLINK-7381
> URL: https://issues.apache.org/jira/browse/FLINK-7381
> Project: Flink
>  Issue Type: Sub-task
>  Components: Webfrontend
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
>
> The {{WebRuntimeMonitor}} has a hard wired dependency on the {{ActorGateway}} 
> in order to communicate with the {{JobManager}}. In order to make it work 
> with the {{JobMaster}} (Flip-6), we have to abstract this dependency away. I 
> propose to add a {{JobManagerGateway}} interface which can be implemented 
> using Akka for the old {{JobManager}} code. The Flip-6 {{JobMasterGateway}} 
> can then directly inherit from this interface.
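>
> A rough sketch of the proposed abstraction (a hedged illustration; the method 
> shown is a stand-in, not the final interface):
> {code}
> public interface JobManagerGateway {
> 	// One representative request; the real interface would expose whatever
> 	// the WebRuntimeMonitor needs from the JobManager.
> 	CompletableFuture<AccessExecutionGraph> requestJob(JobID jobId, Time timeout);
> }
> {code}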



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7409) WebRuntimeMonitor blocks serving threads

2017-08-10 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-7409:


 Summary: WebRuntimeMonitor blocks serving threads
 Key: FLINK-7409
 URL: https://issues.apache.org/jira/browse/FLINK-7409
 Project: Flink
  Issue Type: Improvement
  Components: Webfrontend
Affects Versions: 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann


The {{WebRuntimeMonitor}} contains a lot of blocking operations where it 
retrieves a result from the {{JobManager}} and then waits on the future to 
obtain the result. This is not a good design since it blocks serving threads. 
Instead I propose to follow a more reactive approach where the {{RequestHandler}} 
returns a {{CompletableFuture}} of {{FullHttpResponse}} which is then written out 
to the channel in a completion handler.
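
A hedged sketch of that style (handler and channel names are illustrative, not 
the actual WebRuntimeMonitor API):

{code}
CompletableFuture<FullHttpResponse> responseFuture = handler.handleRequest(request);
responseFuture.whenComplete((response, failure) -> {
	if (failure == null) {
		channelHandlerContext.writeAndFlush(response); // serving thread never blocks
	} else {
		// errorResponse(...) is a hypothetical helper building a failure response
		channelHandlerContext.writeAndFlush(errorResponse(failure));
	}
});
{code}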



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7410) Add getName method to UserDefinedFunction

2017-08-10 Thread Hequn Cheng (JIRA)
Hequn Cheng created FLINK-7410:
--

 Summary: Add getName method to UserDefinedFunction
 Key: FLINK-7410
 URL: https://issues.apache.org/jira/browse/FLINK-7410
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: Hequn Cheng
Assignee: Hequn Cheng


*Motivation*

Operator names set in the Table API are used for visualization and logging, so it 
is important to make these names simple and readable. Currently, a 
UserDefinedFunction's name contains the class canonical name and an MD5 value, 
making the name too long and unfriendly to users. 

As shown in the following example, 
select: (a, b, c, 
org$apache$flink$table$expressions$utils$RichFunc1$281f7e61ec5d8da894f5783e2e17a4f5(a)
 AS _c3, 
org$apache$flink$table$expressions$utils$RichFunc2$fb99077e565685ebc5f48b27edc14d98(c)
 AS _c4)


*Changes:*

Provide a getName method for UserDefinedFunction. The method will return the class 
name by default. Users can also override the method to return whatever they want.
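
A minimal sketch of the proposed default (illustrative; this method does not exist 
yet, and its exact signature is part of the proposal):

{code}
public String getName() {
	return getClass().getSimpleName();
}
{code}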

What do you think [~fhueske] ?




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7410) Add getName method to UserDefinedFunction

2017-08-10 Thread Hequn Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng updated FLINK-7410:
---
Description: 
*Motivation*

Operator names set in the Table API are used for visualization and logging, so it 
is important to make these names simple and readable. Currently, a 
UserDefinedFunction's name contains the class canonical name and an MD5 value, 
making the name too long and unfriendly to users. 

As shown in the following example, 
{quote}
select: (a, b, c, 
org$apache$flink$table$expressions$utils$RichFunc1$281f7e61ec5d8da894f5783e2e17a4f5(a)
 AS _c3, 
org$apache$flink$table$expressions$utils$RichFunc2$fb99077e565685ebc5f48b27edc14d98(c)
 AS _c4)
{quote}

*Changes:*

Provide a getName method for UserDefinedFunction. The method will return the class 
name by default. Users can also override the method to return whatever they want.

What do you think [~fhueske] ?


  was:
*Motivation*

Operator names set in the Table API are used for visualization and logging, so it 
is important to make these names simple and readable. Currently, a 
UserDefinedFunction's name contains the class canonical name and an MD5 value, 
making the name too long and unfriendly to users. 

As shown in the following example, 
select: (a, b, c, 
org$apache$flink$table$expressions$utils$RichFunc1$281f7e61ec5d8da894f5783e2e17a4f5(a)
 AS _c3, 
org$apache$flink$table$expressions$utils$RichFunc2$fb99077e565685ebc5f48b27edc14d98(c)
 AS _c4)


*Changes:*

Provide a getName method for UserDefinedFunction. The method will return the class 
name by default. Users can also override the method to return whatever they want.

What do you think [~fhueske] ?



> Add getName method to UserDefinedFunction
> -
>
> Key: FLINK-7410
> URL: https://issues.apache.org/jira/browse/FLINK-7410
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>
> *Motivation*
> Operator names set in the Table API are used for visualization and logging, so 
> it is important to make these names simple and readable. Currently, a 
> UserDefinedFunction's name contains the class canonical name and an MD5 value, 
> making the name too long and unfriendly to users. 
> As shown in the following example, 
> {quote}
> select: (a, b, c, 
> org$apache$flink$table$expressions$utils$RichFunc1$281f7e61ec5d8da894f5783e2e17a4f5(a)
>  AS _c3, 
> org$apache$flink$table$expressions$utils$RichFunc2$fb99077e565685ebc5f48b27edc14d98(c)
>  AS _c4)
> {quote}
> *Changes:*
>   
> Provide a getName method for UserDefinedFunction. The method will return the 
> class name by default. Users can also override the method to return whatever 
> they want.
> What do you think [~fhueske] ?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4512: [FLINK-7408] [conf] Create WebOptions for WebRunti...

2017-08-10 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4512#discussion_r132426478
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/WebRuntimeMonitor.java
 ---
@@ -214,7 +214,7 @@ public WebRuntimeMonitor(
ExecutionContextExecutor context = 
ExecutionContext$.MODULE$.fromExecutor(executorService);
 
// Config to enable https access to the web-ui
-   boolean enableSSL = 
config.getBoolean(JobManagerOptions.WEB_SSL_ENABLED) && 
SSLUtils.getSSLEnabled(config);
+   boolean enableSSL = config.getBoolean(WebOptions.SSL_ENABLED) 
&&SSLUtils.getSSLEnabled(config);
--- End diff --

whitespace after `&&` seems off.
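
i.e. (sketch of the corrected spacing):

    boolean enableSSL = config.getBoolean(WebOptions.SSL_ENABLED) && SSLUtils.getSSLEnabled(config);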


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4512: [FLINK-7408] [conf] Create WebOptions for WebRunti...

2017-08-10 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4512#discussion_r132426290
  
--- Diff: 
flink-yarn/src/main/java/org/apache/flink/yarn/entrypoint/YarnEntrypointUtils.java
 ---
@@ -104,8 +105,8 @@ public static Configuration loadConfiguration(String 
workingDirectory, Map<String, String> env) {
-   if (configuration.getInteger(JobManagerOptions.WEB_PORT, 0) >= 0) {
-   configuration.setInteger(JobManagerOptions.WEB_PORT, 0);
+   if (configuration.getInteger(WebOptions.PORT.key(), 0) >= 0) {
--- End diff --

drop call to `key()`
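
A sketch of the suggested form (assuming the `Configuration` overload that takes 
a `ConfigOption` plus an override default):

    // pass the ConfigOption itself instead of its key string
    if (configuration.getInteger(WebOptions.PORT, 0) >= 0) {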


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4512: [FLINK-7408] [conf] Create WebOptions for WebRunti...

2017-08-10 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4512#discussion_r132426338
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/BootstrapTools.java
 ---
@@ -188,7 +189,7 @@ public static WebMonitor startWebMonitorIfConfigured(
config.setString(JobManagerOptions.ADDRESS, 
address.host().get());
config.setInteger(JobManagerOptions.PORT, 
Integer.parseInt(address.port().get().toString()));
 
-   if (config.getInteger(JobManagerOptions.WEB_PORT.key(), 0) >= 
0) {
+   if (config.getInteger(WebOptions.PORT.key(), 0) >= 0) {
--- End diff --

drop call to `key()`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4512: [FLINK-7408] [conf] Create WebOptions for WebRunti...

2017-08-10 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4512#discussion_r132426175
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/minicluster/FlinkMiniCluster.scala
 ---
@@ -388,7 +388,7 @@ abstract class FlinkMiniCluster(
 : Option[WebMonitor] = {
 if(
   config.getBoolean(ConfigConstants.LOCAL_START_WEBSERVER, false) &&
-config.getInteger(JobManagerOptions.WEB_PORT.key(), 0) >= 0) {
+config.getInteger(WebOptions.PORT.key(), 0) >= 0) {
--- End diff --

you can drop the call to `key()`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4512: [FLINK-7408] [conf] Create WebOptions for WebRunti...

2017-08-10 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4512#discussion_r132426305
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
 ---
@@ -2216,7 +2216,7 @@ object JobManager {
 : (ActorRef, ActorRef, Option[WebMonitor], Option[ActorRef]) = {
 
 val webMonitor: Option[WebMonitor] =
-  if (configuration.getInteger(JobManagerOptions.WEB_PORT.key(), 0) >= 
0) {
+  if (configuration.getInteger(WebOptions.PORT.key(), 0) >= 0) {
--- End diff --

drop call to `key()`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7408) Extract WebRuntimeMonitor options from JobManagerOptions

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121476#comment-16121476
 ] 

ASF GitHub Bot commented on FLINK-7408:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4512#discussion_r132426338
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/BootstrapTools.java
 ---
@@ -188,7 +189,7 @@ public static WebMonitor startWebMonitorIfConfigured(
config.setString(JobManagerOptions.ADDRESS, 
address.host().get());
config.setInteger(JobManagerOptions.PORT, 
Integer.parseInt(address.port().get().toString()));
 
-   if (config.getInteger(JobManagerOptions.WEB_PORT.key(), 0) >= 
0) {
+   if (config.getInteger(WebOptions.PORT.key(), 0) >= 0) {
--- End diff --

drop call to `key()`


> Extract WebRuntimeMonitor options from JobManagerOptions
> 
>
> Key: FLINK-7408
> URL: https://issues.apache.org/jira/browse/FLINK-7408
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Trivial
>
> With the Flip-6 code changes, the {{WebRuntimeMonitor}} won't run exclusively 
> next to the {{JobManager}}. Therefore, it makes sense to refactor the web 
> monitor options and moving them from {{JobManagerOptions}} to {{WebOptions}} 
> and removing the prefix {{jobmanager}}. 
> This is done as requested by 
> https://github.com/apache/flink/pull/4492#issuecomment-321271819.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7408) Extract WebRuntimeMonitor options from JobManagerOptions

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121479#comment-16121479
 ] 

ASF GitHub Bot commented on FLINK-7408:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4512#discussion_r132426263
  
--- Diff: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnApplicationMasterRunner.java 
---
@@ -510,8 +511,8 @@ private static Configuration createConfiguration(String 
baseDirectory, Map<String, String> additional) {
-   if (configuration.getInteger(JobManagerOptions.WEB_PORT, 0) >= 0) {
-   configuration.setInteger(JobManagerOptions.WEB_PORT, 0);
+   if (configuration.getInteger(WebOptions.PORT.key(), 0) >= 0) {
--- End diff --

drop call to `key()`


> Extract WebRuntimeMonitor options from JobManagerOptions
> 
>
> Key: FLINK-7408
> URL: https://issues.apache.org/jira/browse/FLINK-7408
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Trivial
>
> With the Flip-6 code changes, the {{WebRuntimeMonitor}} won't run exclusively 
> next to the {{JobManager}}. Therefore, it makes sense to refactor the web 
> monitor options by moving them from {{JobManagerOptions}} to {{WebOptions}} 
> and removing the {{jobmanager}} prefix. 
> This is done as requested by 
> https://github.com/apache/flink/pull/4492#issuecomment-321271819.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4512: [FLINK-7408] [conf] Create WebOptions for WebRunti...

2017-08-10 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4512#discussion_r132426263
  
--- Diff: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnApplicationMasterRunner.java 
---
@@ -510,8 +511,8 @@ private static Configuration createConfiguration(String 
baseDirectory, Map<String, String> additional) {
-   if (configuration.getInteger(JobManagerOptions.WEB_PORT, 0) >= 0) {
-   configuration.setInteger(JobManagerOptions.WEB_PORT, 0);
+   if (configuration.getInteger(WebOptions.PORT.key(), 0) >= 0) {
--- End diff --

drop call to `key()`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7408) Extract WebRuntimeMonitor options from JobManagerOptions

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121478#comment-16121478
 ] 

ASF GitHub Bot commented on FLINK-7408:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4512#discussion_r132426290
  
--- Diff: 
flink-yarn/src/main/java/org/apache/flink/yarn/entrypoint/YarnEntrypointUtils.java
 ---
@@ -104,8 +105,8 @@ public static Configuration loadConfiguration(String 
workingDirectory, Map<String, String> env) {
-   if (configuration.getInteger(JobManagerOptions.WEB_PORT, 0) >= 0) {
-   configuration.setInteger(JobManagerOptions.WEB_PORT, 0);
+   if (configuration.getInteger(WebOptions.PORT.key(), 0) >= 0) {
--- End diff --

drop call to `key()`


> Extract WebRuntimeMonitor options from JobManagerOptions
> 
>
> Key: FLINK-7408
> URL: https://issues.apache.org/jira/browse/FLINK-7408
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Trivial
>
> With the Flip-6 code changes, the {{WebRuntimeMonitor}} won't run exclusively 
> next to the {{JobManager}}. Therefore, it makes sense to refactor the web 
> monitor options by moving them from {{JobManagerOptions}} to {{WebOptions}} 
> and removing the {{jobmanager}} prefix. 
> This is done as requested by 
> https://github.com/apache/flink/pull/4492#issuecomment-321271819.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7408) Extract WebRuntimeMonitor options from JobManagerOptions

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121480#comment-16121480
 ] 

ASF GitHub Bot commented on FLINK-7408:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4512#discussion_r132426305
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala
 ---
@@ -2216,7 +2216,7 @@ object JobManager {
 : (ActorRef, ActorRef, Option[WebMonitor], Option[ActorRef]) = {
 
 val webMonitor: Option[WebMonitor] =
-  if (configuration.getInteger(JobManagerOptions.WEB_PORT.key(), 0) >= 
0) {
+  if (configuration.getInteger(WebOptions.PORT.key(), 0) >= 0) {
--- End diff --

drop call to `key()`


> Extract WebRuntimeMonitor options from JobManagerOptions
> 
>
> Key: FLINK-7408
> URL: https://issues.apache.org/jira/browse/FLINK-7408
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Trivial
>
> With the Flip-6 code changes, the {{WebRuntimeMonitor}} won't run exclusively 
> next to the {{JobManager}}. Therefore, it makes sense to refactor the web 
> monitor options by moving them from {{JobManagerOptions}} to {{WebOptions}} 
> and removing the {{jobmanager}} prefix. 
> This is done as requested by 
> https://github.com/apache/flink/pull/4492#issuecomment-321271819.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7408) Extract WebRuntimeMonitor options from JobManagerOptions

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121475#comment-16121475
 ] 

ASF GitHub Bot commented on FLINK-7408:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4512#discussion_r132426478
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/WebRuntimeMonitor.java
 ---
@@ -214,7 +214,7 @@ public WebRuntimeMonitor(
ExecutionContextExecutor context = 
ExecutionContext$.MODULE$.fromExecutor(executorService);
 
// Config to enable https access to the web-ui
-   boolean enableSSL = 
config.getBoolean(JobManagerOptions.WEB_SSL_ENABLED) && 
SSLUtils.getSSLEnabled(config);
+   boolean enableSSL = config.getBoolean(WebOptions.SSL_ENABLED) 
&&SSLUtils.getSSLEnabled(config);
--- End diff --

whitespace after `&&` seems off.


> Extract WebRuntimeMonitor options from JobManagerOptions
> 
>
> Key: FLINK-7408
> URL: https://issues.apache.org/jira/browse/FLINK-7408
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Trivial
>
> With the Flip-6 code changes, the {{WebRuntimeMonitor}} won't run exclusively 
> next to the {{JobManager}}. Therefore, it makes sense to refactor the web 
> monitor options by moving them from {{JobManagerOptions}} to {{WebOptions}} 
> and removing the {{jobmanager}} prefix. 
> This is done as requested by 
> https://github.com/apache/flink/pull/4492#issuecomment-321271819.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7408) Extract WebRuntimeMonitor options from JobManagerOptions

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121477#comment-16121477
 ] 

ASF GitHub Bot commented on FLINK-7408:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4512#discussion_r132426175
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/minicluster/FlinkMiniCluster.scala
 ---
@@ -388,7 +388,7 @@ abstract class FlinkMiniCluster(
 : Option[WebMonitor] = {
 if(
   config.getBoolean(ConfigConstants.LOCAL_START_WEBSERVER, false) &&
-config.getInteger(JobManagerOptions.WEB_PORT.key(), 0) >= 0) {
+config.getInteger(WebOptions.PORT.key(), 0) >= 0) {
--- End diff --

you can drop the call to `key()`.


> Extract WebRuntimeMonitor options from JobManagerOptions
> 
>
> Key: FLINK-7408
> URL: https://issues.apache.org/jira/browse/FLINK-7408
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Trivial
>
> With the Flip-6 code changes, the {{WebRuntimeMonitor}} won't run exclusively 
> next to the {{JobManager}}. Therefore, it makes sense to refactor the web 
> monitor options by moving them from {{JobManagerOptions}} to {{WebOptions}} 
> and removing the {{jobmanager}} prefix. 
> This is done as requested by 
> https://github.com/apache/flink/pull/4492#issuecomment-321271819.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4346: [FLINK-7199] [gelly] Graph simplification does not...

2017-08-10 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132427211
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/drivers/parameter/LongParameter.java
 ---
@@ -52,16 +52,6 @@ public LongParameter(ParameterizedBase owner, String 
name) {
public LongParameter setDefaultValue(long defaultValue) {
super.setDefaultValue(defaultValue);
 
-   if (hasMinimumValue) {
-   Util.checkParameter(defaultValue >= minimumValue,
-   "Default value (" + defaultValue + ") must be 
greater than or equal to minimum (" + minimumValue + ")");
-   }
-
-   if (hasMaximumValue) {
-   Util.checkParameter(defaultValue <= maximumValue,
-   "Default value (" + defaultValue + ") must be 
less than or equal to maximum (" + maximumValue + ")");
-   }
-
--- End diff --

Right, `ExecutionConfig.DEFAULT_PARALLELISM` is currently `-1`. We're still 
checking that user-supplied values lie within the min and max bounds; it's only 
when no value is given that the default is used, and that default is now 
permitted to lie outside the bounds.
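
A sketch of where the validation now lives (a hedged illustration following the 
Gelly driver parameter classes; not the exact diff):

    public void configure(ParameterTool parameterTool) {
        if (parameterTool.has(name)) {
            value = parameterTool.getLong(name);
            if (hasMinimumValue) {
                Util.checkParameter(value >= minimumValue,
                    name + " must be greater than or equal to " + minimumValue);
            }
            if (hasMaximumValue) {
                Util.checkParameter(value <= maximumValue,
                    name + " must be less than or equal to " + maximumValue);
            }
        } else {
            // the default (e.g. -1 = ExecutionConfig.DEFAULT_PARALLELISM) is
            // deliberately not bounds-checked
            value = defaultValue;
        }
    }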


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7199) Graph simplification does not set parallelism

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121482#comment-16121482
 ] 

ASF GitHub Bot commented on FLINK-7199:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4346#discussion_r132427211
  
--- Diff: 
flink-libraries/flink-gelly-examples/src/main/java/org/apache/flink/graph/drivers/parameter/LongParameter.java
 ---
@@ -52,16 +52,6 @@ public LongParameter(ParameterizedBase owner, String 
name) {
public LongParameter setDefaultValue(long defaultValue) {
super.setDefaultValue(defaultValue);
 
-   if (hasMinimumValue) {
-   Util.checkParameter(defaultValue >= minimumValue,
-   "Default value (" + defaultValue + ") must be 
greater than or equal to minimum (" + minimumValue + ")");
-   }
-
-   if (hasMaximumValue) {
-   Util.checkParameter(defaultValue <= maximumValue,
-   "Default value (" + defaultValue + ") must be 
less than or equal to maximum (" + maximumValue + ")");
-   }
-
--- End diff --

Right, `ExecutionConfig.DEFAULT_PARALLELISM` is currently `-1`. We're still 
checking that user-supplied values lie within the min and max bounds; it's only 
when no value is given that the default is used, and that default is now 
permitted to lie outside the bounds.


> Graph simplification does not set parallelism
> -
>
> Key: FLINK-7199
> URL: https://issues.apache.org/jira/browse/FLINK-7199
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Affects Versions: 1.3.1, 1.4.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> The {{Simplify}} parameter should accept and set the parallelism when calling 
> the {{Simplify}} algorithms.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4492: [FLINK-7381] [web] Decouple WebRuntimeMonitor from...

2017-08-10 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4492#discussion_r132427008
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/WebOptions.java ---
@@ -149,6 +149,13 @@
.defaultValue(50)

.withDeprecatedKeys("jobmanager.web.backpressure.delay-between-samples");
 
+   /**
+* Timeout for asynchronous operations by the WebRuntimeMonitor
+*/
   public static final ConfigOption<Long> TIMEOUT = ConfigOptions
--- End diff --

We should document the time-unit in some way.
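
One hedged way to do that (key name and default are illustrative): state the unit 
in the Javadoc and pick the default accordingly:

    /** Timeout in milliseconds for asynchronous operations of the WebRuntimeMonitor. */
    public static final ConfigOption<Long> TIMEOUT = ConfigOptions
        .key("web.timeout")
        .defaultValue(10_000L);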


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7381) Decouple WebRuntimeMonitor from ActorGateway

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121486#comment-16121486
 ] 

ASF GitHub Bot commented on FLINK-7381:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4492#discussion_r132427008
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/WebOptions.java ---
@@ -149,6 +149,13 @@
.defaultValue(50)

.withDeprecatedKeys("jobmanager.web.backpressure.delay-between-samples");
 
+   /**
+* Timeout for asynchronous operations by the WebRuntimeMonitor
+*/
   public static final ConfigOption<Long> TIMEOUT = ConfigOptions
--- End diff --

We should document the time-unit in some way.


> Decouple WebRuntimeMonitor from ActorGateway
> 
>
> Key: FLINK-7381
> URL: https://issues.apache.org/jira/browse/FLINK-7381
> Project: Flink
>  Issue Type: Sub-task
>  Components: Webfrontend
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
>
> The {{WebRuntimeMonitor}} has a hard wired dependency on the {{ActorGateway}} 
> in order to communicate with the {{JobManager}}. In order to make it work 
> with the {{JobMaster}} (Flip-6), we have to abstract this dependency away. I 
> propose to add a {{JobManagerGateway}} interface which can be implemented 
> using Akka for the old {{JobManager}} code. The Flip-6 {{JobMasterGateway}} 
> can then directly inherit from this interface.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7354) test instability in LocalFlinkMiniClusterITCase#testLocalFlinkMiniClusterWithMultipleTaskManagers

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121489#comment-16121489
 ] 

ASF GitHub Bot commented on FLINK-7354:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4464


> test instability in 
> LocalFlinkMiniClusterITCase#testLocalFlinkMiniClusterWithMultipleTaskManagers
> -
>
> Key: FLINK-7354
> URL: https://issues.apache.org/jira/browse/FLINK-7354
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.1.4, 1.2.1, 1.4.0, 1.3.2
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Critical
>  Labels: test-stability
>
> During {{mvn clean install}} on the 1.3.2 RC2, I found an inconsistently 
> failing test at 
> {{LocalFlinkMiniClusterITCase#testLocalFlinkMiniClusterWithMultipleTaskManagers}}:
> {code}
> testLocalFlinkMiniClusterWithMultipleTaskManagers(org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase)
>   Time elapsed: 34.978 sec  <<< FAILURE!
> java.lang.AssertionError: Thread 
> Thread[initialSeedUniquifierGenerator,5,main] was started by the mini 
> cluster, but not shut down
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase.testLocalFlinkMiniClusterWithMultipleTaskManagers(LocalFlinkMiniClusterITCase.java:168)
> {code}
> Searching the web for that error yields one previous thread on the dev list, 
> so this issue seems to exist in quite old versions of Flink, too, but 
> apparently it was never solved:
> https://lists.apache.org/thread.html/07ce439bf6d358bd3139541b52ef6b8e8af249a27e09ae10b6698f81@%3Cdev.flink.apache.org%3E
> Test environment: Debian 9, openjdk 1.8.0_141-8u141-b15-1~deb9u1-b15
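>
> A sketch of the kind of fix the PR title suggests (the surrounding test 
> internals are illustrative):
> {code}
> for (Thread thread : threadsAfterShutdown) {
> 	// Netty starts this seed-generator thread lazily; it is not owned by the
> 	// mini cluster, so it must not fail the leak check.
> 	if ("initialSeedUniquifierGenerator".equals(thread.getName())) {
> 		continue;
> 	}
> 	fail("Thread " + thread + " was started by the mini cluster, but not shut down");
> }
> {code}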



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4464: [FLINK-7354][tests] ignore "initialSeedUniquifierG...

2017-08-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4464


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7026) Add shaded asm dependency

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121487#comment-16121487
 ] 

ASF GitHub Bot commented on FLINK-7026:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4494


> Add shaded asm dependency
> -
>
> Key: FLINK-7026
> URL: https://issues.apache.org/jira/browse/FLINK-7026
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4494: [FLINK-7026] Introduce flink-shaded-asm-5

2017-08-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4494


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4075: [FLINK-6494] Migrate ResourceManager/Yarn/Mesos co...

2017-08-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4075


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6494) Migrate ResourceManager configuration options

2017-08-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121488#comment-16121488
 ] 

ASF GitHub Bot commented on FLINK-6494:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4075


> Migrate ResourceManager configuration options
> -
>
> Key: FLINK-6494
> URL: https://issues.apache.org/jira/browse/FLINK-6494
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination, ResourceManager
>Reporter: Chesnay Schepler
>Assignee: Fang Yong
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-7028) Replace asm dependencies

2017-08-10 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-7028.
---
Resolution: Fixed

1.4: 65391805933f52e9c99de4210c2f422bdc652a15

> Replace asm dependencies
> 
>
> Key: FLINK-7028
> URL: https://issues.apache.org/jira/browse/FLINK-7028
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

