[jira] [Created] (FLINK-5425) JobManager <host> replaced by IP in metrics

2017-01-06 Thread Shannon Carey (JIRA)
Shannon Carey created FLINK-5425:


 Summary: JobManager <host> replaced by IP in metrics
 Key: FLINK-5425
 URL: https://issues.apache.org/jira/browse/FLINK-5425
 Project: Flink
  Issue Type: Bug
  Components: Metrics
Affects Versions: 1.1.3
Reporter: Shannon Carey
Priority: Minor


In metrics at the jobmanager level and below, the "<host>" scope variable is 
replaced by the IP address rather than the hostname. The taskmanager metrics, 
meanwhile, use the hostname.

You can see the job manager behavior at 
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobManagerRunner.java#L147
 compared to TaskManagerLocation#getHostname().

The main problem is that the "." (period) characters in the IP address end up 
in the metric name, so the metric names show up strangely in Graphite/Grafana, 
where "." is the metric group separator.

If it's not possible to make jobmanager metrics use the hostname, I suggest 
replacing "." with "-" in the "<host>" section.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5424) Improve Restart Strategy Logging

2017-01-06 Thread Shannon Carey (JIRA)
Shannon Carey created FLINK-5424:


 Summary: Improve Restart Strategy Logging
 Key: FLINK-5424
 URL: https://issues.apache.org/jira/browse/FLINK-5424
 Project: Flink
  Issue Type: Improvement
  Components: Core
Reporter: Shannon Carey
Assignee: Shannon Carey
Priority: Minor


I'll be submitting a PR which includes some minor improvements to logging 
related to restart strategies.

Specifically, I added a toString so that the log contains better info about the 
failure-rate restart strategy, and I added an explanation in the log for when 
the restart strategy is responsible for preventing a job restart (currently, 
there is no indication that the restart strategy had anything to do with it).
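
For illustration, a minimal sketch of the kind of toString described above (the 
class and field names are hypothetical, not the actual FailureRateRestartStrategy):
{code}
import java.time.Duration;

// Hypothetical sketch; the real strategy class uses different fields and types.
class FailureRateRestartStrategySketch {
    private final int maxFailuresPerInterval;
    private final Duration failuresInterval;
    private final Duration delayBetweenRestarts;

    FailureRateRestartStrategySketch(int maxFailuresPerInterval,
                                     Duration failuresInterval,
                                     Duration delayBetweenRestarts) {
        this.maxFailuresPerInterval = maxFailuresPerInterval;
        this.failuresInterval = failuresInterval;
        this.delayBetweenRestarts = delayBetweenRestarts;
    }

    @Override
    public String toString() {
        // Show the configured limits instead of the default class@hashcode form,
        // so the log explains why a restart was or was not attempted.
        return "FailureRateRestartStrategy(maxFailuresPerInterval=" + maxFailuresPerInterval
                + ", failuresInterval=" + failuresInterval
                + ", delayBetweenRestarts=" + delayBetweenRestarts + ")";
    }

    public static void main(String[] args) {
        System.out.println(new FailureRateRestartStrategySketch(
                3, Duration.ofMinutes(5), Duration.ofSeconds(10)));
    }
}
{code}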



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5423) Implement Stochastic Outlier Selection

2017-01-06 Thread Fokko Driesprong (JIRA)
Fokko Driesprong created FLINK-5423:
---

 Summary: Implement Stochastic Outlier Selection
 Key: FLINK-5423
 URL: https://issues.apache.org/jira/browse/FLINK-5423
 Project: Flink
  Issue Type: Improvement
  Components: Machine Learning Library
Reporter: Fokko Driesprong


I've implemented the Stochastic Outlier Selection (SOS) algorithm by Jeroen 
Janssens:
http://jeroenjanssens.com/2013/11/24/stochastic-outlier-selection.html
It is integrated as much as possible with the components from the machine 
learning library.

The algorithm has been compared against four other algorithms, and the 
comparison shows that SOS performs better on most of the real-world datasets 
used.
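
For context, a heavily simplified, hypothetical Java sketch of the core SOS 
idea (a fixed Gaussian bandwidth stands in for the perplexity-calibrated 
affinities the real algorithm uses, and this is not the Flink ML implementation):
{code}
// Simplified SOS sketch: affinities from a fixed-bandwidth Gaussian kernel.
public class SosSketch {

    static double[] outlierProbabilities(double[][] points, double sigma) {
        int n = points.length;
        double[][] binding = new double[n][n];

        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int j = 0; j < n; j++) {
                if (i == j) continue;
                double d2 = 0.0;
                for (int k = 0; k < points[i].length; k++) {
                    double diff = points[i][k] - points[j][k];
                    d2 += diff * diff;
                }
                binding[i][j] = Math.exp(-d2 / (2 * sigma * sigma)); // affinity of i to j
                sum += binding[i][j];
            }
            for (int j = 0; j < n; j++) {
                binding[i][j] /= sum; // normalize to binding probabilities
            }
        }

        // A point is an outlier if no other point "binds" to it:
        // P(outlier_j) = product over i != j of (1 - b_ij).
        double[] outlierProb = new double[n];
        for (int j = 0; j < n; j++) {
            double p = 1.0;
            for (int i = 0; i < n; i++) {
                if (i != j) {
                    p *= (1.0 - binding[i][j]);
                }
            }
            outlierProb[j] = p;
        }
        return outlierProb;
    }

    public static void main(String[] args) {
        double[][] data = { {0, 0}, {0.1, 0}, {0, 0.1}, {5, 5} }; // last point is isolated
        for (double p : outlierProbabilities(data, 1.0)) {
            System.out.printf("%.3f%n", p); // the isolated point gets a probability near 1
        }
    }
}
{code}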



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5421) Explicit restore method in Snapshotable

2017-01-06 Thread Stefan Richter (JIRA)
Stefan Richter created FLINK-5421:
-

 Summary: Explicit restore method in Snapshotable
 Key: FLINK-5421
 URL: https://issues.apache.org/jira/browse/FLINK-5421
 Project: Flink
  Issue Type: Improvement
Reporter: Stefan Richter
Assignee: Stefan Richter


We should introduce an explicit {{restore(...)}} method to match the 
{{snapshot(...)}} method in this interface.

Currently, restore happens implicitly in backends, i.e. when state handles are 
provided, backends execute their restore logic in their constructors. This 
behaviour makes it hard for backends to participate in the task's lifecycle 
through {{CloseableRegistry}}, because we can only register backend objects 
after they have been constructed. As a result, for example, all restore 
operations that happen in the constructor are not responsive to cancellation.

When we introduce an explicit restore, we can first create a backend object, 
then register it, and only then run restore.
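
As a rough illustration, a minimal sketch of that lifecycle with hypothetical 
names (the actual {{Snapshotable}} and {{CloseableRegistry}} interfaces look 
different):
{code}
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Collection;
import java.util.Deque;

// Hypothetical names; not the actual Flink interfaces.
interface RestorableBackend extends Closeable {
    void restore(Collection<byte[]> stateHandles) throws IOException;
}

class SimpleCloseableRegistry implements Closeable {
    private final Deque<Closeable> resources = new ArrayDeque<>();

    void register(Closeable c) {
        resources.push(c);
    }

    @Override
    public void close() throws IOException {
        // Closing the registry closes registered backends, which can interrupt
        // an in-flight restore and make it responsive to cancellation.
        for (Closeable c : resources) {
            c.close();
        }
    }
}

class BackendLifecycleSketch {
    static void openBackend(RestorableBackend backend,
                            SimpleCloseableRegistry registry,
                            Collection<byte[]> stateHandles) throws IOException {
        // 1) The backend is constructed WITHOUT running restore.
        // 2) Register it, so cancellation can reach it from now on.
        registry.register(backend);
        // 3) Only then run the (possibly long-running) restore.
        backend.restore(stateHandles);
    }
}
{code}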



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5420) Make the CEP library rescalable.

2017-01-06 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-5420:
-

 Summary: Make the CEP library rescalable.
 Key: FLINK-5420
 URL: https://issues.apache.org/jira/browse/FLINK-5420
 Project: Flink
  Issue Type: Bug
  Components: CEP
Affects Versions: 1.2.0
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas
 Fix For: 1.2.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5419) Taskmanager metrics not accessible via REST

2017-01-06 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-5419:
---

 Summary: Taskmanager metrics not accessible via REST
 Key: FLINK-5419
 URL: https://issues.apache.org/jira/browse/FLINK-5419
 Project: Flink
  Issue Type: Bug
  Components: Metrics, Webfrontend
Affects Versions: 1.2.0, 1.3.0
Reporter: Chesnay Schepler
Priority: Blocker
 Fix For: 1.2.0, 1.3.0


There is currently a URL clash between the TaskManagersHandler and 
TaskManagerMetricsHandler, with both being routed to
{code}
/taskmanagers/:taskmanagerid/metrics
{code}
As a result, it is not possible to query the full set of metrics for a 
taskmanager, but only the hard-coded subset that is displayed on the metrics 
tab of the taskmanager page.

This is a side-effect of 6d53bbc4b92e651786ecc8c2c6dfeb8e450a16a3, which made 
the URLs more consistent. The TaskManager page in the web interface has 3 tabs: 
Metrics, Log and Stdout.

The URLs for these tabs are
{code}
/taskmanager/:taskmanagerid/metrics
/taskmanager/:taskmanagerid/log
/taskmanager/:taskmanagerid/stdout
{code}
which correspond to the REST URLs used. Previously, the metrics tab used 
{code}/taskmanager/:taskmanagerid{code}

However, 70704de0c82cbb7b143dd696221e11999feb3600 then exposed the metrics 
gathered by the metrics system through the REST API. The assumption was that 
general information for the taskmanagers is retrieved via 
/taskmanager/:taskmanagerid, similar to how the job-related URLs are 
structured, which sadly isn't the case.
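
Purely as an illustration of the clash, a tiny hypothetical map-based sketch 
(Flink's actual REST routing is Netty-based, not a map):
{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration only; not Flink's router implementation.
public class RouteClashSketch {
    public static void main(String[] args) {
        Map<String, String> routes = new LinkedHashMap<>();

        routes.put("/taskmanagers/:taskmanagerid/metrics", "TaskManagersHandler");
        // The second registration uses the SAME pattern, so it replaces the first
        // and only one handler remains reachable under this URL:
        routes.put("/taskmanagers/:taskmanagerid/metrics", "TaskManagerMetricsHandler");

        // Prints a single entry, showing that one handler has been lost.
        routes.forEach((path, handler) -> System.out.println(path + " -> " + handler));
    }
}
{code}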



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5418) Estimated row size does not support nested types

2017-01-06 Thread Timo Walther (JIRA)
Timo Walther created FLINK-5418:
---

 Summary: Estimated row size does not support nested types
 Key: FLINK-5418
 URL: https://issues.apache.org/jira/browse/FLINK-5418
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: Timo Walther
Assignee: Timo Walther


Operations that use 
{{org.apache.flink.table.plan.nodes.FlinkRel#estimateRowSize}} do not support 
nested types yet and fail with:

{code}
java.lang.AssertionError: Internal error: Error occurred while applying rule DataSetMinusRule

	at org.apache.calcite.util.Util.newInternal(Util.java:792)
	at org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:148)
	at org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:225)
	at org.apache.calcite.rel.convert.ConverterRule.onMatch(ConverterRule.java:117)
	at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:213)
	at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:819)
	at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:334)
	at org.apache.flink.table.api.BatchTableEnvironment.optimize(BatchTableEnvironment.scala:256)
	at org.apache.flink.table.api.BatchTableEnvironment.translate(BatchTableEnvironment.scala:288)
	at org.apache.flink.table.api.scala.BatchTableEnvironment.toDataSet(BatchTableEnvironment.scala:140)
	at org.apache.flink.table.api.scala.TableConversions.toDataSet(TableConversions.scala:40)
	at org.apache.flink.table.api.scala.batch.table.SetOperatorsITCase.testMinus(SetOperatorsITCase.scala:175)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runners.Suite.runChild(Suite.java:128)
	at org.junit.runners.Suite.runChild(Suite.java:27)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:117)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:42)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:253)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:84)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: org.apache.flink.table.api.TableException: Unsupported data 
{code}

DataStream and CEP - Pattern not matching after applying a window

2017-01-06 Thread madhairsilence
// Creating a window of ten items
WindowedStream<ObservationEvent, Tuple, GlobalWindow> windowStream =
        inputStream.keyBy("rackId").countWindow(10);

// Applying a window function that does some custom evaluation over all the
// values in the window
DataStream<ObservationEvent> inactivityStream = windowStream.apply(
        new WindowFunction<ObservationEvent, ObservationEvent, Tuple, GlobalWindow>() {

            @Override
            public void apply(Tuple tuple, GlobalWindow timeWindow,
                              Iterable<ObservationEvent> itr,
                              Collector<ObservationEvent> out) {
                // custom evaluation logic
                out.collect(new ObservationEvent(1, "temperature", "stable"));
            }
        });

// Defining a simple CEP pattern
Pattern<ObservationEvent, ObservationEvent> inactivityPattern =
        Pattern.<ObservationEvent>begin("first")
                .subtype(ObservationEvent.class)
                .where(new FilterFunction<ObservationEvent>() {

                    @Override
                    public boolean filter(ObservationEvent arg0) throws Exception {
                        System.out.println(arg0);
                        // This function is not called at all
                        return false;
                    }
                });

PatternStream<ObservationEvent> inactivityCEP =
        CEP.pattern(inactivityStream.keyBy("rackId"), inactivityPattern);
When I run this code, the filter function inside the where clause is not 
getting called at all. I have printed inactivityStream via 
inactivityStream.print() and I can see the matching value.

However, when I plug in the inputStream directly, without applying a window, 
the pattern does match.

I printed inputStream and the windowed output and I can see they both emit 
similar kinds of data.

What am I missing?



--
View this message in context: 
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DataStream-and-CEP-Pattern-not-matching-after-applying-a-window-tp15164.html
Sent from the Apache Flink Mailing List archive at Nabble.com.