[jira] [Assigned] (FLINK-3849) Add FilterableTableSource interface and translation rule

2017-03-09 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev reassigned FLINK-3849:


Assignee: (was: Anton Solovev)

> Add FilterableTableSource interface and translation rule
> 
>
> Key: FLINK-3849
> URL: https://issues.apache.org/jira/browse/FLINK-3849
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>
> Add a {{FilterableTableSource}} interface for {{TableSource}} implementations 
> which support filter push-down.
> The interface could look as follows
> {code}
> trait FilterableTableSource {
>   // returns unsupported predicate expression
>   def setPredicate(predicate: Expression): Expression
> }
> {code}
> In addition we need Calcite rules to push a predicate (or parts of it) into a 
> TableScan that refers to a {{FilterableTableSource}}. We might need to tweak 
> the cost model as well to push the optimizer in the right direction.
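The push-down contract sketched above can be illustrated outside Flink. The following is a hypothetical, Flink-free sketch: a toy source accepts a list of predicates and hands back the ones it cannot evaluate itself, which the planner would keep in a Filter node above the scan. The names `CsvLikeSource` and `setPredicates` are made up for illustration and are not the actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed contract: the source accepts predicates
// and returns the ones it cannot evaluate, so the planner keeps only those
// in the Filter above the scan. Not the actual Flink interface.
public class FilterPushDownSketch {

    /** A toy "table source" that can only evaluate predicates on the "id" column. */
    static class CsvLikeSource {
        final List<String> pushedDown = new ArrayList<>();

        /** Returns the predicates that remain for the planner's Filter node. */
        List<String> setPredicates(List<String> predicates) {
            List<String> unsupported = new ArrayList<>();
            for (String p : predicates) {
                if (p.startsWith("id")) {
                    pushedDown.add(p);   // evaluated while scanning
                } else {
                    unsupported.add(p);  // stays in the Filter above the scan
                }
            }
            return unsupported;
        }
    }

    public static void main(String[] args) {
        CsvLikeSource source = new CsvLikeSource();
        List<String> rest = source.setPredicates(List.of("id > 10", "name = 'x'"));
        System.out.println("pushed: " + source.pushedDown); // [id > 10]
        System.out.println("remaining: " + rest);           // [name = 'x']
    }
}
```

Returning the unsupported remainder (rather than a boolean) is what lets the rule push only parts of a conjunctive predicate, as the description suggests.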



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (FLINK-5402) Fails AkkaRpcServiceTest#testTerminationFuture

2017-02-17 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev reassigned FLINK-5402:


Assignee: Dawid Wysakowicz

> Fails AkkaRpcServiceTest#testTerminationFuture
> --
>
> Key: FLINK-5402
> URL: https://issues.apache.org/jira/browse/FLINK-5402
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.2.0
> Environment: macOS
>Reporter: Anton Solovev
>Assignee: Dawid Wysakowicz
>  Labels: test-stability
>
> {code}
> testTerminationFuture(org.apache.flink.runtime.rpc.akka.AkkaRpcServiceTest)  
> Time elapsed: 1.013 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 1000 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method)
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
>   at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>   at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>   at 
> scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>   at scala.concurrent.Await$.result(package.scala:107)
>   at akka.remote.Remoting.start(Remoting.scala:179)
>   at 
> akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:184)
>   at akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:620)
>   at akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:617)
>   at akka.actor.ActorSystemImpl._start(ActorSystem.scala:617)
>   at akka.actor.ActorSystemImpl.start(ActorSystem.scala:634)
>   at akka.actor.ActorSystem$.apply(ActorSystem.scala:142)
>   at akka.actor.ActorSystem$.apply(ActorSystem.scala:119)
>   at akka.actor.ActorSystem$.create(ActorSystem.scala:67)
>   at 
> org.apache.flink.runtime.akka.AkkaUtils$.createActorSystem(AkkaUtils.scala:104)
>   at 
> org.apache.flink.runtime.akka.AkkaUtils$.createDefaultActorSystem(AkkaUtils.scala:114)
>   at 
> org.apache.flink.runtime.akka.AkkaUtils.createDefaultActorSystem(AkkaUtils.scala)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceTest.testTerminationFuture(AkkaRpcServiceTest.java:134)
> {code}
> in org.apache.flink.runtime.rpc.akka.AkkaRpcServiceTest, while testing the 
> current master (1.2.0) branch





[jira] [Updated] (FLINK-5481) Simplify Row creation

2017-02-13 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5481:
-
Summary: Simplify Row creation  (was: Cannot create Collection of Row )

> Simplify Row creation
> -
>
> Key: FLINK-5481
> URL: https://issues.apache.org/jira/browse/FLINK-5481
> Project: Flink
>  Issue Type: Bug
>  Components: DataSet API, Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Trivial
>
> When we use {{ExecutionEnvironment#fromElements(X... data)}}, it takes the 
> first element of {{data}} to determine the type. If the first Row in the 
> collection has the wrong number of fields (e.g. contains nulls), the 
> TypeExtractor returns a GenericType instead of a RowTypeInfo.
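The failure mode can be shown without Flink: a type extractor that looks only at the first element cannot produce a full row type when that element contains nulls. The `inferType` helper below is a hypothetical stand-in for the TypeExtractor logic, not Flink code.

```java
// Minimal illustration (not Flink code) of why inferring the row type from
// the first element is fragile: a null field hides the field's type, so the
// extractor has to fall back to a generic type.
public class FirstElementInference {

    static String inferType(Object[][] rows) {
        Object[] first = rows[0];   // only the first element is inspected
        for (Object field : first) {
            if (field == null) {
                return "GenericType";   // cannot tell what the field is
            }
        }
        return "RowTypeInfo(arity=" + first.length + ")";
    }

    public static void main(String[] args) {
        Object[][] good = { {1, "a"}, {2, "b"} };
        Object[][] bad  = { {1, null}, {2, "b"} };
        System.out.println(inferType(good)); // RowTypeInfo(arity=2)
        System.out.println(inferType(bad));  // GenericType
    }
}
```

Passing an explicit type (as the later `fromCollection(data, typeInfo)` overloads allow) sidesteps the inspection of the first element entirely.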





[jira] [Updated] (FLINK-5744) Check remote connection in Flink-shell

2017-02-13 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5744:
-
Description: 
I would like to check the connection to the remote host before executing a 
program when starting the Flink shell.

For example, right after {{bin/start-scala-shell.sh remote   35007}}
it should check the connection and not start if there are errors.

Actually, I would also like to change the welcome picture to just "welcome to 
flink", as in Ignite, Spring, or Spark.


> Check remote connection in Flink-shell
> --
>
> Key: FLINK-5744
> URL: https://issues.apache.org/jira/browse/FLINK-5744
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala Shell
>Reporter: Anton Solovev
>Priority: Minor
>
> I would like to check the connection to the remote host before executing a 
> program when starting the Flink shell.
> For example, right after {{bin/start-scala-shell.sh remote   35007}}
> it should check the connection and not start if there are errors.
> Actually, I would also like to change the welcome picture to just "welcome to 
> flink", as in Ignite, Spring, or Spark.





[jira] [Updated] (FLINK-5744) Check remote connection in Flink-shell

2017-02-08 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5744:
-
Summary: Check remote connection in Flink-shell  (was: Checking remote 
connection in Flink-shell)

> Check remote connection in Flink-shell
> --
>
> Key: FLINK-5744
> URL: https://issues.apache.org/jira/browse/FLINK-5744
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala Shell
>Reporter: Anton Solovev
>Priority: Minor
>






[jira] [Created] (FLINK-5744) Checking remote connection in Flink-shell

2017-02-08 Thread Anton Solovev (JIRA)
Anton Solovev created FLINK-5744:


 Summary: Checking remote connection in Flink-shell
 Key: FLINK-5744
 URL: https://issues.apache.org/jira/browse/FLINK-5744
 Project: Flink
  Issue Type: Improvement
  Components: Scala Shell
Reporter: Anton Solovev
Priority: Minor








[jira] [Comment Edited] (FLINK-5431) time format for akka status

2017-02-07 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15855939#comment-15855939
 ] 

Anton Solovev edited comment on FLINK-5431 at 2/7/17 1:07 PM:
--

How should I make it configurable?
For example through the CLI - pass a parameter: {{flink run appname.jar 
--logger.akka.pattern=HH:mm:ss}}
Or create another option: {{flink run --logger HH:mm:ss appname.jar}}
And where should we read the pattern from - flink-conf.yml or the log4j properties?


was (Author: tonycox):
How should I make it configurable?
For example through the CLI - pass a parameter: {{flink run appname.jar 
--logger.akka.pattern=HH:mm:ss}}
Or create another option: {{flink run --logger HH:mm:ss appname.jar}}
And where should we read the pattern from - flink-conf.yml or the log4j properties?

> time format for akka status
> ---
>
> Key: FLINK-5431
> URL: https://issues.apache.org/jira/browse/FLINK-5431
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Reporter: Alexey Diomin
>Assignee: Anton Solovev
>Priority: Minor
>
> In ExecutionGraphMessages we have code
> {code}
> private val DATE_FORMATTER: SimpleDateFormat = new 
> SimpleDateFormat("MM/dd/yyyy HH:mm:ss")
> {code}
> But sometimes it causes confusion when the main logger is configured with 
> "dd/MM/yyyy".
> We need to make this format configurable, or maybe keep only "HH:mm:ss", to 
> prevent misreading the output date-time.
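The ambiguity is easy to reproduce. A short sketch, assuming the two patterns in question are month-first ("MM/dd/yyyy") and day-first ("dd/MM/yyyy"): the same instant renders as two dates that read as different days.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// The same instant formatted month-first and day-first: "02/07" vs "07/02"
// is exactly the misreading the issue describes when two loggers disagree.
public class DateFormatConfusion {

    static String format(String pattern, long epochMillis) {
        SimpleDateFormat fmt = new SimpleDateFormat(pattern);
        fmt.setTimeZone(TimeZone.getTimeZone("UTC")); // fixed zone for a stable demo
        return fmt.format(new Date(epochMillis));
    }

    public static void main(String[] args) {
        long ts = 1486464000000L; // 2017-02-07T10:40:00Z
        System.out.println(format("MM/dd/yyyy HH:mm:ss", ts)); // 02/07/2017 10:40:00
        System.out.println(format("dd/MM/yyyy HH:mm:ss", ts)); // 07/02/2017 10:40:00
        // The unambiguous alternative floated in the thread:
        System.out.println(format("HH:mm:ss", ts));            // 10:40:00
    }
}
```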





[jira] [Commented] (FLINK-5431) time format for akka status

2017-02-07 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15855939#comment-15855939
 ] 

Anton Solovev commented on FLINK-5431:
--

How should I make it configurable?
For example through the CLI - pass a parameter: {{flink run appname.jar 
--logger.akka.pattern=HH:mm:ss}}
Or create another option: {{flink run --logger HH:mm:ss appname.jar}}
And where should we read the pattern from - flink-conf.yml or the log4j properties?

> time format for akka status
> ---
>
> Key: FLINK-5431
> URL: https://issues.apache.org/jira/browse/FLINK-5431
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Reporter: Alexey Diomin
>Assignee: Anton Solovev
>Priority: Minor
>
> In ExecutionGraphMessages we have code
> {code}
> private val DATE_FORMATTER: SimpleDateFormat = new 
> SimpleDateFormat("MM/dd/yyyy HH:mm:ss")
> {code}
> But sometimes it causes confusion when the main logger is configured with 
> "dd/MM/yyyy".
> We need to make this format configurable, or maybe keep only "HH:mm:ss", to 
> prevent misreading the output date-time.





[jira] [Updated] (FLINK-5384) clean up jira issues

2017-02-06 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
h4. must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1293 -> outdated;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;

FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

FLINK-2319 -> closed on github, outdated;
FLINK-2399 -> closed on github, outdated;
FLINK-2428 -> closed on github, outdated;
FLINK-2472 -> closed on github, outdated;

FLINK-1398 -> outdated;

h4. should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a colectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;

h4. far from done yet:
FLINK-1926
FLINK-3331

h4. still in progress?
FLINK-1999;
FLINK-1735;
FLINK-2030, FLINK-2274, FLINK-1727

see a monthly release status email (how many weeks until
next feature freeze / number of open JIRAs for that version / ... ) in dev 
mailing list 

  was:
h4. must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1293 -> outdated;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;

FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

FLINK-2319 -> closed on github, outdated;
FLINK-2399 -> closed on github, outdated;
FLINK-2428 -> closed on github, outdated;
FLINK-2472 -> closed on github, outdated;

h4. should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a colectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;

h4. far from done yet:
FLINK-1926
FLINK-3331

h4. still in progress?
FLINK-1999;
FLINK-1735;
FLINK-2030, FLINK-2274, FLINK-1727

see a monthly release status email (how many weeks until
next feature freeze / number of open JIRAs for that version / ... ) in dev 
mailing list 


> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Priority: Minor
>
> h4. must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> fro

[jira] [Closed] (FLINK-1398) A new DataSet function: extractElementFromTuple

2017-02-06 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev closed FLINK-1398.

Resolution: Won't Fix

> A new DataSet function: extractElementFromTuple
> ---
>
> Key: FLINK-1398
> URL: https://issues.apache.org/jira/browse/FLINK-1398
> Project: Flink
>  Issue Type: Wish
>  Components: DataSet API
>Reporter: Felix Neutatz
>Assignee: Felix Neutatz
>Priority: Minor
>
> This is the use case:
> {code:xml}
> DataSet<Tuple2<Integer, Double>> data = env.fromElements(new 
> Tuple2<Integer, Double>(1, 2.0));
> 
> data.map(new ElementFromTuple());
> 
> public static final class ElementFromTuple implements 
> MapFunction<Tuple2<Integer, Double>, Double> {
>     @Override
>     public Double map(Tuple2<Integer, Double> value) {
>         return value.f1;
>     }
> }
> {code}
> It would be awesome if we had something like this:
> {code:xml}
> data.extractElement(1);
> {code}
> This means that we implement a function for DataSet which extracts a certain 
> element from a given Tuple.
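The wished-for convenience can be sketched without the DataSet API: a single extract-by-position call desugars to the projection map the reporter wrote by hand. `Tuple2` below is a minimal stand-in for Flink's class, and `extractSecond` is a hypothetical name for what `data.extractElement(1)` could do.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the wished-for convenience: instead of a hand-written MapFunction
// returning one tuple field, a single extract-by-position projection.
// Tuple2 is a stand-in; the real Flink DataSet API is not used here.
public class ExtractElementSketch {

    static class Tuple2<A, B> {
        final A f0; final B f1;
        Tuple2(A f0, B f1) { this.f0 = f0; this.f1 = f1; }
    }

    /** What data.extractElement(1) could desugar to: a projection over field f1. */
    static <A, B> List<B> extractSecond(List<Tuple2<A, B>> data) {
        return data.stream().map(t -> t.f1).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Tuple2<Integer, Double>> data = List.of(new Tuple2<>(1, 2.0));
        System.out.println(extractSecond(data)); // [2.0]
    }
}
```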





[jira] [Updated] (FLINK-5698) Add NestedFieldsProjectableTableSource interface

2017-02-06 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5698:
-
Description: 
Add a NestedFieldsProjectableTableSource interface for TableSource 
implementations that support nested projection push-down.
The interface could look as follows
{code}
trait NestedFieldsProjectableTableSource[T] {
  def projectNestedFields(fields: Array[String]): NestedFieldsProjectableTableSource[T]
}
{code}
This interface works together with ProjectableTableSource 

  was:
Add a NestedFieldsProjectableTableSource interface for TableSource 
implementations that support nested projection push-down.
The interface could look as follows
{code}
trait ProjectableTableSource {
  def projectNestedFields(fields: Array[String]): NestedFieldsProjectableTableSource[T]
}
{code}
This interface works together with ProjectableTableSource 


> Add NestedFieldsProjectableTableSource interface
> 
>
> Key: FLINK-5698
> URL: https://issues.apache.org/jira/browse/FLINK-5698
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>
> Add a NestedFieldsProjectableTableSource interface for TableSource 
> implementations that support nested projection push-down.
> The interface could look as follows
> {code}
> trait NestedFieldsProjectableTableSource[T] {
>   def projectNestedFields(fields: Array[String]): NestedFieldsProjectableTableSource[T]
> }
> {code}
> This interface works together with ProjectableTableSource 
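What "project nested fields" means can be illustrated with a source that models rows as nested maps. The dot-separated paths below stand in for the `Array[String]` argument of `projectNestedFields`; all names and the map-based row model are made up for this sketch.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: projecting nested fields from a row modeled as nested
// maps. A path like "person.name" keeps that leaf and drops sibling fields.
public class NestedProjectionSketch {

    /** Keep only the given dot-separated field paths of a nested row. */
    @SuppressWarnings("unchecked")
    static Map<String, Object> project(Map<String, Object> row, List<String> paths) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (String path : paths) {
            String[] parts = path.split("\\.", 2);
            Object value = row.get(parts[0]);
            if (parts.length == 1) {
                // leaf: copy the whole field
                out.put(parts[0], value);
            } else if (value instanceof Map) {
                // recurse into the nested row with the remaining path
                Map<String, Object> nested =
                        project((Map<String, Object>) value, List.of(parts[1]));
                Map<String, Object> slot = (Map<String, Object>)
                        out.computeIfAbsent(parts[0], k -> new LinkedHashMap<>());
                slot.putAll(nested);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> person = new LinkedHashMap<>();
        person.put("name", "data_1");
        person.put("age", 42);
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("person", person);
        // only person.name survives the projection; person.age is dropped
        System.out.println(project(row, List.of("person.name"))); // {person={name=data_1}}
    }
}
```

Pushing such paths into the source means the scan never materializes the dropped sibling fields, which is the point of combining this with ProjectableTableSource.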





[jira] [Assigned] (FLINK-5698) Add NestedFieldsProjectableTableSource interface

2017-02-06 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev reassigned FLINK-5698:


Assignee: Anton Solovev

> Add NestedFieldsProjectableTableSource interface
> 
>
> Key: FLINK-5698
> URL: https://issues.apache.org/jira/browse/FLINK-5698
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>
> Add a NestedFieldsProjectableTableSource interface for TableSource 
> implementations that support nested projection push-down.
> The interface could look as follows
> {code}
> trait ProjectableTableSource {
>   def projectNestedFields(fields: Array[String]): NestedFieldsProjectableTableSource[T]
> }
> {code}
> This interface works together with ProjectableTableSource 





[jira] [Updated] (FLINK-5698) Add NestedFieldsProjectableTableSource interface

2017-02-02 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5698:
-
Issue Type: New Feature  (was: Improvement)

> Add NestedFieldsProjectableTableSource interface
> 
>
> Key: FLINK-5698
> URL: https://issues.apache.org/jira/browse/FLINK-5698
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Anton Solovev
>
> Add a NestedFieldsProjectableTableSource interface for TableSource 
> implementations that support nested projection push-down.
> The interface could look as follows
> {code}
> trait ProjectableTableSource {
>   def projectNestedFields(fields: Array[String]): NestedFieldsProjectableTableSource[T]
> }
> {code}
> This interface works together with ProjectableTableSource 





[jira] [Created] (FLINK-5698) Add NestedFieldsProjectableTableSource interface

2017-02-02 Thread Anton Solovev (JIRA)
Anton Solovev created FLINK-5698:


 Summary: Add NestedFieldsProjectableTableSource interface
 Key: FLINK-5698
 URL: https://issues.apache.org/jira/browse/FLINK-5698
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Anton Solovev


Add a NestedFieldsProjectableTableSource interface for TableSource 
implementations that support nested projection push-down.
The interface could look as follows
{code}
trait ProjectableTableSource {
  def projectNestedFields(fields: Array[String]): NestedFieldsProjectableTableSource[T]
}
{code}
This interface works together with ProjectableTableSource 





[jira] [Updated] (FLINK-2168) Add HBaseTableSource

2017-02-01 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-2168:
-
Labels:   (was: starter)

> Add HBaseTableSource
> 
>
> Key: FLINK-2168
> URL: https://issues.apache.org/jira/browse/FLINK-2168
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 0.9
>Reporter: Fabian Hueske
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
>
> Add a {{HBaseTableSource}} to read data from a HBase table. The 
> {{HBaseTableSource}} should implement the {{ProjectableTableSource}} 
> (FLINK-3848) and {{FilterableTableSource}} (FLINK-3849) interfaces.
> The implementation can be based on Flink's {{TableInputFormat}}.





[jira] [Updated] (FLINK-5671) Test ClassLoaderITCase#testJobsWithCustomClassLoader fails

2017-01-27 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5671:
-
Environment: Ubuntu 16.04

> Test ClassLoaderITCase#testJobsWithCustomClassLoader fails
> --
>
> Key: FLINK-5671
> URL: https://issues.apache.org/jira/browse/FLINK-5671
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
> Environment: Ubuntu 16.04
>Reporter: Anton Solovev
>
> {code}
> testJobsWithCustomClassLoader(org.apache.flink.test.classloading.ClassLoaderITCase)
>   Time elapsed: 41.75 sec  <<< FAILURE!
> java.lang.AssertionError: The program execution failed: Job execution failed.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.flink.test.classloading.ClassLoaderITCase.testJobsWithCustomClassLoader(ClassLoaderITCase.java:221)
> {code}





[jira] [Closed] (FLINK-5592) Wrong number of RowSerializers with nested Rows in Collection mode

2017-01-27 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev closed FLINK-5592.

Resolution: Not A Problem

> Wrong number of RowSerializers with nested Rows in Collection mode
> --
>
> Key: FLINK-5592
> URL: https://issues.apache.org/jira/browse/FLINK-5592
> Project: Flink
>  Issue Type: Bug
>Reporter: Anton Solovev
>Priority: Minor
>
> {code}
>   @Test
>   def testNestedRowTypes(): Unit = {
> val env = ExecutionEnvironment.getExecutionEnvironment
> val tEnv = TableEnvironment.getTableEnvironment(env, config)
> tEnv.registerTableSource("rows", new MockSource)
> val table: Table = tEnv.scan("rows")
> val nestedTable: Table = tEnv.scan("rows").select('person)
> val collect: Seq[Row] = nestedTable.collect()
> print(collect)
>   }
>   class MockSource extends BatchTableSource[Row] {
> import org.apache.flink.api.java.ExecutionEnvironment
> import org.apache.flink.api.java.DataSet
> override def getDataSet(execEnv: ExecutionEnvironment): DataSet[Row] = {
>   val data = List(
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")))
>   execEnv.fromCollection(data.asJava, getReturnType)
> }
> override def getReturnType: TypeInformation[Row] = {
>   new RowTypeInfo(
> Array[TypeInformation[_]](
>   new RowTypeInfo(
> Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
> BasicTypeInfo.STRING_TYPE_INFO),
> Array("name", "age"))),
> Array("person")
>   )
> }
>   }
> {code}
> throws {{java.lang.RuntimeException: Row arity of from does not match 
> serializers}}
> stacktrace 
> {code}
> at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:82)
>   at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:36)
>   at 
> org.apache.flink.api.common.operators.GenericDataSourceBase.executeOnCollections(GenericDataSourceBase.java:234)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSource(CollectionExecutor.java:218)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:154)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSink(CollectionExecutor.java:181)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:157)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:114)
>   at 
> org.apache.flink.api.java.CollectionEnvironment.execute(CollectionEnvironment.java:35)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:47)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:42)
>   at 
> org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:672)
>   at org.apache.flink.api.scala.DataSet.collect(DataSet.scala:547)
> {code}





[jira] [Created] (FLINK-5671) Test ClassLoaderITCase#testJobsWithCustomClassLoader fails

2017-01-27 Thread Anton Solovev (JIRA)
Anton Solovev created FLINK-5671:


 Summary: Test ClassLoaderITCase#testJobsWithCustomClassLoader fails
 Key: FLINK-5671
 URL: https://issues.apache.org/jira/browse/FLINK-5671
 Project: Flink
  Issue Type: Bug
  Components: Tests
Reporter: Anton Solovev


{code}
testJobsWithCustomClassLoader(org.apache.flink.test.classloading.ClassLoaderITCase)
  Time elapsed: 41.75 sec  <<< FAILURE!
java.lang.AssertionError: The program execution failed: Job execution failed.
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.flink.test.classloading.ClassLoaderITCase.testJobsWithCustomClassLoader(ClassLoaderITCase.java:221)
{code}





[jira] [Updated] (FLINK-5336) Make Path immutable

2017-01-25 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5336:
-
Assignee: (was: Anton Solovev)

> Make Path immutable
> ---
>
> Key: FLINK-5336
> URL: https://issues.apache.org/jira/browse/FLINK-5336
> Project: Flink
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Stephan Ewen
> Fix For: 2.0.0
>
>
> The {{Path}} class is currently mutable to support the {{IOReadableWritable}} 
> serialization. Since that serialization is not used any more, I suggest to 
> drop that interface from Path and make the Path's URI final.
> Being immutable, we can store configures paths properly without the chance of 
> them being mutated as side effects.
> Many parts of the code make the assumption that the Path is immutable, being 
> susceptible to subtle errors.
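The direction described above can be sketched in a few lines: a path whose URI is final cannot be mutated as a side effect, so "modification" has to return a new instance. This is a simplified illustration, not Flink's Path class; `suffix` is a hypothetical method name.

```java
import java.net.URI;

// Sketch of an immutable path: the URI is final, there are no setters and no
// readFields()-style deserialization hook, so stored paths cannot change
// underneath their owners. Simplified; not Flink's actual Path class.
public final class ImmutablePathSketch {

    private final URI uri;   // final: the only state, set once

    public ImmutablePathSketch(String path) {
        this.uri = URI.create(path);
    }

    /** "Modification" returns a new instance instead of mutating this one. */
    public ImmutablePathSketch suffix(String child) {
        return new ImmutablePathSketch(uri.toString() + "/" + child);
    }

    @Override
    public String toString() { return uri.toString(); }

    public static void main(String[] args) {
        ImmutablePathSketch base = new ImmutablePathSketch("hdfs:///data");
        ImmutablePathSketch sub = base.suffix("part-0");
        System.out.println(base); // hdfs:///data  (unchanged by suffix)
        System.out.println(sub);  // hdfs:///data/part-0
    }
}
```

With this shape, code that stores a configured path keeps a stable value for free, which is exactly the assumption the description says many parts of the code already make.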





[jira] [Updated] (FLINK-5384) clean up jira issues

2017-01-25 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Assignee: (was: Anton Solovev)

> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Priority: Minor
>
> h4. must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> from stratosphere;
> FLINK-605 -> from stratosphere;
> FLINK-639 -> from stratosphere;
> FLINK-650 -> from stratosphere;
> FLINK-735 -> from stratosphere;
> FLINK-456 -> from stratosphere;
> FLINK-788 -> from stratosphere;
> FLINK-796 -> from stratosphere;
> FLINK-805 -> from stratosphere ;
> FLINK-867 -> from stratosphere;
> FLINK-879 -> from stratosphere;
> FLINK-1166 -> closed on github;
> FLINK-1293 -> outdated;
> FLINK-1946 -> closed on github;
> FLINK-2119 -> closed on github;
> FLINK-2157 -> closed on github;
> FLINK-2220 -> closed on github;
> FLINK-2363 -> closed on github;
> FLINK-2480 -> closed on github;
> FLINK-2609 -> closed on github;
> FLINK-2823 -> closed on github;
> FLINK-3155 -> closed on github;
> FLINK-3964 -> closed on github;
> FLINK-4653 -> closed on github;
> FLINK-4717 -> closed on github;
> FLINK-4829 -> closed on github;
> FLINK-5016 -> closed on github;
> FLINK-2319 -> closed on github, outdated;
> FLINK-2399 -> closed on github, outdated;
> FLINK-2428 -> closed on github, outdated;
> FLINK-2472 -> closed on github, outdated;
> h4. should be discussed before:
> FLINK-1055 ;
> FLINK-1098 -> create other issue to add a colectEach();
> FLINK-1100 ;
> FLINK-1146 ;
> FLINK-1335 -> maybe rename?;
> FLINK-1439 ;
> FLINK-1447 -> firefox problem?;
> FLINK-1521 -> closed on github;
> FLINK-1538 -> gsoc2015, is it solved?;
> FLINK-1541 -> gsoc2015, is it solved?;
> FLINK-1723 -> almost done? ;
> FLINK-1814 ;
> FLINK-1858 -> is QA bot deleted?;
> FLINK-2023 -> does not block Scala Graph API;
> FLINK-2032 -> all subtasks done;
> FLINK-2108 -> almost done? ;
> FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
> FLINK-3109 -> its PR is stuck;
> FLINK-3154 -> must be addressed as part of a bigger initiative;
> FLINK-3297 -> solved as third party lib;
> FLINK-4042 -> outdated;
> FLINK-4313 -> outdated;
> FLINK-4760 -> fixed?
> FLINK-5282 -> closed on github;
> h4. far from done yet:
> FLINK-1926
> FLINK-3331
> h4. still in progress?
> FLINK-1999;
> FLINK-1735;
> FLINK-2030, FLINK-2274, FLINK-1727
> see a monthly release status email (how many weeks until
> next feature freeze / number of open JIRAs for that version / ... ) in dev 
> mailing list 





[jira] [Updated] (FLINK-5384) clean up jira issues

2017-01-25 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
h4. must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1293 -> outdated;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;

FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

FLINK-2319 -> closed on github, outdated;
FLINK-2399 -> closed on github, outdated;
FLINK-2428 -> closed on github, outdated;
FLINK-2472 -> closed on github, outdated;

h4. should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create another issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth merging with FLINK-2316?;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;

h4. far from done yet:
FLINK-1926
FLINK-3331

h4. still in progress?
FLINK-1999;
FLINK-1735;
FLINK-2030, FLINK-2274, FLINK-1727

see the monthly release status email (how many weeks until the next feature 
freeze / number of open JIRAs for that version / ...) on the dev 
mailing list 

  was:
h4. must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1293 -> outdated;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;

FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

FLINK-2319 -> closed on github, outdated;
FLINK-2399 -> closed on github, outdated;
FLINK-2428 -> closed on github, outdated;
FLINK-2472 -> closed on github, outdated;

h4. should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a colectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;

h4. far from done yet:
FLINK-1926
FLINK-3331

h4. still in progress?
FLINK-1999;
FLINK-1735;
FLINK-2030, FLINK-2274, Flink-1727


> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Minor
>

[jira] [Updated] (FLINK-5592) Wrong number of RowSerializers with nested Rows in Collection mode

2017-01-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5592:
-
Priority: Minor  (was: Major)

> Wrong number of RowSerializers with nested Rows in Collection mode
> --
>
> Key: FLINK-5592
> URL: https://issues.apache.org/jira/browse/FLINK-5592
> Project: Flink
>  Issue Type: Bug
>Reporter: Anton Solovev
>Priority: Minor
>
> {code}
>   @Test
>   def testNestedRowTypes(): Unit = {
> val env = ExecutionEnvironment.getExecutionEnvironment
> val tEnv = TableEnvironment.getTableEnvironment(env, config)
> tEnv.registerTableSource("rows", new MockSource)
> val table: Table = tEnv.scan("rows")
> val nestedTable: Table = tEnv.scan("rows").select('person)
> val collect: Seq[Row] = nestedTable.collect()
> print(collect)
>   }
>   class MockSource extends BatchTableSource[Row] {
> import org.apache.flink.api.java.ExecutionEnvironment
> import org.apache.flink.api.java.DataSet
> override def getDataSet(execEnv: ExecutionEnvironment): DataSet[Row] = {
>   val data = List(
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")))
>   execEnv.fromCollection(data.asJava, getReturnType)
> }
> override def getReturnType: TypeInformation[Row] = {
>   new RowTypeInfo(
> Array[TypeInformation[_]](
>   new RowTypeInfo(
> Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
> BasicTypeInfo.STRING_TYPE_INFO),
> Array("name", "age"))),
> Array("person")
>   )
> }
>   }
> {code}
> throws {{java.lang.RuntimeException: Row arity of from does not match 
> serializers}}
> stacktrace 
> {code}
> at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:82)
>   at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:36)
>   at 
> org.apache.flink.api.common.operators.GenericDataSourceBase.executeOnCollections(GenericDataSourceBase.java:234)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSource(CollectionExecutor.java:218)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:154)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSink(CollectionExecutor.java:181)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:157)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:114)
>   at 
> org.apache.flink.api.java.CollectionEnvironment.execute(CollectionEnvironment.java:35)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:47)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:42)
>   at 
> org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:672)
>   at org.apache.flink.api.scala.DataSet.collect(DataSet.scala:547)
> {code}
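For context, the two failures quoted in this thread come from two consistency checks: {{RowTypeInfo}}'s constructor requires as many field names as field types ({{Number of field types and names is different.}}), and the row serializer requires each {{Row}}'s value count to match the declared arity ({{Row arity of from does not match serializers}}). A minimal, self-contained Java sketch of those invariants ({{RowInfo}} below is a hypothetical stand-in, not Flink's actual {{RowTypeInfo}}):

```java
import java.util.Arrays;
import java.util.List;

public class Main {
    // Hypothetical stand-in for Flink's RowTypeInfo: it only models the two
    // consistency checks that failed in this thread.
    static final class RowInfo {
        final List<String> fieldTypes;
        final List<String> fieldNames;

        RowInfo(List<String> fieldTypes, List<String> fieldNames) {
            // RowTypeInfo's constructor rejects mismatched arrays:
            // "Number of field types and names is different."
            if (fieldTypes.size() != fieldNames.size()) {
                throw new IllegalArgumentException(
                        "Number of field types and names is different.");
            }
            this.fieldTypes = fieldTypes;
            this.fieldNames = fieldNames;
        }

        int getArity() {
            return fieldTypes.size();
        }

        // The serializer rejects rows whose value count differs from the
        // declared arity: "Row arity of from does not match serializers".
        boolean arityMatches(Object... rowValues) {
            return rowValues.length == getArity();
        }
    }

    public static void main(String[] args) {
        // One declared top-level field ("person"), but each data Row in the
        // test carries two nested rows: the mismatch reported here.
        RowInfo oneField = new RowInfo(
                Arrays.asList("ROW"), Arrays.asList("person"));
        System.out.println(oneField.arityMatches("nested1", "nested2")); // false

        // Declaring both top-level fields makes the arity match the data.
        RowInfo twoFields = new RowInfo(
                Arrays.asList("ROW", "ROW"), Arrays.asList("person", "additional"));
        System.out.println(twoFields.arityMatches("nested1", "nested2")); // true
    }
}
```

Declaring both top-level fields, as in the later comments ({{Array("person", "additional")}}), makes the declared arity match the data, which is why the issue was eventually closed as Not A Problem.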





[jira] [Comment Edited] (FLINK-5592) Wrong number of RowSerializers with nested Rows in Collection mode

2017-01-23 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15834697#comment-15834697
 ] 

Anton Solovev edited comment on FLINK-5592 at 1/23/17 3:07 PM:
---

Hi [~jark], thank you for helping me. I want exactly a row of a number of 
rows. You are right, it's a problem in my code 
{code} override def getReturnType: TypeInformation[Row] = {
  new RowTypeInfo(
Array[TypeInformation[_]](
  new RowTypeInfo(
Array[TypeInformation[_]](
  BasicTypeInfo.STRING_TYPE_INFO,
  BasicTypeInfo.STRING_TYPE_INFO),
Array("name", "age")),
  new RowTypeInfo(
Array[TypeInformation[_]](
  BasicTypeInfo.STRING_TYPE_INFO,
  BasicTypeInfo.STRING_TYPE_INFO),
Array("more_info", "and_so_on"))),
Array("person", "additional")
  )
} {code} {{getReturnType}} does what I want, but how can we access a nested 
field through the Table API? 
table.scan("rows").select("person.name") doesn't work


was (Author: tonycox):
Hi [~jark], thank you for helping me. I want exactly a row of a number of 
rows. You are right, it's a problem in my code 
{code} override def getReturnType: TypeInformation[Row] = {
  new RowTypeInfo(
Array[TypeInformation[_]](
  new RowTypeInfo(
Array[TypeInformation[_]](
  BasicTypeInfo.STRING_TYPE_INFO,
  BasicTypeInfo.STRING_TYPE_INFO),
Array("name", "age")),
  new RowTypeInfo(
Array[TypeInformation[_]](
  BasicTypeInfo.STRING_TYPE_INFO,
  BasicTypeInfo.STRING_TYPE_INFO),
Array("more_info", "and_so_on"))),
Array("person", "additional")
  )
} {code} does that thing I want


[jira] [Reopened] (FLINK-5592) Wrong number of RowSerializers with nested Rows in Collection mode

2017-01-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev reopened FLINK-5592:
--






[jira] [Closed] (FLINK-5592) Wrong number of RowSerializers with nested Rows in Collection mode

2017-01-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev closed FLINK-5592.

Resolution: Not A Problem






[jira] [Commented] (FLINK-5592) Wrong number of RowSerializers with nested Rows in Collection mode

2017-01-23 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15834697#comment-15834697
 ] 

Anton Solovev commented on FLINK-5592:
--

Hi [~jark], thank you for helping me. I want exactly a row of a number of 
rows. You are right, it's a problem in my code 
{code} override def getReturnType: TypeInformation[Row] = {
  new RowTypeInfo(
Array[TypeInformation[_]](
  new RowTypeInfo(
Array[TypeInformation[_]](
  BasicTypeInfo.STRING_TYPE_INFO,
  BasicTypeInfo.STRING_TYPE_INFO),
Array("name", "age")),
  new RowTypeInfo(
Array[TypeInformation[_]](
  BasicTypeInfo.STRING_TYPE_INFO,
  BasicTypeInfo.STRING_TYPE_INFO),
Array("more_info", "and_so_on"))),
Array("person", "additional")
  )
} {code} does that thing I want






[jira] [Issue Comment Deleted] (FLINK-5592) Wrong number of RowSerializers with nested Rows in Collection mode

2017-01-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5592:
-
Comment: was deleted

(was: Hi [~jark], thank you for helping me. I want exactly a row of a number of 
rows; this case fails even with 
{code}
override def getReturnType: TypeInformation[Row] = {
  new RowTypeInfo(
Array[TypeInformation[_]](
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("name", "age")),
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("more_info", "and_so_on"))),
Array("person", "additional")
  )
}
{code})






[jira] [Comment Edited] (FLINK-5592) Wrong number of RowSerializers with nested Rows in Collection mode

2017-01-23 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15834337#comment-15834337
 ] 

Anton Solovev edited comment on FLINK-5592 at 1/23/17 12:40 PM:


Hi [~jark], thank you for helping me. I want exactly a row of a number of rows; 
this case fails even with 
{code}
override def getReturnType: TypeInformation[Row] = {
  new RowTypeInfo(
Array[TypeInformation[_]](
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("name", "age")),
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("more_info", "and_so_on"))),
Array("person", "additional")
  )
}
{code}


was (Author: tonycox):
Hi [~jark], thank you for helping me. I want exactly a row of a number of rows, 
this case falls even if 
{code}
override def getReturnType: TypeInformation[Row] = {
  new RowTypeInfo(
Array[TypeInformation[_]](
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("name", "age")),
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("more_info", "and_so_on"))),
Array("person")
  )
}
{code}
with {{java.lang.IllegalArgumentException: Number of field types and names is 
different.}}


[jira] [Comment Edited] (FLINK-5592) Wrong number of RowSerializers with nested Rows in Collection mode

2017-01-23 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15834337#comment-15834337
 ] 

Anton Solovev edited comment on FLINK-5592 at 1/23/17 12:03 PM:


Hi [~jark], thank you for helping me. I want exactly a row of a number of rows; 
this case fails even with 
{code}
override def getReturnType: TypeInformation[Row] = {
  new RowTypeInfo(
Array[TypeInformation[_]](
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("name", "age")),
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("more_info", "and_so_on"))),
Array("person")
  )
}
{code}
with {{java.lang.IllegalArgumentException: Number of field types and names is 
different.}}
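The IllegalArgumentException above can be reproduced without Flink at all: RowTypeInfo checks at construction time that the field-type array and the field-name array have the same length. Below is a minimal, hypothetical sketch of that check (class and method names, and the message, are modeled on the reported exception, not copied from Flink's source) — two nested row types paired with the single name "person" trip it:

```java
// Hypothetical sketch of a constructor-time arity validation like the one
// RowTypeInfo performs; not Flink's actual implementation.
public class ArityCheckSketch {
    static void requireMatchingArity(int numFieldTypes, int numFieldNames) {
        if (numFieldTypes != numFieldNames) {
            throw new IllegalArgumentException(
                "Number of field types and names is different.");
        }
    }

    public static void main(String[] args) {
        requireMatchingArity(1, 1); // one type, one name: accepted
        try {
            // two nested RowTypeInfos, but only one name ("person"):
            requireMatchingArity(2, 1);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```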


was (Author: tonycox):
Hi [~jark], thank you fro helping me. I want exactly a row of a number of rows, 
this case falls even if 
{code}
override def getReturnType: TypeInformation[Row] = {
  new RowTypeInfo(
Array[TypeInformation[_]](
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("name", "age")),
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("more_info", "and_so_on"))),
Array("person")
  )
}
{code}
with {{java.lang.IllegalArgumentException: Number of field types and names is 
different.}}

> Wrong number of RowSerializers with nested Rows in Collection mode
> --
>
> Key: FLINK-5592
> URL: https://issues.apache.org/jira/browse/FLINK-5592
> Project: Flink
>  Issue Type: Bug
>Reporter: Anton Solovev
>
> {code}
>   @Test
>   def testNestedRowTypes(): Unit = {
> val env = ExecutionEnvironment.getExecutionEnvironment
> val tEnv = TableEnvironment.getTableEnvironment(env, config)
> tEnv.registerTableSource("rows", new MockSource)
> val table: Table = tEnv.scan("rows")
> val nestedTable: Table = tEnv.scan("rows").select('person)
> val collect: Seq[Row] = nestedTable.collect()
> print(collect)
>   }
>   class MockSource extends BatchTableSource[Row] {
> import org.apache.flink.api.java.ExecutionEnvironment
> import org.apache.flink.api.java.DataSet
> override def getDataSet(execEnv: ExecutionEnvironment): DataSet[Row] = {
>   val data = List(
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")))
>   execEnv.fromCollection(data.asJava, getReturnType)
> }
> override def getReturnType: TypeInformation[Row] = {
>   new RowTypeInfo(
> Array[TypeInformation[_]](
>   new RowTypeInfo(
> Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
> BasicTypeInfo.STRING_TYPE_INFO),
> Array("name", "age"))),
> Array("person")
>   )
> }
>   }
> {code}
> throws {{java.lang.RuntimeException: Row arity of from does not match 
> serializers}}
> stacktrace 
> {code}
> at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:82)
>   at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:36)
>   at 
> org.apache.flink.api.common.operators.GenericDataSourceBase.executeOnCollections(GenericDataSourceBase.java:234)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSource(CollectionExecutor.java:218)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:154)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSink(CollectionExecutor.java:181)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:157)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:114)
>   at 
> org.apache.flink.api.java.CollectionEnvironment.execute(CollectionEnvironment.java:35)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:47)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:42)
>   at 
> org.apache.flink.api.scala.ExecutionEnvironment.execute(E

[jira] [Comment Edited] (FLINK-5592) Wrong number of RowSerializers with nested Rows in Collection mode

2017-01-23 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15834337#comment-15834337
 ] 

Anton Solovev edited comment on FLINK-5592 at 1/23/17 12:02 PM:


Hi [~jark], thank you for helping me. I want exactly a row of a number of rows; 
this case fails even if 
{code}
override def getReturnType: TypeInformation[Row] = {
  new RowTypeInfo(
Array[TypeInformation[_]](
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("name", "age")),
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("more_info", "and_so_on"))),
Array("person")
  )
}
{code}
with {{java.lang.IllegalArgumentException: Number of field types and names is 
different.}}


was (Author: tonycox):
Hi [~jark], thank you fro helping me. I want exactly a row of a number of rows, 
this case falls even if 
{code}
override def getReturnType: TypeInformation[Row] = {
  new RowTypeInfo(
Array[TypeInformation[_]](
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("name", "age")),
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("more_info", "and_so_on"))),
Array("person")
  )
}
{code}

> Wrong number of RowSerializers with nested Rows in Collection mode
> --
>
> Key: FLINK-5592
> URL: https://issues.apache.org/jira/browse/FLINK-5592
> Project: Flink
>  Issue Type: Bug
>Reporter: Anton Solovev
>
> {code}
>   @Test
>   def testNestedRowTypes(): Unit = {
> val env = ExecutionEnvironment.getExecutionEnvironment
> val tEnv = TableEnvironment.getTableEnvironment(env, config)
> tEnv.registerTableSource("rows", new MockSource)
> val table: Table = tEnv.scan("rows")
> val nestedTable: Table = tEnv.scan("rows").select('person)
> val collect: Seq[Row] = nestedTable.collect()
> print(collect)
>   }
>   class MockSource extends BatchTableSource[Row] {
> import org.apache.flink.api.java.ExecutionEnvironment
> import org.apache.flink.api.java.DataSet
> override def getDataSet(execEnv: ExecutionEnvironment): DataSet[Row] = {
>   val data = List(
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")))
>   execEnv.fromCollection(data.asJava, getReturnType)
> }
> override def getReturnType: TypeInformation[Row] = {
>   new RowTypeInfo(
> Array[TypeInformation[_]](
>   new RowTypeInfo(
> Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
> BasicTypeInfo.STRING_TYPE_INFO),
> Array("name", "age"))),
> Array("person")
>   )
> }
>   }
> {code}
> throws {{java.lang.RuntimeException: Row arity of from does not match 
> serializers}}
> stacktrace 
> {code}
> at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:82)
>   at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:36)
>   at 
> org.apache.flink.api.common.operators.GenericDataSourceBase.executeOnCollections(GenericDataSourceBase.java:234)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSource(CollectionExecutor.java:218)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:154)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSink(CollectionExecutor.java:181)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:157)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:114)
>   at 
> org.apache.flink.api.java.CollectionEnvironment.execute(CollectionEnvironment.java:35)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:47)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:42)
>   at 
> org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:672)
>   at org.apache.flink.api.scala.DataSet.collect(DataSet

[jira] [Commented] (FLINK-5592) Wrong number of RowSerializers with nested Rows in Collection mode

2017-01-23 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15834337#comment-15834337
 ] 

Anton Solovev commented on FLINK-5592:
--

Hi [~jark], thank you for helping me. I want exactly a row of a number of rows; 
this case fails even if 
{code}
override def getReturnType: TypeInformation[Row] = {
  new RowTypeInfo(
Array[TypeInformation[_]](
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("name", "age")),
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("more_info", "and_so_on"))),
Array("person")
  )
}
{code}

> Wrong number of RowSerializers with nested Rows in Collection mode
> --
>
> Key: FLINK-5592
> URL: https://issues.apache.org/jira/browse/FLINK-5592
> Project: Flink
>  Issue Type: Bug
>Reporter: Anton Solovev
>
> {code}
>   @Test
>   def testNestedRowTypes(): Unit = {
> val env = ExecutionEnvironment.getExecutionEnvironment
> val tEnv = TableEnvironment.getTableEnvironment(env, config)
> tEnv.registerTableSource("rows", new MockSource)
> val table: Table = tEnv.scan("rows")
> val nestedTable: Table = tEnv.scan("rows").select('person)
> val collect: Seq[Row] = nestedTable.collect()
> print(collect)
>   }
>   class MockSource extends BatchTableSource[Row] {
> import org.apache.flink.api.java.ExecutionEnvironment
> import org.apache.flink.api.java.DataSet
> override def getDataSet(execEnv: ExecutionEnvironment): DataSet[Row] = {
>   val data = List(
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")))
>   execEnv.fromCollection(data.asJava, getReturnType)
> }
> override def getReturnType: TypeInformation[Row] = {
>   new RowTypeInfo(
> Array[TypeInformation[_]](
>   new RowTypeInfo(
> Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
> BasicTypeInfo.STRING_TYPE_INFO),
> Array("name", "age"))),
> Array("person")
>   )
> }
>   }
> {code}
> throws {{java.lang.RuntimeException: Row arity of from does not match 
> serializers}}
> stacktrace 
> {code}
> at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:82)
>   at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:36)
>   at 
> org.apache.flink.api.common.operators.GenericDataSourceBase.executeOnCollections(GenericDataSourceBase.java:234)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSource(CollectionExecutor.java:218)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:154)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSink(CollectionExecutor.java:181)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:157)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:114)
>   at 
> org.apache.flink.api.java.CollectionEnvironment.execute(CollectionEnvironment.java:35)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:47)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:42)
>   at 
> org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:672)
>   at org.apache.flink.api.scala.DataSet.collect(DataSet.scala:547)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-5554) Add sql operator to table api for getting columns from HBase

2017-01-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev closed FLINK-5554.

Resolution: Not A Problem

> Add sql operator to table api for getting columns from HBase
> 
>
> Key: FLINK-5554
> URL: https://issues.apache.org/jira/browse/FLINK-5554
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Anton Solovev
>
> example of select query
> {code}
> table.select("f1:q1, f1:q2, f1:q3");
> {code}
> or/and
> {code}
> table.select('f1:'q1, 'f1:'q2, 'f1:'q3);
> {code}
> let's discuss how to provide a better API for selecting from HBase



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-5592) Wrong number of RowSerializers with nested Rows in Collection mode

2017-01-20 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev reassigned FLINK-5592:


Assignee: (was: Anton Solovev)

> Wrong number of RowSerializers with nested Rows in Collection mode
> --
>
> Key: FLINK-5592
> URL: https://issues.apache.org/jira/browse/FLINK-5592
> Project: Flink
>  Issue Type: Bug
>Reporter: Anton Solovev
>
> {code}
>   @Test
>   def testNestedRowTypes(): Unit = {
> val env = ExecutionEnvironment.getExecutionEnvironment
> val tEnv = TableEnvironment.getTableEnvironment(env, config)
> tEnv.registerTableSource("rows", new MockSource)
> val table: Table = tEnv.scan("rows")
> val nestedTable: Table = tEnv.scan("rows").select('person)
> val collect: Seq[Row] = nestedTable.collect()
> print(collect)
>   }
>   class MockSource extends BatchTableSource[Row] {
> import org.apache.flink.api.java.ExecutionEnvironment
> import org.apache.flink.api.java.DataSet
> override def getDataSet(execEnv: ExecutionEnvironment): DataSet[Row] = {
>   val data = List(
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
> Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")))
>   execEnv.fromCollection(data.asJava, getReturnType)
> }
> override def getReturnType: TypeInformation[Row] = {
>   new RowTypeInfo(
> Array[TypeInformation[_]](
>   new RowTypeInfo(
> Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
> BasicTypeInfo.STRING_TYPE_INFO),
> Array("name", "age"))),
> Array("person")
>   )
> }
>   }
> {code}
> throws {{java.lang.RuntimeException: Row arity of from does not match 
> serializers}}
> stacktrace 
> {code}
> at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:82)
>   at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:36)
>   at 
> org.apache.flink.api.common.operators.GenericDataSourceBase.executeOnCollections(GenericDataSourceBase.java:234)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSource(CollectionExecutor.java:218)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:154)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.executeDataSink(CollectionExecutor.java:181)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:157)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
>   at 
> org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:114)
>   at 
> org.apache.flink.api.java.CollectionEnvironment.execute(CollectionEnvironment.java:35)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:47)
>   at 
> org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:42)
>   at 
> org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:672)
>   at org.apache.flink.api.scala.DataSet.collect(DataSet.scala:547)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5592) Wrong number of RowSerializers with nested Rows in Collection mode

2017-01-20 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5592:
-
Description: 
{code}
  @Test
  def testNestedRowTypes(): Unit = {
val env = ExecutionEnvironment.getExecutionEnvironment
val tEnv = TableEnvironment.getTableEnvironment(env, config)

tEnv.registerTableSource("rows", new MockSource)

val table: Table = tEnv.scan("rows")
val nestedTable: Table = tEnv.scan("rows").select('person)

val collect: Seq[Row] = nestedTable.collect()
print(collect)
  }

  class MockSource extends BatchTableSource[Row] {
import org.apache.flink.api.java.ExecutionEnvironment
import org.apache.flink.api.java.DataSet

override def getDataSet(execEnv: ExecutionEnvironment): DataSet[Row] = {
  val data = List(
Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")))
  execEnv.fromCollection(data.asJava, getReturnType)
}

override def getReturnType: TypeInformation[Row] = {
  new RowTypeInfo(
Array[TypeInformation[_]](
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("name", "age"))),
Array("person")
  )
}
  }
{code}

throws {{java.lang.RuntimeException: Row arity of from does not match 
serializers}}

stacktrace 
{code}
at 
org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:82)
at 
org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:36)
at 
org.apache.flink.api.common.operators.GenericDataSourceBase.executeOnCollections(GenericDataSourceBase.java:234)
at 
org.apache.flink.api.common.operators.CollectionExecutor.executeDataSource(CollectionExecutor.java:218)
at 
org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:154)
at 
org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
at 
org.apache.flink.api.common.operators.CollectionExecutor.executeDataSink(CollectionExecutor.java:181)
at 
org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:157)
at 
org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
at 
org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:114)
at 
org.apache.flink.api.java.CollectionEnvironment.execute(CollectionEnvironment.java:35)
at 
org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:47)
at 
org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:42)
at 
org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:672)
at org.apache.flink.api.scala.DataSet.collect(DataSet.scala:547)
{code}

  was:
{code}
@Test
  def testNestedRowTypes(): Unit = {
val env = ExecutionEnvironment.getExecutionEnvironment
val tEnv = TableEnvironment.getTableEnvironment(env, config)

tEnv.registerTableSource("rows", new MockSource)

val table: Table = tEnv.scan("rows")
val nestedTable: Table = tEnv.scan("rows").select('person)

table.printSchema()
nestedTable.printSchema()
val collect: Seq[Row] = nestedTable.collect()
print(collect)
  }

  class MockSource extends BatchTableSource[Row] {
import org.apache.flink.api.java.ExecutionEnvironment
import org.apache.flink.api.java.DataSet

override def getDataSet(execEnv: ExecutionEnvironment): DataSet[Row] = {
  val data = List(
Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")))
  execEnv.fromCollection(data.asJava, getReturnType)
}

override def getReturnType: TypeInformation[Row] = {
  new RowTypeInfo(
Array[TypeInformation[_]](
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("name", "age"))),
Array("person")
  )
}
  }
{code}

throws {{java.lang.RuntimeException: Row arity of from does not match 
serializers}}

stacktrace 
{code}
at 
org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:82)
at 
org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:36)
at 
org.apache.flink.api.common.operators.GenericDataSourceBase.executeOnCollections(GenericDataSourceBase.java:234)
at 
org.apache.flink.api.common.operators.CollectionExecutor.executeDataSource(CollectionExecutor.java:218)

[jira] [Created] (FLINK-5592) Wrong number of RowSerializers with nested Rows in Collection mode

2017-01-20 Thread Anton Solovev (JIRA)
Anton Solovev created FLINK-5592:


 Summary: Wrong number of RowSerializers with nested Rows in 
Collection mode
 Key: FLINK-5592
 URL: https://issues.apache.org/jira/browse/FLINK-5592
 Project: Flink
  Issue Type: Bug
Reporter: Anton Solovev
Assignee: Anton Solovev


{code}
@Test
  def testNestedRowTypes(): Unit = {
val env = ExecutionEnvironment.getExecutionEnvironment
val tEnv = TableEnvironment.getTableEnvironment(env, config)

tEnv.registerTableSource("rows", new MockSource)

val table: Table = tEnv.scan("rows")
val nestedTable: Table = tEnv.scan("rows").select('person)

table.printSchema()
nestedTable.printSchema()
val collect: Seq[Row] = nestedTable.collect()
print(collect)
  }

  class MockSource extends BatchTableSource[Row] {
import org.apache.flink.api.java.ExecutionEnvironment
import org.apache.flink.api.java.DataSet

override def getDataSet(execEnv: ExecutionEnvironment): DataSet[Row] = {
  val data = List(
Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")),
Row.of(Row.of("data_1", "dob"), Row.of("info_4", "dub")))
  execEnv.fromCollection(data.asJava, getReturnType)
}

override def getReturnType: TypeInformation[Row] = {
  new RowTypeInfo(
Array[TypeInformation[_]](
  new RowTypeInfo(
Array[TypeInformation[_]](BasicTypeInfo.STRING_TYPE_INFO, 
BasicTypeInfo.STRING_TYPE_INFO),
Array("name", "age"))),
Array("person")
  )
}
  }
{code}

throws {{java.lang.RuntimeException: Row arity of from does not match 
serializers}}

stacktrace 
{code}
at 
org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:82)
at 
org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:36)
at 
org.apache.flink.api.common.operators.GenericDataSourceBase.executeOnCollections(GenericDataSourceBase.java:234)
at 
org.apache.flink.api.common.operators.CollectionExecutor.executeDataSource(CollectionExecutor.java:218)
at 
org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:154)
at 
org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
at 
org.apache.flink.api.common.operators.CollectionExecutor.executeDataSink(CollectionExecutor.java:181)
at 
org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:157)
at 
org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:130)
at 
org.apache.flink.api.common.operators.CollectionExecutor.execute(CollectionExecutor.java:114)
at 
org.apache.flink.api.java.CollectionEnvironment.execute(CollectionEnvironment.java:35)
at 
org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:47)
at 
org.apache.flink.test.util.CollectionTestEnvironment.execute(CollectionTestEnvironment.java:42)
at 
org.apache.flink.api.scala.ExecutionEnvironment.execute(ExecutionEnvironment.scala:672)
at org.apache.flink.api.scala.DataSet.collect(DataSet.scala:547)
{code}
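The runtime failure above is an arity mismatch between the declared type and the actual records: getReturnType declares a single field ("person"), while each element built with Row.of(row, row) has arity 2. A minimal, hypothetical sketch of the copy-time check (modeled on the exception message, not taken from RowSerializer itself):

```java
// Hypothetical sketch: a serializer built for N declared fields refuses to
// copy a record whose arity differs (cf. RowSerializer.copy in the trace).
public class RowCopySketch {
    static Object[] copy(Object[] from, int declaredArity) {
        if (from.length != declaredArity) {
            throw new RuntimeException(
                "Row arity of from does not match serializers.");
        }
        return from.clone();
    }

    public static void main(String[] args) {
        // Row.of(row, row) produces a record of arity 2 ...
        Object[] record = {new Object(), new Object()};
        try {
            // ... but the declared type has the single field "person":
            copy(record, 1);
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```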



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5554) Add sql operator to table api for getting columns from HBase

2017-01-18 Thread Anton Solovev (JIRA)
Anton Solovev created FLINK-5554:


 Summary: Add sql operator to table api for getting columns from 
HBase
 Key: FLINK-5554
 URL: https://issues.apache.org/jira/browse/FLINK-5554
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Anton Solovev


example of select query
{code}
table.select("f1:q1, f1:q2, f1:q3");
{code}
or/and
{code}
table.select('f1:'q1, 'f1:'q2, 'f1:'q3);
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5554) Add sql operator to table api for getting columns from HBase

2017-01-18 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5554:
-
Description: 
example of select query
{code}
table.select("f1:q1, f1:q2, f1:q3");
{code}
or/and
{code}
table.select('f1:'q1, 'f1:'q2, 'f1:'q3);
{code}

let's discuss how to provide a better API for selecting from HBase

  was:
example of select query
{code}
table.select("f1:q1, f1:q2, f1:q3");
{code}
or/and
{code}
table.select('f1:'q1, 'f1:'q2, 'f1:'q3);
{code}



> Add sql operator to table api for getting columns from HBase
> 
>
> Key: FLINK-5554
> URL: https://issues.apache.org/jira/browse/FLINK-5554
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Anton Solovev
>
> example of select query
> {code}
> table.select("f1:q1, f1:q2, f1:q3");
> {code}
> or/and
> {code}
> table.select('f1:'q1, 'f1:'q2, 'f1:'q3);
> {code}
> let's discuss how to provide a better API for selecting from HBase
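One way to frame the discussion: the "f1:q1" strings in the examples are HBase family:qualifier pairs, so any select support would first need to split them. A purely hypothetical parsing sketch (this is not an existing Flink or HBase API; names are illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration: splitting select expressions such as
// "f1:q1, f1:q2, f1:q3" into (column family, qualifier) pairs.
public class HBaseSelectSketch {
    static List<String[]> parse(String select) {
        List<String[]> columns = new ArrayList<>();
        for (String field : select.split(",")) {
            // parts[0] = column family, parts[1] = qualifier
            String[] parts = field.trim().split(":", 2);
            columns.add(parts);
        }
        return columns;
    }

    public static void main(String[] args) {
        for (String[] col : parse("f1:q1, f1:q2, f1:q3")) {
            System.out.println(col[0] + " -> " + col[1]);
        }
    }
}
```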



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4682) Add TumbleRow row-windows for batch tables.

2017-01-18 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-4682:
-
Assignee: (was: Anton Solovev)

> Add TumbleRow row-windows for batch tables.
> ---
>
> Key: FLINK-4682
> URL: https://issues.apache.org/jira/browse/FLINK-4682
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>
> Add TumbleRow row-windows for batch tables as described in 
> [FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5384) clean up jira issues

2017-01-16 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
h4. must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1293 -> outdated;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;

FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

FLINK-2319 -> closed on github, outdated;
FLINK-2399 -> closed on github, outdated;
FLINK-2428 -> closed on github, outdated;
FLINK-2472 -> closed on github, outdated;

h4. should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create another issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;

h4. far from done yet:
FLINK-1926
FLINK-3331

h4. still in progress?
FLINK-1999;
FLINK-1735;
FLINK-2030, FLINK-2274, FLINK-1727

  was:
h4. must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1293 -> outdated;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;

FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

FLINK-2319 -> closed on github, outdated;
FLINK-2399 -> closed on github, outdated;
FLINK-2428 -> closed on github, outdated;
FLINK-2472 -> closed on github, outdated;

h4. should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create another issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;

h4. far from done yet:
FLINK-1926
FLINK-3331

h4. still in progress?
FLINK-1999;
FLINK-1735;


> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Minor
>
> h4. must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> from stratosphere;
> FLINK-605 -> from stratosphere;
> FLINK-639 -> from stratosphere;
> FLINK-650 -> from stratosphere;
> FLINK-735 -> from stratosphere;
> FLINK-456 -> from stratosphere;
> FLINK-788 -> from stratosphere;
> FLINK-796 -> from stratosphere;
> FLINK-805 -> from stratosphere ;
> FLINK-867 -> from stratosphe

[jira] [Updated] (FLINK-5384) clean up jira issues

2017-01-16 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
h4. must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1293 -> outdated;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;

FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

FLINK-2319 -> closed on github, outdated;
FLINK-2399 -> closed on github, outdated;
FLINK-2428 -> closed on github, outdated;
FLINK-2472 -> closed on github, outdated;

h4. should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a colectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;

h4. far from done yet:
FLINK-1926
FLINK-3331

h4. still in progress?
FLINK-1999;
FLINK-1735;

  was:
h4. must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1293 -> outdated;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;

FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

FLINK-2319 -> closed on github, outdated;
FLINK-2399 -> closed on github, outdated;
FLINK-2428 -> closed on github, outdated;
FLINK-2472 -> closed on github, outdated;

h4. should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a colectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;

h4. far from done yet:
FLINK-1926
FLINK-3331


> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Minor
>
> h4. must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> from stratosphere;
> FLINK-605 -> from stratosphere;
> FLINK-639 -> from stratosphere;
> FLINK-650 -> from stratosphere;
> FLINK-735 -> from stratosphere;
> FLINK-456 -> from stratosphere;
> FLINK-788 -> from stratosphere;
> FLINK-796 -> from stratosphere;
> FLINK-805 -> from stratosphere ;
> FLINK-867 -> from stratosphere;
> FLINK-879 -> from stratosphere;
> FLINK-1166 -> closed on github;
> FLINK-129

[jira] [Updated] (FLINK-5384) clean up jira issues

2017-01-16 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
h4. must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1293 -> outdated;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;

FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

FLINK-2319 -> closed on github, outdated;
FLINK-2399 -> closed on github, outdated;
FLINK-2428 -> closed on github, outdated;
FLINK-2472 -> closed on github, outdated;

h4. should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a colectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;

h4. far from done yet:
FLINK-1926
FLINK-3331

  was:
h4. must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;

FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

FLINK-2319 -> closed on github, outdated;
FLINK-2399 -> closed on github, outdated;
FLINK-2428 -> closed on github, outdated;
FLINK-2472 -> closed on github, outdated;

h4. should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a colectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;

h4. far from done yet:
FLINK-1926
FLINK-3331


> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Minor
>
> h4. must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> from stratosphere;
> FLINK-605 -> from stratosphere;
> FLINK-639 -> from stratosphere;
> FLINK-650 -> from stratosphere;
> FLINK-735 -> from stratosphere;
> FLINK-456 -> from stratosphere;
> FLINK-788 -> from stratosphere;
> FLINK-796 -> from stratosphere;
> FLINK-805 -> from stratosphere ;
> FLINK-867 -> from stratosphere;
> FLINK-879 -> from stratosphere;
> FLINK-1166 -> closed on github;
> FLINK-1293 -> outdated;
> FLINK-1946 -> closed on github;
> FLINK-2119 -> closed 

[jira] [Comment Edited] (FLINK-5345) IOManager failed to properly clean up temp file directory

2017-01-16 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15824116#comment-15824116
 ] 

Anton Solovev edited comment on FLINK-5345 at 1/16/17 3:02 PM:
---

[~StephanEwen] I see, I will assign the issue to you


was (Author: tonycox):
[~StephanEwen] I see, I will assign the issue on you

> IOManager failed to properly clean up temp file directory
> -
>
> Key: FLINK-5345
> URL: https://issues.apache.org/jira/browse/FLINK-5345
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.1.3
>Reporter: Robert Metzger
>Assignee: Stephan Ewen
>  Labels: simplex, starter
> Fix For: 1.2.0, 1.3.0
>
>
> While testing 1.1.3 RC3, I have the following message in my log:
> {code}
> 2016-12-15 14:46:05,450 INFO  
> org.apache.flink.streaming.runtime.tasks.StreamTask   - Timer service 
> is shutting down.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: control events generator (29/40) 
> (73915a232ba09e642f9dff92f8c8773a) switched from CANCELING to CANCELED.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Freeing task resources for Source: control events generator 
> (29/40) (73915a232ba09e642f9dff92f8c8773a).
> 2016-12-15 14:46:05,454 INFO  org.apache.flink.yarn.YarnTaskManager   
>   - Un-registering task and sending final execution state 
> CANCELED to JobManager for task Source: control events genera
> tor (73915a232ba09e642f9dff92f8c8773a)
> 2016-12-15 14:46:40,609 INFO  org.apache.flink.yarn.YarnTaskManagerRunner 
>   - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
> 2016-12-15 14:46:40,611 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-15 14:46:40,724 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink@10.240.0.34:33635] has failed, address is now gated for 
> [5000] ms.
>  Reason is: [Disassociated].
> 2016-12-15 14:46:40,808 ERROR 
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - IOManager 
> failed to properly clean up temp file directory: 
> /yarn/nm/usercache/robert/appcache/application_148129128
> 9979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5
> java.lang.IllegalArgumentException: 
> /yarn/nm/usercache/robert/appcache/application_1481291289979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5/62e14e1891fe1e334c921dfd19a32a84/StreamMap_11_24/dummy_state
>  does not exist
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1637)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown(IOManager.java:109)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.shutdown(IOManagerAsync.java:185)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$1.run(IOManagerAsync.java:105)
> {code}
> This was the last message logged from that machine. I suspect two threads are 
> trying to clean up the directories during shutdown?
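The suspected race above - two threads cleaning up the temp directory concurrently during shutdown - can be avoided by making the cleanup idempotent. A minimal, hypothetical Java sketch ({{TempDirCleaner}} is illustrative, not Flink's actual {{IOManager}}):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical helper showing an idempotent shutdown guard;
// not the actual Flink IOManager implementation.
class TempDirCleaner {
    private final AtomicBoolean shutdownStarted = new AtomicBoolean(false);
    private int cleanupRuns = 0;

    /** Runs cleanup at most once, even if called from several threads. */
    void shutdown() {
        // compareAndSet lets exactly one caller win the race.
        if (shutdownStarted.compareAndSet(false, true)) {
            cleanupRuns++; // stand-in for deleting the temp directory
        }
    }

    int getCleanupRuns() {
        return cleanupRuns;
    }
}

public class ShutdownGuardDemo {
    public static void main(String[] args) throws InterruptedException {
        TempDirCleaner cleaner = new TempDirCleaner();
        // Simulate a signal handler and a shutdown hook racing to clean up.
        Thread t1 = new Thread(cleaner::shutdown);
        Thread t2 = new Thread(cleaner::shutdown);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("cleanup ran " + cleaner.getCleanupRuns() + " time(s)");
    }
}
```

With the guard in place, the second caller simply returns instead of re-walking an already-deleted directory tree.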



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5345) IOManager failed to properly clean up temp file directory

2017-01-16 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15824116#comment-15824116
 ] 

Anton Solovev commented on FLINK-5345:
--

[~StephanEwen] I see, I will assign the issue to you

> IOManager failed to properly clean up temp file directory
> -
>
> Key: FLINK-5345
> URL: https://issues.apache.org/jira/browse/FLINK-5345
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.1.3
>Reporter: Robert Metzger
>Assignee: Anton Solovev
>  Labels: simplex, starter
> Fix For: 1.2.0, 1.3.0
>
>
> While testing 1.1.3 RC3, I have the following message in my log:
> {code}
> 2016-12-15 14:46:05,450 INFO  
> org.apache.flink.streaming.runtime.tasks.StreamTask   - Timer service 
> is shutting down.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: control events generator (29/40) 
> (73915a232ba09e642f9dff92f8c8773a) switched from CANCELING to CANCELED.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Freeing task resources for Source: control events generator 
> (29/40) (73915a232ba09e642f9dff92f8c8773a).
> 2016-12-15 14:46:05,454 INFO  org.apache.flink.yarn.YarnTaskManager   
>   - Un-registering task and sending final execution state 
> CANCELED to JobManager for task Source: control events genera
> tor (73915a232ba09e642f9dff92f8c8773a)
> 2016-12-15 14:46:40,609 INFO  org.apache.flink.yarn.YarnTaskManagerRunner 
>   - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
> 2016-12-15 14:46:40,611 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-15 14:46:40,724 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink@10.240.0.34:33635] has failed, address is now gated for 
> [5000] ms.
>  Reason is: [Disassociated].
> 2016-12-15 14:46:40,808 ERROR 
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - IOManager 
> failed to properly clean up temp file directory: 
> /yarn/nm/usercache/robert/appcache/application_148129128
> 9979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5
> java.lang.IllegalArgumentException: 
> /yarn/nm/usercache/robert/appcache/application_1481291289979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5/62e14e1891fe1e334c921dfd19a32a84/StreamMap_11_24/dummy_state
>  does not exist
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1637)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown(IOManager.java:109)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.shutdown(IOManagerAsync.java:185)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$1.run(IOManagerAsync.java:105)
> {code}
> This was the last message logged from that machine. I suspect two threads are 
> trying to clean up the directories during shutdown?





[jira] [Updated] (FLINK-5345) IOManager failed to properly clean up temp file directory

2017-01-16 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5345:
-
Assignee: Stephan Ewen  (was: Anton Solovev)

> IOManager failed to properly clean up temp file directory
> -
>
> Key: FLINK-5345
> URL: https://issues.apache.org/jira/browse/FLINK-5345
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.1.3
>Reporter: Robert Metzger
>Assignee: Stephan Ewen
>  Labels: simplex, starter
> Fix For: 1.2.0, 1.3.0
>
>
> While testing 1.1.3 RC3, I have the following message in my log:
> {code}
> 2016-12-15 14:46:05,450 INFO  
> org.apache.flink.streaming.runtime.tasks.StreamTask   - Timer service 
> is shutting down.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: control events generator (29/40) 
> (73915a232ba09e642f9dff92f8c8773a) switched from CANCELING to CANCELED.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Freeing task resources for Source: control events generator 
> (29/40) (73915a232ba09e642f9dff92f8c8773a).
> 2016-12-15 14:46:05,454 INFO  org.apache.flink.yarn.YarnTaskManager   
>   - Un-registering task and sending final execution state 
> CANCELED to JobManager for task Source: control events genera
> tor (73915a232ba09e642f9dff92f8c8773a)
> 2016-12-15 14:46:40,609 INFO  org.apache.flink.yarn.YarnTaskManagerRunner 
>   - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
> 2016-12-15 14:46:40,611 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-15 14:46:40,724 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink@10.240.0.34:33635] has failed, address is now gated for 
> [5000] ms.
>  Reason is: [Disassociated].
> 2016-12-15 14:46:40,808 ERROR 
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - IOManager 
> failed to properly clean up temp file directory: 
> /yarn/nm/usercache/robert/appcache/application_148129128
> 9979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5
> java.lang.IllegalArgumentException: 
> /yarn/nm/usercache/robert/appcache/application_1481291289979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5/62e14e1891fe1e334c921dfd19a32a84/StreamMap_11_24/dummy_state
>  does not exist
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1637)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown(IOManager.java:109)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.shutdown(IOManagerAsync.java:185)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$1.run(IOManagerAsync.java:105)
> {code}
> This was the last message logged from that machine. I suspect two threads are 
> trying to clean up the directories during shutdown?





[jira] [Commented] (FLINK-5345) IOManager failed to properly clean up temp file directory

2017-01-16 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15824013#comment-15824013
 ] 

Anton Solovev commented on FLINK-5345:
--

[~rmetzger] [~StephanEwen] How about using {{#deleteQuietly}} and logging a warning 
if a folder can't be deleted? Since it is a temporary directory, it will sooner or 
later be deleted by another thread anyway.
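The suggestion above - delete quietly and only log a warning on failure - can be sketched with the JDK alone (commons-io's {{FileUtils#deleteQuietly}} behaves similarly). This is an illustrative helper, not Flink's code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Illustrative stand-in for commons-io FileUtils.deleteQuietly:
// recursively deletes a directory, swallowing errors and returning
// false (so the caller can log a warning) instead of throwing.
public class QuietDelete {
    static boolean deleteQuietly(Path dir) {
        try (Stream<Path> walk = Files.walk(dir)) {
            // Delete children before parents (deepest paths first).
            walk.sorted(Comparator.reverseOrder())
                .forEach(p -> p.toFile().delete());
        } catch (IOException | SecurityException e) {
            return false; // another thread may already have removed it
        }
        return !Files.exists(dir);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("flink-io-demo");
        Files.createFile(tmp.resolve("dummy_state"));
        if (!deleteQuietly(tmp)) {
            System.err.println("WARN: could not fully delete " + tmp);
        }
        System.out.println("exists: " + Files.exists(tmp));
    }
}
```

The caller decides how loudly to complain; the helper itself never throws, which matches the "log a warning and move on" proposal.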


> IOManager failed to properly clean up temp file directory
> -
>
> Key: FLINK-5345
> URL: https://issues.apache.org/jira/browse/FLINK-5345
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.1.3
>Reporter: Robert Metzger
>Assignee: Anton Solovev
>  Labels: simplex, starter
> Fix For: 1.2.0, 1.3.0
>
>
> While testing 1.1.3 RC3, I have the following message in my log:
> {code}
> 2016-12-15 14:46:05,450 INFO  
> org.apache.flink.streaming.runtime.tasks.StreamTask   - Timer service 
> is shutting down.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: control events generator (29/40) 
> (73915a232ba09e642f9dff92f8c8773a) switched from CANCELING to CANCELED.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Freeing task resources for Source: control events generator 
> (29/40) (73915a232ba09e642f9dff92f8c8773a).
> 2016-12-15 14:46:05,454 INFO  org.apache.flink.yarn.YarnTaskManager   
>   - Un-registering task and sending final execution state 
> CANCELED to JobManager for task Source: control events genera
> tor (73915a232ba09e642f9dff92f8c8773a)
> 2016-12-15 14:46:40,609 INFO  org.apache.flink.yarn.YarnTaskManagerRunner 
>   - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
> 2016-12-15 14:46:40,611 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-15 14:46:40,724 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink@10.240.0.34:33635] has failed, address is now gated for 
> [5000] ms.
>  Reason is: [Disassociated].
> 2016-12-15 14:46:40,808 ERROR 
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - IOManager 
> failed to properly clean up temp file directory: 
> /yarn/nm/usercache/robert/appcache/application_148129128
> 9979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5
> java.lang.IllegalArgumentException: 
> /yarn/nm/usercache/robert/appcache/application_1481291289979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5/62e14e1891fe1e334c921dfd19a32a84/StreamMap_11_24/dummy_state
>  does not exist
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1637)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown(IOManager.java:109)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.shutdown(IOManagerAsync.java:185)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$1.run(IOManagerAsync.java:105)
> {code}
> This was the last message logged from that machine. I suspect two threads are 
> trying to clean up the directories during shutdown?





[jira] [Updated] (FLINK-5481) Cannot create Collection of Row

2017-01-13 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5481:
-
 Priority: Trivial  (was: Major)
Affects Version/s: 1.2.0

> Cannot create Collection of Row 
> 
>
> Key: FLINK-5481
> URL: https://issues.apache.org/jira/browse/FLINK-5481
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Trivial
>
> When we use {{ExecutionEnvironment#fromElements(X... data)}}, it takes the first 
> element of {{data}} to define the type. If the first Row in the collection has the 
> wrong number of fields (i.e. it contains nulls), the TypeExtractor returns a 
> GenericType instead of a RowTypeInfo.





[jira] [Created] (FLINK-5481) Cannot create Collection of Row

2017-01-13 Thread Anton Solovev (JIRA)
Anton Solovev created FLINK-5481:


 Summary: Cannot create Collection of Row 
 Key: FLINK-5481
 URL: https://issues.apache.org/jira/browse/FLINK-5481
 Project: Flink
  Issue Type: Bug
Reporter: Anton Solovev
Assignee: Anton Solovev


When we use {{ExecutionEnvironment#fromElements(X... data)}}, it takes the first 
element of {{data}} to define the type. If the first Row in the collection has the 
wrong number of fields (i.e. it contains nulls), the TypeExtractor returns a 
GenericType instead of a RowTypeInfo.
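The pitfall described - type inference keyed off only the first element - can be reproduced with a toy varargs method. This is plain Java with no Flink dependency; the names and return strings are illustrative, not Flink's actual TypeExtractor:

```java
import java.util.Arrays;

// Toy illustration of first-element-based type inference, mimicking the
// pitfall described for ExecutionEnvironment#fromElements: if the first
// element is not representative (e.g. a row containing nulls), the
// inferred type falls back to something generic.
public class FirstElementInference {
    @SafeVarargs
    static <X> String inferType(X... data) {
        X first = data[0]; // only the first element is inspected
        if (first instanceof Object[]) {
            Object[] row = (Object[]) first;
            // a null field makes the row's field types unrecoverable here
            if (Arrays.stream(row).noneMatch(v -> v == null)) {
                return "RowTypeInfo(arity=" + row.length + ")";
            }
        }
        return "GenericType";
    }

    public static void main(String[] args) {
        Object[] rowWithNull = {1, null};
        Object[] fullRow = {1, "a"};
        // The first element decides: a null in the first row forces the
        // generic fallback even though the second row is fully typed.
        System.out.println(inferType(rowWithNull, fullRow)); // GenericType
        System.out.println(inferType(fullRow, rowWithNull)); // RowTypeInfo(arity=2)
    }
}
```

Reordering the same data changes the inferred type, which is exactly why a null-bearing first Row degrades the whole collection.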





[jira] [Comment Edited] (FLINK-3849) Add FilterableTableSource interface and translation rule

2017-01-13 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15821676#comment-15821676
 ] 

Anton Solovev edited comment on FLINK-3849 at 1/13/17 11:41 AM:


Hi [~fhueske], so {{setPredicate(predicate: Expression)}} returns the unsupported 
expression. What if the predicate looks like {{id > 2 || (amount * 2) > anydata}} 
and only the {{id > 2}} part can be pushed into the source? Should we make sure 
beforehand that the source doesn't make any logical mistakes? And if so, how: maybe 
add another method that verifies the filtering, or try to set the predicate and let 
the source perform all the checks.


was (Author: tonycox):
Hi [~fhueske], so {{setPredicate(predicate: Expression)}} returns unsupported 
expression. What if predicate looks like {{id > 2 || (amount * 2) > anydata}} 
and {{id > 2}} part may be pushed into a source ? Should we make sure that a 
source wouldn't do any logic mistakes before? And how, add another method 
ensuring filtering or try to set predicate and a source make all checks?
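The partial push-down question can be made concrete: with a conjunction, a source can accept the conjuncts it understands and return the rest, and the planner keeps a residual Filter on top (a disjunction such as {{id > 2 || ...}} can only be pushed or retained as a whole, since dropping one branch would lose rows). A simplified, hypothetical Java sketch of this contract - the names follow the proposed interface but are not Flink's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of conjunct-level filter push-down; FilterableSource
// and setPredicate follow the proposed interface but are illustrative.
public class PushDownSketch {

    // A source that only understands predicates on the "id" column.
    static class FilterableSource {
        final List<String> pushed = new ArrayList<>();

        /** Accepts supported conjuncts and returns the unsupported remainder. */
        List<String> setPredicate(List<String> conjuncts) {
            List<String> residual = new ArrayList<>();
            for (String conjunct : conjuncts) {
                if (conjunct.startsWith("id ")) {
                    pushed.add(conjunct);    // evaluated inside the scan
                } else {
                    residual.add(conjunct);  // planner keeps a Filter on top
                }
            }
            return residual;
        }
    }

    public static void main(String[] args) {
        FilterableSource source = new FilterableSource();
        // Only whole conjuncts of an AND may be pushed individually.
        List<String> conjuncts = List.of("id > 2", "amount * 2 > anydata");
        List<String> residual = source.setPredicate(conjuncts);
        System.out.println("pushed=" + source.pushed + " residual=" + residual);
    }
}
```

Because the residual is re-applied by the planner, a source that wrongly claims a conjunct is the main correctness risk - which is why the comment asks whether the contract needs an extra verification step.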

> Add FilterableTableSource interface and translation rule
> 
>
> Key: FLINK-3849
> URL: https://issues.apache.org/jira/browse/FLINK-3849
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Anton Solovev
>
> Add a {{FilterableTableSource}} interface for {{TableSource}} implementations 
> which support filter push-down.
> The interface could look as follows
> {code}
> trait FilterableTableSource {
>   // returns unsupported predicate expression
>   def setPredicate(predicate: Expression): Expression
> }
> {code}
> In addition we need Calcite rules to push a predicate (or parts of it) into a 
> TableScan that refers to a {{FilterableTableSource}}. We might need to tweak 
> the cost model as well to push the optimizer in the right direction.





[jira] [Commented] (FLINK-3849) Add FilterableTableSource interface and translation rule

2017-01-13 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15821676#comment-15821676
 ] 

Anton Solovev commented on FLINK-3849:
--

Hi [~fhueske], so {{setPredicate(predicate: Expression)}} returns the unsupported 
expression. What if the predicate looks like {{id > 2 || (amount * 2) > anydata}} 
and only the {{id > 2}} part can be pushed into the source? Should we make sure 
beforehand that the source doesn't make any logical mistakes? And if so, how: add 
another method that verifies the filtering, or try to set the predicate and let the 
source perform all the checks?

> Add FilterableTableSource interface and translation rule
> 
>
> Key: FLINK-3849
> URL: https://issues.apache.org/jira/browse/FLINK-3849
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Anton Solovev
>
> Add a {{FilterableTableSource}} interface for {{TableSource}} implementations 
> which support filter push-down.
> The interface could look as follows
> {code}
> trait FilterableTableSource {
>   // returns unsupported predicate expression
>   def setPredicate(predicate: Expression): Expression
> }
> {code}
> In addition we need Calcite rules to push a predicate (or parts of it) into a 
> TableScan that refers to a {{FilterableTableSource}}. We might need to tweak 
> the cost model as well to push the optimizer in the right direction.





[jira] [Updated] (FLINK-5384) clean up jira issues

2017-01-11 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
h4. must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;

FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

FLINK-2319 -> closed on github, outdated;
FLINK-2399 -> closed on github, outdated;
FLINK-2428 -> closed on github, outdated;
FLINK-2472 -> closed on github, outdated;

h4. should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a colectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;

h4. far from done yet:
FLINK-1926
FLINK-3331

  was:
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create another issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;
FLINK-2319 -> closed on github, outdated;
FLINK-2399 -> closed on github, outdated;
FLINK-2428 -> closed on github, outdated;
FLINK-2472 -> closed on github, outdated;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;


> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Minor
>
> h4. must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> from stratosphere;
> FLINK-605 -> from stratosphere;
> FLINK-639 -> from stratosphere;
> FLINK-650 -> from stratosphere;
> FLINK-735 -> from stratosphere;
> FLINK-456 -> from stratosphere;
> FLINK-788 -> from stratosphere;
> FLINK-796 -> from stratosphere;
> FLINK-805 -> from stratosphere ;
> FLINK-867 -> from stratosphere;
> FLINK-879 -> from stratosphere;
> FLINK-1166 -> closed on github;
> FLINK-1946 -> closed on github;
> FLINK-2119 -> closed on github;
> FLINK-2157 -> closed on git

[jira] [Updated] (FLINK-5384) clean up jira issues

2017-01-11 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Assignee: (was: Anton Solovev)

> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Priority: Minor
>
> must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> from stratosphere;
> FLINK-605 -> from stratosphere;
> FLINK-639 -> from stratosphere;
> FLINK-650 -> from stratosphere;
> FLINK-735 -> from stratosphere;
> FLINK-456 -> from stratosphere;
> FLINK-788 -> from stratosphere;
> FLINK-796 -> from stratosphere;
> FLINK-805 -> from stratosphere ;
> FLINK-867 -> from stratosphere;
> FLINK-879 -> from stratosphere;
> FLINK-1166 -> closed on github;
> FLINK-1946 -> closed on github;
> FLINK-2119 -> closed on github;
> FLINK-2157 -> closed on github;
> FLINK-2220 -> closed on github;
> FLINK-2363 -> closed on github;
> FLINK-2480 -> closed on github;
> FLINK-2609 -> closed on github;
> FLINK-2823 -> closed on github;
> FLINK-3155 -> closed on github;
> FLINK-3331 -> closed on github;
> FLINK-3964 -> closed on github;
> FLINK-4653 -> closed on github;
> FLINK-4717 -> closed on github;
> FLINK-4829 -> closed on github;
> FLINK-5016 -> closed on github;
> should be discussed before:
> FLINK-1055 ;
> FLINK-1098 -> create another issue to add a collectEach();
> FLINK-1100 ;
> FLINK-1146 ;
> FLINK-1335 -> maybe rename?;
> FLINK-1439 ;
> FLINK-1447 -> firefox problem?;
> FLINK-1521 -> closed on github;
> FLINK-1538 -> gsoc2015, is it solved?;
> FLINK-1541 -> gsoc2015, is it solved?;
> FLINK-1723 -> almost done? ;
> FLINK-1814 ;
> FLINK-1858 -> is QA bot deleted?;
> FLINK-1926 -> all subtasks done;
> FLINK-2319 -> closed on github, outdated;
> FLINK-2399 -> closed on github, outdated;
> FLINK-2428 -> closed on github, outdated;
> FLINK-2472 -> closed on github, outdated;
> FLINK-2023 -> does not block Scala Graph API;
> FLINK-2032 -> all subtasks done;
> FLINK-2108 -> almost done? ;
> FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
> FLINK-3109 -> its PR is stuck;
> FLINK-3154 -> must be addressed as part of a bigger initiative;
> FLINK-3297 -> solved as third party lib;
> FLINK-4042 -> outdated;
> FLINK-4313 -> outdated;
> FLINK-4760 -> fixed?
> FLINK-5282 -> closed on github;





[jira] [Assigned] (FLINK-5384) clean up jira issues

2017-01-11 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev reassigned FLINK-5384:


Assignee: Anton Solovev

> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Minor
>
> must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> from stratosphere;
> FLINK-605 -> from stratosphere;
> FLINK-639 -> from stratosphere;
> FLINK-650 -> from stratosphere;
> FLINK-735 -> from stratosphere;
> FLINK-456 -> from stratosphere;
> FLINK-788 -> from stratosphere;
> FLINK-796 -> from stratosphere;
> FLINK-805 -> from stratosphere ;
> FLINK-867 -> from stratosphere;
> FLINK-879 -> from stratosphere;
> FLINK-1166 -> closed on github;
> FLINK-1946 -> closed on github;
> FLINK-2119 -> closed on github;
> FLINK-2157 -> closed on github;
> FLINK-2220 -> closed on github;
> FLINK-2363 -> closed on github;
> FLINK-2480 -> closed on github;
> FLINK-2609 -> closed on github;
> FLINK-2823 -> closed on github;
> FLINK-3155 -> closed on github;
> FLINK-3331 -> closed on github;
> FLINK-3964 -> closed on github;
> FLINK-4653 -> closed on github;
> FLINK-4717 -> closed on github;
> FLINK-4829 -> closed on github;
> FLINK-5016 -> closed on github;
> should be discussed before:
> FLINK-1055 ;
> FLINK-1098 -> create another issue to add a collectEach();
> FLINK-1100 ;
> FLINK-1146 ;
> FLINK-1335 -> maybe rename?;
> FLINK-1439 ;
> FLINK-1447 -> firefox problem?;
> FLINK-1521 -> closed on github;
> FLINK-1538 -> gsoc2015, is it solved?;
> FLINK-1541 -> gsoc2015, is it solved?;
> FLINK-1723 -> almost done? ;
> FLINK-1814 ;
> FLINK-1858 -> is QA bot deleted?;
> FLINK-1926 -> all subtasks done;
> FLINK-2319 -> closed on github, outdated;
> FLINK-2399 -> closed on github, outdated;
> FLINK-2428 -> closed on github, outdated;
> FLINK-2472 -> closed on github, outdated;
> FLINK-2023 -> does not block Scala Graph API;
> FLINK-2032 -> all subtasks done;
> FLINK-2108 -> almost done? ;
> FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
> FLINK-3109 -> its PR is stuck;
> FLINK-3154 -> must be addressed as part of a bigger initiative;
> FLINK-3297 -> solved as third party lib;
> FLINK-4042 -> outdated;
> FLINK-4313 -> outdated;
> FLINK-4760 -> fixed?
> FLINK-5282 -> closed on github;





[jira] [Assigned] (FLINK-5431) time format for akka status

2017-01-09 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev reassigned FLINK-5431:


Assignee: Anton Solovev

> time format for akka status
> ---
>
> Key: FLINK-5431
> URL: https://issues.apache.org/jira/browse/FLINK-5431
> Project: Flink
>  Issue Type: Improvement
>Reporter: Alexey Diomin
>Assignee: Anton Solovev
>Priority: Minor
>
> In ExecutionGraphMessages we have code
> {code}
> private val DATE_FORMATTER: SimpleDateFormat = new 
> SimpleDateFormat("MM/dd/yyyy HH:mm:ss")
> {code}
> But it sometimes causes confusion when the main logger is configured with 
> "dd/MM/yyyy".
> We should make this format configurable, or keep only "HH:mm:ss", to prevent 
> misreading the output date-time.
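To illustrate why the ambiguity matters (independent of Flink's internals), the snippet below formats one instant with both full-date patterns and with the proposed time-only pattern; the two full-date renderings cannot be told apart by a reader who does not know which convention is in use.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

class DateFormatDemo {
    // Formats the same instant with both ambiguous patterns and the
    // unambiguous time-only pattern.
    static String[] render(long epochMillis) {
        String[] patterns = {"MM/dd/yyyy HH:mm:ss", "dd/MM/yyyy HH:mm:ss", "HH:mm:ss"};
        String[] out = new String[patterns.length];
        for (int i = 0; i < patterns.length; i++) {
            SimpleDateFormat f = new SimpleDateFormat(patterns[i], Locale.US);
            f.setTimeZone(TimeZone.getTimeZone("UTC"));
            out[i] = f.format(new Date(epochMillis));
        }
        return out;
    }

    public static void main(String[] args) {
        // 2017-01-09 00:00:00 UTC renders as 01/09/2017 under one pattern and
        // 09/01/2017 under the other: Jan 9 and Sep 1 look identical.
        for (String s : render(1483920000000L)) {
            System.out.println(s);
        }
    }
}
```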





[jira] [Commented] (FLINK-5336) Make Path immutable

2017-01-09 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15811585#comment-15811585
 ] 

Anton Solovev commented on FLINK-5336:
--

My approach: https://github.com/tonycox/flink/tree/immutablePath. I think it's 
still relevant, but it must be reworked against the 2.0 branch.

> Make Path immutable
> ---
>
> Key: FLINK-5336
> URL: https://issues.apache.org/jira/browse/FLINK-5336
> Project: Flink
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Stephan Ewen
>Assignee: Anton Solovev
> Fix For: 2.0.0
>
>
> The {{Path}} class is currently mutable to support the {{IOReadableWritable}} 
> serialization. Since that serialization is not used any more, I suggest 
> dropping that interface from Path and making the Path's URI final.
> Being immutable, we can store configured paths properly without the chance of 
> them being mutated as side effects.
> Many parts of the code already assume that the Path is immutable, which makes 
> them susceptible to subtle errors.
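As a rough sketch of the proposed direction (a hypothetical class, not Flink's actual {{Path}}): an immutable path holds a final, normalized URI, has no setters, and returns new instances instead of mutating in place.

```java
import java.net.URI;

// Sketch of an immutable path wrapper in the spirit of the proposal:
// the URI is final, there are no mutators, and "modifications" return
// new instances, so a stored path cannot change behind the caller's back.
final class ImmutablePath {
    private final URI uri;

    ImmutablePath(String path) {
        this.uri = URI.create(path).normalize();
    }

    private ImmutablePath(URI uri) { this.uri = uri; }

    // Last component of the path, e.g. "data" for "file:///tmp/data".
    String getName() {
        String p = uri.getPath();
        return p.substring(p.lastIndexOf('/') + 1);
    }

    // Appending a child returns a new object instead of mutating this one.
    ImmutablePath child(String name) {
        return new ImmutablePath(URI.create(uri.toString() + "/" + name).normalize());
    }

    @Override
    public String toString() { return uri.toString(); }
}
```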





[jira] [Updated] (FLINK-5384) clean up jira issues

2017-01-06 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create another issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;
FLINK-2319 -> closed on github;
FLINK-2399 -> closed on github;
FLINK-2428 -> closed on github;
FLINK-2472 -> closed on github;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;

  was:
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2319 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2399 -> closed on github;
FLINK-2428 -> closed on github;
FLINK-2472 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create another issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;


> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Minor
>
> must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> from stratosphere;
> FLINK-605 -> from stratosphere;
> FLINK-639 -> from stratosphere;
> FLINK-650 -> from stratosphere;
> FLINK-735 -> from stratosphere;
> FLINK-456 -> from stratosphere;
> FLINK-788 -> from stratosphere;
> FLINK-796 -> from stratosphere;
> FLINK-805 -> from stratosphere ;
> FLINK-867 -> from stratosphere;
> FLINK-879 -> from stratosphere;
> FLINK-1166 -> closed on github;
> FLINK-1946 -> closed on github;
> FLINK-2119 -> closed on github;
> FLINK-2157 -> closed on github;
> FLINK-2220 -> closed on github;
> FLINK-2363 -> closed on github;
>

[jira] [Updated] (FLINK-5384) clean up jira issues

2017-01-06 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create another issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;
FLINK-2319 -> closed on github, outdated;
FLINK-2399 -> closed on github, outdated;
FLINK-2428 -> closed on github, outdated;
FLINK-2472 -> closed on github, outdated;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;

  was:
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create another issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;
FLINK-2319 -> closed on github;
FLINK-2399 -> closed on github;
FLINK-2428 -> closed on github;
FLINK-2472 -> closed on github;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;


> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Minor
>
> must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> from stratosphere;
> FLINK-605 -> from stratosphere;
> FLINK-639 -> from stratosphere;
> FLINK-650 -> from stratosphere;
> FLINK-735 -> from stratosphere;
> FLINK-456 -> from stratosphere;
> FLINK-788 -> from stratosphere;
> FLINK-796 -> from stratosphere;
> FLINK-805 -> from stratosphere ;
> FLINK-867 -> from stratosphere;
> FLINK-879 -> from stratosphere;
> FLINK-1166 -> closed on github;
> FLINK-1946 -> closed on github;
> FLINK-2119 -> closed on github;
> FLINK-2157 -> closed on github;
> FLINK-2220 -> closed on git

[jira] [Commented] (FLINK-5388) Remove private access of edges and vertices of Gelly Graph class

2017-01-04 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15798384#comment-15798384
 ] 

Anton Solovev commented on FLINK-5388:
--

Once committers have reviewed the pull request, it will either be merged into 
an upcoming version branch or declined. For now, to use the changes in your 
project you can pull the {{privateGellyGraph}} branch from 
https://github.com/tonycox/

> Remove private access of edges and vertices of Gelly Graph class
> 
>
> Key: FLINK-5388
> URL: https://issues.apache.org/jira/browse/FLINK-5388
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.3
> Environment: Java
>Reporter: wouter ligtenberg
>Assignee: Anton Solovev
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> If you want to build a special kind of Graph with custom edge types or 
> additional methods on top of Gelly, you need to be able to extend the Graph 
> class. Currently that's not possible because the constructor is private. I 
> don't know what effect this has on other methods or the scale of the project, 
> but it is something I ran into in my project.
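A small illustration of what the change enables, with simplified stand-in classes rather than Gelly's real ones: once the constructor is protected (or public) instead of private, a subclass can layer its own methods on top of the base graph.

```java
import java.util.List;

// Simplified stand-ins for Gelly's edge/graph types, for illustration only.
class Edge<K, EV> {
    final K source, target;
    final EV value;
    Edge(K source, K target, EV value) {
        this.source = source; this.target = target; this.value = value;
    }
}

class BaseGraph<K, EV> {
    protected final List<Edge<K, EV>> edges;

    // With a private constructor, the subclass below could not compile;
    // protected access is what the issue asks for.
    protected BaseGraph(List<Edge<K, EV>> edges) { this.edges = edges; }

    int numberOfEdges() { return edges.size(); }
}

// The "special kind of Graph" from the report: extra methods on top.
class WeightedGraph extends BaseGraph<Long, Double> {
    WeightedGraph(List<Edge<Long, Double>> edges) { super(edges); }

    double totalWeight() {
        return edges.stream().mapToDouble(e -> e.value).sum();
    }
}
```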





[jira] [Updated] (FLINK-5389) Fails #testAnswerFailureWhenSavepointReadFails

2016-12-30 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5389:
-
Affects Version/s: 1.2.0

> Fails #testAnswerFailureWhenSavepointReadFails
> --
>
> Key: FLINK-5389
> URL: https://issues.apache.org/jira/browse/FLINK-5389
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: macOS sierra
>Reporter: Anton Solovev
>  Labels: test
>
> {{testAnswerFailureWhenSavepointReadFails}} fails in {{JobSubmitTest}} when 
> {{timeout}} is set to 5000 ms, but it passes when set to 6000 ms.





[jira] [Updated] (FLINK-5402) Fails AkkaRpcServiceTest#testTerminationFuture

2016-12-30 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5402:
-
Environment: macOS

> Fails AkkaRpcServiceTest#testTerminationFuture
> --
>
> Key: FLINK-5402
> URL: https://issues.apache.org/jira/browse/FLINK-5402
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: macOS
>Reporter: Anton Solovev
>
> {code}
> testTerminationFuture(org.apache.flink.runtime.rpc.akka.AkkaRpcServiceTest)  
> Time elapsed: 1.013 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 1000 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method)
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
>   at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>   at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>   at 
> scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>   at scala.concurrent.Await$.result(package.scala:107)
>   at akka.remote.Remoting.start(Remoting.scala:179)
>   at 
> akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:184)
>   at akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:620)
>   at akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:617)
>   at akka.actor.ActorSystemImpl._start(ActorSystem.scala:617)
>   at akka.actor.ActorSystemImpl.start(ActorSystem.scala:634)
>   at akka.actor.ActorSystem$.apply(ActorSystem.scala:142)
>   at akka.actor.ActorSystem$.apply(ActorSystem.scala:119)
>   at akka.actor.ActorSystem$.create(ActorSystem.scala:67)
>   at 
> org.apache.flink.runtime.akka.AkkaUtils$.createActorSystem(AkkaUtils.scala:104)
>   at 
> org.apache.flink.runtime.akka.AkkaUtils$.createDefaultActorSystem(AkkaUtils.scala:114)
>   at 
> org.apache.flink.runtime.akka.AkkaUtils.createDefaultActorSystem(AkkaUtils.scala)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceTest.testTerminationFuture(AkkaRpcServiceTest.java:134)
> {code} in org.apache.flink.runtime.rpc.akka.AkkaRpcServiceTest while testing 
> current master 1.2.0 branch 





[jira] [Updated] (FLINK-5401) Fails ConnectionUtilsTest#testReturnLocalHostAddressUsingHeuristics

2016-12-30 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5401:
-
Environment: macOS

> Fails ConnectionUtilsTest#testReturnLocalHostAddressUsingHeuristics
> ---
>
> Key: FLINK-5401
> URL: https://issues.apache.org/jira/browse/FLINK-5401
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.1.3
> Environment: macOS
>Reporter: Anton Solovev
>
> {code}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.flink.runtime.net.ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics(ConnectionUtilsTest.java:45)
> {code}
> in org.apache.flink.runtime.net.ConnectionUtilsTest while testing 1.1.3 RC2





[jira] [Created] (FLINK-5402) Fails AkkaRpcServiceTest#testTerminationFuture

2016-12-30 Thread Anton Solovev (JIRA)
Anton Solovev created FLINK-5402:


 Summary: Fails AkkaRpcServiceTest#testTerminationFuture
 Key: FLINK-5402
 URL: https://issues.apache.org/jira/browse/FLINK-5402
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Anton Solovev


{code}
testTerminationFuture(org.apache.flink.runtime.rpc.akka.AkkaRpcServiceTest)  
Time elapsed: 1.013 sec  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 1000 
milliseconds
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at 
scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
at 
scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
at 
scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:107)
at akka.remote.Remoting.start(Remoting.scala:179)
at 
akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:184)
at akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:620)
at akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:617)
at akka.actor.ActorSystemImpl._start(ActorSystem.scala:617)
at akka.actor.ActorSystemImpl.start(ActorSystem.scala:634)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:142)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:119)
at akka.actor.ActorSystem$.create(ActorSystem.scala:67)
at 
org.apache.flink.runtime.akka.AkkaUtils$.createActorSystem(AkkaUtils.scala:104)
at 
org.apache.flink.runtime.akka.AkkaUtils$.createDefaultActorSystem(AkkaUtils.scala:114)
at 
org.apache.flink.runtime.akka.AkkaUtils.createDefaultActorSystem(AkkaUtils.scala)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcServiceTest.testTerminationFuture(AkkaRpcServiceTest.java:134)
{code} in org.apache.flink.runtime.rpc.akka.AkkaRpcServiceTest while testing 
current master 1.2.0 branch 





[jira] [Created] (FLINK-5401) Fails ConnectionUtilsTest#testReturnLocalHostAddressUsingHeuristics

2016-12-30 Thread Anton Solovev (JIRA)
Anton Solovev created FLINK-5401:


 Summary: Fails 
ConnectionUtilsTest#testReturnLocalHostAddressUsingHeuristics
 Key: FLINK-5401
 URL: https://issues.apache.org/jira/browse/FLINK-5401
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Anton Solovev


{code}
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.flink.runtime.net.ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics(ConnectionUtilsTest.java:45)
{code}

in org.apache.flink.runtime.net.ConnectionUtilsTest while testing 1.1.3 RC2





[jira] [Updated] (FLINK-5400) Add accessor to folding states in RuntimeContext

2016-12-30 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5400:
-
Assignee: Xiaogang Shi

> Add accessor to folding states in RuntimeContext
> 
>
> Key: FLINK-5400
> URL: https://issues.apache.org/jira/browse/FLINK-5400
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Xiaogang Shi
>Assignee: Xiaogang Shi
>
> Currently {{RuntimeContext}} does not provide accessors to folding states, so 
> users cannot use folding states in their rich functions. I think we should 
> provide the missing accessor.
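A toy model of the missing accessor (names and signatures are illustrative, not Flink's actual state API): a folding state folds each added element into an accumulator, and the runtime context hands out one state instance per name, mirroring the style of the existing value/list state accessors.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

// Toy folding state: keeps an accumulator and folds new elements into it.
// Flink's real FoldingState is keyed and checkpointed; this only sketches
// the accessor shape.
class FoldingState<T, ACC> {
    private ACC acc;
    private final BiFunction<ACC, T, ACC> foldFunction;

    FoldingState(ACC initial, BiFunction<ACC, T, ACC> foldFunction) {
        this.acc = initial;
        this.foldFunction = foldFunction;
    }

    void add(T value) { acc = foldFunction.apply(acc, value); }
    ACC get() { return acc; }
}

class MiniRuntimeContext {
    private final Map<String, FoldingState<?, ?>> states = new HashMap<>();

    // The missing accessor: create the state on first use, return the
    // existing instance on subsequent calls.
    @SuppressWarnings("unchecked")
    <T, ACC> FoldingState<T, ACC> getFoldingState(
            String name, ACC initial, BiFunction<ACC, T, ACC> foldFunction) {
        return (FoldingState<T, ACC>) states.computeIfAbsent(
                name, n -> new FoldingState<>(initial, foldFunction));
    }
}
```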





[jira] [Commented] (FLINK-3849) Add FilterableTableSource interface and translation rule

2016-12-29 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15785078#comment-15785078
 ] 

Anton Solovev commented on FLINK-3849:
--

Yes, one rule for pushing them together. But I've just realized what your idea 
is; I thought it wouldn't match after one of the rules is applied.

> Add FilterableTableSource interface and translation rule
> 
>
> Key: FLINK-3849
> URL: https://issues.apache.org/jira/browse/FLINK-3849
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Anton Solovev
>
> Add a {{FilterableTableSource}} interface for {{TableSource}} implementations 
> which support filter push-down.
> The interface could look as follows
> {code}
> trait FilterableTableSource {
>   // returns unsupported predicate expression
>   def setPredicate(predicate: Expression): Expression
> }
> {code}
> In addition we need Calcite rules to push a predicate (or parts of it) into a 
> TableScan that refers to a {{FilterableTableSource}}. We might need to tweak 
> the cost model as well to push the optimizer in the right direction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5345) IOManager failed to properly clean up temp file directory

2016-12-29 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15785037#comment-15785037
 ] 

Anton Solovev commented on FLINK-5345:
--

1.1.3 RC3? I only see 1.1.3 RC2 in the GitHub branches.

> IOManager failed to properly clean up temp file directory
> -
>
> Key: FLINK-5345
> URL: https://issues.apache.org/jira/browse/FLINK-5345
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.1.3
>Reporter: Robert Metzger
>Assignee: Anton Solovev
>  Labels: simplex, starter
>
> While testing 1.1.3 RC3, I have the following message in my log:
> {code}
> 2016-12-15 14:46:05,450 INFO  
> org.apache.flink.streaming.runtime.tasks.StreamTask   - Timer service 
> is shutting down.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: control events generator (29/40) 
> (73915a232ba09e642f9dff92f8c8773a) switched from CANCELING to CANCELED.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Freeing task resources for Source: control events generator 
> (29/40) (73915a232ba09e642f9dff92f8c8773a).
> 2016-12-15 14:46:05,454 INFO  org.apache.flink.yarn.YarnTaskManager   
>   - Un-registering task and sending final execution state 
> CANCELED to JobManager for task Source: control events genera
> tor (73915a232ba09e642f9dff92f8c8773a)
> 2016-12-15 14:46:40,609 INFO  org.apache.flink.yarn.YarnTaskManagerRunner 
>   - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
> 2016-12-15 14:46:40,611 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-15 14:46:40,724 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink@10.240.0.34:33635] has failed, address is now gated for 
> [5000] ms.
>  Reason is: [Disassociated].
> 2016-12-15 14:46:40,808 ERROR 
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - IOManager 
> failed to properly clean up temp file directory: 
> /yarn/nm/usercache/robert/appcache/application_148129128
> 9979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5
> java.lang.IllegalArgumentException: 
> /yarn/nm/usercache/robert/appcache/application_1481291289979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5/62e14e1891fe1e334c921dfd19a32a84/StreamMap_11_24/dummy_state
>  does not exist
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1637)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown(IOManager.java:109)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.shutdown(IOManagerAsync.java:185)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$1.run(IOManagerAsync.java:105)
> {code}
> This was the last message logged from that machine. I suspect two threads are 
> trying to clean up the directories during shutdown?
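If two shutdown paths do race on the same temp directory, one defensive option is a recursive delete that treats already-missing entries as success instead of failing the way `FileUtils.cleanDirectory` does on a vanished file. A minimal sketch, assuming plain `java.nio` and no Flink internals:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Shutdown-safe recursive delete: if another shutdown path has already removed
// the directory, or removes entries concurrently, missing files are treated as
// success rather than aborting the whole cleanup.
class SafeCleanup {
    static void deleteQuietly(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return; // already cleaned up elsewhere
        }
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder()) // children before parents
                .forEach(p -> {
                    try {
                        Files.deleteIfExists(p); // tolerates concurrent removal
                    } catch (IOException ignored) {
                        // best effort during shutdown
                    }
                });
        }
    }
}
```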



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-5345) IOManager failed to properly clean up temp file directory

2016-12-29 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev reassigned FLINK-5345:


Assignee: Anton Solovev

> IOManager failed to properly clean up temp file directory
> -
>
> Key: FLINK-5345
> URL: https://issues.apache.org/jira/browse/FLINK-5345
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.1.3
>Reporter: Robert Metzger
>Assignee: Anton Solovev
>  Labels: simplex, starter
>
> While testing 1.1.3 RC3, I have the following message in my log:
> {code}
> 2016-12-15 14:46:05,450 INFO  
> org.apache.flink.streaming.runtime.tasks.StreamTask   - Timer service 
> is shutting down.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: control events generator (29/40) 
> (73915a232ba09e642f9dff92f8c8773a) switched from CANCELING to CANCELED.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Freeing task resources for Source: control events generator 
> (29/40) (73915a232ba09e642f9dff92f8c8773a).
> 2016-12-15 14:46:05,454 INFO  org.apache.flink.yarn.YarnTaskManager   
>   - Un-registering task and sending final execution state 
> CANCELED to JobManager for task Source: control events genera
> tor (73915a232ba09e642f9dff92f8c8773a)
> 2016-12-15 14:46:40,609 INFO  org.apache.flink.yarn.YarnTaskManagerRunner 
>   - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
> 2016-12-15 14:46:40,611 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-15 14:46:40,724 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink@10.240.0.34:33635] has failed, address is now gated for 
> [5000] ms.
>  Reason is: [Disassociated].
> 2016-12-15 14:46:40,808 ERROR 
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - IOManager 
> failed to properly clean up temp file directory: 
> /yarn/nm/usercache/robert/appcache/application_148129128
> 9979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5
> java.lang.IllegalArgumentException: 
> /yarn/nm/usercache/robert/appcache/application_1481291289979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5/62e14e1891fe1e334c921dfd19a32a84/StreamMap_11_24/dummy_state
>  does not exist
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1637)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown(IOManager.java:109)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.shutdown(IOManagerAsync.java:185)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$1.run(IOManagerAsync.java:105)
> {code}
> This was the last message logged from that machine. I suspect two threads are 
> trying to clean up the directories during shutdown?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3849) Add FilterableTableSource interface and translation rule

2016-12-26 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15778448#comment-15778448
 ] 

Anton Solovev commented on FLINK-3849:
--

I think we need to create a PushFilterProjectRule to compose these rules 
together.

> Add FilterableTableSource interface and translation rule
> 
>
> Key: FLINK-3849
> URL: https://issues.apache.org/jira/browse/FLINK-3849
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Anton Solovev
>
> Add a {{FilterableTableSource}} interface for {{TableSource}} implementations 
> which support filter push-down.
> The interface could look as follows
> {code}
> trait FilterableTableSource {
>   // returns unsupported predicate expression
>   def setPredicate(predicate: Expression): Expression
> }
> {code}
> In addition we need Calcite rules to push a predicate (or parts of it) into a 
> TableScan that refers to a {{FilterableTableSource}}. We might need to tweak 
> the cost model as well to push the optimizer in the right direction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (FLINK-3849) Add FilterableTableSource interface and translation rule

2016-12-26 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15778448#comment-15778448
 ] 

Anton Solovev edited comment on FLINK-3849 at 12/26/16 2:53 PM:


I think we need to create a {{PushFilterProjectRule}} to compose these 
rules together.


was (Author: tonycox):
I think there is needed to create PushFilterProjectRule to compose these rules 
together

> Add FilterableTableSource interface and translation rule
> 
>
> Key: FLINK-3849
> URL: https://issues.apache.org/jira/browse/FLINK-3849
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Anton Solovev
>
> Add a {{FilterableTableSource}} interface for {{TableSource}} implementations 
> which support filter push-down.
> The interface could look as follows
> {code}
> trait FilterableTableSource {
>   // returns unsupported predicate expression
>   def setPredicate(predicate: Expression): Expression
> }
> {code}
> In addition we need Calcite rules to push a predicate (or parts of it) into a 
> TableScan that refers to a {{FilterableTableSource}}. We might need to tweak 
> the cost model as well to push the optimizer in the right direction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5384) clean up jira issues

2016-12-25 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2319 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2399 -> closed on github;
FLINK-2428 -> closed on github;
FLINK-2472 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create another issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4042 -> outdated;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?
FLINK-5282 -> closed on github;

  was:
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2319 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2399 -> closed on github;
FLINK-2428 -> closed on github;
FLINK-2472 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a colectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?

resolved^
FLINK-5282


> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Minor
>
> must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> from stratosphere;
> FLINK-605 -> from stratosphere;
> FLINK-639 -> from stratosphere;
> FLINK-650 -> from stratosphere;
> FLINK-735 -> from stratosphere;
> FLINK-456 -> from stratosphere;
> FLINK-788 -> from stratosphere;
> FLINK-796 -> from stratosphere;
> FLINK-805 -> from stratosphere ;
> FLINK-867 -> from stratosphere;
> FLINK-879 -> from stratosphere;
> FLINK-1166 -> closed on github;
> FLINK-1946 -> closed on github;
> FLINK-2119 -> closed on github;
> FLINK-2157 -> closed on github;
> FLINK-2220 -> closed on github;
> FLINK-2319 -> closed on github;
> FLINK-2363 -> closed on github;
>

[jira] [Closed] (FLINK-4042) TwitterStream example does not work

2016-12-25 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev closed FLINK-4042.

Resolution: Won't Fix

outdated

> TwitterStream example does not work
> ---
>
> Key: FLINK-4042
> URL: https://issues.apache.org/jira/browse/FLINK-4042
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.0.3
>Reporter: Till Rohrmann
>Assignee: Anton Solovev
>Priority: Minor
>
> The {{TwitterStream}} does not work with version 1.0.3 because the 
> {{flink-connector-twitter}} is missing the following dependencies: 
> {{org.apache.httpcomponents:httpclient}} and 
> {{org.apache.httpcomponents:httpcore}} 
> Adding these lines to the {{pom.xml}} should fix the problem.
> {code}
> <dependency>
>   <groupId>org.apache.httpcomponents</groupId>
>   <artifactId>httpclient</artifactId>
> </dependency>
> <dependency>
>   <groupId>org.apache.httpcomponents</groupId>
>   <artifactId>httpcore</artifactId>
> </dependency>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5389) Fails #testAnswerFailureWhenSavepointReadFails

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5389:
-
Description: {{testAnswerFailureWhenSavepointReadFails}} fails in 
{{JobSubmitTest}} when {{timeout}} is set to 5000 ms, but passes when it is set to 6000 ms  
(was: {{testAnswerFailureWhenSavepointReadFails}} fails in {{JobSubmitTest}} 
when  {{timeout]] is set to 5000ms, but when 6000ms it pass)

> Fails #testAnswerFailureWhenSavepointReadFails
> --
>
> Key: FLINK-5389
> URL: https://issues.apache.org/jira/browse/FLINK-5389
> Project: Flink
>  Issue Type: Bug
> Environment: macOS sierra
>Reporter: Anton Solovev
>  Labels: test
>
> {{testAnswerFailureWhenSavepointReadFails}} fails in {{JobSubmitTest}} when 
> {{timeout}} is set to 5000 ms, but passes when it is set to 6000 ms



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5389) Fails #testAnswerFailureWhenSavepointReadFails

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5389:
-
Environment: macOS sierra
 Labels: test  (was: )

> Fails #testAnswerFailureWhenSavepointReadFails
> --
>
> Key: FLINK-5389
> URL: https://issues.apache.org/jira/browse/FLINK-5389
> Project: Flink
>  Issue Type: Bug
> Environment: macOS sierra
>Reporter: Anton Solovev
>  Labels: test
>
> {{testAnswerFailureWhenSavepointReadFails}} fails in {{JobSubmitTest}} when 
> {{timeout}} is set to 5000 ms, but passes when it is set to 6000 ms



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5389) Fails #testAnswerFailureWhenSavepointReadFails

2016-12-23 Thread Anton Solovev (JIRA)
Anton Solovev created FLINK-5389:


 Summary: Fails #testAnswerFailureWhenSavepointReadFails
 Key: FLINK-5389
 URL: https://issues.apache.org/jira/browse/FLINK-5389
 Project: Flink
  Issue Type: Bug
Reporter: Anton Solovev


{{testAnswerFailureWhenSavepointReadFails}} fails in {{JobSubmitTest}} when 
{{timeout}} is set to 5000 ms, but passes when it is set to 6000 ms



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-5388) Remove private access of edges and vertices of Gelly Graph class

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev reassigned FLINK-5388:


Assignee: Anton Solovev

> Remove private access of edges and vertices of Gelly Graph class
> 
>
> Key: FLINK-5388
> URL: https://issues.apache.org/jira/browse/FLINK-5388
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.3
> Environment: Java
>Reporter: wouter ligtenberg
>Assignee: Anton Solovev
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> If you want to build a special kind of Graph with custom edge types or 
> additional methods on top of Gelly, you need to be able to extend the Graph 
> class. Currently that's not possible because the constructor is private. I 
> don't know what effect this has on other methods or the scope of the project; 
> it was just something that I ran into in my project.
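The request amounts to loosening constructor visibility so the class can be subclassed. A stand-in sketch (the `Graph` below is illustrative only, not Gelly's actual Graph class):

```java
import java.util.List;

// Stand-in sketch of the requested change: a protected (rather than private)
// constructor lets users specialize the graph type with their own methods.
class Graph<K, VV, EV> {
    private final List<K> vertexIds;

    protected Graph(List<K> vertexIds) { // protected instead of private
        this.vertexIds = vertexIds;
    }

    int vertexCount() {
        return vertexIds.size();
    }
}

// Once the constructor is visible to subclasses, a specialized graph
// becomes possible.
class LabeledGraph<K, VV, EV> extends Graph<K, VV, EV> {
    LabeledGraph(List<K> vertexIds) {
        super(vertexIds);
    }
}
```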



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5388) Remove private access of edges and vertices of Gelly Graph class

2016-12-23 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15773126#comment-15773126
 ] 

Anton Solovev commented on FLINK-5388:
--

[~otherwise777] could you provide an example of what you need this for?

> Remove private access of edges and vertices of Gelly Graph class
> 
>
> Key: FLINK-5388
> URL: https://issues.apache.org/jira/browse/FLINK-5388
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.3
> Environment: Java
>Reporter: wouter ligtenberg
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> If you want to build a special kind of Graph with custom edge types or 
> additional methods on top of Gelly, you need to be able to extend the Graph 
> class. Currently that's not possible because the constructor is private. I 
> don't know what effect this has on other methods or the scope of the project; 
> it was just something that I ran into in my project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-4313) Inconsistent code for Key/Value in the CheckpointCoordinator

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev closed FLINK-4313.

Resolution: Won't Fix

outdated

> Inconsistent code for Key/Value in the CheckpointCoordinator
> 
>
> Key: FLINK-4313
> URL: https://issues.apache.org/jira/browse/FLINK-4313
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.1.0
>Reporter: Stephan Ewen
> Fix For: 1.2.0
>
>
> The CheckpointCoordinator seems to have maps to track KeyValue states 
> independently from other state.
> However, currently all state is transferred via a single {{StateHandle}}. The 
> CheckpointCoordinator does not populate the key/value state map ever, nor do 
> the deploy fields actually pick up any contents from that map.
> This is currently quite confusing and probably error prone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-4228) RocksDB semi-async snapshot to S3AFileSystem fails

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev reassigned FLINK-4228:


Assignee: Anton Solovev

> RocksDB semi-async snapshot to S3AFileSystem fails
> --
>
> Key: FLINK-4228
> URL: https://issues.apache.org/jira/browse/FLINK-4228
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>Assignee: Anton Solovev
>
> Using the {{RocksDBStateBackend}} with semi-async snapshots (current default) 
> leads to an Exception when uploading the snapshot to S3 when using the 
> {{S3AFileSystem}}.
> {code}
> AsynchronousException{com.amazonaws.AmazonClientException: Unable to 
> calculate MD5 hash: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)}
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointThread.run(StreamTask.java:870)
> Caused by: com.amazonaws.AmazonClientException: Unable to calculate MD5 hash: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1298)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:108)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:100)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.upload(UploadMonitor.java:192)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:150)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:50)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.<init>(FileInputStream.java:138)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1294)
>   ... 9 more
> {code}
> Running with S3NFileSystem, the error does not occur. The problem might be 
> due to {{HDFSCopyToLocal}} assuming that sub-folders are going to be created 
> automatically. We might need to manually create folders and copy only actual 
> files for {{S3AFileSystem}}. More investigation is required.
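The suggested workaround, creating folders explicitly and copying only regular files so the destination store is never asked to read a directory as if it were a file, could look roughly like this. `copyTree` and the local-to-local model are assumptions for illustration; the real fix would target {{HDFSCopyToLocal}} and an S3A destination:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

// Sketch of the workaround: walk the snapshot tree, create every target
// directory explicitly, and copy only regular files, so the destination
// filesystem never has to treat a directory path as a file.
class SnapshotCopy {
    static void copyTree(Path source, Path target) throws IOException {
        try (Stream<Path> walk = Files.walk(source)) {
            for (Path p : (Iterable<Path>) walk::iterator) {
                Path dest = target.resolve(source.relativize(p).toString());
                if (Files.isDirectory(p)) {
                    Files.createDirectories(dest); // create folders by hand
                } else {
                    Files.createDirectories(dest.getParent());
                    Files.copy(p, dest, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }
}
```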



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4228) RocksDB semi-async snapshot to S3AFileSystem fails

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-4228:
-
Assignee: (was: Anton Solovev)

> RocksDB semi-async snapshot to S3AFileSystem fails
> --
>
> Key: FLINK-4228
> URL: https://issues.apache.org/jira/browse/FLINK-4228
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>
> Using the {{RocksDBStateBackend}} with semi-async snapshots (current default) 
> leads to an Exception when uploading the snapshot to S3 when using the 
> {{S3AFileSystem}}.
> {code}
> AsynchronousException{com.amazonaws.AmazonClientException: Unable to 
> calculate MD5 hash: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)}
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointThread.run(StreamTask.java:870)
> Caused by: com.amazonaws.AmazonClientException: Unable to calculate MD5 hash: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1298)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:108)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:100)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.upload(UploadMonitor.java:192)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:150)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:50)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.<init>(FileInputStream.java:138)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1294)
>   ... 9 more
> {code}
> Running with S3NFileSystem, the error does not occur. The problem might be 
> due to {{HDFSCopyToLocal}} assuming that sub-folders are going to be created 
> automatically. We might need to manually create folders and copy only actual 
> files for {{S3AFileSystem}}. More investigation is required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4228) RocksDB semi-async snapshot to S3AFileSystem fails

2016-12-23 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15773024#comment-15773024
 ] 

Anton Solovev commented on FLINK-4228:
--

So, do you want to continue working on this?

> RocksDB semi-async snapshot to S3AFileSystem fails
> --
>
> Key: FLINK-4228
> URL: https://issues.apache.org/jira/browse/FLINK-4228
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>
> Using the {{RocksDBStateBackend}} with semi-async snapshots (current default) 
> leads to an Exception when uploading the snapshot to S3 when using the 
> {{S3AFileSystem}}.
> {code}
> AsynchronousException{com.amazonaws.AmazonClientException: Unable to 
> calculate MD5 hash: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)}
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointThread.run(StreamTask.java:870)
> Caused by: com.amazonaws.AmazonClientException: Unable to calculate MD5 hash: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1298)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:108)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:100)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.upload(UploadMonitor.java:192)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:150)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:50)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.<init>(FileInputStream.java:138)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1294)
>   ... 9 more
> {code}
> Running with S3NFileSystem, the error does not occur. The problem might be 
> due to {{HDFSCopyToLocal}} assuming that sub-folders are going to be created 
> automatically. We might need to manually create folders and copy only actual 
> files for {{S3AFileSystem}}. More investigation is required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-5282) CheckpointStateOutputStream should be closed in finally block in SavepointV0Serializer#convertKeyedBackendState()

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev reassigned FLINK-5282:


Assignee: Anton Solovev

> CheckpointStateOutputStream should be closed in finally block in 
> SavepointV0Serializer#convertKeyedBackendState()
> -
>
> Key: FLINK-5282
> URL: https://issues.apache.org/jira/browse/FLINK-5282
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Anton Solovev
>Priority: Minor
>
> {code}
>   CheckpointStreamFactory.CheckpointStateOutputStream keyedStateOut =
>   
> checkpointStreamFactory.createCheckpointStateOutputStream(checkpointID, 0L);
>   final long offset = keyedStateOut.getPos();
> {code}
> If getPos() throws exception, keyedStateOut would be left unclosed.
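The fix this issue asks for is the standard try/finally pattern: open the stream, do all offset work inside the try, and close in finally. A minimal self-contained sketch of the idea — `FailingStream` is a hypothetical stand-in for `CheckpointStateOutputStream`, not Flink API:

```java
import java.io.Closeable;
import java.io.IOException;

// Hypothetical stand-in for CheckpointStateOutputStream: getPos() always
// fails, and we record whether close() was invoked.
class FailingStream implements Closeable {
    boolean closed = false;

    long getPos() throws IOException {
        throw new IOException("simulated getPos() failure");
    }

    @Override
    public void close() {
        closed = true;
    }
}

public class FinallyCloseDemo {
    // The position/offset work happens inside try, so the finally block
    // closes the stream even when getPos() throws.
    static FailingStream convert() {
        FailingStream keyedStateOut = new FailingStream();
        try {
            long offset = keyedStateOut.getPos(); // may throw
            // ... write keyed state at 'offset' ...
        } catch (IOException handled) {
            // error handling elided in this sketch
        } finally {
            keyedStateOut.close(); // runs on both the normal and error path
        }
        return keyedStateOut;
    }

    public static void main(String[] args) {
        System.out.println(convert().closed); // prints "true"
    }
}
```

In the real serializer the exception would be rethrown after closing; the sketch only demonstrates that `close()` is always reached.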





[jira] [Closed] (FLINK-5282) CheckpointStateOutputStream should be closed in finally block in SavepointV0Serializer#convertKeyedBackendState()

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev closed FLINK-5282.


> CheckpointStateOutputStream should be closed in finally block in 
> SavepointV0Serializer#convertKeyedBackendState()
> -
>
> Key: FLINK-5282
> URL: https://issues.apache.org/jira/browse/FLINK-5282
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Stefan Richter
>Priority: Minor
>
> {code}
>   CheckpointStreamFactory.CheckpointStateOutputStream keyedStateOut =
>   
> checkpointStreamFactory.createCheckpointStateOutputStream(checkpointID, 0L);
>   final long offset = keyedStateOut.getPos();
> {code}
> If getPos() throws exception, keyedStateOut would be left unclosed.





[jira] [Updated] (FLINK-5282) CheckpointStateOutputStream should be closed in finally block in SavepointV0Serializer#convertKeyedBackendState()

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5282:
-
Assignee: Stefan Richter  (was: Anton Solovev)

> CheckpointStateOutputStream should be closed in finally block in 
> SavepointV0Serializer#convertKeyedBackendState()
> -
>
> Key: FLINK-5282
> URL: https://issues.apache.org/jira/browse/FLINK-5282
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Stefan Richter
>Priority: Minor
>
> {code}
>   CheckpointStreamFactory.CheckpointStateOutputStream keyedStateOut =
>   
> checkpointStreamFactory.createCheckpointStateOutputStream(checkpointID, 0L);
>   final long offset = keyedStateOut.getPos();
> {code}
> If getPos() throws exception, keyedStateOut would be left unclosed.





[jira] [Updated] (FLINK-5384) clean up jira issues

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2319 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2399 -> closed on github;
FLINK-2428 -> closed on github;
FLINK-2472 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?

resolved^
FLINK-5282

  was:
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2319 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2399 -> closed on github;
FLINK-2428 -> closed on github;
FLINK-2472 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a colectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?


> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Minor
>
> must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> from stratosphere;
> FLINK-605 -> from stratosphere;
> FLINK-639 -> from stratosphere;
> FLINK-650 -> from stratosphere;
> FLINK-735 -> from stratosphere;
> FLINK-456 -> from stratosphere;
> FLINK-788 -> from stratosphere;
> FLINK-796 -> from stratosphere;
> FLINK-805 -> from stratosphere ;
> FLINK-867 -> from stratosphere;
> FLINK-879 -> from stratosphere;
> FLINK-1166 -> closed on github;
> FLINK-1946 -> closed on github;
> FLINK-2119 -> closed on github;
> FLINK-2157 -> closed on github;
> FLINK-2220 -> closed on github;
> FLINK-2319 -> closed on github;
> FLINK-2363 -> closed on github;
> FLINK-2399 -> closed on github;
> FLINK-2428 -> closed 

[jira] [Updated] (FLINK-5384) clean up jira issues

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2319 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2399 -> closed on github;
FLINK-2428 -> closed on github;
FLINK-2472 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4313 -> outdated;
FLINK-4760 -> fixed?

  was:
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2319 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2399 -> closed on github;
FLINK-2428 -> closed on github;
FLINK-2472 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a colectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4760 -> fixed?


> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Minor
>
> must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> from stratosphere;
> FLINK-605 -> from stratosphere;
> FLINK-639 -> from stratosphere;
> FLINK-650 -> from stratosphere;
> FLINK-735 -> from stratosphere;
> FLINK-456 -> from stratosphere;
> FLINK-788 -> from stratosphere;
> FLINK-796 -> from stratosphere;
> FLINK-805 -> from stratosphere ;
> FLINK-867 -> from stratosphere;
> FLINK-879 -> from stratosphere;
> FLINK-1166 -> closed on github;
> FLINK-1946 -> closed on github;
> FLINK-2119 -> closed on github;
> FLINK-2157 -> closed on github;
> FLINK-2220 -> closed on github;
> FLINK-2319 -> closed on github;
> FLINK-2363 -> closed on github;
> FLINK-2399 -> closed on github;
> FLINK-2428 -> closed on github;
> FLINK-2472 -> closed on github;
>

[jira] [Commented] (FLINK-4313) Inconsistent code for Key/Value in the CheckpointCoordinator

2016-12-23 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15772799#comment-15772799
 ] 

Anton Solovev commented on FLINK-4313:
--

[~StephanEwen] I see, I will add the issue to the clean-up list in FLINK-5384 

> Inconsistent code for Key/Value in the CheckpointCoordinator
> 
>
> Key: FLINK-4313
> URL: https://issues.apache.org/jira/browse/FLINK-4313
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.1.0
>Reporter: Stephan Ewen
> Fix For: 1.2.0
>
>
> The CheckpointCoordinator seems to have maps to track KeyValue states 
> independently from other state.
> However, currently all state is transferred via a single {{StateHandle}}. The 
> CheckpointCoordinator does not populate the key/value state map ever, nor do 
> the deploy fields actually pick up any contents from that map.
> This is currently quite confusing and probably error prone.





[jira] [Updated] (FLINK-4313) Inconsistent code for Key/Value in the CheckpointCoordinator

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-4313:
-
Assignee: (was: Anton Solovev)

> Inconsistent code for Key/Value in the CheckpointCoordinator
> 
>
> Key: FLINK-4313
> URL: https://issues.apache.org/jira/browse/FLINK-4313
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.1.0
>Reporter: Stephan Ewen
> Fix For: 1.2.0
>
>
> The CheckpointCoordinator seems to have maps to track KeyValue states 
> independently from other state.
> However, currently all state is transferred via a single {{StateHandle}}. The 
> CheckpointCoordinator does not populate the key/value state map ever, nor do 
> the deploy fields actually pick up any contents from that map.
> This is currently quite confusing and probably error prone.





[jira] [Commented] (FLINK-4042) TwitterStream example does not work

2016-12-23 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15772779#comment-15772779
 ] 

Anton Solovev commented on FLINK-4042:
--

Hi [~till.rohrmann] What exactly doesn't work? On my local machine all is going 
well

> TwitterStream example does not work
> ---
>
> Key: FLINK-4042
> URL: https://issues.apache.org/jira/browse/FLINK-4042
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.0.3
>Reporter: Till Rohrmann
>Assignee: Anton Solovev
>Priority: Minor
>
> The {{TwitterStream}} does not work with version 1.0.3 because the 
> {{flink-connector-twitter}} is missing the following dependencies: 
> {{org.apache.httpcomponents:httpclient}} and 
> {{org.apache.httpcomponents:httpcore}} 
> Adding these lines to the {{pom.xml}} should fix the problem.
> {code}
> org.apache.httpcomponents:httpclient   
> org.apache.httpcomponents:httpcore
> {code}
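The mail archive stripped the XML tags from the {code} block above. Written out as a conventional Maven dependency section, the proposed addition would look roughly like the following sketch (versions omitted on the assumption that they are managed by a parent pom):

```xml
<dependencies>
  <dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpcore</artifactId>
  </dependency>
</dependencies>
```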





[jira] [Comment Edited] (FLINK-4042) TwitterStream example does not work

2016-12-23 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15772779#comment-15772779
 ] 

Anton Solovev edited comment on FLINK-4042 at 12/23/16 12:25 PM:
-

Hi [~till.rohrmann] What exactly doesn't work? On my local machine everything 
works fine


was (Author: tonycox):
Hi [~till.rohrmann] What exactly doesn't work? On my local machine all is going 
well

> TwitterStream example does not work
> ---
>
> Key: FLINK-4042
> URL: https://issues.apache.org/jira/browse/FLINK-4042
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.0.3
>Reporter: Till Rohrmann
>Assignee: Anton Solovev
>Priority: Minor
>
> The {{TwitterStream}} does not work with version 1.0.3 because the 
> {{flink-connector-twitter}} is missing the following dependencies: 
> {{org.apache.httpcomponents:httpclient}} and 
> {{org.apache.httpcomponents:httpcore}} 
> Adding these lines to the {{pom.xml}} should fix the problem.
> {code}
> org.apache.httpcomponents:httpclient   
> org.apache.httpcomponents:httpcore
> {code}





[jira] [Assigned] (FLINK-4313) Inconsistent code for Key/Value in the CheckpointCoordinator

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev reassigned FLINK-4313:


Assignee: Anton Solovev

> Inconsistent code for Key/Value in the CheckpointCoordinator
> 
>
> Key: FLINK-4313
> URL: https://issues.apache.org/jira/browse/FLINK-4313
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.1.0
>Reporter: Stephan Ewen
>Assignee: Anton Solovev
> Fix For: 1.2.0
>
>
> The CheckpointCoordinator seems to have maps to track KeyValue states 
> independently from other state.
> However, currently all state is transferred via a single {{StateHandle}}. The 
> CheckpointCoordinator does not populate the key/value state map ever, nor do 
> the deploy fields actually pick up any contents from that map.
> This is currently quite confusing and probably error prone.





[jira] [Assigned] (FLINK-4042) TwitterStream example does not work

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev reassigned FLINK-4042:


Assignee: Anton Solovev

> TwitterStream example does not work
> ---
>
> Key: FLINK-4042
> URL: https://issues.apache.org/jira/browse/FLINK-4042
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.0.3
>Reporter: Till Rohrmann
>Assignee: Anton Solovev
>Priority: Minor
>
> The {{TwitterStream}} does not work with version 1.0.3 because the 
> {{flink-connector-twitter}} is missing the following dependencies: 
> {{org.apache.httpcomponents:httpclient}} and 
> {{org.apache.httpcomponents:httpcore}} 
> Adding these lines to the {{pom.xml}} should fix the problem.
> {code}
> org.apache.httpcomponents:httpclient   
> org.apache.httpcomponents:httpcore
> {code}





[jira] [Commented] (FLINK-4228) RocksDB semi-async snapshot to S3AFileSystem fails

2016-12-23 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15772388#comment-15772388
 ] 

Anton Solovev commented on FLINK-4228:
--

What happened with this issue? Is it still relevant?

> RocksDB semi-async snapshot to S3AFileSystem fails
> --
>
> Key: FLINK-4228
> URL: https://issues.apache.org/jira/browse/FLINK-4228
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>
> Using the {{RocksDBStateBackend}} with semi-async snapshots (current default) 
> leads to an Exception when uploading the snapshot to S3 when using the 
> {{S3AFileSystem}}.
> {code}
> AsynchronousException{com.amazonaws.AmazonClientException: Unable to 
> calculate MD5 hash: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)}
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointThread.run(StreamTask.java:870)
> Caused by: com.amazonaws.AmazonClientException: Unable to calculate MD5 hash: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1298)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:108)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:100)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.upload(UploadMonitor.java:192)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:150)
>   at 
> com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:50)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: 
> /var/folders/_c/5tc5q5q55qjcjtqwlwvwd1m0gn/T/flink-io-5640e9f1-3ea4-4a0f-b4d9-3ce9fbd98d8a/7c6e745df2dddc6eb70def1240779e44/StreamFlatMap_3_0/dummy_state/47daaf2a-150c-4208-aa4b-409927e9e5b7/local-chk-2886
>  (Is a directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.<init>(FileInputStream.java:138)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1294)
>   ... 9 more
> {code}
> Running with S3NFileSystem, the error does not occur. The problem might be 
> due to {{HDFSCopyToLocal}} assuming that sub-folders are going to be created 
> automatically. We might need to manually create folders and copy only actual 
> files for {{S3AFileSystem}}. More investigation is required.
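The suggested workaround — create sub-folders explicitly and hand only regular files to the upload — can be sketched with plain java.nio. This is a generic illustration of the idea, not Flink's actual {{HDFSCopyToLocal}} code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

public class CopyTreeDemo {
    // Walk the checkpoint directory: create each sub-folder at the target
    // explicitly, and pass only regular files to the copy (upload) step,
    // so a directory path is never handed to code expecting a file.
    static void copyTree(Path src, Path dst) throws IOException {
        try (Stream<Path> paths = Files.walk(src)) {
            for (Path p : (Iterable<Path>) paths::iterator) {
                Path target = dst.resolve(src.relativize(p).toString());
                if (Files.isDirectory(p)) {
                    Files.createDirectories(target);
                } else {
                    Files.copy(p, target, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempDirectory("local-chk");
        Files.createDirectories(src.resolve("sub"));
        Files.write(src.resolve("sub").resolve("part-0"), new byte[]{1, 2, 3});
        Path dst = Files.createTempDirectory("remote-chk");
        copyTree(src, dst);
        System.out.println(Files.isRegularFile(dst.resolve("sub").resolve("part-0"))); // prints "true"
    }
}
```

An S3 uploader would replace the `Files.copy` branch with a per-file put, building the object key from the relativized path.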





[jira] [Updated] (FLINK-4719) KryoSerializer random exception

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-4719:
-
Assignee: Nico Kruber

> KryoSerializer random exception
> ---
>
> Key: FLINK-4719
> URL: https://issues.apache.org/jira/browse/FLINK-4719
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.1.1
>Reporter: Flavio Pompermaier
>Assignee: Nico Kruber
>  Labels: kryo, serialization
>
> There's a random exception that involves somehow the KryoSerializer when 
> using POJOs in Flink jobs reading large volumes of data.
> It is usually thrown in several places, e.g. (the Exceptions reported here 
> can refer to previous versions of Flink...):
> {code}
> java.lang.Exception: The data preparation for task 'CHAIN GroupReduce 
> (GroupReduce at createResult(IndexMappingExecutor.java:42)) -> Map (Map at 
> main(Jsonizer.java:128))' , caused an error: Error obtaining the sorted 
> input: Thread 'SortMerger spilling thread' terminated due to an exception: 
> Unable to find class: java.ttil.HashSet
> at 
> org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:456)
> at 
> org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:345)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Error obtaining the sorted input: 
> Thread 'SortMerger spilling thread' terminated due to an exception: Unable to 
> find class: java.ttil.HashSet
> at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger.getIterator(UnilateralSortMerger.java:619)
> at 
> org.apache.flink.runtime.operators.BatchTask.getInput(BatchTask.java:1079)
> at 
> org.apache.flink.runtime.operators.GroupReduceDriver.prepare(GroupReduceDriver.java:94)
> at 
> org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:450)
> ... 3 more
> Caused by: java.io.IOException: Thread 'SortMerger spilling thread' 
> terminated due to an exception: Unable to find class: java.ttil.HashSet
> at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:800)
> Caused by: com.esotericsoftware.kryo.KryoException: Unable to find class: 
> java.ttil.HashSet
> at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
> at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
> at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:641)
> at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:752)
> at 
> com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:143)
> at 
> com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:21)
> at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
> at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:228)
> at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:242)
> at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.copy(KryoSerializer.java:252)
> at 
> org.apache.flink.api.java.typeutils.runtime.PojoSerializer.copy(PojoSerializer.java:556)
> at 
> org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.copy(TupleSerializerBase.java:75)
> at 
> org.apache.flink.runtime.operators.sort.NormalizedKeySorter.writeToOutput(NormalizedKeySorter.java:499)
> at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger$SpillingThread.go(UnilateralSortMerger.java:1344)
> at 
> org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:796)
> Caused by: java.lang.ClassNotFoundException: java.ttil.HashSet
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
> {code}
> {code}
> Caused by: java.io.IOException: Serializer consumed more bytes than the 
> record had. This indicates broken serialization. If you are using custom 
> serialization types (Value or Writable), check their serialization methods. 
> If you are using a Kryo-serialized type, check the corresponding Kryo 
> serializer.
> at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpannin

[jira] [Updated] (FLINK-5384) clean up jira issues

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2319 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2399 -> closed on github;
FLINK-2428 -> closed on github;
FLINK-2472 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;
FLINK-4760 -> fixed?

  was:
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2319 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2399 -> closed on github;
FLINK-2428 -> closed on github;
FLINK-2472 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create other issue to add a colectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth to merge with FLINK-2316 ? ;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;


> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Minor
>
> must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> from stratosphere;
> FLINK-605 -> from stratosphere;
> FLINK-639 -> from stratosphere;
> FLINK-650 -> from stratosphere;
> FLINK-735 -> from stratosphere;
> FLINK-456 -> from stratosphere;
> FLINK-788 -> from stratosphere;
> FLINK-796 -> from stratosphere;
> FLINK-805 -> from stratosphere ;
> FLINK-867 -> from stratosphere;
> FLINK-879 -> from stratosphere;
> FLINK-1166 -> closed on github;
> FLINK-1946 -> closed on github;
> FLINK-2119 -> closed on github;
> FLINK-2157 -> closed on github;
> FLINK-2220 -> closed on github;
> FLINK-2319 -> closed on github;
> FLINK-2363 -> closed on github;
> FLINK-2399 -> closed on github;
> FLINK-2428 -> closed on github;
> FLINK-2472 -> closed on github;
> FLINK-2480 -> closed on github;
> FLINK-2609

[jira] [Comment Edited] (FLINK-5282) CheckpointStateOutputStream should be closed in finally block in SavepointV0Serializer#convertKeyedBackendState()

2016-12-23 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15772332#comment-15772332
 ] 

Anton Solovev edited comment on FLINK-5282 at 12/23/16 8:45 AM:


[~tedyu] what do you mean exactly?
{{keyedStateOut}} is already closed in a finally block
http://pasteboard.co/dciBXLSYj.png


was (Author: tonycox):
[~tedyu] what do ypu mean exactly?
{{keyedStateOut}} is already closing in finall block 
http://pasteboard.co/dciBXLSYj.png

> CheckpointStateOutputStream should be closed in finally block in 
> SavepointV0Serializer#convertKeyedBackendState()
> -
>
> Key: FLINK-5282
> URL: https://issues.apache.org/jira/browse/FLINK-5282
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
>   CheckpointStreamFactory.CheckpointStateOutputStream keyedStateOut =
>   
> checkpointStreamFactory.createCheckpointStateOutputStream(checkpointID, 0L);
>   final long offset = keyedStateOut.getPos();
> {code}
> If getPos() throws exception, keyedStateOut would be left unclosed.
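
The quoted snippet explains the leak: if {{getPos()}} throws between stream creation and the write, the stream is never closed. A minimal, self-contained sketch of the try/finally fix (hypothetical stand-in classes, not the real Flink types):

```java
public class FinallyCloseSketch {

    // Stand-in for CheckpointStreamFactory.CheckpointStateOutputStream
    // (hypothetical; not the real Flink class).
    static class StateOutputStream implements AutoCloseable {
        boolean closed = false;
        final boolean failOnGetPos;

        StateOutputStream(boolean failOnGetPos) {
            this.failOnGetPos = failOnGetPos;
        }

        long getPos() {
            if (failOnGetPos) {
                throw new RuntimeException("getPos() failed");
            }
            return 0L;
        }

        @Override
        public void close() { closed = true; }
    }

    // Mirrors the corrected convertKeyedBackendState() shape: everything after
    // stream creation runs inside try, so finally closes the stream even when
    // getPos() throws.
    static StateOutputStream convert(boolean failOnGetPos) {
        StateOutputStream keyedStateOut = new StateOutputStream(failOnGetPos);
        try {
            long offset = keyedStateOut.getPos();
            // ... write the keyed backend state starting at 'offset' ...
        } catch (RuntimeException e) {
            // swallowed for the demo; real code would rethrow after cleanup
        } finally {
            keyedStateOut.close();
        }
        return keyedStateOut;
    }

    public static void main(String[] args) {
        System.out.println(convert(false).closed); // true
        System.out.println(convert(true).closed);  // true: closed despite the throw
    }
}
```

With the real {{CheckpointStateOutputStream}} the same shape applies: create the stream first, then move {{getPos()}} and all subsequent writes inside the {{try}} so the {{finally}} block closes the stream on every path.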



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5282) CheckpointStateOutputStream should be closed in finally block in SavepointV0Serializer#convertKeyedBackendState()

2016-12-23 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15772332#comment-15772332
 ] 

Anton Solovev commented on FLINK-5282:
--

[~tedyu] what do you mean exactly?
{{keyedStateOut}} is already closed in a finally block
http://pasteboard.co/dciBXLSYj.png

> CheckpointStateOutputStream should be closed in finally block in 
> SavepointV0Serializer#convertKeyedBackendState()
> -
>
> Key: FLINK-5282
> URL: https://issues.apache.org/jira/browse/FLINK-5282
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
>   CheckpointStreamFactory.CheckpointStateOutputStream keyedStateOut =
>   
> checkpointStreamFactory.createCheckpointStateOutputStream(checkpointID, 0L);
>   final long offset = keyedStateOut.getPos();
> {code}
> If getPos() throws exception, keyedStateOut would be left unclosed.





[jira] [Updated] (FLINK-5360) Fix arguments names in WindowedStream

2016-12-23 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5360:
-
Assignee: Ivan Mushketyk

> Fix arguments names in WindowedStream
> -
>
> Key: FLINK-5360
> URL: https://issues.apache.org/jira/browse/FLINK-5360
> Project: Flink
>  Issue Type: Bug
>Reporter: Ivan Mushketyk
>Assignee: Ivan Mushketyk
>
> Should be "field" instead of "positionToMaxBy" in some methods.





[jira] [Commented] (FLINK-3849) Add FilterableTableSource interface and translation rule

2016-12-22 Thread Anton Solovev (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15772202#comment-15772202
 ] 

Anton Solovev commented on FLINK-3849:
--

I think we need to parse an {{Expression}} from {{rexProgram.getCondition}}, set that {{Expression}} on the {{FilterableTableSource}}, and then translate the unsupported {{Expression}} back into a {{RexNode}}, right?

> Add FilterableTableSource interface and translation rule
> 
>
> Key: FLINK-3849
> URL: https://issues.apache.org/jira/browse/FLINK-3849
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Anton Solovev
>
> Add a {{FilterableTableSource}} interface for {{TableSource}} implementations 
> which support filter push-down.
> The interface could look as follows
> {code}
> def trait FilterableTableSource {
>   // returns unsupported predicate expression
>   def setPredicate(predicate: Expression): Expression
> }
> {code}
> In addition we need Calcite rules to push a predicate (or parts of it) into a 
> TableScan that refers to a {{FilterableTableSource}}. We might need to tweak 
> the cost model as well to push the optimizer in the right direction.
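
As a toy illustration of the contract in the trait above (hypothetical stand-in types, not the real Calcite/Flink classes): {{setPredicate}} consumes the conjuncts the source can evaluate and hands back the unsupported remainder, which the translation rule would keep in the plan as a residual filter.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PushdownSketch {

    // Stand-in for a conjunctive predicate: a list of atomic comparisons.
    record Condition(String field, String op, int value) {
        @Override
        public String toString() { return field + " " + op + " " + value; }
    }

    // Same contract as the trait above: returns the unsupported remainder.
    interface FilterableTableSource {
        List<Condition> setPredicate(List<Condition> predicate);
    }

    // A source that can only evaluate equality filters on field "a".
    static class ExampleSource implements FilterableTableSource {
        final List<Condition> pushed = new ArrayList<>();

        @Override
        public List<Condition> setPredicate(List<Condition> predicate) {
            Map<Boolean, List<Condition>> split = predicate.stream()
                .collect(Collectors.partitioningBy(
                    c -> "a".equals(c.field()) && "=".equals(c.op())));
            pushed.addAll(split.get(true));   // evaluated inside the scan
            return split.get(false);          // stays in the plan as a Filter
        }
    }

    public static void main(String[] args) {
        ExampleSource source = new ExampleSource();
        List<Condition> remainder = source.setPredicate(List.of(
            new Condition("a", "=", 1),
            new Condition("b", ">", 5)));
        System.out.println("pushed into scan: " + source.pushed); // [a = 1]
        System.out.println("residual filter:  " + remainder);     // [b > 5]
    }
}
```

In the real rule the conjuncts would come from decomposing {{rexProgram.getCondition}}, and the returned remainder would be translated back into a {{RexNode}} condition for a {{Filter}} above the {{TableScan}}; the sketch only shows the supported/unsupported partition.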





[jira] [Updated] (FLINK-5384) clean up jira issues

2016-12-22 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2319 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2399 -> closed on github;
FLINK-2428 -> closed on github;
FLINK-2472 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create a separate issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth merging with https://issues.apache.org/jira/browse/FLINK-2316?;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;

  was:
must be closed:
FLINK-37 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-87 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-481 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-605 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-639 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-650 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-735 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-456 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-788 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-796 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-805 -> from stratosphere ;
https://issues.apache.org/jira/browse/FLINK-867 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-879 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-1166 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-1946 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2119 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2157 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2220 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2319 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2363 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2399 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2428 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2472 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2480 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2609 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2823 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3155 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3331 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3964 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4653 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4717 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4829 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-5016 -> closed on github;

should be discussed before:
https://issues.apache.org/jira/browse/FLINK-1055 ;
https://issues.apache.org/jira/browse/FLINK-1098 -> create a separate issue to add a collectEach();
https://issues.apache.org/jira/browse/FLINK-1100 ;
https://issues.apache.org/jira/browse/FLINK-1146 ;
https://issues.apache.org/jira/browse/FLINK-1335 -> maybe rename?;
https://issues.apache.org/jira/browse/FLINK-1439 ;
https://issues.apache.org/jira/browse/FLINK-1447 -> firefox problem?;
https://issues.apache.org/jira/browse/FLINK-1521 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-1538 -> gsoc2015, is it solved?;
https://issues.apache.org/jira/browse/FLINK-15

[jira] [Updated] (FLINK-5384) clean up jira issues

2016-12-22 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2319 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2399 -> closed on github;
FLINK-2428 -> closed on github;
FLINK-2472 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create a separate issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth merging with FLINK-2316?;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;

  was:
must be closed:
FLINK-37 -> from stratosphere;
FLINK-87 -> from stratosphere;
FLINK-481 -> from stratosphere;
FLINK-605 -> from stratosphere;
FLINK-639 -> from stratosphere;
FLINK-650 -> from stratosphere;
FLINK-735 -> from stratosphere;
FLINK-456 -> from stratosphere;
FLINK-788 -> from stratosphere;
FLINK-796 -> from stratosphere;
FLINK-805 -> from stratosphere ;
FLINK-867 -> from stratosphere;
FLINK-879 -> from stratosphere;
FLINK-1166 -> closed on github;
FLINK-1946 -> closed on github;
FLINK-2119 -> closed on github;
FLINK-2157 -> closed on github;
FLINK-2220 -> closed on github;
FLINK-2319 -> closed on github;
FLINK-2363 -> closed on github;
FLINK-2399 -> closed on github;
FLINK-2428 -> closed on github;
FLINK-2472 -> closed on github;
FLINK-2480 -> closed on github;
FLINK-2609 -> closed on github;
FLINK-2823 -> closed on github;
FLINK-3155 -> closed on github;
FLINK-3331 -> closed on github;
FLINK-3964 -> closed on github;
FLINK-4653 -> closed on github;
FLINK-4717 -> closed on github;
FLINK-4829 -> closed on github;
FLINK-5016 -> closed on github;

should be discussed before:
FLINK-1055 ;
FLINK-1098 -> create a separate issue to add a collectEach();
FLINK-1100 ;
FLINK-1146 ;
FLINK-1335 -> maybe rename?;
FLINK-1439 ;
FLINK-1447 -> firefox problem?;
FLINK-1521 -> closed on github;
FLINK-1538 -> gsoc2015, is it solved?;
FLINK-1541 -> gsoc2015, is it solved?;
FLINK-1723 -> almost done? ;
FLINK-1814 ;
FLINK-1858 -> is QA bot deleted?;
FLINK-1926 -> all subtasks done;

FLINK-2023 -> does not block Scala Graph API;
FLINK-2032 -> all subtasks done;
FLINK-2108 -> almost done? ;
FLINK-2309 -> maybe it's worth merging with https://issues.apache.org/jira/browse/FLINK-2316?;
FLINK-3109 -> its PR is stuck;
FLINK-3154 -> must be addressed as part of a bigger initiative;
FLINK-3297 -> solved as third party lib;


> clean up jira issues
> 
>
> Key: FLINK-5384
> URL: https://issues.apache.org/jira/browse/FLINK-5384
> Project: Flink
>  Issue Type: Improvement
>Reporter: Anton Solovev
>Assignee: Anton Solovev
>Priority: Minor
>
> must be closed:
> FLINK-37 -> from stratosphere;
> FLINK-87 -> from stratosphere;
> FLINK-481 -> from stratosphere;
> FLINK-605 -> from stratosphere;
> FLINK-639 -> from stratosphere;
> FLINK-650 -> from stratosphere;
> FLINK-735 -> from stratosphere;
> FLINK-456 -> from stratosphere;
> FLINK-788 -> from stratosphere;
> FLINK-796 -> from stratosphere;
> FLINK-805 -> from stratosphere ;
> FLINK-867 -> from stratosphere;
> FLINK-879 -> from stratosphere;
> FLINK-1166 -> closed on github;
> FLINK-1946 -> closed on github;
> FLINK-2119 -> closed on github;
> FLINK-2157 -> closed on github;
> FLINK-2220 -> closed on github;
> FLINK-2319 -> closed on github;
> FLINK-2363 -> closed on github;
> FLINK-2399 -> closed on github;
> FLINK-2428 -> closed on github;
> FLINK-2472 -> closed on github;
> FLINK-2480 -> closed on gi

[jira] [Updated] (FLINK-5384) clean up jira issues

2016-12-22 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
must be closed:
FLINK-37 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-87 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-481 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-605 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-639 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-650 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-735 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-456 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-788 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-796 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-805 -> from stratosphere ;
https://issues.apache.org/jira/browse/FLINK-867 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-879 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-1166 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-1946 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2119 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2157 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2220 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2319 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2363 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2399 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2428 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2472 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2480 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2609 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2823 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3155 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3331 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3964 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4653 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4717 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4829 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-5016 -> closed on github;

should be discussed before:
https://issues.apache.org/jira/browse/FLINK-1055 ;
https://issues.apache.org/jira/browse/FLINK-1098 -> create a separate issue to add a collectEach();
https://issues.apache.org/jira/browse/FLINK-1100 ;
https://issues.apache.org/jira/browse/FLINK-1146 ;
https://issues.apache.org/jira/browse/FLINK-1335 -> maybe rename?;
https://issues.apache.org/jira/browse/FLINK-1439 ;
https://issues.apache.org/jira/browse/FLINK-1447 -> firefox problem?;
https://issues.apache.org/jira/browse/FLINK-1521 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-1538 -> gsoc2015, is it solved?;
https://issues.apache.org/jira/browse/FLINK-1541 -> gsoc2015, is it solved?;
https://issues.apache.org/jira/browse/FLINK-1723 -> almost done? ;
https://issues.apache.org/jira/browse/FLINK-1814 ;
https://issues.apache.org/jira/browse/FLINK-1858 -> is QA bot deleted?;
https://issues.apache.org/jira/browse/FLINK-1926 -> all subtasks done;

https://issues.apache.org/jira/browse/FLINK-2023 -> does not block Scala Graph 
API;
https://issues.apache.org/jira/browse/FLINK-2032 -> all subtasks done;
https://issues.apache.org/jira/browse/FLINK-2108 -> almost done? ;
https://issues.apache.org/jira/browse/FLINK-2309 -> maybe it's worth merging with https://issues.apache.org/jira/browse/FLINK-2316?;
https://issues.apache.org/jira/browse/FLINK-3109 -> its PR is stuck;
https://issues.apache.org/jira/browse/FLINK-3154 -> must be addressed as part 
of a bigger initiative;
https://issues.apache.org/jira/browse/FLINK-3297 -> solved as third party lib;

  was:
must be closed:
https://issues.apache.org/jira/browse/FLINK-37 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-87 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-481 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-605 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-639 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-650 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-735 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-456 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-788 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-796 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-805 -> from stratosphere ;
https://issues.apache.org/jira/browse/FLINK-867 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-879 -> from stratosphere;
htt
