[ https://issues.apache.org/jira/browse/SPARK-2345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14050670#comment-14050670 ]

Hari Shreedharan commented on SPARK-2345:
-----------------------------------------

Looks like we'd have to do this in a new DStream, since ForEachDStream 
takes a (RDD[T], Time) => Unit, but to call runJob we'd have to pass in an 
(Iterator[T], Time) => Unit. I am not sure how much value this adds, but it 
does seem that if we are not using one of the built-in save/collect methods, 
you'd have to explicitly run this function on the cluster via 
context.runJob(...).
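
For illustration, a minimal sketch of the mechanics in question: the user's 
function takes an (Iterator[T], Time) => Unit and is shipped to the executors 
with runJob. The helper below is hypothetical (not part of Spark) and uses 
the existing foreachRDD overload rather than a new DStream:

{code:scala}
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.Time
import org.apache.spark.streaming.dstream.DStream

object ForEachOnSpark {
  // Hypothetical helper: unlike ForEachDStream's foreachFunc, which runs in
  // the driver's job thread, `func` here runs once per partition on the
  // executors.
  def foreachPartitionWithTime[T](
      stream: DStream[T],
      func: (Iterator[T], Time) => Unit): Unit = {
    stream.foreachRDD { (rdd: RDD[T], time: Time) =>
      // runJob ships the closure (which must be serializable) to the
      // cluster and invokes it on every partition of this batch's RDD.
      rdd.context.runJob(rdd, (iter: Iterator[T]) => func(iter, time))
    }
  }
}
{code}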

Do you think this makes sense, [~tdas], [~pwendell]?

> ForEachDStream should have an option of running the foreachfunc on Spark
> ------------------------------------------------------------------------
>
>                 Key: SPARK-2345
>                 URL: https://issues.apache.org/jira/browse/SPARK-2345
>             Project: Spark
>          Issue Type: Bug
>            Reporter: Hari Shreedharan
>
> Today the generated Job simply calls the foreachFunc, but does not run it 
> on Spark itself using the sparkContext.runJob method.


