[ https://issues.apache.org/jira/browse/MAHOUT-1529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13988493#comment-13988493 ]

Dmitriy Lyubimov commented on MAHOUT-1529:
------------------------------------------

bq. I think we also need
bq. (6) Rename mahout spark-shell (both command and source dir/files/variables) 
to "mahout shell" (or mahout console?) which only uses the logical layer and 
backend layer is selected at runtime/startup.

No, we don't. The shell is in essence Spark's REPL; in that sense it is exactly 
and literally spark-shell. It includes bytecode mechanisms to compile closures 
on-the-fly and pass them to the backend.

How other engines would want to do that, I have no clue. The chances for a generic 
(and cheap) Mahout shell are very slim IMO.

> Finalize abstraction of distributed logical plans from backend operations
> -------------------------------------------------------------------------
>
>                 Key: MAHOUT-1529
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1529
>             Project: Mahout
>          Issue Type: Improvement
>            Reporter: Dmitriy Lyubimov
>             Fix For: 1.0
>
>
> We have a few situations when algorithm-facing API has Spark dependencies 
> creeping in. 
> In particular, we know of the following cases:
> (1) checkpoint() accepts Spark constant StorageLevel directly;
> (2) certain things in CheckpointedDRM;
> (3) drmParallelize etc. routines in the "drm" and "sparkbindings" packages. 
> (5) drmBroadcast returns a Spark-specific Broadcast object
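
To make (1) and (5) above concrete, here is a rough sketch of what an engine-neutral facade could look like. The names (CacheHint, BCast, and the CheckpointedDrm signature) are only illustrative assumptions for this comment, not necessarily the final API:

{code:scala}
// Hypothetical sketch only -- names are illustrative assumptions, not the
// final Mahout API.

// (1) Engine-neutral caching hint; each backend's bindings translate this
// into its own notion (e.g. Spark's StorageLevel).
object CacheHint extends Enumeration {
  type CacheHint = Value
  val NONE, MEMORY_ONLY, MEMORY_AND_DISK, DISK_ONLY = Value
}

// (5) Engine-neutral broadcast handle; the Spark bindings would wrap
// org.apache.spark.broadcast.Broadcast behind this trait so that
// drmBroadcast does not leak a Spark type into the logical layer.
trait BCast[T] extends java.io.Serializable {
  def value: T
}

// Algorithm-facing checkpoint() takes the neutral hint instead of a
// Spark StorageLevel constant.
trait CheckpointedDrm[K] {
  def checkpoint(hint: CacheHint.CacheHint = CacheHint.MEMORY_ONLY): CheckpointedDrm[K]
}
{code}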


