[ https://issues.apache.org/jira/browse/MAHOUT-1529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13988501#comment-13988501 ]
Anand Avati commented on MAHOUT-1529:
-------------------------------------
[~dlyubimov], the purpose of the spark-shell, AFAICT, is to present an
interactive interface for working with the DSL. To that end, there is little
reason to use the Spark REPL: we could use a vanilla Scala REPL with the DSL
and its operators pre-loaded (imported). That should work just as well with
Spark as the Spark REPL does. Doesn't that feel much cleaner?
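
For illustration, here is a minimal sketch of that idea against the Scala 2.10
REPL API (ILoop). The Mahout import paths inside the interpret() calls are
assumptions for the example, not the committed package layout:

{code}
import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.ILoop

// A plain Scala REPL with the Mahout DSL pre-imported -- no Spark REPL involved.
object MahoutShell {
  def main(args: Array[String]) {
    val settings = new Settings
    settings.usejavacp.value = true // resolve DSL classes from the JVM classpath

    val loop = new ILoop {
      override def createInterpreter() {
        super.createInterpreter()
        // Pre-load the DSL and its operators; import paths are illustrative.
        intp.interpret("import org.apache.mahout.math.drm._")
        intp.interpret("import org.apache.mahout.math.scalabindings._")
        intp.interpret("import RLikeDrmOps._")
      }
    }
    loop.process(settings)
  }
}
{code}

The Spark context, if needed, could be created and bound into the interpreter
the same way, which keeps the shell itself backend-agnostic.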
> Finalize abstraction of distributed logical plans from backend operations
> -------------------------------------------------------------------------
>
> Key: MAHOUT-1529
> URL: https://issues.apache.org/jira/browse/MAHOUT-1529
> Project: Mahout
> Issue Type: Improvement
> Reporter: Dmitriy Lyubimov
> Fix For: 1.0
>
>
> We have a few situations where the algorithm-facing API has Spark dependencies
> creeping in.
> In particular, we know of the following cases:
> (1) checkpoint() accepts the Spark constant StorageLevel directly;
> (2) certain members of CheckpointedDRM;
> (3) drmParallelize etc. routines in the "drm" and "sparkbindings" packages;
> (4) drmBroadcast returns a Spark-specific Broadcast object (one possible
> backend-neutral shape is sketched below).
> *Current tracker:*
> https://github.com/dlyubimov/mahout-commits/tree/MAHOUT-1529.
> *Pull requests are welcome*.
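
For concreteness, a minimal sketch of one way items (1) and (4) above could be
decoupled from Spark. The names CacheHint, BCast, and SparkBCast are
illustrative assumptions, not the committed API:

{code}
// Backend-neutral cache hint in place of Spark's StorageLevel (item 1).
object CacheHint extends Enumeration {
  val NONE, MEMORY_ONLY, MEMORY_AND_DISK, DISK_ONLY = Value
}

// Backend-neutral broadcast handle in place of Spark's Broadcast[T] (item 4).
trait BCast[T] extends java.io.Serializable {
  def value: T
}

// The Spark-specific implementation stays inside the sparkbindings package.
import org.apache.spark.broadcast.Broadcast

class SparkBCast[T](val bcast: Broadcast[T]) extends BCast[T] {
  def value: T = bcast.value
}
{code}

Under this shape, checkpoint() would accept a CacheHint.Value that the Spark
backend translates to a StorageLevel internally, and drmBroadcast would return
a BCast[T] that only the backend knows how to construct.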