I already tried the code and tests in separate modules. That works, but it is
not a good way to go, in my opinion. If there were tests that would work in
math-scala, then we could put the code in math-scala, but I couldn’t find a
way to do that.


On Jul 8, 2014, at 4:40 PM, Anand Avati <av...@gluster.org> wrote:

I'm not completely sure how to address this (code and tests in separate
modules) as I write this, but I will give it a shot soon.
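
One shape that might work (just a sketch of the idea, not a patch, and it
assumes MahoutLocalContext exposes its DistributedContext under a member name
an abstract suite can bind to): keep the test body in math-scala as an
abstract suite over a DistributedContext, and give each engine module a thin
concrete subclass:

    // math-scala, src/test: engine-agnostic test logic. No Spark imports.
    package org.apache.mahout.math.cf

    import org.scalatest.FunSuite
    import org.apache.mahout.math.drm.DistributedContext

    trait CooccurrenceSuiteBase extends FunSuite {
      // Supplied by the engine-specific subclass (Spark today, others later).
      implicit def mahoutCtx: DistributedContext

      // Test bodies go here, using the implicit context for drmParallelize etc.
    }

    // spark, src/test: a one-line binding to the local Spark test context,
    // assuming MahoutLocalContext provides a compatible mahoutCtx member.
    package org.apache.mahout.cf

    import org.apache.mahout.math.cf.CooccurrenceSuiteBase
    import org.apache.mahout.sparkbindings.test.MahoutLocalContext

    class CooccurrenceSparkSuite extends CooccurrenceSuiteBase with MahoutLocalContext

That would keep the Spark dependency confined to the spark module while the
shared test logic lives next to the code it tests.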


On Mon, Jul 7, 2014 at 9:18 AM, Pat Ferrel <pat.fer...@gmail.com> wrote:

> OK, I’m spending more time on this than I have to spare. The test class
> extends MahoutLocalContext, which provides an implicit Spark context, and I
> haven’t found a way to test parallel execution of cooccurrence without it.
> So far the only obvious option is to put cf into math-scala, but the tests
> would have to remain in spark, and that seems like trouble, so I’d rather
> not do that.
> 
> I suspect that as more math-scala-consuming algorithms get implemented, this
> issue will proliferate: we will have implementations that do not require
> Spark but tests that do. We could create a new sub-project that allows for
> this, I suppose, but a new sub-project would require changes to SparkEngine
> and to mahout’s script.
> 
> If someone (Anand?) wants to offer a PR with some way around this, I’d be
> happy to integrate it.
> 
> On Jun 30, 2014, at 5:39 PM, Pat Ferrel <pat.fer...@gmail.com> wrote:
> 
> No argument; I'm just trying to decide whether to create core-scala or keep
> dumping anything not Spark-dependent into math-scala.
> 
> On Jun 30, 2014, at 9:32 AM, Ted Dunning <ted.dunn...@gmail.com> wrote:
> 
> On Mon, Jun 30, 2014 at 8:36 AM, Pat Ferrel <pat.fer...@gmail.com> wrote:
> 
>> Speaking for Sebastian and Dmitriy (with some ignorance), I think the idea
>> was to isolate things with Spark dependencies, something like what we did
>> before with Hadoop.
> 
> 
> Go ahead and speak for me as well here!
> 
> I think isolating the dependencies is crucial for platform nimbleness
> (nimbility?).