[ 
https://issues.apache.org/jira/browse/MAHOUT-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13983073#comment-13983073
 ] 

Sean Owen commented on MAHOUT-1310:
-----------------------------------

I also meant that instead of

{code}
File tmpFile = new File(System.getProperty("java.io.tmpdir"));
ModelSerializer.writeBinary(tmpFile.getAbsoluteFile().toString()+System.getProperty("file.separator")+"news-group.model",
{code}

you can do 

{code}
ModelSerializer.writeBinary(new File(SystemUtils.getJavaIoTmpDir(), "news-group.model").toString(),
{code}

but this is pretty trivial.
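
For illustration, here is a small self-contained sketch of the difference. This assumes SystemUtils is the Commons Lang class (adjust the import to whichever Commons Lang version is on the classpath), and the class name TmpPathExample is made up:

{code}
// Sketch only: compares the two ways of building the model path.
// Assumes Commons Lang 3; older code may import org.apache.commons.lang.SystemUtils instead.
import java.io.File;
import org.apache.commons.lang3.SystemUtils;

public class TmpPathExample {
  public static void main(String[] args) {
    // Current approach: concatenate tmpdir, separator and file name by hand.
    File tmpFile = new File(System.getProperty("java.io.tmpdir"));
    String manual = tmpFile.getAbsoluteFile().toString()
        + System.getProperty("file.separator") + "news-group.model";

    // Suggested approach: let the File(parent, child) constructor handle the separator.
    String viaFile = new File(SystemUtils.getJavaIoTmpDir(), "news-group.model").toString();

    // Both should print the same path, e.g. /tmp/news-group.model on Linux.
    System.out.println(manual);
    System.out.println(viaFile);
  }
}
{code}

The point is just that new File(parent, child) takes care of the separator, so there is no need to read file.separator at all.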

For the spark module, does it really need its own separate Hadoop dependencies 
and different Hadoop 2 profile? I'm not questioning it, just curious. 

This is a different question, but does CDH really belong here as a separate 
profile, instead of just having a Hadoop 2 profile -- can they be 'merged'? It 
shouldn't be a special case.
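
To illustrate what I mean by "merged" -- this is only a sketch with made-up property and version values, not the actual Mahout pom -- a single Hadoop 2 profile could cover CDH by overriding the Hadoop version rather than carrying a dedicated profile:

{code}
<!-- Sketch only: profile id, property names and versions are illustrative. -->
<profile>
  <id>hadoop2</id>
  <properties>
    <hadoop.version>2.2.0</hadoop.version>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
  </dependencies>
</profile>
{code}

A CDH build would then just pass -Phadoop2 -Dhadoop.version=<the CDH artifact version> on the command line (plus the Cloudera repository if it isn't already declared), instead of being its own special case.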

> Mahout support windows
> ----------------------
>
>                 Key: MAHOUT-1310
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1310
>             Project: Mahout
>          Issue Type: Task
>    Affects Versions: 0.9
>         Environment: Operation system: Windows
>            Reporter: Sergey Svinarchuk
>            Assignee: Sergey Svinarchuk
>              Labels: patch
>             Fix For: 1.0
>
>         Attachments: 1310-2.patch, 1310-3.patch, 1310.patch, 
> Mahout_build_on_winodws.PNG
>
>
> Update Mahout so that it builds on Windows with Hadoop 2, and add bin/mahout.cmd


