That's great. Well then, let's get this release out and start with the
refactoring.

Robin

On Sat, Mar 6, 2010 at 3:20 AM, Jake Mannix <jake.man...@gmail.com> wrote:

> On Thu, Mar 4, 2010 at 7:41 AM, Robin Anil <robin.a...@gmail.com> wrote:
>
> > Based on what I have in mind, the usage will just be
> >
> > mahout vectorize -i s3://input -o s3://output -tmp hdfs://file (here, there
> > is a risk of fixing an exact path and not knowing the Hadoop user; I would
> > have preferred a relative path)
> >
>
> So according to Peter and his support devs over at Amazon, EMR already
> sets the default FileSystem to the cluster-local HDFS, so any unqualified
> Path resolves there. We won't need to do anything special: if we want temp
> directories, we just specify them in the usual unqualified way and they'll
> land on the local HDFS, and if you want to move data in and out of S3, you
> qualify the path directly:
>
> mahout vectorize -i s3://input -o s3://output -tmp tmp/myJobsTemp
>
>  -jake
>
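To make the resolution rule Jake describes concrete, here is a minimal sketch of how a qualified path (s3://...) is kept as-is while an unqualified path falls back to the configured default FileSystem. This is an illustrative Python model, not Hadoop's actual Path/FileSystem code, and the hdfs://namenode:9000 default is a hypothetical value standing in for whatever fs.default.name the cluster configures:

```python
from urllib.parse import urlparse

# Hypothetical default FileSystem, standing in for the cluster's
# configured fs.default.name (the local HDFS on an EMR cluster).
DEFAULT_FS = "hdfs://namenode:9000"

def qualify(path: str, default_fs: str = DEFAULT_FS) -> str:
    """Return the path unchanged if it already carries a scheme
    (e.g. s3://); otherwise qualify it against the default FS."""
    if urlparse(path).scheme:  # "s3://input" -> scheme "s3"
        return path
    return f"{default_fs.rstrip('/')}/{path.lstrip('/')}"

print(qualify("s3://input"))      # stays on S3: s3://input
print(qualify("tmp/myJobsTemp"))  # hdfs://namenode:9000/tmp/myJobsTemp
```

So in the command above, -i and -o hit S3 because they are fully qualified, while the bare tmp/myJobsTemp lands on the cluster's local HDFS with no extra handling.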
