An alternative is to use Apache Whirr to quickly set up a Hadoop
cluster on AWS and install the Mahout binary distribution on one of
the nodes.

Check out http://whirr.apache.org/ and
http://www.searchworkings.org/blog/-/blogs/apache-whirr-includes-mahout-support
for details on the mahout-client role.
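
To give a rough idea (this is only a sketch, not taken from the docs --
the cluster name, instance counts and key paths below are placeholders,
and the exact property and role names should be checked against the
links above), a Whirr recipe along these lines brings up a small Hadoop
cluster on EC2 with Mahout installed on the master node:

  # mahout-cluster.properties (hypothetical example)
  whirr.cluster-name=mahout-cluster
  whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker+mahout-client,2 hadoop-datanode+hadoop-tasktracker
  whirr.provider=aws-ec2
  whirr.identity=${env:AWS_ACCESS_KEY_ID}
  whirr.credential=${env:AWS_SECRET_ACCESS_KEY}
  whirr.private-key-file=${sys:user.home}/.ssh/id_rsa
  whirr.public-key-file=${whirr.private-key-file}.pub

Launch it with:

  bin/whirr launch-cluster --config mahout-cluster.properties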

Frank

On Tue, Apr 3, 2012 at 11:39 AM, Sean Owen <sro...@gmail.com> wrote:
> This is lightly covered in Mahout in Action, but yes, there is really little
> more to know. You upload the job jar and run it like anything else in AWS.
> On Apr 3, 2012 10:24 AM, "Sebastian Schelter" <s...@apache.org> wrote:
>
>> None that I'm aware of. But it's super easy to use Mahout on EMR: you need
>> to upload your data and Mahout's job-jar file to Amazon S3. After that
>> you can simply start a Hadoop job in EMR that makes use of Mahout,
>> just as you would use it on the command line with 'hadoop jar'.
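>>
>> A made-up example (the bucket names, jar version and driver options are
>> placeholders -- check the Mahout documentation for the real ones): after
>> copying the job jar and your input data to S3, SSH to the EMR master
>> node and run, e.g.,
>>
>>   hadoop jar mahout-core-0.6-job.jar \
>>     org.apache.mahout.clustering.kmeans.KMeansDriver \
>>     -i s3n://mybucket/input \
>>     -c s3n://mybucket/initial-clusters \
>>     -o s3n://mybucket/output \
>>     -k 10 -x 20
>>
>> or add the same jar, main class and arguments as a custom jar step in
>> your EMR job flow.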
>>
>> Best,
>> Sebastian
>>
>> On 03.04.2012 11:20, Yuval Feinstein wrote:
>> > Hi.
>> > I heard about Amazon's Elastic MapReduce (
>> > http://aws.amazon.com/elasticmapreduce/),
>> > which provides pre-configured Hadoop servers in the cloud.
>> > Does there exist any service providing Mahout over a similar
>> > infrastructure? I.e., a cloud server providing either a stand-alone or
>> > a distributed Mahout service, where one can upload data files and run
>> > Mahout algorithms?
>> > TIA,
>> > Yuval
>> >
>>
>>