I love this. I am going to pursue getting the collaborative filtering
Hadoop jobs set up for this so people can use them easily. Indeed, it
would be great to showcase this as an example.

Incidentally, for anyone who was interested: I did get an Amazon EC2
image ready that reads a data file from S3, computes recommendations,
writes back the results, and shuts down. That is truly efficient use of
on-demand computing, and it works nicely. It took some effort to work
out how to manage permissions and so on, but I think I have a decent
solution.
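
To give a flavor, here is a minimal sketch of the core of that job,
against the Taste recommender classes in org.apache.mahout.cf.taste
(class names per the current API, so details may vary by Mahout
version). The S3 fetch, the upload of results, and the instance
shutdown all happen outside this snippet, and the file paths and the
neighborhood size are placeholder choices for illustration:

import java.io.File;
import java.io.FileWriter;
import java.io.PrintWriter;

import org.apache.mahout.cf.taste.impl.common.LongPrimitiveIterator;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class RecommenderRun {
  public static void main(String[] args) throws Exception {
    // userID,itemID,preference lines, already copied down from S3
    DataModel model = new FileDataModel(new File("/mnt/prefs.csv"));
    UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
    UserNeighborhood neighborhood =
        new NearestNUserNeighborhood(25, similarity, model);
    Recommender recommender =
        new GenericUserBasedRecommender(model, neighborhood, similarity);

    // Write top-10 recommendations per user to a local file, which is
    // then pushed back to S3 before the instance shuts itself down
    PrintWriter out = new PrintWriter(new FileWriter("/mnt/recs.csv"));
    try {
      LongPrimitiveIterator userIDs = model.getUserIDs();
      while (userIDs.hasNext()) {
        long userID = userIDs.nextLong();
        for (RecommendedItem rec : recommender.recommend(userID, 10)) {
          out.println(userID + "," + rec.getItemID() + "," + rec.getValue());
        }
      }
    } finally {
      out.close();
    }
  }
}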

Anyway, I was just preparing to 'launch' this EC2 image as a sort of
commercialized extension of Mahout. If anyone is interested in doing a
little beta-testing, do let me know.
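
And for anyone who wants to experiment with the Elastic MapReduce
service announced below: a job flow just wants an ordinary Hadoop JAR
that reads from and writes to S3. Here is the canonical word count
against the org.apache.hadoop.mapred API as a starting point; nothing
Mahout-specific yet, and the s3n:// bucket paths are placeholders for
your own:

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class S3WordCount {

  // Emits (token, 1) for every whitespace-delimited token in the input
  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();
    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output,
                    Reporter reporter) throws IOException {
      StringTokenizer tokenizer = new StringTokenizer(value.toString());
      while (tokenizer.hasMoreTokens()) {
        word.set(tokenizer.nextToken());
        output.collect(word, ONE);
      }
    }
  }

  // Sums the counts for each token; also usable as a combiner
  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output,
                       Reporter reporter) throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(S3WordCount.class);
    conf.setJobName("s3-wordcount");
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);
    conf.setReducerClass(Reduce.class);
    // Elastic MapReduce reads and writes S3 through the s3n:// filesystem
    FileInputFormat.setInputPaths(conf, new Path("s3n://mybucket/input"));
    FileOutputFormat.setOutputPath(conf, new Path("s3n://mybucket/output"));
    JobClient.runJob(conf);
  }
}

Upload the JAR and your input to S3, point a new job flow at the JAR
from the console, and the output lands back in the bucket, just as the
announcement describes.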

On Thu, Apr 2, 2009 at 5:24 PM, Tim Bass <[email protected]> wrote:
> Hello Grant,
>
> Here is the link to the (future) page of example applications:
>
> http://developer.amazonwebservices.com/connect/kbcategory.jspa?categoryID=263
>
> Perhaps this is where a future Mahout example app might reside?
>
> Yours sincerely, Tim
>
> On Thu, Apr 2, 2009 at 7:19 PM, Grant Ingersoll <[email protected]> wrote:
>> Yeah, saw this today, too.  Very cool.  One of these days, I'll have time to
>> use the credits Amazon has donated to Apache and try this out more.  I think
>> this furthers the need to make it easy to install Mahout on top of Hadoop in
>> this environment.  Scripts for this would be a great donation.
>>
>> On Apr 2, 2009, at 4:28 AM, [email protected] wrote:
>>
>>> FYI.
>>>
>>> ---------- Forwarded message ----------
>>> From: Amazon Web Services <[email protected]>
>>> Date: Apr 2, 2009 3:23pm
>>> Subject: Announcing Amazon Elastic MapReduce
>>> To: "[email protected]" <[email protected]>
>>>
>>>
>>>> Dear AWS Customer,
>>>
>>>> We are excited today to introduce the public beta of Amazon Elastic
>>>> MapReduce, a web service that enables businesses, researchers, data
>>>> analysts, and developers to easily and cost-effectively process vast
>>>> amounts of data. It utilizes a hosted Hadoop framework running on the
>>>> web-scale infrastructure of Amazon Elastic Compute Cloud (Amazon EC2)
>>>> and Amazon Simple Storage Service (Amazon S3).
>>>
>>>> Using Amazon Elastic MapReduce, you can instantly provision as much or
>>>> as little capacity as you like to perform data-intensive tasks for
>>>> applications such as web indexing, data mining, log file analysis,
>>>> machine learning, financial analysis, scientific simulation, and
>>>> bioinformatics research. Amazon Elastic MapReduce lets you focus on
>>>> crunching or analyzing your data without having to worry about
>>>> time-consuming set-up, management, or tuning of Hadoop clusters or the
>>>> compute capacity upon which they sit.
>>>
>>>> Working with the service is easy: develop your processing application
>>>> using our samples or by building your own, upload your data to Amazon
>>>> S3, use the AWS Management Console or APIs to specify the number and
>>>> type of instances you want, and click "Create Job Flow." We do the
>>>> rest, running Hadoop over the specified number of instances, providing
>>>> progress monitoring, and delivering the output to Amazon S3.
>>>
>>>> We hope this new service will prove a powerful tool for your data
>>>> processing needs. You can sign up and start using the service today at
>>>> aws.amazon.com/elasticmapreduce.
>>>
>>>> Sincerely,
>>>
>>>> The Amazon Web Services Team
>>>
>>
>> --------------------------
>> Grant Ingersoll
>> http://www.lucidimagination.com/
>>
>> Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids) using
>> Solr/Lucene:
>> http://www.lucidimagination.com/search
>>
>>
>
