Hi Abhik,
It looks like you need to set the Hadoop job configuration property
-Dmapred.max.split.size=xxx (in bytes) to a value smaller than the block
size, if passing it through is supported by the Mahout wrapper.
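For example, a hedged sketch (the paths, the cluster count, and the 32 MB split size are hypothetical; whether a given Mahout driver forwards generic -D options to Hadoop depends on the version and command you run):

```shell
# HDFS block size is commonly 64 MB (67108864 bytes); setting
# mapred.max.split.size below it produces more, smaller input splits
# and therefore more map tasks.
# Hypothetical example: 32 MB splits for a Mahout k-means run.
mahout kmeans \
  -Dmapred.max.split.size=33554432 \
  -i /user/me/vectors \
  -c /user/me/initial-clusters \
  -o /user/me/kmeans-output \
  -k 20 -x 10
```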
Shawn
On Thu, Aug 25, 2011 at 11:13 AM, Abhik Banerjee
banerjee.abhik@gmail.com wrote:
Hi,
I hope you are doing fine. I had a
Thank you Sean,
I'll try that today.
Is there a similar example for classification/classify with a web
application?
-Original Message-
From: Lance Norskog [mailto:goks...@gmail.com]
Sent: Saturday, 27 August 2011 05:05
To: user@mahout.apache.org
Subject: Re: How to get
No, there is not.
On Sat, Aug 27, 2011 at 8:33 AM, Ramo Karahasan
ramo.karaha...@googlemail.com wrote:
Thank you Sean,
I'll try that today.
Is there a similar example for classification/classify with a web
application?
Hello,
I wanted to ask whether there is a common workflow for
categorizing/classifying documents with Mahout. For me, one possible
workflow with Solr could be:
index documents into Solr - fetch data from Solr - prepare data for
training - run the training - get a data model - operate with
See here: https://github.com/tdunning/Chapter-16
On Sat, Aug 27, 2011 at 12:33 AM, Ramo Karahasan
ramo.karaha...@googlemail.com wrote:
Thank you Sean,
I'll try that today.
Is there a similar example for classification/classify with a web
application?
-Original Message-
Yes, that is a reasonable workflow. Have you looked at the book Mahout in
Action? (Conflict alert: I am an author.) We provide extensive details on
how you can use categorization and clustering on real problems in the last
two sections of the book.
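The workflow sketched earlier in the thread (index into Solr, fetch, prepare, train, get a model) maps roughly onto Mahout's command-line tools. A hedged sketch, assuming documents have been exported from Solr into category-named subdirectories with one text file per document; exact command names and options vary across Mahout versions:

```shell
# Hypothetical text-classification pipeline with the Mahout CLI.
# Assumes ./docs/<category>/<doc>.txt exported from Solr.

# 1. Pack the raw text files into a Hadoop SequenceFile.
mahout seqdirectory -i ./docs -o ./docs-seq

# 2. Convert the SequenceFile into TF-IDF feature vectors.
mahout seq2sparse -i ./docs-seq -o ./docs-vectors

# 3. Train and evaluate a classifier on the vectors. The exact
#    train/test commands depend on the Mahout version and the
#    algorithm chosen (e.g. naive Bayes), so consult the CLI help
#    for your release.
```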
Also, if you say just a bit more about what