[ https://issues.apache.org/jira/browse/MAHOUT-588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12989323#comment-12989323 ]

Szymon Chojnacki commented on MAHOUT-588:
-----------------------------------------

I'll attach both the tf-vectors and the tfidf-vectors, generated for 1-grams with 
TamingAnalyzer.java (uploaded to MAHOUT-588). Parameters used for 
DictionaryVectorizer.createTermFrequencyVectors():

 int minSupport = 100;
 int maxNGramSize = 1;
 float minLLRValue = LLRReducer.DEFAULT_MIN_LLR;
 int reduceTasks = 10;
 int chunkSize = 128;
 boolean sequentialAccessOutput = false;
 // new parameters in the new API
 float normPower = PartialVectorMerger.NO_NORMALIZING;
 boolean logNormalize = false;
 boolean namedVectors = false;

Parameters used in TFIDFConverter.processTfIdf():

 int reduceTasks = 10;
 int chunkSize = 128;
 boolean sequentialAccessOutput = false;
 int minDf = 1;
 int maxDFPercent = 80;
 float norm = PartialVectorMerger.NO_NORMALIZING;
 boolean logNormalize = false;
 boolean namedVectors = false;
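For context, here is a minimal plain-Java sketch (not Mahout code; the helper names are hypothetical) of what minDf and maxDFPercent do during TF-IDF conversion, together with the Lucene-style weighting that, to my understanding, Mahout's TFIDFConverter applies: a term is kept only if its document frequency falls between the two thresholds, and the weight is roughly sqrt(tf) * (log(numDocs / (df + 1)) + 1).

```java
public class TfIdfSketch {

    // Hypothetical helper: keep a term only if minDf <= df and
    // df is at most maxDFPercent percent of the corpus size.
    static boolean keepTerm(long df, long numDocs, int minDf, int maxDFPercent) {
        return df >= minDf && df * 100L <= (long) maxDFPercent * numDocs;
    }

    // Approximation of the Lucene DefaultSimilarity weighting used by
    // Mahout's TF-IDF at the time: sqrt(tf) * (ln(numDocs / (df + 1)) + 1).
    static double tfIdf(int tf, long df, long numDocs) {
        return Math.sqrt(tf) * (Math.log((double) numDocs / (df + 1)) + 1.0);
    }

    public static void main(String[] args) {
        long numDocs = 1000;
        // With minDf = 1 and maxDFPercent = 80 (the values above), a term
        // appearing in 900 of 1000 docs is pruned; one in 500 docs is kept.
        System.out.println(keepTerm(900, numDocs, 1, 80)); // false
        System.out.println(keepTerm(500, numDocs, 1, 80)); // true
        System.out.printf("%.3f%n", tfIdf(4, 100, numDocs));
    }
}
```

This also illustrates why maxDFPercent = 80 acts as a rough stop-word filter: very common terms exceed the document-frequency ceiling and are dropped before weighting.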

"Mahout in Action" recommends feeding tf-vectors (rather than tfidf-vectors) to 
LDA as input. Thanks for explaining distcp. By the way, I have given up on 
3-grams for now: I was getting OutOfMemoryError even with -Xmx2000M, and after 
raising it to -Xmx4000M, Hadoop threw an IOException.

> Benchmark Mahout's clustering performance on EC2 and publish the results
> ------------------------------------------------------------------------
>
>                 Key: MAHOUT-588
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-588
>             Project: Mahout
>          Issue Type: Task
>            Reporter: Grant Ingersoll
>         Attachments: SequenceFilesFromMailArchives.java, 
> SequenceFilesFromMailArchives2.java, Top1000Tokens_maybe_stopWords, 
> Uncompress.java, clusters_kMeans.txt, distcp_large_to_s3_failed.log, 
> seq2sparse_small_failed.log, seq2sparse_xlarge_ok.log
>
>
> For Taming Text, I've commissioned some benchmarking work on Mahout's 
> clustering algorithms.  I've asked the two doing the project to do all the 
> work in the open here.  The goal is to use a publicly reusable dataset (for 
> now, the ASF mail archives, assuming it is big enough) and run on EC2 and 
> make all resources available so others can reproduce/improve.
> I'd like to add the setup code to utils (although it could possibly be done 
> as a Vectorizer) and the publication of the results will be put up on the 
> Wiki as well as in the book.  This issue is to track the patches, etc.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
