IndexOutOfBoundsException using build-reuters.sh
------------------------------------------------

                 Key: MAHOUT-694
                 URL: https://issues.apache.org/jira/browse/MAHOUT-694
             Project: Mahout
          Issue Type: Bug
          Components: Clustering
    Affects Versions: 0.5
         Environment: Linux Debian Lenny
Hadoop 0.20 (Cloudera)
            Reporter: Allan BLANCHARD


I run Hadoop 0.20 in distributed mode on 10 VMs (NameNode + JobTracker + 8 
DataNodes/TaskTrackers) with Mahout trunk.
I tried the k-means example with build-reuters.sh, but I get an 
IndexOutOfBoundsException as soon as the k-means step starts.
I don't know which operation fails: ExtractReuters, seqdirectory, seq2sparse, 
or kmeans. Maybe I missed some configuration? I searched the web and did not 
find a solution.
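
As the trace at the bottom of the log shows, the exception is thrown from 
RandomSeedGenerator.buildRandom while it picks the initial clusters, and the 
message "Index: 0, Size: 0" is a get() on an empty ArrayList, which suggests 
no candidate seed vectors were read from the input. A minimal sketch of that 
failure mode (illustrative only, not Mahout's actual code; the list stands in 
for the candidate vectors read from tfidf-vectors):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: calling get(0) on an empty ArrayList reproduces the
    // exact message seen in the trace below. In the real job the list would
    // hold the candidate seed vectors read from the tfidf-vectors directory;
    // if that input is missing or empty, seed selection has nothing to pick.
    public class EmptySeedListDemo {
      public static void main(String[] args) {
        List<String> candidateSeeds = new ArrayList<String>();
        // Throws java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
        candidateSeeds.get(0);
      }
    }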

NameNode:/usr/local/mahout/trunk/examples/bin# ./build-reuters.sh 
Please select a number to choose the corresponding clustering algorithm
1. kmeans clustering
2. lda clustering
Enter your choice : 1
ok. You chose 1 and we'll use kmeans Clustering
./build-reuters.sh: line 39: cd: examples/bin/: No such file or directory
Running on hadoop, using HADOOP_HOME=/usr/lib/hadoop-0.20
No HADOOP_CONF_DIR set, using /usr/lib/hadoop-0.20/src/conf 
11/05/12 16:53:32 WARN driver.MahoutDriver: No org.apache.lucene.benchmark.utils.ExtractReuters.props found on classpath, will use command-line arguments only
Deleting all files in ./examples/bin/work/reuters-out/-tmp
11/05/12 16:53:38 INFO driver.MahoutDriver: Program took 5891 ms
Running on hadoop, using HADOOP_HOME=/usr/lib/hadoop-0.20
No HADOOP_CONF_DIR set, using /usr/lib/hadoop-0.20/src/conf 
11/05/12 16:53:39 INFO common.AbstractJob: Command line arguments: {--charset=UTF-8, --chunkSize=5, --endPhase=2147483647, --fileFilterClass=org.apache.mahout.text.PrefixAdditionFilter, --input=./examples/bin/work/reuters-out/, --keyPrefix=, --output=./examples/bin/work/reuters-out-seqdir, --startPhase=0, --tempDir=temp}
11/05/12 16:53:40 INFO driver.MahoutDriver: Program took 1054 ms
Running on hadoop, using HADOOP_HOME=/usr/lib/hadoop-0.20
No HADOOP_CONF_DIR set, using /usr/lib/hadoop-0.20/src/conf 
11/05/12 16:53:41 INFO vectorizer.SparseVectorsFromSequenceFiles: Maximum n-gram size is: 1
11/05/12 16:53:41 INFO vectorizer.SparseVectorsFromSequenceFiles: Minimum LLR value: 1.0
11/05/12 16:53:41 INFO vectorizer.SparseVectorsFromSequenceFiles: Number of reduce tasks: 1
11/05/12 16:53:42 INFO input.FileInputFormat: Total input paths to process : 1
11/05/12 16:53:42 INFO mapred.JobClient: Running job: job_201105121350_0001
[...]
11/05/12 16:56:20 INFO driver.MahoutDriver: Program took 158572 ms
Running on hadoop, using HADOOP_HOME=/usr/lib/hadoop-0.20
No HADOOP_CONF_DIR set, using /usr/lib/hadoop-0.20/src/conf 
11/05/12 16:56:21 INFO common.AbstractJob: Command line arguments: {--clusters=./examples/bin/work/clusters, --convergenceDelta=0.5, --distanceMeasure=org.apache.mahout.common.distance.SquaredEuclideanDistanceMeasure, --endPhase=2147483647, --input=./examples/bin/work/reuters-out-seqdir-sparse/tfidf-vectors/, --maxIter=10, --method=mapreduce, --numClusters=20, --output=./examples/bin/work/reuters-kmeans, --overwrite=null, --startPhase=0, --tempDir=temp}
examples/bin/work/reuters-out-seqdir-sparse/tfidf-vectors
examples/bin/work/clusters
11/05/12 16:56:21 INFO util.NativeCodeLoader: Loaded the native-hadoop library
11/05/12 16:56:21 INFO zlib.ZlibFactory: Successfully loaded & initialized 
native-zlib library
11/05/12 16:56:21 INFO compress.CodecPool: Got brand-new compressor
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
        at java.util.ArrayList.RangeCheck(ArrayList.java:547)
        at java.util.ArrayList.get(ArrayList.java:322)
        at org.apache.mahout.clustering.kmeans.RandomSeedGenerator.buildRandom(RandomSeedGenerator.java:119)
        at org.apache.mahout.clustering.kmeans.KMeansDriver.run(KMeansDriver.java:101)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.mahout.clustering.kmeans.KMeansDriver.main(KMeansDriver.java:58)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:187)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
NameNode:/usr/local/mahout/trunk/examples/bin#
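
In case it is useful, here is a small check of whether the tfidf-vectors 
directory that the kmeans step reads actually contains data (a sketch only: 
the class name and structure are mine, the path is copied from the log above, 
and FileSystem.get() resolves to the local filesystem or to HDFS depending on 
the Hadoop configuration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Sketch: list whatever the kmeans input path contains on the default
    // FileSystem. An empty or missing directory would explain an empty seed list.
    public class CheckVectorsDir {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path vectors = new Path("examples/bin/work/reuters-out-seqdir-sparse/tfidf-vectors/");
        if (!fs.exists(vectors)) {
          System.out.println("Input path does not exist: " + vectors);
          return;
        }
        for (FileStatus status : fs.listStatus(vectors)) {
          System.out.println(status.getPath() + "  (" + status.getLen() + " bytes)");
        }
      }
    }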

PS: Sorry for my bad English :(
