Dear Sir/Madam,

You are running Mahout locally; your log shows "no HADOOP_HOME set,
running locally", which means the job fell back to Hadoop's local job
runner. To run Mahout on top of Hadoop, set HADOOP_HOME and
HADOOP_CONF_DIR in your environment, then try again.
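
For example, something like this (assuming Hadoop is installed under
/usr/local/hadoop; adjust the paths to match your installation):

    # point Mahout at your Hadoop installation and its config directory
    export HADOOP_HOME=/usr/local/hadoop
    export HADOOP_CONF_DIR=$HADOOP_HOME/conf
    # then rerun the same command as before
    $MAHOUT_HOME/bin/mahout wikipediaDataSetCreator \
      -i wikipedia/chunks -o wikipediainput \
      -c $MAHOUT_HOME/examples/src/test/resources/country.txt

With those variables set, the "running locally" warning should go away
and the job will be submitted to your cluster instead of the local job
runner.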

Regards,
Nooshin


On Wed, Dec 16, 2015 at 5:22 AM, 周子博 <[email protected]> wrote:

> Hi,
>
> After finishing the Mahout installation, I tried to run the Wikipedia
> Bayes example and hit the error below.
>
> This is the second time I have run the program, and this time it failed
> on the first job. Yesterday, on my first run, it failed at the 61st job
> (the total is 85, I think) with the same error: Could not find any valid
> local directory for output/spill0.out.
>
> Does this mean my memory settings are wrong? I am still not sure how to
> solve it; the error keeps happening no matter what values I try.
>
> Hadoop version: hadoop-0.20.2
>
> Mahout version: 0.6
>
> jdk1.6
>
> Thank you very much.
>
>
>
> [root@localhost bin]# $MAHOUT_HOME/bin/mahout wikipediaDataSetCreator -i
> wikipedia/chunks -o wikipediainput -c
> $MAHOUT_HOME/examples/src/test/resources/country.txt
> MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
> no HADOOP_HOME set, running locally
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/usr/local/mahout/examples/target/mahout-examples-0.6-job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/mahout/examples/target/dependency/slf4j-jcl-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/mahout/examples/target/dependency/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> 15/12/16 11:28:52 WARN driver.MahoutDriver: No
> wikipediaDataSetCreator.props found on classpath, will use command-line
> arguments only
> 15/12/16 11:28:53 INFO bayes.WikipediaDatasetCreatorDriver: Input:
> wikipedia/chunks Out: wikipediainput Categories:
> /usr/local/mahout/examples/src/test/resources/country.txt
> 15/12/16 11:28:53 INFO common.HadoopUtil: Deleting wikipediainput
> 15/12/16 11:28:53 WARN mapred.JobClient: Use GenericOptionsParser for
> parsing the arguments. Applications should implement Tool for the same.
> 15/12/16 11:28:53 INFO input.FileInputFormat: Total input paths to
> process : 85
> 15/12/16 11:28:53 INFO mapred.JobClient: Running job: job_local_0001
> 15/12/16 11:28:53 INFO mapred.MapTask: io.sort.mb = 100
> 15/12/16 11:28:53 INFO mapred.MapTask: data buffer = 79691776/99614720
> 15/12/16 11:28:53 INFO mapred.MapTask: record buffer = 262144/327680
> 15/12/16 11:28:53 INFO bayes.WikipediaDatasetCreatorMapper: Configure:
> Input Categories size: 229 Exact Match: false Analyzer:
> org.apache.mahout.analysis.WikipediaAnalyzer
> 15/12/16 11:28:54 INFO mapred.JobClient:  map 0% reduce 0%
> 15/12/16 11:28:59 INFO mapred.LocalJobRunner:
> 15/12/16 11:29:00 INFO mapred.JobClient:  map 59% reduce 0%
> 15/12/16 11:29:02 INFO mapred.LocalJobRunner:
> 15/12/16 11:29:03 INFO mapred.JobClient:  map 89% reduce 0%
> 15/12/16 11:29:03 INFO mapred.MapTask: Starting flush of map output
> 15/12/16 11:29:03 WARN mapred.LocalJobRunner: job_local_0001
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any
> valid local directory for output/spill0.out
>  at
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:381)
>  at
> org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:146)
>  at
> org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:127)
>  at
> org.apache.hadoop.mapred.MapOutputFile.getSpillFileForWrite(MapOutputFile.java:121)
>  at
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1392)
>  at
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1298)
>  at
> org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:699)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:766)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
>  at
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
> 15/12/16 11:29:04 INFO mapred.JobClient: Job complete: job_local_0001
> 15/12/16 11:29:04 INFO mapred.JobClient: Counters: 11
> 15/12/16 11:29:04 INFO mapred.JobClient:   File Input Format Counters
> 15/12/16 11:29:04 INFO mapred.JobClient:     Bytes Read=30605312
> 15/12/16 11:29:04 INFO mapred.JobClient:   FileSystemCounters
> 15/12/16 11:29:04 INFO mapred.JobClient:     FILE_BYTES_READ=54218800
> 15/12/16 11:29:04 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=23842961
> 15/12/16 11:29:04 INFO mapred.JobClient:   Map-Reduce Framework
> 15/12/16 11:29:04 INFO mapred.JobClient:     Map output materialized
> bytes=0
> 15/12/16 11:29:04 INFO mapred.JobClient:     Combine output records=0
> 15/12/16 11:29:04 INFO mapred.JobClient:     Map input records=7654
> 15/12/16 11:29:04 INFO mapred.JobClient:     Spilled Records=0
> 15/12/16 11:29:04 INFO mapred.JobClient:     Map output bytes=7669293
> 15/12/16 11:29:04 INFO mapred.JobClient:     SPLIT_RAW_BYTES=123
> 15/12/16 11:29:04 INFO mapred.JobClient:     Map output records=1244
> 15/12/16 11:29:04 INFO mapred.JobClient:     Combine input records=0
> 15/12/16 11:29:04 INFO driver.MahoutDriver: Program took 12000 ms
> (Minutes: 0.2)
>