Re: Hadoop Terasort Benchmark Failure - Need Inputs

2014-11-30 Thread Bing Jiang
Hi Ashish,

I have seen a similar issue before and reported it here:
https://issues.apache.org/jira/browse/MAPREDUCE-5782

There is a workaround in that JIRA.

-Bing



2014-11-30 4:07 GMT+08:00 Ashish Kumar9 ashis...@in.ibm.com:

 Hi,

 I am facing an issue when I run the teragen/terasort benchmarks. Can someone
 advise if you have faced the same issue?

 *Command Used*
 yarn jar
 /opt/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
  terasort input output

 *Exception*
 14/11/29 07:03:10 INFO Configuration.deprecation:
 mapred.output.value.class is deprecated. Instead, use
 mapreduce.job.output.value.class
 14/11/29 07:03:10 INFO Configuration.deprecation:
 mapred.compress.map.output is deprecated. Instead, use
 mapreduce.map.output.compress
 14/11/29 07:03:10 INFO Configuration.deprecation:
 min.num.spills.for.combine is deprecated. Instead, use
 mapreduce.map.combine.minspills
 14/11/29 07:03:10 WARN mapred.LocalJobRunner: job_local_0001
 java.lang.IllegalArgumentException: can't read paritions file
 at
 org.apache.hadoop.examples.terasort.TeraSort$TotalOrderPartitioner.setConf(TeraSort.java:216)
 at
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
 at
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
 at
 org.apache.hadoop.mapred.MapTask$NewOutputCollector.init(MapTask.java:675)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:740)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:368)
 at
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:270)
 Caused by: java.io.FileNotFoundException: File _partition.lst does not
 exist
 at
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:520)
 at
 org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
 at
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:142)
 at
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:344)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:772)
 at
 org.apache.hadoop.examples.terasort.TeraSort$TotalOrderPartitioner.readPartitions(TeraSort.java:158)
 at
 org.apache.hadoop.examples.terasort.TeraSort$TotalOrderPartitioner.setConf(TeraSort.java:213)
 ... 6 more
 14/11/29 07:03:10 INFO Configuration.deprecation: job.end.notification.url
 is deprecated. Instead, use mapreduce.job.end-notification.url
 14/11/29 07:03:11 INFO mapred.JobClient:  map 0% reduce 0%
 14/11/29 07:03:11 INFO mapred.JobClient: Job complete: job_local_0001
 14/11/29 07:03:11 INFO mapred.JobClient: Counters: 0
 14/11/29 07:03:11 INFO terasort.TeraSort: done

 *Investigations done so far*

 - Thoroughly validated mapred-site.xml; it is fully in line with the
   recommendations at
   http://hadoop.apache.org/docs/r2.2.0/hadoop-project-dist/hadoop-common/ClusterSetup.html
 - teragen executes successfully, but terasort fails with the above exception.
 - Some sites suggest that I should use the property  in mapred-site.xml,
   but it looks like this property is no longer valid.
 - Granted full access to the HDFS directory:
   hadoop fs -chmod -R 775 /
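For context on the failing piece: TeraSort's TotalOrderPartitioner reads a list
of sampled cut keys from _partition.lst and routes each record to the reducer
that owns its key range, which is why a missing partition file fails every map
task. A minimal Python sketch of the idea (illustrative names only, not
Hadoop's API):

```python
import bisect

def make_partitioner(cut_keys):
    """Map a key to a partition index using sorted cut keys.

    cut_keys plays the role of the sampled split points TeraSort
    stores in _partition.lst: N-1 keys define N reducer ranges.
    """
    def partition(key):
        # bisect_right counts how many cut keys are <= key, which is
        # exactly the index of the range the key falls into.
        return bisect.bisect_right(cut_keys, key)
    return partition

# Three reducers need two cut keys.
p = make_partitioner(["g", "p"])
print(p("apple"), p("house"), p("zebra"))  # 0 1 2
```

The real implementation optimizes the lookup, but the routing contract is the
same: equal keys always land in the same reducer, and the ranges are contiguous,
so the reducer outputs concatenate into one globally sorted file.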


 Thanks and Regards,
 Ashish Kumar


Re: Hadoop Terasort Benchmark Failure - Need Inputs

2014-11-30 Thread Ashish Kumar9
I suppose you are suggesting something like the below, which I tried; it did
not help.

yarn jar 
/opt/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar 
 terasort -Dmapreduce.totalorderpartitioner.path=_sortPartitioning input 
output
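One thing worth noting about that attempt: generic options take the form
-Dkey=value with no space around the equals sign, and whatever path is given
must actually exist on the filesystem the tasks read from. A hedged sketch of
the stock sequence, which lets TeraSort manage its own partition file (jar path
as above; the row count is arbitrary):

```shell
# teragen writes the input; terasort then samples it, writes its
# partition file (_partition.lst), and runs the sort.
yarn jar /opt/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar \
  teragen 1000000 input
yarn jar /opt/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar \
  terasort input output
```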

Thanks
Ashish


From:   Bing Jiang jiangbinglo...@gmail.com
To: user@hadoop.apache.org
Date:   12/01/2014 11:13 AM
Subject:Re: Hadoop Terasort Benchmark Failure - Need Inputs








Re: Benchmark Failure

2014-03-22 Thread Lixiang Ao
Checked the logs, and it turned out to be a configuration problem. Setting
dfs.namenode.fs-limits.min-block-size to 1 fixed it.
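For anyone hitting the same limit: the property lives in hdfs-site.xml on the
NameNode. A sketch of the override (revert it after benchmarking, since tiny
blocks bloat NameNode metadata):

```xml
<!-- hdfs-site.xml: allow clients to request very small block sizes so
     NNBench's 1-byte blocks pass the NameNode's sanity check.
     Remove this override once the benchmark run is finished. -->
<property>
  <name>dfs.namenode.fs-limits.min-block-size</name>
  <value>1</value>
</property>
```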

Thanks.



Re: Benchmark Failure

2014-03-22 Thread Harsh J
Do not leave that configuration in place after your tests are done. Allowing
such tiny block sizes from clients would be very harmful: it lets them flood
your NameNode's metadata with a huge number of blocks for a small file.

If it's possible, tune NNBench's block size to be larger instead.
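To put rough numbers on that warning: the NameNode tracks one block entry per
ceil(fileSize / blockSize), so shrinking the block size multiplies metadata
linearly. A quick illustration (file size arbitrary):

```python
import math

def block_count(file_size_bytes, block_size_bytes):
    """Number of block entries the NameNode must track for one file."""
    return math.ceil(file_size_bytes / block_size_bytes)

one_mib = 1024 * 1024
# With ~default 128 MiB blocks, a 1 MiB file is a single block.
print(block_count(one_mib, 128 * one_mib))  # 1
# With a 1-byte block size, the same file explodes into ~a million blocks.
print(block_count(one_mib, 1))  # 1048576
```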

On Sat, Mar 22, 2014 at 2:26 PM, Lixiang Ao aolixi...@gmail.com wrote:
 Checked the logs, and it turned out to be a configuration problem. Just set
 dfs.namenode.fs-limits.min-block-size to 1 and it's fixed.

 Thanks.



RE: Benchmark Failure

2014-03-19 Thread Brahma Reddy Battula
This seems to be a known issue that has already been logged. Please check the
following JIRA; hopefully it matches the issue you are facing:



https://issues.apache.org/jira/browse/HDFS-4929







Thanks & Regards



Brahma Reddy Battula




From: Lixiang Ao [aolixi...@gmail.com]
Sent: Tuesday, March 18, 2014 10:34 AM
To: user@hadoop.apache.org
Subject: Re: Benchmark Failure


the version is release 2.2.0

On Mar 18, 2014 at 12:26 AM, Lixiang Ao aolixi...@gmail.com wrote:
Hi all,

I'm running the jobclient tests (on a single node); the other tests, such as
TestDFSIO and mrbench, succeed, but nnbench fails.

I got a lot of exceptions, but without any explanation (see below).

Could anyone tell me what might have gone wrong?

Thanks!


Benchmark Failure

2014-03-17 Thread Lixiang Ao
Hi all,

I'm running the jobclient tests (on a single node); the other tests, such as
TestDFSIO and mrbench, succeed, but nnbench fails.

I got a lot of exceptions, but without any explanation (see below).

Could anyone tell me what might have gone wrong?

Thanks!


14/03/17 23:54:22 INFO hdfs.NNBench: Waiting in barrier for: 112819 ms
14/03/17 23:54:23 INFO mapreduce.Job: Job job_local2133868569_0001 running
in uber mode : false
14/03/17 23:54:23 INFO mapreduce.Job:  map 0% reduce 0%
14/03/17 23:54:28 INFO mapred.LocalJobRunner: hdfs://
0.0.0.0:9000/benchmarks/NNBench-aolx-PC/control/NNBench_Controlfile_10:0+125
map
14/03/17 23:54:29 INFO mapreduce.Job:  map 6% reduce 0%
14/03/17 23:56:15 INFO hdfs.NNBench: Exception recorded in op:
Create/Write/Close
14/03/17 23:56:15 INFO hdfs.NNBench: Exception recorded in op:
Create/Write/Close
14/03/17 23:56:15 INFO hdfs.NNBench: Exception recorded in op:
Create/Write/Close
14/03/17 23:56:15 INFO hdfs.NNBench: Exception recorded in op:
Create/Write/Close
14/03/17 23:56:15 INFO hdfs.NNBench: Exception recorded in op:
Create/Write/Close
14/03/17 23:56:15 INFO hdfs.NNBench: Exception recorded in op:
Create/Write/Close
14/03/17 23:56:15 INFO hdfs.NNBench: Exception recorded in op:
Create/Write/Close
14/03/17 23:56:15 INFO hdfs.NNBench: Exception recorded in op:
Create/Write/Close
(1000 Exceptions)
.
.
.
results:

File System Counters
FILE: Number of bytes read=18769411
 FILE: Number of bytes written=21398315
FILE: Number of read operations=0
FILE: Number of large read operations=0
 FILE: Number of write operations=0
HDFS: Number of bytes read=11185
HDFS: Number of bytes written=19540
 HDFS: Number of read operations=325
HDFS: Number of large read operations=0
HDFS: Number of write operations=13210
 Map-Reduce Framework
Map input records=12
Map output records=95
 Map output bytes=1829
Map output materialized bytes=2091
Input split bytes=1538
 Combine input records=0
Combine output records=0
Reduce input groups=8
 Reduce shuffle bytes=0
Reduce input records=95
Reduce output records=8
 Spilled Records=214
Shuffled Maps =0
Failed Shuffles=0
 Merged Map outputs=0
GC time elapsed (ms)=211
CPU time spent (ms)=0
 Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Total committed heap usage (bytes)=4401004544
 File Input Format Counters
Bytes Read=1490
File Output Format Counters
 Bytes Written=170
14/03/17 23:56:18 INFO hdfs.NNBench: -- NNBench --
:
14/03/17 23:56:18 INFO hdfs.NNBench:
 Version: NameNode Benchmark 0.4
14/03/17 23:56:18 INFO hdfs.NNBench:Date 
time: 2014-03-17 23:56:18,619
14/03/17 23:56:18 INFO hdfs.NNBench:
14/03/17 23:56:18 INFO hdfs.NNBench: Test
Operation: create_write
14/03/17 23:56:18 INFO hdfs.NNBench: Start
time: 2014-03-17 23:56:15,521
14/03/17 23:56:18 INFO hdfs.NNBench:Maps to
run: 12
14/03/17 23:56:18 INFO hdfs.NNBench: Reduces to
run: 6
14/03/17 23:56:18 INFO hdfs.NNBench: Block Size
(bytes): 1
14/03/17 23:56:18 INFO hdfs.NNBench: Bytes to
write: 0
14/03/17 23:56:18 INFO hdfs.NNBench: Bytes per
checksum: 1
14/03/17 23:56:18 INFO hdfs.NNBench:Number of
files: 1000
14/03/17 23:56:18 INFO hdfs.NNBench: Replication
factor: 3
14/03/17 23:56:18 INFO hdfs.NNBench: Successful file
operations: 0
14/03/17 23:56:18 INFO hdfs.NNBench:
14/03/17 23:56:18 INFO hdfs.NNBench: # maps that missed the
barrier: 11
14/03/17 23:56:18 INFO hdfs.NNBench:   #
exceptions: 1000
14/03/17 23:56:18 INFO hdfs.NNBench:
14/03/17 23:56:18 INFO hdfs.NNBench:TPS:
Create/Write/Close: 0
14/03/17 23:56:18 INFO hdfs.NNBench: Avg exec time (ms):
Create/Write/Close: Infinity
14/03/17 23:56:18 INFO hdfs.NNBench: Avg Lat (ms):
Create/Write: NaN
14/03/17 23:56:18 INFO hdfs.NNBench:Avg Lat (ms):
Close: NaN
14/03/17 23:56:18 INFO hdfs.NNBench:
14/03/17 23:56:18 INFO hdfs.NNBench:  RAW DATA: AL Total
#1: 0
14/03/17 23:56:18 INFO hdfs.NNBench:  RAW DATA: AL Total
#2: 0
14/03/17 23:56:18 INFO hdfs.NNBench:   RAW DATA: TPS Total
(ms): 1131
14/03/17 23:56:18 INFO hdfs.NNBench:RAW DATA: Longest Map Time
(ms): 1.395071776653E12
14/03/17 23:56:18 INFO hdfs.NNBench:RAW DATA: Late
maps: 11
14/03/17 23:56:18 INFO hdfs.NNBench:  RAW DATA: # of
exceptions: 1000
14/03/17 23:56:18 INFO hdfs.NNBench: