I am using Hadoop 0.18.3 with LZO 2.03.  I am able to compile Hadoop's
native code and load LZO's native library.  I am trying to run the grep
example in examples.jar on an LZO-compressed file, and I am getting an
OutOfMemoryError on the Java heap space.  My input file is 1628727 bytes,
which compresses to 157879 bytes.  My map/reduce child Java process runs
with a 2000m heap, so I think that ought to be enough.  What am I doing
wrong?
Bill

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2000m</value>
  <description>Java opts for the task tracker child processes.
  The following symbol, if present, will be interpolated: @taskid@ is replaced
  by current TaskID. Any other occurrences of '@' will go unchanged.
  For example, to enable verbose gc logging to a file named for the taskid in
  /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of:
        -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc

  The configuration variable mapred.child.ulimit can be used to control the
  maximum virtual memory of the child processes.
  </description>
</property>
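
For completeness: I have not set mapred.child.ulimit myself, but a sketch of
what such a cap would look like follows (the 3145728 value is illustrative
only; as I understand it the unit is kilobytes, so this would be roughly 3 GB
of virtual memory):

```xml
<property>
  <name>mapred.child.ulimit</name>
  <!-- Illustrative value, not from my cluster: ~3 GB virtual memory cap,
       expressed in KB. -->
  <value>3145728</value>
</property>
```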


2009-08-21 14:48:44,043 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2009-08-21 14:48:44,158 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1
2009-08-21 14:48:44,175 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 100
2009-08-21 14:48:44,286 INFO org.apache.hadoop.mapred.MapTask: data buffer = 79691776/99614720
2009-08-21 14:48:44,286 INFO org.apache.hadoop.mapred.MapTask: record buffer = 262144/327680
2009-08-21 14:48:44,379 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2009-08-21 14:48:44,382 INFO org.apache.hadoop.io.compress.LzoCodec: Successfully loaded & initialized native-lzo library
2009-08-21 14:48:44,591 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:100)
    at org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:82)
    at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:74)
    at java.io.InputStream.read(InputStream.java:85)
    at org.apache.hadoop.mapred.LineRecordReader$LineReader.backfill(LineRecordReader.java:94)
    at org.apache.hadoop.mapred.LineRecordReader$LineReader.readLine(LineRecordReader.java:124)
    at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:266)
    at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:39)
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:165)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:45)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:227)
    at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2210)
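
For anyone trying to make sense of the failure site: as I read it, a
block-decompressor stream reads a length prefix from the input and then
sizes a buffer from it, so if the file is not in the block format the codec
expects, the "length" it reads can be garbage and the allocation can blow
the heap regardless of -Xmx. This is only my own toy demo of that
length-prefixed pattern (my classes and names, not Hadoop's actual code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class BlockFormatDemo {

    // Write one block: a 4-byte big-endian length prefix, then the payload.
    static byte[] writeBlock(byte[] payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(payload.length);
        out.write(payload);
        return bos.toByteArray();
    }

    // Read one block: the length prefix drives the buffer allocation.
    // If the stream is not really in this format, len can be an arbitrary
    // huge number, and `new byte[len]` is where the OutOfMemoryError hits.
    static byte[] readBlock(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] buf = new byte[len];
        in.readFully(buf);
        return buf;
    }

    public static void main(String[] args) throws IOException {
        byte[] block = writeBlock("hello".getBytes("UTF-8"));
        byte[] back = readBlock(
                new DataInputStream(new ByteArrayInputStream(block)));
        System.out.println(new String(back, "UTF-8")); // prints "hello"
    }
}
```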
