I could view the Snappy file with hadoop fs -cat, but when I issue -text, it
gives me the error below, even though the file is really tiny. What have I
done wrong? Thanks.
hadoop fs -text /test/SinkToHDFS-ip-.us-west-2.compute.internal-6703-22-20131212-0.snappy

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.io.compress.BlockDecompressorStream.getCompressedData(BlockDecompressorStream.java:115)
        at org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:95)
        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:83)
        at java.io.InputStream.read(InputStream.java:82)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:78)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:52)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112)
        at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:86)
        at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:81)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:306)
        at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
        at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
        at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
        at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:254)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:304)
