Hi, good morning to all of you.

Any update on the problem below?


Thanks & best regards,
Adarsh Sharma

Amareshwari Sri Ramadasu wrote:
You cannot run it with TextInputFormat. You should run it with org.apache.hadoop.mapred.pipes.WordCountInputFormat, which you can pass via the -inputformat option.
I have not tried it myself, but it should work.

-Amareshwari

On 3/31/11 12:23 PM, "Adarsh Sharma" <adarsh.sha...@orkash.com> wrote:

    Thanks Amareshwari,

    here is the posting :
    The nopipe example needs more documentation.  It assumes that it
is run with the InputFormat from
    src/test/org/apache/hadoop/mapred/pipes/WordCountInputFormat.java, which has a very specific input split format. By running with a TextInputFormat, it will send binary bytes as the input split and won't work right. The nopipe example should probably be recoded to use libhdfs too, but that is more complicated to get running as a unit test. Also note that since the C++ example is using local file reads, it will only work on a cluster if you have NFS or something similar working across the cluster.

    Please correct me if I'm wrong.

    I need to run it with TextInputFormat.

    If possible, please explain the above post more clearly.


    Thanks & best regards,
    Adarsh Sharma



    Amareshwari Sri Ramadasu wrote:


        Here is an answer for your question in old mail archive:
        http://lucene.472066.n3.nabble.com/pipe-application-error-td650185.html

I don't understand the reason for this, or its solution.


        On 3/31/11 10:15 AM, "Adarsh Sharma"
        <adarsh.sha...@orkash.com> <mailto:adarsh.sha...@orkash.com>
         wrote:

        Any update on the error below?

        Please guide.


        Thanks & best Regards,
        Adarsh Sharma



        Adarsh Sharma wrote:

            Dear all,

            Today I faced a problem while running a MapReduce job in
            C++. I am not able to find the reason for the error below:


            11/03/30 12:09:02 INFO mapred.JobClient: Task Id : attempt_201103301130_0011_m_000000_0, Status : FAILED
            java.io.IOException: pipe child exception
                    at org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
                    at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
                    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
                    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
                    at org.apache.hadoop.mapred.Child.main(Child.java:170)
            Caused by: java.io.EOFException
                    at java.io.DataInputStream.readByte(DataInputStream.java:250)
                    at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:298)
                    at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:319)
                    at org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:114)

            attempt_201103301130_0011_m_000000_0: Hadoop Pipes Exception: failed to open  at wordcount-nopipe.cc:82 in WordCountReader::WordCountReader(HadoopPipes::MapContext&)
            11/03/30 12:09:02 INFO mapred.JobClient: Task Id : attempt_201103301130_0011_m_000001_0, Status : FAILED
            java.io.IOException: pipe child exception
                    at org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
                    at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
                    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
                    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
                    at org.apache.hadoop.mapred.Child.main(Child.java:170)
            Caused by: java.io.EOFException
                    at java.io.DataInputStream.readByte(DataInputStream.java:250)
                    at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:298)
                    at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:319)
                    at org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:114)

            attempt_201103301130_0011_m_000001_0: Hadoop Pipes Exception: failed to open  at wordcount-nopipe.cc:82 in WordCountReader::WordCountReader(HadoopPipes::MapContext&)
            11/03/30 12:09:02 INFO mapred.JobClient: Task Id : attempt_201103301130_0011_m_000002_0, Status : FAILED
            java.io.IOException: pipe child exception
                    at org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
                    at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
                    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
                    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
                    at org.apache.hadoop.mapred.Child.main(Child.java:170)
            Caused by: java.io.EOFException
                    at java.io.DataInputStream.readByte(DataInputStream.java:250)
                    at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:298)
                    at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:319)
                    at org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:114)

            attempt_201103301130_0011_m_000002_1: Hadoop Pipes Exception: failed to open  at wordcount-nopipe.cc:82 in WordCountReader::WordCountReader(HadoopPipes::MapContext&)
            11/03/30 12:09:15 INFO mapred.JobClient: Task Id : attempt_201103301130_0011_m_000000_2, Status : FAILED
            java.io.IOException: pipe child exception
                    at org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
                    at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
                    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:35

            I tried to run the wordcount-nopipe.cc program in the
            /home/hadoop/project/hadoop-0.20.2/src/examples/pipes/impl
            directory:


            make  wordcount-nopipe
            bin/hadoop fs -put wordcount-nopipe   bin/wordcount-nopipe
            bin/hadoop pipes -D hadoop.pipes.java.recordreader=true -D
            hadoop.pipes.java.recordwriter=true -input gutenberg -output
            gutenberg-out11 -program bin/wordcount-nopipe

                                             or
            bin/hadoop pipes -D hadoop.pipes.java.recordreader=false -D
            hadoop.pipes.java.recordwriter=false -input gutenberg -output
            gutenberg-out11 -program bin/wordcount-nopipe

            but the error remains the same. I have also attached my
            Makefile; please comment on it.

            I am able to run a simple wordcount.cpp program on the
            Hadoop cluster, but I don't know why this program fails
            with a broken pipe error.



            Thanks & best regards
            Adarsh Sharma






