Thanks Steve, you have helped me clear my doubts several times.

Let me explain what my problem is:

I am trying to run the *wordcount-nopipe.cc* program from the */home/hadoop/project/hadoop-0.20.2/src/examples/pipes/impl* directory. I am able to run a simple wordcount.cpp program on the Hadoop cluster, but when I run this program, I get the exception below:

*bash-3.2$ bin/hadoop pipes -D hadoop.pipes.java.recordreader=true -D hadoop.pipes.java.recordwriter=true -input gutenberg -output gutenberg-out1101 -program bin/wordcount-nopipe2*

11/03/31 14:59:07 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
11/03/31 14:59:07 INFO mapred.FileInputFormat: Total input paths to process : 3
11/03/31 14:59:08 INFO mapred.JobClient: Running job: job_201103310903_0007
11/03/31 14:59:09 INFO mapred.JobClient:  map 0% reduce 0%
11/03/31 14:59:18 INFO mapred.JobClient: Task Id : attempt_201103310903_0007_m_000000_0, Status : FAILED
java.io.IOException: pipe child exception
       at org.apache.hadoop.mapred.pipes.Application.abort(Application.java:151)
       at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:101)
       at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
       at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
       at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.net.SocketException: Broken pipe
       at java.net.SocketOutputStream.socketWrite0(Native Method)
       at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
       at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
       at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
       at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
       at java.io.DataOutputStream.flush(DataOutputStream.java:106)
       at org.apache.hadoop.mapred.pipes.BinaryProtocol.flush(BinaryProtocol.java:316)
       at org.apache.hadoop.mapred.pipes.Application.waitForFinish(Application.java:129)
       at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:99)
       ... 3 more

attempt_201103310903_0007_m_000000_0: Hadoop Pipes Exception: failed to open at wordcount-nopipe2.cc:86 in WordCountReader::WordCountReader(HadoopPipes::MapContext&)

11/03/31 14:59:18 INFO mapred.JobClient: Task Id : attempt_201103310903_0007_m_000001_0, Status : FAILED
(attempt_201103310903_0007_m_000001_0 fails with the same "pipe child exception" / "Broken pipe" stack trace as above)

After some research, I found the links below quite useful:

http://lucene.472066.n3.nabble.com/pipe-application-error-td650185.html
http://stackoverflow.com/questions/4395140/eofexception-thrown-by-a-hadoop-pipes-program

But I don't know how to resolve this. I think my program tries to open the file as file://gutenberg, but it needs to open it as hdfs://.....

Here are the contents of my Makefile:

CC = g++
HADOOP_INSTALL = /home/hadoop/project/hadoop-0.20.2
PLATFORM = Linux-amd64-64
CPPFLAGS = -m64 -I$(HADOOP_INSTALL)/c++/$(PLATFORM)/include -I/usr/local/cuda/include

wordcount-nopipe2: wordcount-nopipe2.cc
	$(CC) $(CPPFLAGS) $< -Wall -L$(HADOOP_INSTALL)/c++/$(PLATFORM)/lib -L/usr/local/cuda/lib64 \
	    -lhadooppipes -lhadooputils -lpthread -g -O2 -o $@

Could this be a bug in hadoop-0.20.2? If not, please guide me on how to debug it.



Thanks and best regards,
Adarsh Sharma
Steve Loughran wrote:
On 31/03/11 07:37, Adarsh Sharma wrote:
Thanks a lot for such a deep explanation.

I have done it now, but it doesn't help me with my original problem, for
which I'm doing this.

Please comment if you have any idea. I have attached the problem.


Sadly, Matt's deep explanation is what you need, low-level as it is:

- patches are designed to be applied to source, so you need the Apache source tree, not a binary installation.

- you need to be sure that the source version you have matches the one the patch was written against, unless you want to get into understanding the source well enough to fix any inconsistencies yourself.

- you need to rebuild Hadoop afterwards.

Because Apache code is open source, patches and the like are all bits of source. This is not Windows, where the source is secret and all OS updates are bits of binary code. The view is that if you want to apply patches, then yes, you do have to work at the source level.

The good news is that once you can do that for one patch, you can apply others, and you will be in a position to find and fix bugs yourself.

-steve
