Hi,
Here is the output from the console; I can't find anything more in
HAMA_HOME/logs.
hadoop@datanode4:/usr/local/hama$ bin/hama jar
/home/datanode4/Desktop/WeiboRank.jar vertexresult weiborankresult
13/09/05 08:55:08 INFO mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
13/09/05 08:55:11 INFO bsp.FileInputFormat: Total input paths to process : 1
13/09/05 08:55:11 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
13/09/05 08:55:11 WARN snappy.LoadSnappy: Snappy native library not loaded
13/09/05 08:55:11 INFO bsp.FileInputFormat: Total input paths to process : 1
13/09/05 08:55:13 INFO bsp.BSPJobClient: Running job: job_201309041029_0025
13/09/05 08:55:13 INFO bsp.BSPJobClient: Job failed.
13/09/05 08:55:13 ERROR bsp.BSPJobClient: Error partitioning the input path.
Exception in thread "main" java.io.IOException: Runtime partition failed
for the job.
at org.apache.hama.bsp.BSPJobClient.partition(BSPJobClient.java:465)
at org.apache.hama.bsp.BSPJobClient.submitJobInternal(BSPJobClient.java:333)
at org.apache.hama.bsp.BSPJobClient.submitJob(BSPJobClient.java:293)
at org.apache.hama.bsp.BSPJob.submit(BSPJob.java:229)
at org.apache.hama.graph.GraphJob.submit(GraphJob.java:203)
at org.apache.hama.bsp.BSPJob.waitForCompletion(BSPJob.java:236)
at WeiboRank.main(WeiboRank.java:161)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hama.util.RunJar.main(RunJar.java:146)
The class extends Vertex, just like the PageRank example.
Here is the InputReader:
public static class WeiboRankReader extends
    VertexInputReader<LongWritable, Text, Text, NullWritable, DoubleWritable> {

  @Override
  public boolean parseVertex(LongWritable key, Text value,
      Vertex<Text, NullWritable, DoubleWritable> vertex) throws Exception {
    String[] keyvaluePair = value.toString().split("\t");
    if (keyvaluePair.length > 1) {
      vertex.setVertexID(new Text(keyvaluePair[0]));
      String edgeString = keyvaluePair[1];
      if (!edgeString.equals("")) {
        String[] edges = edgeString.split(",");
        for (String e : edges) {
          vertex.addEdge(new Edge<Text, NullWritable>(new Text(e), null));
        }
      }
    } else {
      vertex.setVertexID(new Text(keyvaluePair[0]));
    }
    return true;
  }
}
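For reference, the reader above assumes input lines of the form
vertexID<TAB>edge1,edge2,... Here is a minimal sketch of that same split
logic in plain Java (no Hama dependencies; the vertex IDs "userA" etc. are
made-up sample data, not from my actual input):

```java
// Sketch of the tokenizing that WeiboRankReader performs on each line:
// split on tab first, then split the edge list on commas.
public class SplitSketch {
  public static void main(String[] args) {
    String line = "userA\tuserB,userC"; // hypothetical sample input line
    String[] keyvaluePair = line.split("\t");
    String vertexId = keyvaluePair[0];
    // A line with no tab yields length 1, which hits the else branch above.
    if (keyvaluePair.length > 1) {
      String[] edges = keyvaluePair[1].split(",");
      System.out.println(vertexId + " -> " + edges.length + " edges");
    } else {
      System.out.println(vertexId + " -> no edges");
    }
  }
}
```

If the real input files deviate from this layout (for example,
space-separated instead of tab-separated), parseVertex would still return
true but produce malformed vertices, which could plausibly surface as a
partitioning failure.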
2013/9/5 Edward J. Yoon <[email protected]>
> Can you provide full client console logs?
>
> On Wed, Sep 4, 2013 at 10:21 PM, 邓凯 <[email protected]> wrote:
> > Hi,
> > I have a hadoop-1.1.2 cluster with one namenode and four
> > datanodes. I built hama-0.6.2 on it. When I run the benchmarks and
> > the examples such as PageRank, everything goes well.
> > But today when I ran my own code it hit an exception.
> > The log says: ERROR bsp.BSPJobClient: Error partitioning the input
> > path
> > The exception is: Exception in thread "main" java.io.IOException:
> > Runtime partition failed for the job.
> > Based on this, I think there is something wrong with my code.
> > My Hama cluster has 4 groomservers and its task capacity is 12.
> > I use the command: bin/hama jar Weiborank.jar vertexresult
> > weiborankresult 12
> > The directory vertexresult has only one file in it, and I use
> > HashPartitioner.class as the partitioner.
> > I wonder whether this is caused by having only one file in the input
> > path while there are 12 BSP tasks. If so, can I fix it by increasing
> > the number of files in the input path?
> > Thanks a lot.
>
>
>
> --
> Best Regards, Edward J. Yoon
> @eddieyoon
>