Hi 

I am using Hama 0.6.4 and I am running my custom program on a cluster
of 4 machines. My input is a single file, and I set the number of
BSP tasks to the number of groom servers by calling
JOB.setNumBspTask(cluster.getGroomServers()). I am using
HashPartitioner.class to partition the data.
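For reference, my setup looks roughly like this (simplified; everything except the Hama classes and the two calls above is just my own sketch):

```java
import org.apache.hama.HamaConfiguration;
import org.apache.hama.bsp.BSPJob;
import org.apache.hama.bsp.BSPJobClient;
import org.apache.hama.bsp.ClusterStatus;
import org.apache.hama.bsp.HashPartitioner;

// Simplified version of my job configuration: one BSP task per
// groom server, HashPartitioner on the single input file.
HamaConfiguration conf = new HamaConfiguration();
BSPJobClient jobClient = new BSPJobClient(conf);
ClusterStatus cluster = jobClient.getClusterStatus(true);

BSPJob job = new BSPJob(conf, Matcher.class);
job.setNumBspTask(cluster.getGroomServers());  // 4 on my 4-machine cluster
job.setPartitioner(HashPartitioner.class);
```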


I have a problem when loading my data. When I run my custom program, I get the
following error messages:

14/03/07 16:02:34 INFO bsp.FileInputFormat: Total input paths to process : 1
14/03/07 16:02:34 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/03/07 16:02:34 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/07 16:02:34 INFO bsp.FileInputFormat: Total input paths to process : 1
Exception in thread "main" java.io.IOException: Job failed! The number of splits has exceeded the number of max tasks. The number of splits: 52, The number of max tasks: 20
        at org.apache.hama.bsp.BSPJobClient.submitJobInternal(BSPJobClient.java:349)
        at org.apache.hama.bsp.BSPJobClient.submitJob(BSPJobClient.java:296)
        at org.apache.hama.bsp.BSPJob.submit(BSPJob.java:219)
        at org.apache.hama.bsp.BSPJob.waitForCompletion(BSPJob.java:226)
        at org.apache.hama.bsp.BSPJobClient.partition(BSPJobClient.java:460)
        at org.apache.hama.bsp.BSPJobClient.submitJobInternal(BSPJobClient.java:341)
        at org.apache.hama.bsp.BSPJobClient.submitJob(BSPJobClient.java:296)
        at org.apache.hama.bsp.BSPJob.submit(BSPJob.java:219)
        at org.apache.hama.graph.GraphJob.submit(GraphJob.java:208)
        at org.apache.hama.bsp.BSPJob.waitForCompletion(BSPJob.java:226)
        at de.rwthaachen.dbis.i5cloudmatch.controller.Matcher.main(Matcher.java:479)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
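If I read the message correctly, the partitioning step produced 52 splits from my single input file, while my cluster caps out at 20 tasks (4 grooms, so presumably 5 tasks each). A guard of roughly this shape seems to be rejecting the job (my own illustration of the check, not Hama's actual code):

```java
import java.io.IOException;

// Illustration only: the submit step appears to fail when the number
// of input splits exceeds the cluster-wide task capacity.
public class SplitCheck {

    // numSplits: input splits produced for the job (52 in my run)
    // maxTasks:  total task capacity of the cluster (20 in my run)
    static void checkSplits(int numSplits, int maxTasks) throws IOException {
        if (numSplits > maxTasks) {
            throw new IOException("Job failed! The number of splits has exceeded "
                    + "the number of max tasks. The number of splits: " + numSplits
                    + ", The number of max tasks: " + maxTasks);
        }
    }

    public static void main(String[] args) {
        try {
            checkSplits(52, 20);  // my case: 52 splits, 20 max tasks
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```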

Any advice on how to solve this problem?

Regards,
Ammar
