Hi,
Can you please confirm that line 45 in ClosenessMessageWritable.java
is the following:

this.previous_bitstring[i] = in.readBoolean();


If it is, then try the following; hopefully it will fix your problem.



@Override
public void readFields(DataInput in) throws IOException {
    // Read the length first and allocate the array before reading the
    // booleans, so previous_bitstring is not null inside the loop.
    size = in.readInt();
    previous_bitstring = new boolean[size];
    for (int i = 0; i < size; i++) {
        previous_bitstring[i] = in.readBoolean();
    }
}
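
For completeness, write() has to mirror this read order exactly: length
first, then the booleans. A minimal sketch, assuming the field is named
previous_bitstring as above (adjust to your actual field names):

@Override
public void write(DataOutput out) throws IOException {
    // Write the length first so readFields() knows how many booleans
    // to allocate and read back, in the same order.
    out.writeInt(previous_bitstring.length);
    for (boolean b : previous_bitstring) {
        out.writeBoolean(b);
    }
}

If the two methods don't agree on this order, the reader consumes the
stream at the wrong offsets and fails in exactly this kind of way.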



On Nov 20, 2013 3:26 AM, "Jyoti Yadav" <rao.jyoti26ya...@gmail.com> wrote:

> 2013-11-20 15:46:09,202 ERROR
> org.apache.giraph.utils.LogStacktraceCallable: Execution of callable failed
>
> *java.lang.NullPointerException    at
> org.apache.giraph.examples.ClosenessMessageWritable.readFields(ClosenessMessageWritable.java:45)*
>     at
> org.apache.giraph.utils.ByteArrayVertexIdMessages.readData(ByteArrayVertexIdMessages.java:77)
>     at
> org.apache.giraph.utils.ByteArrayVertexIdMessages.readData(ByteArrayVertexIdMessages.java:34)
>     at
> org.apache.giraph.utils.ByteArrayVertexIdData$VertexIdDataIterator.next(ByteArrayVertexIdData.java:221)
>     at
> org.apache.giraph.comm.messages.primitives.LongByteArrayMessageStore.addPartitionMessages(LongByteArrayMessageStore.java:149)
>     at
> org.apache.giraph.comm.requests.SendWorkerMessagesRequest.doRequest(SendWorkerMessagesRequest.java:73)
>     at
> org.apache.giraph.comm.netty.NettyWorkerClientRequestProcessor.doRequest(NettyWorkerClientRequestProcessor.java:482)
>     at
> org.apache.giraph.comm.SendMessageCache.flush(SendMessageCache.java:247)
>     at
> org.apache.giraph.comm.netty.NettyWorkerClientRequestProcessor.flush(NettyWorkerClientRequestProcessor.java:415)
>     at
> org.apache.giraph.graph.ComputeCallable.call(ComputeCallable.java:203)
>     at
> org.apache.giraph.graph.ComputeCallable.call(ComputeCallable.java:70)
>     at
> org.apache.giraph.utils.LogStacktraceCallable.call(LogStacktraceCallable.java:51)
>     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:724)
> 2013-11-20 15:46:09,204 ERROR org.apache.giraph.worker.BspServiceWorker:
> unregisterHealth: Got failure, unregistering health on
> /_hadoopBsp/job_201311200901_0018/_applicationAttemptsDir/0/_superstepDir/0/_workerHealthyDir/kanha-Vostro-1014_1
> on superstep 0
> 2013-11-20 15:46:09,261 INFO org.apache.hadoop.mapred.TaskLogsTruncater:
> Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
> 2013-11-20 15:46:09,348 INFO org.apache.hadoop.io.nativeio.NativeIO:
> Initialized cache for UID to User mapping with a cache timeout of 14400
> seconds.
> 2013-11-20 15:46:09,348 INFO org.apache.hadoop.io.nativeio.NativeIO: Got
> UserName hduser for UID 1001 from the native implementation
> 2013-11-20 15:46:09,351 WARN org.apache.hadoop.mapred.Child: Error running
> child
> java.lang.IllegalStateException: run: Caught an unrecoverable exception
> waitFor: ExecutionException occurred while waiting for
> org.apache.giraph.utils.ProgressableUtils$FutureWaitable@16ea72c
>     at org.apache.giraph.graph.GraphMapper.run(GraphMapper.java:101)
>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:369)
>     at org.apache.hadoop.mapred.Child$4.run(Child.java:259)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>     at org.apache.hadoop.mapred.Child.main(Child.java:253)
> Caused by: java.lang.IllegalStateException: waitFor: ExecutionException
> occurred while waiting for
> org.apache.giraph.utils.ProgressableUtils$FutureWaitable@16ea72c
>     at
> org.apache.giraph.utils.ProgressableUtils.waitFor(ProgressableUtils.java:181)
>     at
> org.apache.giraph.utils.ProgressableUtils.waitForever(ProgressableUtils.java:139)
>     at
> org.apache.giraph.utils.ProgressableUtils.waitForever(ProgressableUtils.java:124)
>     at
> org.apache.giraph.utils.ProgressableUtils.getFutureResult(ProgressableUtils.java:87)
>     at
> org.apache.giraph.utils.ProgressableUtils.getResultsWithNCallables(ProgressableUtils.java:221)
>     at
> org.apache.giraph.graph.GraphTaskManager.processGraphPartitions(GraphTaskManager.java:741)
>     at
> org.apache.giraph.graph.GraphTaskManager.execute(GraphTaskManager.java:286)
>     at org.apache.giraph.graph.GraphMapper.run(GraphMapper.java:91)
>     ... 7 more
> Caused by: java.util.concurrent.ExecutionException:
> java.lang.NullPointerException
>     at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:262)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:119)
>     at
> org.apache.giraph.utils.ProgressableUtils$FutureWaitable.waitFor(ProgressableUtils.java:300)
>     at
> org.apache.giraph.utils.ProgressableUtils.waitFor(ProgressableUtils.java:173)
>     ... 14 more
> Caused by: java.lang.NullPointerException
>     at
> org.apache.giraph.examples.ClosenessMessageWritable.readFields(ClosenessMessageWritable.java:45)
>     at
> org.apache.giraph.utils.ByteArrayVertexIdMessages.readData(ByteArrayVertexIdMessages.java:77)
>     at
> org.apache.giraph.utils.ByteArrayVertexIdMessages.readData(ByteArrayVertexIdMessages.java:34)
>     at
> org.apache.giraph.utils.ByteArrayVertexIdData$VertexIdDataIterator.next(ByteArrayVertexIdData.java:221)
>     at
> org.apache.giraph.comm.messages.primitives.LongByteArrayMessageStore.addPartitionMessages(LongByteArrayMessageStore.java:149)
>     at
> org.apache.giraph.comm.requests.SendWorkerMessagesRequest.doRequest(SendWorkerMessagesRequest.java:73)
>     at
> org.apache.giraph.comm.netty.NettyWorkerClientRequestProcessor.doRequest(NettyWorkerClientRequestProcessor.java:482)
>     at
> org.apache.giraph.comm.SendMessageCache.flush(SendMessageCache.java:247)
>     at
> org.apache.giraph.comm.netty.NettyWorkerClientRequestProcessor.flush(NettyWorkerClientRequestProcessor.java:415)
>     at
> org.apache.giraph.graph.ComputeCallable.call(ComputeCallable.java:203)
>     at
> org.apache.giraph.graph.ComputeCallable.call(ComputeCallable.java:70)
>     at
> org.apache.giraph.utils.LogStacktraceCallable.call(LogStacktraceCallable.java:51)
>     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:724)
> 2013-11-20 15:46:09,356 INFO org.apache.hadoop.mapred.Task: Runnning
> cleanup for the task
> 2013-11-20 15:46:09,408 WARN org.apache.hadoop.conf.Configuration:
> /app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201311200901_0018/job.xml:a
> attempt to override final parameter: dfs.data.dir;  Ignoring.
> 2013-11-20 15:46:09,412 WARN org.apache.hadoop.conf.Configuration:
> /app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201311200901_0018/job.xml:a
> attempt to override final parameter: dfs.name.dir;  Ignoring.
>
>
> On Wed, Nov 20, 2013 at 4:52 PM, Sebastian Schelter <
> ssc.o...@googlemail.com> wrote:
>
>> What is the exact error message?
>>
>> On 20.11.2013 12:11, Jyoti Yadav wrote:
>> > Thanks, Sir, for your suggestions.
>> > I looked into the log files.
>> > They show an error at the readFields() method in
>> > ClosenessMessageWritable.java. Could you please have another look at
>> > whether the boolean array is serialized properly?
>> >
>> > I am doubtful about these serialization functions.
>> >
>> > Thanks.
>> >
>> >
>> > On Wed, Nov 20, 2013 at 4:16 PM, Sebastian Schelter <
>> ssc.o...@googlemail.com
>> >> wrote:
>> >
>> >> It says "Failed map tasks=1"; you should take a deeper look into the
>> >> log files or the Hadoop web console to find out why the map task
>> >> fails.
>> >>
>> >> On 20.11.2013 11:44, Jyoti Yadav wrote:
>> >>> 13/11/20 15:56:28 INFO mapred.JobClient:     Failed map tasks=1
>> >>
>> >>
>> >
>>
>>
>
