I'm getting the same error as Anton below when trying to launch a new job with the latest from TRUNK.

Logic in ObjectWritable#writeObject seems a little off. On the way in we test for a null instance; if it is null, we set it to NullWritable.

Next we test declaredClass to see if it's an array. We then try to do an Array.getLength on instance -- which we've just set to NullWritable above.

Looks like we should test whether instance is NullWritable before we do the Array.getLength (or do the instance null check later).
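
For what it's worth, here's a rough sketch of the ordering I mean. This is simplified and from memory, not the actual ObjectWritable source; the class name, method signature and comments are just mine for illustration:

import java.io.DataOutput;
import java.io.IOException;
import java.lang.reflect.Array;

import org.apache.hadoop.io.NullWritable;

public class ObjectWritableSketch {

  public static void writeObject(DataOutput out, Object instance,
                                 Class declaredClass) throws IOException {
    if (instance == null) {             // null instance is swapped for the
      instance = NullWritable.get();    // NullWritable singleton...
    }

    if (declaredClass.isArray()) {
      // ...but declaredClass can still name an array type, so this call sees
      // a NullWritable rather than an array and throws
      // IllegalArgumentException: Argument is not an array.
      int length = Array.getLength(instance);
      // write length and each element here...
    }
    // other declared types handled below...
  }
}

Guarding the array branch, e.g. declaredClass.isArray() && !(instance instanceof NullWritable), or doing the null-instance substitution after the array test, should keep Array.getLength from ever seeing the placeholder.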

Hope above helps,
St.Ack



[EMAIL PROTECTED] wrote:
We updated Hadoop from the trunk branch, but now we get new errors:

On the tasktracker side:
<skipped>
java.io.IOException: timed out waiting for response
        at org.apache.hadoop.ipc.Client.call(Client.java:305)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:149)
        at org.apache.hadoop.mapred.$Proxy0.pollForTaskWithClosedJob(Unknown Source)
        at org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:310)
        at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:374)
        at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:813)
060427 062708 Client connection to 10.0.0.10:9001 caught: java.lang.RuntimeException: java.lang.ClassNotFoundException:
java.lang.RuntimeException: java.lang.ClassNotFoundException:
        at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:152)
        at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:139)
        at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:186)
        at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:60)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:170)
060427 062708 Client connection to 10.0.0.10:9001: closing


On the jobtracker side:
<skipped>
060427 061713 Server handler 3 on 9001 caught: java.lang.IllegalArgumentException: Argument is not an array
java.lang.IllegalArgumentException: Argument is not an array
        at java.lang.reflect.Array.getLength(Native Method)
        at org.apache.hadoop.io.ObjectWritable.writeObject(ObjectWritable.java:92)
        at org.apache.hadoop.io.ObjectWritable.write(ObjectWritable.java:64)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:250)
<skipped>

-----Original Message-----
From: Doug Cutting [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 27, 2006 12:48 AM
To: nutch-dev@lucene.apache.org
Subject: Re: exception
Importance: High

This is a Hadoop DFS error. It could mean that you don't have any datanodes running, or that all your datanodes are full. Or, it could be a bug in dfs. You might try a recent nightly build of Hadoop to see if it works any better.

Doug

Anton Potehin wrote:
What does an error of the following type mean:

java.rmi.RemoteException: java.io.IOException: Cannot obtain additional block for file /user/root/crawl/indexes/index/_0.prx




