Re: exception

2006-04-26 Thread Doug Cutting
This is a Hadoop DFS error.  It could mean that you don't have any 
datanodes running, or that all your datanodes are full.  Or, it could be 
a bug in dfs.  You might try a recent nightly build of Hadoop to see if 
it works any better.


Doug
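
A quick way to check for either condition, sketched against a later Hadoop API (FileSystem.getStatus(); the 2006-era client API differed, and the class name below is made up for the example): zero capacity usually means no datanodes have registered with the namenode, and zero remaining space means the datanodes that did register are full.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;

public class DfsCapacityCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FsStatus status = fs.getStatus();
    // capacity == 0: no datanodes have registered with the namenode
    // remaining == 0: the registered datanodes are out of space
    System.out.println("capacity  = " + status.getCapacity());
    System.out.println("used      = " + status.getUsed());
    System.out.println("remaining = " + status.getRemaining());
  }
}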

Anton Potehin wrote:

What does an error of the following type mean:

java.rmi.RemoteException: java.io.IOException: Cannot obtain additional
block for file /user/root/crawl/indexes/index/_0.prx


RE: exception

2006-04-27 Thread anton
We updated Hadoop from the trunk branch, but now we get new errors:

On tasktracker side:

java.io.IOException: timed out waiting for response
at org.apache.hadoop.ipc.Client.call(Client.java:305)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:149)
at org.apache.hadoop.mapred.$Proxy0.pollForTaskWithClosedJob(Unknown Source)
at org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:310)
at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:374)
at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:813)
060427 062708 Client connection to 10.0.0.10:9001 caught: java.lang.RuntimeException: java.lang.ClassNotFoundException:
java.lang.RuntimeException: java.lang.ClassNotFoundException:
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:152)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:139)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:186)
at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:60)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:170)
060427 062708 Client connection to 10.0.0.10:9001: closing


On jobtracker side:

060427 061713 Server handler 3 on 9001 caught: java.lang.IllegalArgumentException: Argument is not an array
java.lang.IllegalArgumentException: Argument is not an array
at java.lang.reflect.Array.getLength(Native Method)
at org.apache.hadoop.io.ObjectWritable.writeObject(ObjectWritable.java:92)
at org.apache.hadoop.io.ObjectWritable.write(ObjectWritable.java:64)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:250)


-Original Message-
From: Doug Cutting [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 27, 2006 12:48 AM
To: nutch-dev@lucene.apache.org
Subject: Re: exception
Importance: High

This is a Hadoop DFS error.  It could mean that you don't have any 
datanodes running, or that all your datanodes are full.  Or, it could be 
a bug in dfs.  You might try a recent nightly build of Hadoop to see if 
it works any better.

Doug

Anton Potehin wrote:
> What does an error of the following type mean:
> 
> java.rmi.RemoteException: java.io.IOException: Cannot obtain additional
> block for file /user/root/crawl/indexes/index/_0.prx




Re: exception

2006-04-27 Thread Doug Cutting

[EMAIL PROTECTED] wrote:

We updated Hadoop from the trunk branch, but now we get new errors:


Oops.  Looks like I introduced a bug yesterday.  Let me fix it...

Sorry,

Doug


Re: exception in search.jsp

2010-02-15 Thread Sami Siren

Hi Jesse,

thanks for spotting this. I fixed the problem in trunk, see 
https://issues.apache.org/jira/browse/NUTCH-793


--
 Sami Siren

Jesse Hires wrote:

I am seeing the following and am unable to find any notes anywhere on it.

org.apache.jasper.JasperException: Unable to compile class for JSP: 


An error occurred at line: 207 in the jsp file: /search.jsp

query.getParams cannot be resolved or is not a field
204: // position this is good, bad?... ugly?
205:Hits hits;
206:try{
207:   query.getParams.initFrom(start + hitsToRetrieve, hitsPerSite, "site", sort, reverse);
208:  hits = bean.search(query);
209:} catch (IOException e){
210:  hits = new Hits(0,new Hit[0]);



It looks like this change came into SVN recently:

--- lucene/nutch/trunk/src/web/jsp/search.jsp   2009/10/09 17:02:32 823614
+++ lucene/nutch/trunk/src/web/jsp/search.jsp   2010/02/01 20:47:34 905410
@@ -204,8 +204,8 @@
 // position this is good, bad?... ugly?
    Hits hits;
    try{
- hits = bean.search(query, start + hitsToRetrieve, hitsPerSite, "site",
-sort, reverse);
+  query.getParams.initFrom(start + hitsToRetrieve, hitsPerSite, "site", sort, reverse);
+ hits = bean.search(query);
} catch (IOException e){
  hits = new Hits(0,new Hit[0]);
}


Has anyone else run into this, or did I miss something when updating to 
the latest version?


Jesse

int GetRandomNumber()
{
   return 4; // Chosen by fair roll of dice
// Guaranteed to be random
} // xkcd.com 
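
A hedged reading of the compile failure above (a guess, not necessarily what the committed NUTCH-793 fix looks like): if Query exposes getParams() as an accessor method, as the new code seems to intend, then the name is being referenced as a field instead of being called, and the minimal local correction to line 207 would be:

  query.getParams().initFrom(start + hitsToRetrieve, hitsPerSite, "site", sort, reverse);
  hits = bean.search(query);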





Re: Exception in NutchConfiguration class using java servlet

2008-11-27 Thread Fu Chen

Best Regards
Fu Chen
--- 
Inst.Service Science&Technology
Room 1-211, Future Internet Technology Research Center(FIT)
Tsinghua University, 100084, Beijing, China 
Tel: 86-10-62603217-823,86-13520253784(mobile)
E_Mail:[EMAIL PROTECTED];[EMAIL PROTECTED]
http://nmgroup.tsinghua.edu.cn/cn/people_asso.htm
- Original Message - 
From: "Doun" <[EMAIL PROTECTED]>
To: 
Sent: Friday, November 28, 2008 9:52 AM
Subject: Exception in NutchConfiguration class using java servlet


> 
> Hi,
> 
> I'm pretty new to Nutch. I've done the crawling and the indexing using a
> version already installed on a UNIX machine. I'm trying to develop a simple
> JSP page that queries the index and returns the results. For this, I'm
> following this tutorial:
> http://wiki.apache.org/nutch/JavaDemoApplication
> 
> However, I'm getting an exception while creating an instance of the
> configuration class:
> Configuration nutchConf = NutchConfiguration.create();
> the exception is:
> 
> Exception occurred in target VM: Could not initialize class
> org.apache.hadoop.conf.Configuration 
> java.lang.NoClassDefFoundError: Could not initialize class
> org.apache.hadoop.conf.Configuration
> at org.apache.nutch.util.NutchConfiguration.create(NutchConfiguration.java:51)
> at NewServlet.processRequest(NewServlet.java:70)
> at NewServlet.doPost(NewServlet.java:98)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at org.netbeans.modules.web.monitor.server.MonitorFilter.doFilter(MonitorFilter.java:390)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
> at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
> at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
> at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
> at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
> at java.lang.Thread.run(Thread.java:619)
> 
> I'm wondering if I'm missing a step of the tutorial, or is it because I'm
> using a Windows machine? I copied the index from the UNIX machine to the
> Windows machine and tried to develop an interface in JSP.
> I googled a lot, but I didn't find anything!
> -- 
> View this message in context: 
> http://www.nabble.com/Exception-in-NutchConfiguration-class-using-java-servlet-tp20727926p20727926.html
> Sent from the Nutch - Dev mailing list archive at Nabble.com.
> 
>
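
For what it's worth, the "Could not initialize class" form of NoClassDefFoundError generally means the class's static initializer already failed once (commonly because a required jar or config file is missing from the webapp classpath); every later use then gets this terse error instead of the original cause, which is usually logged earlier as an ExceptionInInitializerError. A small self-contained illustration, with hypothetical class and resource names, independent of Nutch and Hadoop:

public class StaticInitDemo {

  static class Broken {
    static final int VALUE;
    static {
      // Simulate a failing static initializer, e.g. a required resource
      // missing from the classpath (the file name here is hypothetical).
      if (Broken.class.getClassLoader().getResource("some-required-config.xml") == null) {
        throw new IllegalStateException("required config not found on classpath");
      }
      VALUE = 1;
    }
  }

  public static void main(String[] args) {
    try {
      System.out.println(Broken.VALUE);    // first use: ExceptionInInitializerError
    } catch (Throwable t) {
      System.out.println("first failure:  " + t);
    }
    try {
      System.out.println(Broken.VALUE);    // later uses: NoClassDefFoundError:
    } catch (Throwable t) {                //   "Could not initialize class ...Broken"
      System.out.println("second failure: " + t);
    }
  }
}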

TRUNK IllegalArgumentException: Argument is not an array (WAS: Re: exception)

2006-04-27 Thread Michael Stack
I'm getting the same as Anton below, trying to launch a new job with the 
latest from TRUNK.


Logic in ObjectWritable#readObject seems a little off.  On the way in 
we test for a null instance.  If null, we set it to NullWritable.


Next we test declaredClass to see if it's an array.  We then try to do an 
Array.getLength on instance -- which we've above set to NullWritable.


Looks like we should test instance to see if it's NullWritable before we 
do the Array.getLength (or do the instance null check later).


Hope above helps,
St.Ack
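
A minimal standalone sketch of the ordering described above (illustrative only, not the actual ObjectWritable source): substituting NullWritable for a null instance before the array branch runs hands Array.getLength() a non-array object and produces exactly this IllegalArgumentException; checking the array case first avoids it.

import java.lang.reflect.Array;

import org.apache.hadoop.io.NullWritable;

public class NullWritableOrderingDemo {

  // Problematic order: a null instance becomes NullWritable before the array check.
  static int lengthBuggy(Object instance, Class<?> declaredClass) {
    if (instance == null) {
      instance = NullWritable.get();
    }
    if (declaredClass.isArray()) {
      return Array.getLength(instance);   // IllegalArgumentException: not an array
    }
    return -1;
  }

  // Suggested order: handle the array case before falling back to NullWritable.
  static int lengthFixed(Object instance, Class<?> declaredClass) {
    if (declaredClass.isArray()) {
      return instance == null ? 0 : Array.getLength(instance);
    }
    return -1;
  }

  public static void main(String[] args) {
    System.out.println(lengthFixed(null, String[].class));   // prints 0
    System.out.println(lengthBuggy(null, String[].class));   // throws IllegalArgumentException
  }
}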



[EMAIL PROTECTED] wrote:

We updated Hadoop from the trunk branch, but now we get new errors:

On tasktracker side:

java.io.IOException: timed out waiting for response
at org.apache.hadoop.ipc.Client.call(Client.java:305)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:149)
at org.apache.hadoop.mapred.$Proxy0.pollForTaskWithClosedJob(Unknown Source)
at org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:310)
at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:374)
at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:813)
060427 062708 Client connection to 10.0.0.10:9001 caught: java.lang.RuntimeException: java.lang.ClassNotFoundException:
java.lang.RuntimeException: java.lang.ClassNotFoundException:
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:152)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:139)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:186)
at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:60)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:170)
060427 062708 Client connection to 10.0.0.10:9001: closing


On jobtracker side:

060427 061713 Server handler 3 on 9001 caught: java.lang.IllegalArgumentException: Argument is not an array
java.lang.IllegalArgumentException: Argument is not an array
at java.lang.reflect.Array.getLength(Native Method)
at org.apache.hadoop.io.ObjectWritable.writeObject(ObjectWritable.java:92)
at org.apache.hadoop.io.ObjectWritable.write(ObjectWritable.java:64)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:250)


-Original Message-
From: Doug Cutting [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 27, 2006 12:48 AM

To: nutch-dev@lucene.apache.org
Subject: Re: exception
Importance: High

This is a Hadoop DFS error.  It could mean that you don't have any 
datanodes running, or that all your datanodes are full.  Or, it could be 
a bug in dfs.  You might try a recent nightly build of Hadoop to see if 
it works any better.


Doug

Anton Potehin wrote:
  

What does an error of the following type mean:

java.rmi.RemoteException: java.io.IOException: Cannot obtain additional
block for file /user/root/crawl/indexes/index/_0.prx




Re: TRUNK IllegalArgumentException: Argument is not an array (WAS: Re: exception)

2006-04-27 Thread Doug Cutting

I just fixed this.  Sorry for the inconvenience!

Doug

Michael Stack wrote:
I'm getting the same as Anton below, trying to launch a new job with the 
latest from TRUNK.


Logic in ObjectWritable#readObject seems a little off.  On the way in 
we test for a null instance.  If null, we set it to NullWritable.


Next we test declaredClass to see if it's an array.  We then try to do an 
Array.getLength on instance -- which we've above set to NullWritable.


Looks like we should test instance to see if it's NullWritable before we 
do the Array.getLength (or do the instance null check later).


Hope above helps,
St.Ack



[EMAIL PROTECTED] wrote:


We updated Hadoop from the trunk branch, but now we get new errors:

On tasktracker side:

java.io.IOException: timed out waiting for response
at org.apache.hadoop.ipc.Client.call(Client.java:305)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:149)
at org.apache.hadoop.mapred.$Proxy0.pollForTaskWithClosedJob(Unknown Source)
at org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:310)
at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:374)
at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:813)
060427 062708 Client connection to 10.0.0.10:9001 caught: java.lang.RuntimeException: java.lang.ClassNotFoundException:
java.lang.RuntimeException: java.lang.ClassNotFoundException:
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:152)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:139)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:186)
at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:60)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:170)
060427 062708 Client connection to 10.0.0.10:9001: closing


On jobtracker side:

060427 061713 Server handler 3 on 9001 caught: java.lang.IllegalArgumentException: Argument is not an array
java.lang.IllegalArgumentException: Argument is not an array
at java.lang.reflect.Array.getLength(Native Method)
at org.apache.hadoop.io.ObjectWritable.writeObject(ObjectWritable.java:92)
at org.apache.hadoop.io.ObjectWritable.write(ObjectWritable.java:64)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:250)


-Original Message-
From: Doug Cutting [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 27, 2006 12:48 AM

To: nutch-dev@lucene.apache.org
Subject: Re: exception
Importance: High

This is a Hadoop DFS error.  It could mean that you don't have any 
datanodes running, or that all your datanodes are full.  Or, it could 
be a bug in dfs.  You might try a recent nightly build of Hadoop to 
see if it works any better.


Doug

Anton Potehin wrote:
 


What does an error of the following type mean:


java.rmi.RemoteException: java.io.IOException: Cannot obtain additional
block for file /user/root/crawl/indexes/index/_0.prx