Re: NullPointerException on namenode

2011-02-23 Thread Ravi .
You need to dig further into the NameNode logs, and also make sure that your edit log and fsimage are not corrupted. Use the backup fsimage & edit log from the Secondary NameNode to restart the cluster and see if the problem persists. Looking at the stack trace, I have a feeling that it's due to a corrupt file/dir.

Re: is there a smarter way to execute a Hadoop cluster?

2011-02-23 Thread Harsh J
Hello, On Thu, Feb 24, 2011 at 12:25 PM, Jun Young Kim wrote: > Hi, > I executed my cluster this way: > > calling a command in the shell directly. What are you doing within your testCluster.jar? If you are simply submitting a job, you can use a Driver method and get rid of all these hassles. JobClient…
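
For reference, a minimal driver along the lines Harsh suggests might look like the sketch below. The class and job names are hypothetical, mapper/reducer setup is elided, and it assumes the old org.apache.hadoop.mapred API that shipped with 0.20/0.21.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    // Hypothetical driver: submits the job in-process instead of
    // shelling out to "bin/hadoop jar".
    public class ExampleDriver extends Configured implements Tool {
      @Override
      public int run(String[] args) throws Exception {
        JobConf job = new JobConf(getConf(), ExampleDriver.class);
        job.setJobName("example");
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Mapper/Reducer classes would be configured here.
        JobClient.runJob(job);   // blocks until the job completes
        return 0;
      }

      public static void main(String[] args) throws Exception {
        // ToolRunner parses the generic options (-D, -fs, -jt, ...) for free.
        System.exit(ToolRunner.run(new Configuration(), new ExampleDriver(), args));
      }
    }

This keeps job submission inside the JVM, so there is no child process to babysit and no output streams to drain.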

is there a smarter way to execute a Hadoop cluster?

2011-02-23 Thread Jun Young Kim
Hi, I executed my cluster this way: calling a command in the shell directly. String runInCommand = "/opt/hadoop-0.21.0/bin/hadoop jar testCluster.jar example"; Process proc = Runtime.getRuntime().exec(runInCommand); proc.waitFor(); BufferedReader in = new BufferedReader(new InputStreamReader(proc.getInputStream()));…
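
One pitfall in this shell-out approach: waitFor() is called before the child's output is drained, so the call can hang once the stdout buffer fills. A sketch of the same idea with the read moved before waitFor() (not the poster's full code; names are illustrative):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class RunJar {
      public static void main(String[] args) throws Exception {
        String runInCommand =
            "/opt/hadoop-0.21.0/bin/hadoop jar testCluster.jar example";
        // Merge stderr into stdout so a single reader drains both.
        ProcessBuilder pb = new ProcessBuilder(runInCommand.split("\\s+"));
        pb.redirectErrorStream(true);
        Process proc = pb.start();
        BufferedReader in =
            new BufferedReader(new InputStreamReader(proc.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
          System.out.println(line);       // drain output before waiting
        }
        int exitCode = proc.waitFor();    // safe now: streams are drained
        System.out.println("hadoop exited with " + exitCode);
      }
    }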

Re: Current available Memory

2011-02-23 Thread Yang Xiaoliang
I had also encountered the same problem a few days ago. Does anyone have another method? 2011/2/24 maha > Based on the Java function documentation, it gives approximately the > available memory, so I need to tweak it with other functions. > So it's a Java issue, not Hadoop. > > Thanks anyway, > Maha

Re: Library Issues

2011-02-23 Thread Harsh J
Hey, On Thu, Feb 24, 2011 at 10:13 AM, Adarsh Sharma wrote: > Dear all, > > I am confused about the concepts used while running map-reduce jobs in a > Hadoop cluster. > I attached a program that is used to run in a Hadoop cluster. Please find the > attachment. > My PATH variable shows that it includes…

Library Issues

2011-02-23 Thread Adarsh Sharma
Dear all, I am confused about the concepts used while running map-reduce jobs in a Hadoop cluster. I attached a program that is used to run in a Hadoop cluster. Please find the attachment. I used to run this program successfully through the below command in the /home/hadoop/project/hadoop-0.20.2 directory…

Re: Current available Memory

2011-02-23 Thread maha
Based on the Java function documentation, it gives approximately the available memory, so I need to tweak it with other functions. So it's a Java issue, not Hadoop. Thanks anyway, Maha On Feb 23, 2011, at 6:31 PM, maha wrote: > Hello Everyone, > > I'm using "Runtime.getRuntime().freeMemory()"…
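
For what it's worth, freeMemory() only describes the heap as currently sized, and the JVM grows the heap on demand, so before/after readings jump around. A common workaround is to hint a collection and compute used memory from totalMemory(); a rough sketch, not a precise instrument:

    public class MemProbe {
      // Approximate bytes in use; System.gc() is only a hint, so treat
      // the result as a rough estimate, not an exact measurement.
      static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        System.gc();
        return rt.totalMemory() - rt.freeMemory();
      }

      public static void main(String[] args) {
        long before = usedMemory();
        byte[] payload = new byte[16 * 1024 * 1024];   // the object under test
        long after = usedMemory();
        System.out.println("delta ~ " + (after - before) + " bytes");
        System.out.println(payload.length);            // keep 'payload' reachable
      }
    }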

Current available Memory

2011-02-23 Thread maha
Hello Everyone, I'm using "Runtime.getRuntime().freeMemory()" to see the memory currently available before and after creating an object, but this doesn't seem to work well with Hadoop. Why? And is there an alternative? Thank you, Maha

March 2011 San Francisco Hadoop User Meetup ("integration")

2011-02-23 Thread Aaron Kimball
Hadoop fans, I'm pleased to announce that the third SF Hadoop meetup will be held Wednesday, March 9th, from 6pm to 8pm. (We will hopefully continue using the 2nd Wednesday of the month for successive meetups.) This meetup will be hosted by the good folks at Yelp. Their office is at 706 Mission S…

Is it possible to reset task failure counts to 0 without restarting the JT, TT, or Mapred?

2011-02-23 Thread Ravi .
I would like to reset the failure count of a TaskTracker to 0. The JobTracker maintains a task-failure count for each TaskTracker, and this count is used in blacklisting TaskTrackers. In a small cluster, if I restart the corresponding TaskTracker there will be an inconsistent state, and other TaskTrackers will have…

Re: Hadoop issue on 64-bit Ubuntu. Native Libraries.

2011-02-23 Thread Todd Lipcon
On Wed, Feb 23, 2011 at 11:21 AM, Todd Lipcon wrote: > Hi Ajay, > > Hadoop should ship with built artifacts for amd64 in > the lib/native/Linux-amd64-64/ subdirectory of your tarball. You just need > to put this directory on your java.library.path system property. > > -Todd > > You need to run "ant…

Re: Hadoop issue on 64-bit Ubuntu. Native Libraries.

2011-02-23 Thread Todd Lipcon
Hi Ajay, Hadoop should ship with built artifacts for amd64 in the lib/native/Linux-amd64-64/ subdirectory of your tarball. You just need to put this directory on your java.library.path system property. -Todd You need to run "ant -Dcompile.native=1 compile-native" from… On Tue, Feb 22, 2011 at 9:1…
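
As a quick sanity check, Hadoop's NativeCodeLoader (the class that logs the "Unable to load native-hadoop library" warning) can report whether the native library was actually picked up; a minimal sketch:

    import org.apache.hadoop.util.NativeCodeLoader;

    public class NativeCheck {
      public static void main(String[] args) {
        // Where the JVM looks for libhadoop.so; this should include
        // lib/native/Linux-amd64-64 on a 64-bit box.
        System.out.println("java.library.path = "
            + System.getProperty("java.library.path"));
        System.out.println("native hadoop loaded: "
            + NativeCodeLoader.isNativeCodeLoaded());
      }
    }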

Problem in using Trash

2011-02-23 Thread 안의건
Hello everybody, I have a problem with enabling Trash on the running system. To my knowledge, the jar file is created on the namenode and sent to each datanode during the MapReduce process, and the jar file is removed automatically after the MapReduce process is completed. But after I enabled the…
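
Note that Trash is a separate feature from job-jar distribution: it only holds files deleted through trash-aware paths, and it is off unless fs.trash.interval is nonzero. A hedged sketch of moving a path to trash programmatically, assuming the 0.20/0.21-era org.apache.hadoop.fs.Trash API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.Trash;

    public class TrashDelete {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Trash is enabled by a nonzero fs.trash.interval (in minutes).
        conf.setLong("fs.trash.interval", 60);
        Trash trash = new Trash(conf);
        boolean moved = trash.moveToTrash(new Path(args[0]));
        System.out.println(moved ? "moved to trash" : "not trashed (deleted or disabled)");
      }
    }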

NullPointerException on namenode

2011-02-23 Thread gmane.org
I restarted the cluster after the server was heavily overloaded by other tasks, and now I get this:

2011-02-23 08:36:18,307 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1088) …

Hadoop issue on 64-bit Ubuntu. Native Libraries.

2011-02-23 Thread Ajay Anandan
Hi, I am using the k-means clustering in Mahout. It ran fine on my 32-bit machine, but when I try to run it on another 64-bit machine I get the following error: org.apache.hadoop.util.NativeCodeLoader WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable…