Running Hadoop pseudo-distributed examples failed

2011-05-18 Thread 李�S
Hi All, I'm trying to run the Hadoop (0.20.2) examples in Pseudo-Distributed Mode, following the Hadoop user guide. After I run 'start-all.sh', it seems the namenode can't connect to the datanode. 'ssh localhost' works fine on my server. Someone advised removing '/tmp/hadoop-' and formatting the namenode again,

RE: Running M/R jobs from java code

2011-05-18 Thread Aaron Baff
Geoffry, basically it's replicating what you do in the main() method, and then just making sure you give it a Configuration (or get one via Job.getConfiguration()) with those parameters. I forget which ones are for the old API and which are for the new, but I just set both, just in case. See h

Re: Running M/R jobs from java code

2011-05-18 Thread Geoffry Roberts
Aaron, I didn't know one could do this, thanks. I'll give it a try. On 18 May 2011 10:18, Aaron Baff wrote: > It's not terribly hard to submit MR jobs. Create a Hadoop Configuration > object, and set its fs.default.name and fs.defaultFS to the Namenode URI, > and mapreduce.jobtracker.address a

RE: Running M/R jobs from java code

2011-05-18 Thread Aaron Baff
It's not terribly hard to submit MR jobs. Create a Hadoop Configuration object, and set its fs.default.name and fs.defaultFS to the Namenode URI, and mapreduce.jobtracker.address and mapred.job.tracker to the JobTracker URI. You can then easily set up and use a Job object (new API), or JobConf
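
A minimal sketch of the recipe Aaron describes, assuming Hadoop 0.20's new (mapreduce) API; the host names, ports, and HDFS paths below are placeholders, not values from the thread:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class RemoteJobSubmitter {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Point the client at the cluster. Set both the old and the new
            // property names, as Aaron suggests, since setting both is harmless.
            conf.set("fs.default.name", "hdfs://namenode-host:9000");         // old key
            conf.set("fs.defaultFS", "hdfs://namenode-host:9000");            // new key
            conf.set("mapred.job.tracker", "jobtracker-host:9001");           // old key
            conf.set("mapreduce.jobtracker.address", "jobtracker-host:9001"); // new key

            Job job = new Job(conf, "remote-submit-example");
            job.setJarByClass(RemoteJobSubmitter.class);
            // A real job would also set its mapper, reducer, and key/value
            // classes here; left out, the identity defaults apply.
            FileInputFormat.addInputPath(job, new Path("/user/example/input"));
            FileOutputFormat.setOutputPath(job, new Path("/user/example/output"));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }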

Re: Running M/R jobs from java code

2011-05-18 Thread Joey Echeverria
Just last week I worked on a REST interface hosted in Tomcat that launched an MR job. In my case, I included the jar with the job in the WAR and called the run() method (the job implemented Tool). The only tricky part was that a copy of the Hadoop configuration files needed to be on the classpath, but I j
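
A rough sketch of that arrangement, with a hypothetical class name; it assumes the job class bundled in the WAR implements Tool, and that core-site.xml and mapred-site.xml sit on the webapp's classpath so new Configuration() picks them up:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    // Hypothetical job class shipped inside the WAR.
    public class MyJob extends Configured implements Tool {
        @Override
        public int run(String[] args) throws Exception {
            // Build the Job from getConf() and submit it here.
            return 0;
        }

        // What a REST handler would call: ToolRunner injects the Configuration
        // (read from the config files on the classpath) into the Tool before
        // invoking run().
        public static int launch(String[] args) throws Exception {
            return ToolRunner.run(new Configuration(), new MyJob(), args);
        }
    }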

Re: Running M/R jobs from java code

2011-05-18 Thread Geoffry Roberts
I am confronted with the same problem. What I plan to do is have a servlet simply execute a command on the machine from which I would start the job if I were running it from the command line, e.g. $ ssh '/bin/hadoop jar myjob.jar'. Another possibility would be to rig up some kind of RMI thing.
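
A sketch of that shell-out variant, assuming passwordless ssh to a hypothetical 'hadoop-host' (the command in the message elides the host) and a jar already present on that machine:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class SshJobLauncher {
        public static int launchViaSsh() throws Exception {
            // Run the job on the remote machine exactly as one would by hand.
            ProcessBuilder pb = new ProcessBuilder(
                    "ssh", "hadoop-host", "/bin/hadoop jar myjob.jar");
            pb.redirectErrorStream(true);          // merge stderr into stdout
            Process p = pb.start();
            BufferedReader out = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = out.readLine()) != null) {
                System.out.println(line);          // relay hadoop's console output
            }
            return p.waitFor();                    // non-zero exit means failure
        }
    }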

Re: Running M/R jobs from java code

2011-05-18 Thread Lior Schachter
Another machine in the cluster. On Wed, May 18, 2011 at 6:05 PM, Geoffry Roberts wrote: > Is Tomcat installed on your Hadoop name node, or on another machine? > > > On 18 May 2011 07:58, Lior Schachter wrote: >> Hi, >> I have my application installed on Tomcat and I wish to submit M/R jobs >> pro

Re: Running M/R jobs from java code

2011-05-18 Thread Geoffry Roberts
Is Tomcat installed on your Hadoop name node, or on another machine? On 18 May 2011 07:58, Lior Schachter wrote: > Hi, > I have my application installed on Tomcat and I wish to submit M/R jobs > programmatically. > Is there any standard way to do that? > > Thanks, > Lior > -- Geoffry Roberts

Running M/R jobs from java code

2011-05-18 Thread Lior Schachter
Hi, I have my application installed on Tomcat and I wish to submit M/R jobs programmatically. Is there any standard way to do that? Thanks, Lior

Re: Job works only when TaskTracker & JobTracker on the same machine

2011-05-18 Thread Lucian Iordache
Hi Todd, I think the cluster is well configured; HDFS and HBase work fine in distributed mode (several datanodes and regionservers are started and work correctly). All the slaves are present in the ../conf/slaves file, and the jobtracker host and port are correctly set in mapred-site on the TaskTrac
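
One quick way to chase that kind of mismatch, as a sketch assuming the 0.20-era mapred API: run a small config dump on the TaskTracker machine and confirm which JobTracker address its classpath configuration actually resolves to:

    import org.apache.hadoop.mapred.JobConf;

    public class ConfCheck {
        public static void main(String[] args) {
            // JobConf loads mapred-default.xml and mapred-site.xml from the classpath.
            JobConf conf = new JobConf();
            System.out.println("fs.default.name    = " + conf.get("fs.default.name"));
            System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker"));
        }
    }

If this prints 'local' for mapred.job.tracker, that machine is falling back to the default and never contacting the JobTracker at all.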