This is one of the reasons we set up edge nodes in the cluster. An edge node is a machine where the Hadoop software and configuration are installed but none of the Hadoop services are running. This allows jobs submitted from that node to automatically pick up the right Hadoop configuration and point to the right cluster.
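Concretely, "the right configuration" means the *-site.xml files in the edge node's conf directory point at the cluster rather than at localhost. A minimal sketch of the two relevant entries (the host name and ports are illustrative, borrowed from later in this thread; substitute your own):

    <!-- $HADOOP_HOME/conf/core-site.xml on the edge node -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <!-- The cluster's NameNode, not localhost -->
        <value>hdfs://devubuntu05:9000</value>
      </property>
    </configuration>

    <!-- $HADOOP_HOME/conf/mapred-site.xml on the edge node -->
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <!-- The JobTracker address; the default "local" uses the in-process LocalJobRunner -->
        <value>devubuntu05:9001</value>
      </property>
    </configuration>

A JobConf built on such a node picks these values up automatically, with no cluster-specific code in the job itself.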
The edge nodes are used for staging jobs and importing data into the cluster. You might also run the MySQL data store there, as well as Hive and Pig jobs.

HTH

Sent from a remote device. Please excuse any typos...

Mike Segel

On Apr 28, 2013, at 4:26 PM, Kevin Burton <rkevinbur...@charter.net> wrote:

> Part of the problem is that nothing comes up on port 50030. 50070 yes, but 50030 no.
>
> On Apr 28, 2013, at 12:04 PM, shashwat shriparv <dwivedishash...@gmail.com> wrote:
>
>> Check namenode:50030. If the job appears there, it is not running in local mode; otherwise it is.
>>
>> Thanks & Regards
>> ∞
>> Shashwat Shriparv
>>
>> On Sun, Apr 28, 2013 at 1:18 AM, sudhakara st <sudhakara...@gmail.com> wrote:
>>> Hello Kevin,
>>>
>>> In this case:
>>>
>>> JobClient client = new JobClient();
>>> JobConf conf = new JobConf(WordCount.class);
>>>
>>> the job client (which defaults to the local system) picks up its configuration by referring to HADOOP_HOME on the local system.
>>>
>>> If your job configuration looks like this:
>>>
>>> Configuration conf = new Configuration();
>>> conf.set("fs.default.name", "hdfs://name_node:9000");
>>> conf.set("mapred.job.tracker", "job_tracker_node:9001");
>>>
>>> the job points to the specified NameNode and JobTracker instead.
>>>
>>> Regards,
>>> Sudhakara.st
>>>
>>> On Sat, Apr 27, 2013 at 2:52 AM, Kevin Burton <rkevinbur...@charter.net> wrote:
>>>> It is hdfs://devubuntu05:9000. Is this wrong? devubuntu05 is the name of the host where the NameNode and JobTracker should be running. It is also the host where I am running the M/R client code.
>>>>
>>>> On Apr 26, 2013, at 4:06 PM, Rishi Yadav <ri...@infoobjects.com> wrote:
>>>>
>>>>> Check core-site.xml and see the value of fs.default.name. If it has localhost, you are running locally.
>>>>>
>>>>> On Fri, Apr 26, 2013 at 1:59 PM, <rkevinbur...@charter.net> wrote:
>>>>>> I suspect that my MapReduce job is being run locally. I don't have any evidence, but I am not sure how the specifics of my configuration are communicated to the Java code that I write. Based on the text that I have read online, I basically start with code like:
>>>>>>
>>>>>> JobClient client = new JobClient();
>>>>>> JobConf conf = new JobConf(WordCount.class);
>>>>>> . . . . .
>>>>>>
>>>>>> Where do I communicate the configuration information so that the M/R job runs on the cluster and not locally? Or is the configuration location "magically determined"?
>>>>>>
>>>>>> Thank you.
>>>
>>> --
>>>
>>> Regards,
>>> ..... Sudhakara.st
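Pulling the advice in this thread together: below is a minimal, self-contained driver using the same old mapred API as the snippets above, with the cluster addresses set explicitly in code. This is a sketch, not Kevin's actual job; devubuntu05 and ports 9000/9001 are taken from the messages above, and the rest is illustrative.

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCount {

    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            // Emit (word, 1) for every token in the input line.
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                output.collect(word, ONE);
            }
        }
    }

    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            // Sum the counts emitted for each word.
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        // Point the client at the cluster instead of the local runner.
        // Host and ports are the ones mentioned in this thread; use your own.
        conf.set("fs.default.name", "hdfs://devubuntu05:9000");
        conf.set("mapred.job.tracker", "devubuntu05:9001");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reduce.class);

        // HDFS input and output paths, passed on the command line.
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}

Note that the web UI on port 50030 is served by the JobTracker daemon itself, so if nothing at all comes up on that port, the JobTracker is probably not running on that host. Separately, a client whose mapred.job.tracker is unset (or set to "local") falls back to the LocalJobRunner, and such jobs never appear in the 50030 UI at all.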