Hi Bill,

This means something else is already listening on port 9001 on your machine. Do 
you have another JobTracker running, or HDFS configured on port 9001? You could 
also try a different port.
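
For reference, moving the JobTracker to another port is a one-line change in 
mapred-site.xml. The snippet below is only a sketch; port 9002 is an arbitrary 
example, and any free port works:

```xml
<!-- mapred-site.xml: pick an unused port for the JobTracker RPC address.
     Port 9002 below is only an example; substitute any free port. -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9002</value>
</property>
```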

Matei


On Jun 15, 2011, at 3:53 PM, Bill Zhao wrote:

> ---------- Forwarded message ----------
> From: Bill Zhao <[email protected]>
> Date: Wed, Jun 15, 2011 at 3:52 PM
> Subject: Re: hadoop on Mesos
> To: Charles Reiss <[email protected]>
> Cc: [email protected]
> 
> 
> After uncommenting the mapred.job.tracker config in mapred-site.xml, I
> got this error:
> 
> billz@dhcp-44-175: ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop jobtracker
> 11/06/15 15:45:16 INFO mapred.JobTracker: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting JobTracker
> STARTUP_MSG:   host = dhcp-44-175.eecs.berkeley.edu/128.32.44.175
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.3-dev
> STARTUP_MSG:   build =  -r ; compiled by 'billz' on Wed Jun 15 14:39:40 PDT 
> 2011
> ************************************************************/
> 11/06/15 15:45:16 INFO mapred.JobTracker: Scheduler configured with
> (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
> limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
> 11/06/15 15:45:16 FATAL mapred.JobTracker: java.net.BindException:
> Problem binding to localhost/127.0.0.1:9001 : Address already in use
>        at org.apache.hadoop.ipc.Server.bind(Server.java:190)
>        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:253)
>        at org.apache.hadoop.ipc.Server.<init>(Server.java:1026)
>        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:488)
>        at org.apache.hadoop.ipc.RPC.getServer(RPC.java:450)
>        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1595)
>        at 
> org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:183)
>        at 
> org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:175)
>        at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:3702)
> Caused by: java.net.BindException: Address already in use
>        at sun.nio.ch.Net.bind(Native Method)
>        at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
>        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>        at org.apache.hadoop.ipc.Server.bind(Server.java:188)
>        ... 8 more
> 
> 11/06/15 15:45:16 INFO mapred.JobTracker: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down JobTracker at
> dhcp-44-175.eecs.berkeley.edu/128.32.44.175
> ************************************************************/
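
(The BindException above is the generic two-listeners-one-port failure, not 
anything Hadoop-specific. A minimal Python sketch, unrelated to Hadoop itself, 
reproduces the same OS-level error:

```python
import errno
import socket

# Reproduce the JobTracker failure in miniature: two sockets cannot bind
# the same address/port, which is what the stack trace reports for
# 127.0.0.1:9001.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))   # let the OS pick a free port
port = first.getsockname()[1]
first.listen(1)

conflict = False
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))  # same port -> "Address already in use"
except OSError as e:
    conflict = (e.errno == errno.EADDRINUSE)
finally:
    second.close()
    first.close()

print("address already in use:", conflict)
```

Finding and stopping whatever already holds the port, or choosing a new one, 
resolves it.)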
> 
> On Wed, Jun 15, 2011 at 3:28 PM, Charles Reiss
> <[email protected]> wrote:
>> On Wed, Jun 15, 2011 at 15:20, Bill Zhao <[email protected]> wrote:
>>> Charles,
>>> 
>>> I have done the following:
>>> 
>>> 1. built Mesos from the GitHub source
>>> 2. copied "hadoop-for-new-iface.patch" to the
>>> mesos/frameworks/hadoop-0.20.2 directory
>>> 3. applied the patch with "git apply
>>> mesos/frameworks/hadoop-0.20.2/hadoop-for-new-iface.patch" -- this works
>>> 
>>> for some reason "patch -p0
>>> mesos/frameworks/hadoop-0.20.2/hadoop-for-new-iface.patch" doesn't
>>> work
>>> 
>>> 4. built the jar by running "ant" in the mesos directory: " ~/mesos$ ant"
>>> 5. set MESOS_HOME in .bashrc to ~/mesos/
>>> 6. copied mesos.jar into Hadoop's 'lib' directory
>>> 
>>> However, I am getting the error message below.  Also, I notice that I
>>> cannot copy files from the local file system to Hadoop's HDFS.  I do see
>>> the file in HDFS (bin/hadoop dfs -ls), but its size is 0 bytes.
>>> 
>>> billz@dhcp-44-175: ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop jobtracker
>>> 11/06/15 15:12:18 INFO mapred.JobTracker: STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting JobTracker
>>> STARTUP_MSG:   host = dhcp-44-175.eecs.berkeley.edu/128.32.44.175
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 0.20.3-dev
>>> STARTUP_MSG:   build =  -r ; compiled by 'billz' on Wed Jun 15 14:39:40 PDT 
>>> 2011
>>> ************************************************************/
>>> 11/06/15 15:12:18 INFO mapred.JobTracker: Scheduler configured with
>>> (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
>>> limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
>>> 11/06/15 15:12:18 FATAL mapred.JobTracker: java.lang.RuntimeException:
>>> Not a host:port pair: local
>> [snip]
>> 
>> Specifying mapred.job.tracker is not optional.
>> 
>> If you didn't start DFS and/or set a default FS name, you probably
>> aren't actually using HDFS (the 'dfs' command will happily query from
>> any FS that Hadoop understands).
>> 
>> - Charles
>> 
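
(For completeness: the default FS Charles mentions is set in core-site.xml. A 
sketch along these lines, where hdfs://localhost:9000 is only an example 
NameNode address, makes "bin/hadoop dfs" talk to HDFS rather than the local 
filesystem:

```xml
<!-- core-site.xml: point Hadoop's default filesystem at the NameNode.
     hdfs://localhost:9000 is only an example address. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
```

Without this, dfs commands silently operate on the local filesystem, which 
matches the zero-byte "copies" described above.)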
> 
> 
> 
> --
> Bill Zhao
> 
