When I try this command:
  MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos-0.14.0.so hadoop jobtracker
I get these errors:
14/01/15 13:09:14 INFO mapred.JobTracker: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG:   host = hadoop06.ihep.ac.cn/192.168.60.31
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2-cdh3u5
STARTUP_MSG:   build = 
git://hadoop03.ihep.ac.cn/publicfs/cc/zangds/dmdp/hadoop-0.20.2-cdh3u5-hce -r ; 
compiled by 'zangds' on Sun Mar 24 23:36:42 CST 2013
************************************************************/
14/01/15 13:09:15 INFO delegation.AbstractDelegationTokenSecretManager: 
Updating the current master key for generating delegation tokens
14/01/15 13:09:15 INFO delegation.AbstractDelegationTokenSecretManager: 
Starting expired delegation token remover thread, tokenRemoverScanInterval=60 
min(s)
14/01/15 13:09:15 INFO delegation.AbstractDelegationTokenSecretManager: 
Updating the current master key for generating delegation tokens
14/01/15 13:09:15 INFO mapred.JobTracker: Scheduler configured with 
(memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, 
limitMaxMemForReduceTasks) (-1, -1, -1, -1)
14/01/15 13:09:15 INFO util.HostsFileReader: Refreshing hosts (include/exclude) 
list
14/01/15 13:09:15 INFO mapred.JobTracker: Starting jobtracker with owner as 
mapred
14/01/15 13:09:15 INFO ipc.Server: Starting Socket Reader #1 for port 8021
14/01/15 13:09:15 INFO metrics.RpcMetrics: Initializing RPC Metrics with 
hostName=JobTracker, port=8021
14/01/15 13:09:15 INFO metrics.RpcDetailedMetrics: Initializing RPC Metrics 
with hostName=JobTracker, port=8021
14/01/15 13:09:15 INFO mortbay.log: Logging to 
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
14/01/15 13:09:15 INFO http.HttpServer: Added global filtersafety 
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
14/01/15 13:09:15 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/01/15 13:09:15 ERROR security.UserGroupInformation: 
PriviledgedActionException as:mapred (auth:SIMPLE) cause:ENOENT: No such file 
or directory
14/01/15 13:09:15 WARN mapred.JobTracker: Error starting tracker: ENOENT: No 
such file or directory
        at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:521)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
        at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:195)
        at org.apache.hadoop.mapred.JobHistory.init(JobHistory.java:491)
        at org.apache.hadoop.mapred.JobTracker$2.run(JobTracker.java:1852)
        at org.apache.hadoop.mapred.JobTracker$2.run(JobTracker.java:1849)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1849)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1724)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:297)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:289)
        at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4499)

14/01/15 13:09:16 INFO security.UserGroupInformation: JAAS Configuration 
already set up for Hadoop, not re-installing.
14/01/15 13:09:16 INFO delegation.AbstractDelegationTokenSecretManager: 
Updating the current master key for generating delegation tokens
14/01/15 13:09:16 INFO delegation.AbstractDelegationTokenSecretManager: 
Starting expired delegation token remover thread, tokenRemoverScanInterval=60 
min(s)
14/01/15 13:09:16 INFO delegation.AbstractDelegationTokenSecretManager: 
Updating the current master key for generating delegation tokens
14/01/15 13:09:16 INFO mapred.JobTracker: Scheduler configured with 
(memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, 
limitMaxMemForReduceTasks) (-1, -1, -1, -1)
14/01/15 13:09:16 INFO util.HostsFileReader: Refreshing hosts (include/exclude) 
list
14/01/15 13:09:16 INFO mapred.JobTracker: Starting jobtracker with owner as 
mapred
14/01/15 13:09:16 FATAL mapred.JobTracker: java.net.BindException: Problem 
binding to hadoop06.ihep.ac.cn/192.168.60.31:8021 : Address already in use
        at org.apache.hadoop.ipc.Server.bind(Server.java:231)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:320)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:1534)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:539)
        at org.apache.hadoop.ipc.RPC.getServer(RPC.java:500)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1817)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1724)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:297)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:289)
        at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4499)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
        at org.apache.hadoop.ipc.Server.bind(Server.java:229)
        ... 9 more

14/01/15 13:09:16 INFO mapred.JobTracker: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down JobTracker at hadoop06.ihep.ac.cn/192.168.60.31
************************************************************/

Please tell me what's wrong.

My Hadoop version is CDH4.5.0, installed from yum.
I put hadoop-mesos-0.0.5.jar into /usr/lib/hadoop-0.20-mapreduce/lib/ and made a tar package.
Then I put the package hadoop-0.20-mapreduce.tar.gz onto HDFS.
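(Roughly, the packaging and upload steps looked like the sketch below; the exact tar command and the HDFS destination path are illustrative assumptions:)

  # copy the Mesos scheduler jar into the MR1 lib directory
  cp hadoop-mesos-0.0.5.jar /usr/lib/hadoop-0.20-mapreduce/lib/

  # package the MR1 distribution
  cd /usr/lib
  tar czf hadoop-0.20-mapreduce.tar.gz hadoop-0.20-mapreduce/

  # upload the package to HDFS so the Mesos executor can fetch it
  hadoop fs -put hadoop-0.20-mapreduce.tar.gz /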
After that, I changed the configuration in mapred-site.xml:
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.MesosScheduler</value>
</property>
<property>
  <name>mapred.mesos.taskScheduler</name>
  <value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
</property>
<property>
  <name>mapred.mesos.master</name>
  <value>localhost:5050</value>
</property>
<property>
  <name>mapred.mesos.executor.uri</name>
  <value>hdfs://localhost:9000/hadoop-2.0.0-mr1-cdh4.2.2.tar.gz</value>
</property>
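(As a sanity check, and only as an illustration rather than something from my setup, the configured executor URI can be listed directly to confirm the package is there:)

  hadoop fs -ls hdfs://localhost:9000/hadoop-2.0.0-mr1-cdh4.2.2.tar.gz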

But this does not work. Please help me!
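(For context: in the command at the top, MESOS_NATIVE_LIBRARY is set inline, as suggested in the quoted reply below. A minimal sketch of the wrapper-script alternative mentioned there, assuming the hadoop launcher is at /usr/bin/hadoop:)

  #!/bin/sh
  # wrapper that sets the Mesos native library before launching hadoop
  export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos-0.14.0.so
  exec /usr/bin/hadoop "$@"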

> -----Original Message-----
> From: "Adam Bordelon" <[email protected]>
> Sent: Wednesday, January 15, 2014
> To: [email protected]
> Cc: mesos-dev <[email protected]>
> Subject: Re: How to run hadoop Jobtracker
> 
> Try running "MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos-0.14.0.so hadoop
> jobtracker"
> The primary executable to run is the 'hadoop' executable, but it needs to
> know where to find MESOS_NATIVE_LIBRARY, so we set that environment
> variable on the command-line first. You could set it in other ways instead
> (in that user's .bashrc or by creating a wrapper around 'hadoop' that sets
> the variable before launching 'hadoop').
> You are very close to having Hadoop running on top of Mesos.
> Good luck!
> -Adam-
> 
> 
> On Tue, Jan 14, 2014 at 6:47 AM, HUO Jing <[email protected]> wrote:
> 
> > Hi,
> > I have installed Mesos and Hadoop CDH4.5.0, changed the mapred-site.xml,
> > and packaged hadoop-mesos-0.0.5.jar with hadoop and upload it to hdfs. In a
> > word, I have done everything in this page: https://github.com/mesos/hadoop
> > .
> > but when I try to run jobtracker with command:
> > bash-3.2$ /usr/local/lib/libmesos-0.14.0.so hadoop jobtracker
> > It says:Segmentation fault
> > please tell me how to deal with this.
> >
> >
> > Huojing
> >
