My hadoop version: hadoop-0.18.1.

Hbase version:
hbase(main):002:0> version
Version: 0.18.0, r, Thu Oct 23 01:54:03 CST 2008

Hama Version:
the latest svn version.



Samuel Guo wrote:
> can you attach your hadoop & hbase configurations?
>
> 2008/10/23 zhuguanyin <[EMAIL PROTECTED]>
>
>   
>> I ran the examples following GettingStarted step by step; I had
>> exported JAVA_HOME, HADOOP_CONF_DIR, HBASE_CONF_DIR, and HAMA_CLASSPATH
>> in hama-env.sh.
>>
>> If I don't set mapred.jar in hama-site.xml, I get the following message:
>>
>> 08/10/23 14:56:12 WARN mapred.JobClient: Use genericOptions for the
>> option -libjars
>> 08/10/23 14:56:13 WARN mapred.JobClient: No job jar file set. User
>> classes may not be found. See JobConf(Class) or JobConf#setJar(String).
>>
>> so I set the hama-site.xml with
>>
>> <property>
>> <name>mapred.jar</name>
>>
>> <value>/home/zhugy/hadoop-v0.18.1/hadoop/lib/hama-0.1.0-dev-examples.jar</value>
>> </property>
>>
>> then I got the mapred task syslog logs:
>>
>> 2008-10-23 15:05:05,968 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>> Initializing JVM Metrics with processName=MAP, sessionId=
>> 2008-10-23 15:05:06,095 INFO org.apache.hadoop.mapred.MapTask:
>> numReduceTasks: 1
>> 2008-10-23 15:05:06,116 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb =
>> 256
>> 2008-10-23 15:05:06,411 INFO org.apache.hadoop.mapred.MapTask: data buffer
>> = 204010960/255013696
>> 2008-10-23 15:05:06,411 INFO org.apache.hadoop.mapred.MapTask: record
>> buffer = 671088/838860
>> 2008-10-23 15:05:07,984 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: localhost/127.0.0.1:60000. Already tried 0 time(s).
>> 2008-10-23 15:05:08,986 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: localhost/127.0.0.1:60000. Already tried 1 time(s).
>> 2008-10-23 15:05:09,988 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: localhost/127.0.0.1:60000. Already tried 2 time(s).
>> 2008-10-23 15:05:10,989 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: localhost/127.0.0.1:60000. Already tried 3 time(s).
>> 2008-10-23 15:05:11,991 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: localhost/127.0.0.1:60000. Already tried 4 time(s).
>> 2008-10-23 15:05:12,993 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: localhost/127.0.0.1:60000. Already tried 5 time(s).
>> 2008-10-23 15:05:13,995 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: localhost/127.0.0.1:60000. Already tried 6 time(s).
>> 2008-10-23 15:05:14,997 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: localhost/127.0.0.1:60000. Already tried 7 time(s).
>> 2008-10-23 15:05:15,999 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: localhost/127.0.0.1:60000. Already tried 8 time(s).
>> 2008-10-23 15:05:17,001 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: localhost/127.0.0.1:60000. Already tried 9 time(s).
>> 2008-10-23 15:05:17,001 INFO
>> org.apache.hadoop.hbase.client.HConnectionManager$TableServers: Attempt 0 of
>> 10 failed with <java.io.IOException: Call failed on local exception>.
>> Retrying after sleep of 2000
>> 2008-10-23 15:05:20,006 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: localhost/127.0.0.1:60000. Already tried 0 time(s).
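The buffer figures in the MapTask lines above follow from io.sort.mb = 256. As a sketch, assuming Hadoop 0.18's defaults io.sort.record.percent = 0.05 and io.sort.spill.percent = 0.80 (neither is set in the configs in this thread), the arithmetic works out to exactly the logged values:

```shell
# Sketch of how Hadoop 0.18's MapTask derives the logged buffer sizes.
sortmb=256
maxmem=$(( sortmb << 20 ))           # 268435456 bytes for the sort buffer
reccap=$(( maxmem * 5 / 100 ))       # 5% reserved for record metadata
reccap=$(( reccap - reccap % 16 ))   # rounded down to whole 16-byte records
kvbuf=$(( maxmem - reccap ))         # 255013696 -> "data buffer = .../255013696"
records=$(( reccap / 16 ))           # 838860    -> "record buffer = .../838860"
recsoft=$(( records * 80 / 100 ))    # 671088    -> record spill threshold
echo "$kvbuf $records $recsoft"
```

The 204010960 figure is the matching ~80% spill threshold over the 255013696-byte data buffer. So the buffer numbers themselves are normal; the problem starts with the connection retries that follow.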
>>
>>
>> It seems that in the map task, the HBase client couldn't connect to the
>> master, while I checked that HBase itself was running OK.
>>
>>
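Worth noting: localhost:60000 in the retries above is HBase's default hbase.master address, which suggests the map tasks are falling back to a default HBaseConfiguration instead of reading the cluster's hbase-site.xml. One hedged workaround, reusing the master address from the configs posted in this thread, is to declare it in hama-site.xml so it gets serialized into the job configuration:

```xml
<!-- Sketch only: mirrors the hbase.master value posted elsewhere in this
     thread so map tasks do not fall back to the default localhost:60000. -->
<property>
  <name>hbase.master</name>
  <value>jx-hadoop-data08.jx:62310</value>
</property>
```

This is a sketch, not a confirmed fix; putting hbase-site.xml itself on the task classpath would achieve the same thing.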
>> Samuel Guo wrote:
>>     
>>> you can check this wiki
>>> http://wiki.apache.org/hama/GettingStarted
>>>
>>>
>>>
>>>> On Thu, Oct 23, 2008 at 1:42 PM, zhuguanyin <[EMAIL PROTECTED]> wrote:
>>>>> hi, I'm a newbie to Hama. I set up a Hadoop/HBase cluster and the
>>>>> Hama environment, but I couldn't run the examples successfully.
>>>>>
>>>>> 1) I don't know how to set hama-site.xml; if there were a
>>>>> hama-default.xml, it would be very helpful.
>>>>> Here is my hama-site.xml:
>>>>>
>>> A good suggestion.
>>>
>>> Now in Hama, a quick way is to edit conf/hama-env.sh:
>>> let $HADOOP_CONF_DIR point to your Hadoop cluster's configuration, and
>>> let $HBASE_CONF_DIR point to your HBase cluster's configuration.
>>>
>>> Can you try it again and let us know?
>>>
>>>
>>>
>>>       
>>>>> <configuration>
>>>>>
>>>>> <property>
>>>>> <name>mapred.jar</name>
>>>>> <value>/home/zhugy/hama/hama-trunk/hama-0.1.0-dev.jar</value>
>>>>> </property>
>>>>>
>>>>> <property>
>>>>> <name>hbase.rootdir</name>
>>>>> <value>hdfs://jx-hadoop-data08.jx:52310/hbase-v1</value>
>>>>> <description>The directory shared by region servers.
>>>>> Should be fully-qualified to include the filesystem to use.
>>>>> E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
>>>>> </description>
>>>>> </property>
>>>>>
>>>>> <property>
>>>>> <name>hbase.master</name>
>>>>> <value>jx-hadoop-data08.jx:62310</value>
>>>>> <description>The host and port that the HBase master runs at.
>>>>> A value of 'local' runs the master and a regionserver in
>>>>> a single process.
>>>>> </description>
>>>>> </property>
>>>>> </configuration>
>>>>>
>>>>>
>>>>> 2) When I test the matrix addition examples:
>>>>> hama/hama-trunk/bin/hama examples addition -m 2 -r 1 2 2
>>>>>
>>>>>
>>>>> I get the following stderr in the map task logs:
>>>>>
>>>>>
>>>>> java.lang.NullPointerException
>>>>>        at org.apache.hama.HamaAdminImpl.initialJob(HamaAdminImpl.java:51)
>>>>>        at org.apache.hama.HamaAdminImpl.<init>(HamaAdminImpl.java:46)
>>>>>        at org.apache.hama.AbstractMatrix.setConfiguration(AbstractMatrix.java:62)
>>>>>        at org.apache.hama.DenseMatrix.<init>(DenseMatrix.java:66)
>>>>>        at org.apache.hama.algebra.Add1DLayoutMap.configure(Add1DLayoutMap.java:43)
>>>>>        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:58)
>>>>>        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:82)
>>>>>        at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:33)
>>>>>        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:58)
>>>>>        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:82)
>>>>>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:223)
>>>>>        at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2207)
>>>>> java.lang.NullPointerException
>>>>>        at org.apache.hama.HamaAdminImpl.matrixExists(HamaAdminImpl.java:79)
>>>>>        at org.apache.hama.DenseMatrix.<init>(DenseMatrix.java:68)
>>>>>        at org.apache.hama.algebra.Add1DLayoutMap.configure(Add1DLayoutMap.java:43)
>>>>>        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:58)
>>>>>        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:82)
>>>>>        at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:33)
>>>>>        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:58)
>>>>>        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:82)
>>>>>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:223)
>>>>>        at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2207)
>>>>>
>>>>> thanks!
>>>>>
>>>>> <http://blog.udanax.org>

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>
  <name>fs.default.name</name>
  <value>hdfs://jx-hadoop-data08.jx:52310</value>
  <description>namenode host:port</description>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>jx-hadoop-data08.jx:52710</value>
  <description>jobtracker host:port</description>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/zhugy/hadoop-v0.18.1/hadoop-data</value>
  <description>the base dir to store data</description>
</property>

<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:0</value>
  <description>
    The address where the datanode server will listen to.
    If the port is 0 then the server will start on a free port.
  </description>
</property>

<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:0</value>
  <description>
    The datanode http server address and port.
    If the port is 0 then the server will start on a free port.
  </description>
</property>

<property>
  <name>dfs.datanode.https.address</name>
  <value>0.0.0.0:52476</value>
</property>

<property>
  <name>mapred.task.tracker.http.address</name>
  <value>0.0.0.0:0</value>
  <description>
    The task tracker http server address and port.
    If the port is 0 then the server will start on a free port.
  </description>
</property>

<property>
  <name>mapred.map.tasks</name>
  <value>1</value>
  <description>the default map task</description>
</property>

<property>
  <name>mapred.reduce.tasks</name>
  <value>1</value>
  <description>the default reduce task</description>
</property>

<property>
  <name>mapred.tasktracker.tasks.maximum</name>  
  <value>2</value>
  <description></description>
</property>

<property>
  <name>dfs.block.size</name>
  <value>268435456</value>
  <description>The default block size for new files.</description>
</property>

<property>
  <name>dfs.datanode.du.reserved</name>
  <value>4194304</value>
  <description>leave this much space free for non dfs use</description>
</property>

<property>
  <name>local.cache.size</name>
  <value>10737418240</value>
  <description></description>
</property>

<property>
  <name>dfs.name.dir</name>
  <value>${hadoop.tmp.dir}/dfs/name</value>
  <description></description>
</property>

<property>
  <name>dfs.client.buffer.dir</name>
  <value>${hadoop.tmp.dir}/dfs/tmp</value>
  <description></description>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>${hadoop.tmp.dir}/dfs/data</value>
  <description></description>
</property>

<property>
  <name>fs.checkpoint.dir</name>
  <value>${hadoop.tmp.dir}/dfs/namesecondary</value>
  <description></description>
</property>

<property>
  <name>mapred.local.dir</name>
  <value>${hadoop.tmp.dir}/mapred/local</value>
  <description></description>
</property>

<property>
  <name>io.sort.factor</name>
  <value>20</value>
  <description></description>
</property>

<property>
  <name>io.sort.mb</name>
  <value>256</value>
  <description></description>
</property>

<property>
  <name>fs.inmemory.size.mb</name>
  <value>256</value>
  <description></description>
</property>

<property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
  <description></description>
</property>
  
<property>
  <name>io.bytes.per.checksum</name>
  <value>4096</value>
  <description></description>
</property>

<property>
  <name>dfs.namenode.handler.count</name>
  <value>50</value>
  <description></description>
</property>

<property>
  <name>mapred.job.tracker.handler.count</name>
  <value>50</value>
  <description></description>
</property>

<property>
  <name>mapred.local.dir.minspacestart</name>
  <value>268435456</value>
  <description></description>
</property>

<property>
  <name>mapred.local.dir.minspacekill</name>
  <value>268435456</value>
  <description></description>
</property>

<property>
  <name>mapred.jobtracker.completeuserjobs.maximum</name>
  <value>20</value>
  <description></description>
</property>

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1500m</value>
  <description></description>
</property>

<property>
  <name>mapred.userlog.retain.hours</name>
  <value>10</value>
  <description></description>
</property>

<property>
  <name>hadoop.logfile.size</name>
  <value>50000000</value>
  <description></description>
</property>

<property>
  <name>hadoop.logfile.count</name>
  <value>30</value>
  <description></description>
</property>

<property>
  <name>dfs.namenode.logging.level</name>
  <value>info</value>
  <description></description>
</property>

<property>
  <name>fs.trash.interval</name>
  <value>4320</value>
  <description>Number of minutes between trash checkpoints.
  If zero, the trash feature is disabled. 
  </description>
</property>
</configuration>

Attachment: hama-env.sh
Description: Bourne shell script
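Only the description of the hama-env.sh attachment survives in the archive. Based on the variables the poster says they exported, a minimal sketch might look like the following; every path below is a hypothetical placeholder, not the poster's actual layout:

```shell
# Minimal hama-env.sh sketch; all paths are illustrative placeholders.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HADOOP_CONF_DIR=/home/user/hadoop/conf   # where hadoop-site.xml lives
export HBASE_CONF_DIR=/home/user/hbase/conf     # where hbase-site.xml lives
export HAMA_CLASSPATH=$HBASE_CONF_DIR:$HADOOP_CONF_DIR
echo "$HAMA_CLASSPATH"
```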

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 * Copyright 2007 The Apache Software Foundation
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>

  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://jx-hadoop-data08.jx:52310/hbase-v1</value>
    <description>The directory shared by region servers.
    Should be fully-qualified to include the filesystem to use.
    E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
    </description>
  </property>
  <property>
    <name>hbase.master</name>
    <value>jx-hadoop-data08.jx:62310</value>
    <description>The host and port that the HBase master runs at.
    A value of 'local' runs the master and a regionserver in
    a single process.
    </description>
  </property>

</configuration>
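Given that the map tasks keep dialing localhost:60000 (the HBase default) rather than the jx-hadoop-data08.jx:62310 configured above, a quick sanity check is to confirm what hbase-site.xml on each tasktracker node actually says. A throwaway sketch, with a sample file inlined for illustration:

```shell
# Sketch: extract hbase.master from an hbase-site.xml to confirm which
# address HBase clients on this node should be dialing.
cat > /tmp/hbase-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.master</name>
    <value>jx-hadoop-data08.jx:62310</value>
  </property>
</configuration>
EOF
master=$(grep -A1 '<name>hbase.master</name>' /tmp/hbase-site-sample.xml \
         | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "$master"
```

If the value printed on a tasktracker differs from the master's own config (or the file is missing from the task classpath), that would explain the fallback to localhost:60000.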
