I'm running into several problems with HBase 0.2.0.

 

1)  This MapReduce experiment, a modification of RowCounter, was running in parallel (distributed) under HBase 0.1.2, using the exact same data.

2)  I have verified the number of rows in the table using a scanner, and it is over 190,000, just as in HBase 0.1.2.

3)  When I run the MapReduce job I get the output below. Notice the job id "job_local_1". Why is this running locally? (My site.xml files are below.)

 

08/07/30 23:34:50 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=

08/07/30 23:34:52 INFO mapred.JobClient: Running job: job_local_1

08/07/30 23:34:52 INFO mapred.MapTask: numReduceTasks: 1

08/07/30 23:34:53 INFO mapred.JobClient:  map 0% reduce 0%

08/07/30 23:34:57 INFO mapred.LocalJobRunner:

08/07/30 23:34:57 INFO mapred.TaskRunner: Task 'job_local_1_map_0000' done.

08/07/30 23:34:57 INFO mapred.TaskRunner: Saved output of task 'job_local_1_map_0000' to file:/home/hadoop/TestHbase/results

08/07/30 23:34:57 INFO mapred.LocalJobRunner: reduce > reduce

08/07/30 23:34:57 INFO mapred.TaskRunner: Task 'reduce_xwqalu' done.

08/07/30 23:34:57 INFO mapred.TaskRunner: Saved output of task 'reduce_xwqalu' to file:/home/hadoop/TestHbase/results

08/07/30 23:34:58 INFO mapred.JobClient: Job complete: job_local_1

08/07/30 23:34:58 INFO mapred.JobClient: Counters: 11

08/07/30 23:34:58 INFO mapred.JobClient:   Map-Reduce Framework

08/07/30 23:34:58 INFO mapred.JobClient:     Map input records=0

08/07/30 23:34:58 INFO mapred.JobClient:     Map output records=0

08/07/30 23:34:58 INFO mapred.JobClient:     Map input bytes=0

08/07/30 23:34:58 INFO mapred.JobClient:     Map output bytes=0

08/07/30 23:34:58 INFO mapred.JobClient:     Combine input records=0

08/07/30 23:34:58 INFO mapred.JobClient:     Combine output records=0

08/07/30 23:34:58 INFO mapred.JobClient:     Reduce input groups=0

08/07/30 23:34:58 INFO mapred.JobClient:     Reduce input records=0

08/07/30 23:34:58 INFO mapred.JobClient:     Reduce output records=0

08/07/30 23:34:58 INFO mapred.JobClient:   File Systems

08/07/30 23:34:58 INFO mapred.JobClient:     Local bytes read=21049575

08/07/30 23:34:58 INFO mapred.JobClient:     Local bytes written=21245960
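For context on the question above: as I understand it, "job_local_1" means the JobClient resolved mapred.job.tracker to its default value "local" and ran the job with LocalJobRunner, which typically happens when hadoop-site.xml is not on the classpath of the JVM that submits the job. A minimal sanity check I can sketch (the /tmp copy is just for illustration; the real file lives in the hadoop conf dir) is to confirm the file actually carries the JobTracker address — if it does and the job still runs as job_local_1, the file is most likely not being picked up by the submitting JVM:

```shell
# Sketch: confirm the config file carries a non-"local" JobTracker address.
# A scratch copy under /tmp is used here purely for illustration.
cat > /tmp/hadoop-site-check.xml <<'EOF'
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>sb-centercluster01:9001</value>
  </property>
</configuration>
EOF
# Print the property and its value; if this shows host:port but the job
# still runs as job_local_1, the file is not on the submitting classpath.
grep -A 1 '<name>mapred.job.tracker</name>' /tmp/hadoop-site-check.xml
```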

 

 

Once I run this MapReduce job I get the following exceptions in the "hadoop-datanode" log file (on only 1 machine out of 5):

 

hadoop-0.17.0/logs/hadoop-hadoop-datanode-compute-0-1.local.log:2008-07-30 23:34:07,537 WARN org.apache.hadoop.dfs.DataNode: 10.0.11.253:50010:Got exception while serving blk_-1187673129240663634 to /10.0.11.253:

hadoop-0.17.0/logs/hadoop-hadoop-datanode-compute-0-1.local.log:java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.0.11.253:50010 remote=/10.0.11.253:34013]

hadoop-0.17.0/logs/hadoop-hadoop-datanode-compute-0-1.local.log:2008-07-30 23:34:07,537 ERROR org.apache.hadoop.dfs.DataNode: 10.0.11.253:50010:DataXceiver: java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.0.11.253:50010 remote=/10.0.11.253:34013]

hadoop-0.17.0/logs/hadoop-hadoop-datanode-compute-0-1.local.log:2008-07-30 23:34:13,000 WARN org.apache.hadoop.dfs.DataNode: 10.0.11.253:50010:Got exception while serving blk_-2281938905939443985 to /10.0.11.253:

hadoop-0.17.0/logs/hadoop-hadoop-datanode-compute-0-1.local.log:java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.0.11.253:50010 remote=/10.0.11.253:34027]

hadoop-0.17.0/logs/hadoop-hadoop-datanode-compute-0-1.local.log:2008-07-30 23:34:13,000 ERROR org.apache.hadoop.dfs.DataNode: 10.0.11.253:50010:DataXceiver: java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.0.11.253:50010 remote=/10.0.11.253:34027]
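For what it's worth, the 480000 ms in those exceptions matches Hadoop 0.17's default DFS write-channel timeout of 8 minutes, which I believe is controlled by dfs.datanode.socket.write.timeout in hadoop-site.xml. As a hedged sketch only (a workaround to test, not a fix — a value of 0 is supposed to disable the timeout):

```xml
<!-- Sketch: the 480000 ms SocketTimeoutExceptions above match the
     default DFS write timeout (8 minutes) in Hadoop 0.17; 0 disables it. -->
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>0</value>
</property>
```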

 

 

4)  My hadoop-site.xml file:

  <property>

    <name>fs.default.name</name>

    <value>hdfs://sb-centercluster01:9000</value>

  </property>

  <property>

    <name>mapred.job.tracker</name>

    <value>sb-centercluster01:9001</value>

  </property>

  <property>

    <name>mapred.map.tasks</name>

    <value>80</value>

  </property>

  <property>

    <name>mapred.reduce.tasks</name>

    <value>16</value>

  </property>

  <property>

    <name>dfs.replication</name>

    <value>3</value>

  </property>

  <property>

    <name>dfs.name.dir</name>

    <value>/home/hadoop/dfs,/tmp/hadoop/dfs</value>

  </property>

  <property>

    <name>dfs.data.dir</name>

    <value>/state/partition1/hadoop/dfs</value>

  </property>

 <property>

  <name>mapred.child.java.opts</name>

  <value>-Xmx1024m</value>

</property>

 

5)  My hbase-site.xml file:

<property>

    <name>hbase.master</name>

    <value>sb-centercluster01:60002</value>

    <description>The host and port that the HBase master runs at.

    </description>

  </property>

  <property>

    <name>hbase.rootdir</name>

    <value>hdfs://sb-centercluster01:9000/hbase</value>

    <description>The directory shared by region servers.

    </description>

  </property>

  <property>

    <name>hbase.hregion.max.filesize</name>

    <value>67108864</value>

</property>

 

 

Any help is appreciated.

 

Thanks

-Yair
