Hello, Tatsuya
Thank you for the quick assistance.

I'm not totally sure, but I think this exception occurs when there is
> no HDFS data node available in the cluster.
>
> Can you access the HDFS name node status screen at
> <http://servers-ip:50070/> from a web browser to see if there is a
> data node available?
>

Yes, the HDFS name node status page is accessible from a web browser at
<http://servers-ip:50070/>, and it shows a data node available.
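
For reference, this can also be checked from the shell (a minimal sketch,
assuming the stock Hadoop 0.20 CLI is on the PATH):

# Ask the name node for a cluster summary; a healthy pseudo-distributed
# setup should report one live data node ("Datanodes available: 1").
hadoop dfsadmin -report

# Check overall HDFS health and block replication starting from the root.
hadoop fsck /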

Could you give some examples of situations when no data node is
available in the cluster, and of how that affects the HBase master?
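
In the meantime, one way to see which daemons are actually running on
the box (assuming a standard Sun JDK, whose jps tool lists running Java
processes):

# On a healthy pseudo-distributed node we expect to see NameNode,
# DataNode, SecondaryNameNode, JobTracker, TaskTracker, HMaster and
# HQuorumPeer (the ZooKeeper peer managed by HBase).
jps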
-------------------------------------------------
Best wishes, Artyom Shvedchikov


On Tue, Oct 27, 2009 at 10:01 AM, Tatsuya Kawano
<[email protected]>wrote:

> Hi Artyom,
>
> Your configuration files look just fine.
>
>
> >> 2009-10-26 13:34:30,031 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer
> >> Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> >> /hbase.version could only be replicated to 0 nodes, instead of 1
>
> I'm not totally sure, but I think this exception occurs when there is
> no HDFS data node available in the cluster.
>
> Can you access the HDFS name node status screen at
> <http://servers-ip:50070/> from a web browser to see if there is a
> data node available?
>
> Thanks,
>
> --
> Tatsuya Kawano (Mr.)
> Tokyo, Japan
>
>
> On Tue, Oct 27, 2009 at 11:24 AM, Artyom Shvedchikov <[email protected]>
> wrote:
> > Hello.
> >
> > We are testing the latest HBase 0.20.1 in pseudo-distributed mode with
> > Hadoop 0.20.1 on the following environment:
> > *h/w*: Intel C2D 1.86 GHz, RAM 2 GB 667 MHz, HDD 1 TB Seagate SATA2 7200 RPM
> > *s/w*: Ubuntu 9.04, filesystem type *ext3*, Java 1.6.0_16-b01, Hadoop
> > 0.20.1, HBase 0.20.1
> >
> > File */etc/hosts*
> >
> >> 127.0.0.1       localhost
> >>
> >> # The following lines are desirable for IPv6 capable hosts
> >> ::1     localhost ip6-localhost ip6-loopback
> >> fe00::0 ip6-localnet
> >> ff00::0 ip6-mcastprefix
> >> ff02::1 ip6-allnodes
> >> ff02::2 ip6-allrouters
> >> ff02::3 ip6-allhosts
> >>
> > Hadoop and HBase are running in pseudo-distributed mode:
> > Two options added to *hadoop-env.sh*:
> >
> >> export JAVA_HOME=/usr/lib/jvm/java-6-sun
> >> export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
> >>
> > *core-site.xml*:
> >
> >> <configuration>
> >> <property>
> >>   <name>fs.default.name</name>
> >>   <value>hdfs://127.0.0.1:9000</value>
> >> </property>
> >> <property>
> >>   <name>hadoop.tmp.dir</name>
> >>   <value>/hadoop/tmp/hadoop-${user.name}</value>
> >>   <description>A base for other temporary directories.</description>
> >> </property>
> >> </configuration>
> >>
> > *hdfs-site.xml*:
> >
> >> <configuration>
> >>   <property>
> >>     <name>dfs.replication</name>
> >>     <value>1</value>
> >>   </property>
> >> <property>
> >>   <name>dfs.name.dir</name>
> >>   <value>/hadoop/hdfs/name</value>
> >> </property>
> >> <property>
> >>   <name>dfs.data.dir</name>
> >>   <value>/hadoop/hdfs/data</value>
> >> </property>
> >> <property>
> >>   <name>dfs.datanode.socket.write.timeout</name>
> >>   <value>0</value>
> >> </property>
> >> <property>
> >>    <name>dfs.datanode.max.xcievers</name>
> >>    <value>1023</value>
> >> </property>
> >> </configuration>
> >>
> > *mapred-site.xml:*
> >
> >> <configuration>
> >> <property>
> >>   <name>mapred.job.tracker</name>
> >>   <value>127.0.0.1:9001</value>
> >> </property>
> >> </configuration>
> >>
> > *hbase-site.xml:*
> >
> >> <configuration>
> >>   <property>
> >>     <name>hbase.rootdir</name>
> >>     <value>hdfs://localhost:9000/</value>
> >>     <description>The directory shared by region servers.
> >>     Should be fully-qualified to include the filesystem to use.
> >>     E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
> >>     </description>
> >>   </property>
> >>   <property>
> >>     <name>hbase.master</name>
> >>     <value>127.0.0.1:60000</value>
> >>     <description>The host and port that the HBase master runs at.
> >>     </description>
> >>   </property>
> >>   <property>
> >>      <name>hbase.tmp.dir</name>
> >>      <value>/hadoop/tmp/hbase-${user.name}</value>
> >>      <description>Temporary directory on the local
> >> filesystem.</description>
> >>   </property>
> >>     <property>
> >>         <name>hbase.zookeeper.quorum</name>
> >>         <value>127.0.0.1</value>
> >>         <description>Comma separated list of servers in the ZooKeeper quorum.
> >>         </description>
> >>     </property>
> >> </configuration>
> >>
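> > (As a sanity check, assuming the Hadoop CLI run from the Hadoop home
> > directory, the hbase.rootdir URL above can be verified before starting
> > HBase:)
> >
> >> # List the HBase root directory on HDFS; once HBase has started, this
> >> # should succeed and show /hbase.version among other files.
> >> bin/hadoop fs -ls hdfs://localhost:9000/
> >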
> > Hadoop and HBase are running under the *hbase* user, and all necessary
> > directories are owned by the *hbase* user (that is, the */hadoop* directory
> > and all its subdirectories).
> >
> > The first launch was successful, but after several days of work we ran
> > into a problem where the HBase master was down. We tried to restart it
> > (*stop-hbase.sh*, then *start-hbase.sh*), but the restart fails with this
> > error:
> >
> >> 2009-10-26 13:34:30,031 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer
> >> Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> >> /hbase.version could only be replicated to 0 nodes, instead of 1
> >> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
> >> at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
> >>
> >
> > Then I tried to reformat HDFS (removing all Hadoop and HBase data first,
> > then formatting HDFS again) and started Hadoop and HBase again, but the
> > HBase master fails to start with the same error.
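> >
> > (For completeness, the exact reset sequence, sketched assuming the stock
> > Hadoop and HBase scripts plus the directory layout from the configs above:)
> >
> >> # Stop HBase first, then Hadoop (run from the respective home dirs).
> >> bin/stop-hbase.sh
> >> bin/stop-all.sh
> >> # Wipe the old HDFS state; these paths come from hdfs-site.xml and
> >> # hadoop.tmp.dir above.
> >> rm -rf /hadoop/hdfs/name /hadoop/hdfs/data /hadoop/tmp/*
> >> # Recreate an empty HDFS and bring everything back up.
> >> bin/hadoop namenode -format
> >> bin/start-all.sh
> >> bin/start-hbase.sh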
> >
> > Could someone review our configuration and tell us the reason for this
> > HBase master behaviour?
> >
> > Thanks in advance, Artyom
> > -------------------------------------------------
> > Best wishes, Artyom Shvedchikov
> >
>
