Rao,

Can you make sure your region server is actually running? You can use the
jps command to list Java processes, or run "ps ax | grep region".
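A quick way to run that check on each node (a hedged sketch: assumes a POSIX shell; jps ships with the JDK, pgrep is the fallback):

```shell
# Check whether an HRegionServer JVM is alive on this node.
# Prefer jps (bundled with the JDK); fall back to pgrep if it is missing.
if command -v jps >/dev/null 2>&1; then
  jps | grep HRegionServer || echo "HRegionServer not found via jps"
else
  pgrep -f HRegionServer >/dev/null 2>&1 \
    && echo "HRegionServer is running" \
    || echo "HRegionServer not found"
fi
```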

Thanks,
Stas

On Thu, Sep 27, 2012 at 12:25 PM, Venkateswara Rao Dokku <
dvrao....@gmail.com> wrote:

> When I try to scan a table that was created by the hadoop-namenode from
> the hadoop-datanode, I get the following error:
> 12/09/27 16:47:55 INFO ipc.HBaseRPC: Problem connecting to server:
> localhost/127.0.0.1:60020
>
> Could you please help me overcome this problem.
> Thanks for replying.
>
> On Thu, Sep 27, 2012 at 4:02 PM, Venkateswara Rao Dokku <
> dvrao....@gmail.com
> > wrote:
>
> > I started the HMaster on the hadoop-namenode, but I was not able to
> > access it from the hadoop-datanode. Could you please help me solve this
> > problem by sharing what could cause it.
> >
> >
> > On Thu, Sep 27, 2012 at 1:21 PM, n keywal <nkey...@gmail.com> wrote:
> >
> >> You should launch the master only once, on whatever machine you like.
> >> Then you will be able to access it from any other machine.
> >> Please have a look at the blog I mentioned in my previous mail.
> >>
> >> On Thu, Sep 27, 2012 at 9:39 AM, Venkateswara Rao Dokku <
> >> dvrao....@gmail.com
> >> > wrote:
> >>
> >> > I can see that HMaster is not started on the data-node machine when
> >> > the start scripts in hadoop & hbase ran on the hadoop-namenode. My
> >> > doubt is: do we have to start the master on hadoop-datanode1 too, or
> >> > will hadoop-datanode1 access the HMaster running on the
> >> > hadoop-namenode to create, list, and scan tables, since the two nodes
> >> > are in the cluster as namenode & datanode?
> >> >
> >> > On Thu, Sep 27, 2012 at 1:02 PM, n keywal <nkey...@gmail.com> wrote:
> >> >
> >> > > Hi,
> >> > >
> >> > > I would like to direct you to the reference guide, but I must
> >> > > acknowledge that, well, it's a reference guide, hence not really
> >> > > easy for someone just starting out.
> >> > > You should have a look at Lars' blog (and maybe buy his book),
> >> > > especially this entry:
> >> > > http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html
> >> > >
> >> > > Some hints however:
> >> > > - The replication occurs at the hdfs level, not the hbase level:
> >> > > hbase writes files that are split into hdfs blocks that are
> >> > > replicated across the datanodes. If you want to check the
> >> > > replication, you must look at what files are written by hbase, how
> >> > > they have been split into blocks by hdfs, and how these blocks have
> >> > > been replicated. That will be in the hdfs interface. As a side note,
> >> > > it's not the easiest thing to learn when you start :-)
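A concrete way to do that inspection (a hedged sketch, not from the thread: it assumes the hadoop client is on the PATH and that hbase.rootdir is the default /hbase):

```shell
# List the files HBase wrote under /hbase, how each was split into HDFS
# blocks, and which datanodes hold each replica.
if command -v hadoop >/dev/null 2>&1; then
  hadoop fsck /hbase -files -blocks -locations
else
  echo "hadoop client not on PATH; run this on a cluster node"
fi
```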
> >> > > - The error "ERROR: org.apache.hadoop.hbase.MasterNotRunningException:
> >> > > Retried 7 times" is not linked to replication at all. It means the
> >> > > second machine cannot find the master. You need to fix this first
> >> > > (by googling & checking the logs).
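One thing worth checking in that situation (a hedged sketch, not a confirmed diagnosis for this thread): every node's hbase-site.xml should name the ZooKeeper quorum, so clients don't fall back to connecting to localhost. The hostname below reuses the hbase-master entry from the /etc/hosts file quoted later in the thread.

```xml
<?xml version="1.0"?>
<!-- Hedged sketch: client-side hbase-site.xml so shells on any node can
     find the cluster via ZooKeeper instead of defaulting to localhost.
     "hbase-master" matches the /etc/hosts entries in this thread. -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hbase-master</value>
  </property>
</configuration>
```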
> >> > >
> >> > >
> >> > > Good luck,
> >> > >
> >> > > Nicolas
> >> > >
> >> > >
> >> > >
> >> > >
> >> > > On Thu, Sep 27, 2012 at 9:07 AM, Venkateswara Rao Dokku <
> >> > > dvrao....@gmail.com
> >> > > > wrote:
> >> > >
> >> > > > How can we verify that the data (tables) is distributed across
> >> > > > the cluster? Is there a way to confirm that the data is
> >> > > > distributed across all the nodes in the cluster?
> >> > > >
> >> > > > On Thu, Sep 27, 2012 at 12:26 PM, Venkateswara Rao Dokku <
> >> > > > dvrao....@gmail.com> wrote:
> >> > > >
> >> > > > > Hi,
> >> > > > >     I am completely new to HBase. I want to cluster HBase on
> >> > > > > two nodes. I installed hadoop and hbase on the two nodes & my
> >> > > > > conf files are as given below.
> >> > > > > *cat  conf/regionservers *
> >> > > > > hbase-regionserver1
> >> > > > > hbase-master
> >> > > > > *cat conf/masters *
> >> > > > > hadoop-namenode
> >> > > > > * cat conf/slaves *
> >> > > > > hadoop-datanode1
> >> > > > > *vim conf/hdfs-site.xml *
> >> > > > > <?xml version="1.0"?>
> >> > > > > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> >> > > > >
> >> > > > > <!-- Put site-specific property overrides in this file. -->
> >> > > > >
> >> > > > > <configuration>
> >> > > > > <property>
> >> > > > >         <name>dfs.replication</name>
> >> > > > >         <value>2</value>
> >> > > > >         <description>Default block replication. The actual
> >> > > > > number of replications can be specified when the file is
> >> > > > > created. The default is used if replication is not specified at
> >> > > > > create time.
> >> > > > >         </description>
> >> > > > > </property>
> >> > > > > <property>
> >> > > > >         <name>dfs.support.append</name>
> >> > > > >         <value>true</value>
> >> > > > >         <description>Enable append support on HDFS (required
> >> > > > > by HBase for durable write-ahead logs).
> >> > > > >         </description>
> >> > > > > </property>
> >> > > > > </configuration>
> >> > > > > *& finally my /etc/hosts file is *
> >> > > > > 127.0.0.1       localhost
> >> > > > > 127.0.0.1       oc-PowerEdge-R610
> >> > > > > 10.2.32.48  hbase-master hadoop-namenode
> >> > > > > 10.240.13.35 hbase-regionserver1  hadoop-datanode1
> >> > > > >  The above files are identical on both of the machines. The
> >> > > > > following are the processes that are running on my machines when
> >> > > > > I ran the start scripts in hadoop as well as hbase:
> >> > > > > *hadoop-namenode:*
> >> > > > > HQuorumPeer
> >> > > > > HMaster
> >> > > > > Main
> >> > > > > HRegionServer
> >> > > > > SecondaryNameNode
> >> > > > > Jps
> >> > > > > NameNode
> >> > > > > JobTracker
> >> > > > > *hadoop-datanode1:*
> >> > > > >
> >> > > > > TaskTracker
> >> > > > > Jps
> >> > > > > DataNode
> >> > > > > -- process information unavailable
> >> > > > > Main
> >> > > > > NC
> >> > > > > HRegionServer
> >> > > > >
> >> > > > > I am able to create, list & scan tables on the *hadoop-namenode*
> >> > > > > machine using the HBase shell. But while trying to run the same
> >> > > > > on the *hadoop-datanode1* machine I couldn't, as I am getting
> >> > > > > the following error.
> >> > > > > hbase(main):001:0> list
> >> > > > > TABLE
> >> > > > >
> >> > > > >
> >> > > > > ERROR: org.apache.hadoop.hbase.MasterNotRunningException:
> >> > > > > Retried 7 times
> >> > > > >
> >> > > > > Here is some help for this command:
> >> > > > > List all tables in hbase. Optional regular expression parameter
> >> could
> >> > > > > be used to filter the output. Examples:
> >> > > > >
> >> > > > >   hbase> list
> >> > > > >   hbase> list 'abc.*'
> >> > > > > How can I list and scan the tables that were created by the
> >> > > > > *hadoop-namenode* from the *hadoop-datanode1* machine? Similarly,
> >> > > > > can I create some tables on *hadoop-datanode1* & access them
> >> > > > > from the *hadoop-namenode* & vice-versa, as the data is
> >> > > > > distributed across the cluster?
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > > --
> >> > > > > Thanks & Regards,
> >> > > > > Venkateswara Rao Dokku,
> >> > > > > Software Engineer,One Convergence Devices Pvt Ltd.,
> >> > > > > Jubille Hills,Hyderabad.
> >> > > > >
> >> > > > >
> >> > > >
> >> > > >
> >> > > > --
> >> > > > Thanks & Regards,
> >> > > > Venkateswara Rao Dokku,
> >> > > > Software Engineer,One Convergence Devices Pvt Ltd.,
> >> > > > Jubille Hills,Hyderabad.
> >> > > >
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > Thanks & Regards,
> >> > Venkateswara Rao Dokku,
> >> > Software Engineer,One Convergence Devices Pvt Ltd.,
> >> > Jubille Hills,Hyderabad.
> >> >
> >>
> >
> >
> >
> > --
> > Thanks & Regards,
> > Venkateswara Rao Dokku,
> > Software Engineer,One Convergence Devices Pvt Ltd.,
> > Jubille Hills,Hyderabad.
> >
> >
>
>
> --
> Thanks & Regards,
> Venkateswara Rao Dokku,
> Software Engineer,One Convergence Devices Pvt Ltd.,
> Jubille Hills,Hyderabad.
>
