How can we verify that the data (tables) is distributed across the cluster? Is there a way to confirm that the data is spread across all the nodes in the cluster?
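One way to check, assuming a running HDFS/HBase cluster of that era (Hadoop 1.x-style commands; the `/hbase` root path is the default and may differ on your setup):

```shell
# Report live datanodes and how much data each one stores
hadoop dfsadmin -report

# Show which datanodes hold the blocks of HBase's files in HDFS
hadoop fsck /hbase -files -blocks -locations

# From the HBase shell: per-regionserver load, including region counts
echo "status 'detailed'" | hbase shell
```

If tables are really distributed, `fsck` should list block replicas on both datanodes, and `status 'detailed'` should show regions assigned to more than one regionserver.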
On Thu, Sep 27, 2012 at 12:26 PM, Venkateswara Rao Dokku <dvrao....@gmail.com> wrote:
> Hi,
> I am completely new to HBase. I want to cluster HBase on two nodes. I installed Hadoop and HBase on the two nodes, and my conf files are as given below.
>
> *cat conf/regionservers*
> hbase-regionserver1
> hbase-master
>
> *cat conf/masters*
> hadoop-namenode
>
> *cat conf/slaves*
> hadoop-datanode1
>
> *vim conf/hdfs-site.xml*
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>2</value>
>     <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified at create time.</description>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>     <description>Enable append/sync support, which HBase needs for its write-ahead log.</description>
>   </property>
> </configuration>
>
> *& finally my /etc/hosts file is*
> 127.0.0.1 localhost
> 127.0.0.1 oc-PowerEdge-R610
> 10.2.32.48 hbase-master hadoop-namenode
> 10.240.13.35 hbase-regionserver1 hadoop-datanode1
>
> The above files are identical on both machines. The following are the processes running on my machines after I ran the start scripts for Hadoop and HBase:
>
> *hadoop-namenode:*
> HQuorumPeer
> HMaster
> Main
> HRegionServer
> SecondaryNameNode
> Jps
> NameNode
> JobTracker
>
> *hadoop-datanode1:*
> TaskTracker
> Jps
> DataNode
> -- process information unavailable
> Main
> NC
> HRegionServer
>
> I am able to create, list & scan tables on the *hadoop-namenode* machine using the HBase shell.
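The conf files quoted above cover HDFS but not hbase-site.xml. For a fully distributed HBase, both nodes would also need something along these lines (a sketch only; the hostnames follow the quoted /etc/hosts, and the NameNode port 9000 is an assumption, it must match fs.default.name in core-site.xml):

```xml
<!-- conf/hbase-site.xml (sketch, identical on both nodes) -->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <!-- Must match the NameNode host:port; 9000 is assumed here -->
    <value>hdfs://hadoop-namenode:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <!-- The node running HQuorumPeer in the process listing above -->
    <value>hbase-master</value>
  </property>
</configuration>
```

Without hbase.zookeeper.quorum, an HBase shell on the second node falls back to looking for ZooKeeper on localhost and never finds the master.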
> But while trying to run the same on the *hadoop-datanode1* machine, I couldn't do it, as I am getting the following error:
>
> hbase(main):001:0> list
> TABLE
>
> ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times
>
> Here is some help for this command:
> List all tables in hbase. Optional regular expression parameter could be used to filter the output. Examples:
>
> hbase> list
> hbase> list 'abc.*'
>
> How can I list and scan the tables created from *hadoop-namenode* on the *hadoop-datanode1* machine? Similarly, can I create tables on *hadoop-datanode1* and access them from *hadoop-namenode*, and vice versa, since the data is distributed across the cluster?
>
> --
> Thanks & Regards,
> Venkateswara Rao Dokku,
> Software Engineer, One Convergence Devices Pvt Ltd.,
> Jubilee Hills, Hyderabad.

--
Thanks & Regards,
Venkateswara Rao Dokku,
Software Engineer, One Convergence Devices Pvt Ltd.,
Jubilee Hills, Hyderabad.
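MasterNotRunningException from a remote shell usually means the client found ZooKeeper but the master registered itself under an unreachable address. One plausible culprit in the quoted /etc/hosts is the `127.0.0.1 oc-PowerEdge-R610` line: if that is the master machine's own hostname, the master may advertise the loopback address, which the other node cannot reach. A guess at a fix (assuming oc-PowerEdge-R610 is the master's hostname; adjust if it belongs to the other box):

```
# /etc/hosts — drop the loopback alias for the machine's real hostname,
# so HBase advertises a routable address in ZooKeeper
127.0.0.1    localhost
10.2.32.48   hbase-master hadoop-namenode oc-PowerEdge-R610
10.240.13.35 hbase-regionserver1 hadoop-datanode1
```

As for the last question: tables in HBase are cluster-wide, not per-node, so once any shell can reach the master, a table created from one machine is visible and usable from every other client.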