I only get the error if zookeeper is already running.
-Jignesh
On Oct 13, 2011, at 4:53 PM, Ramya Sunil wrote:
You already have zookeeper running on 2181 according to your jps output.
That is why the master seems to be complaining.
Can you please stop zookeeper, verify that nothing is listening on 2181, and then retry?
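A minimal sketch of one way to confirm and stop the stray zookeeper (the ruok probe and the hbase-daemon.sh invocation are assumptions about a stock HBase install, not commands from this thread):

$ echo ruok | nc localhost 2181    # a live zookeeper replies "imok"
$ jps | grep HQuorumPeer           # HQuorumPeer is the HBase-managed zookeeper
$ hbase-daemon.sh stop zookeeper   # stop it before restarting the master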
Hi Jignesh,
--config (note the two dashes) is the option to use, not -config.
Alternatively you can also set HBASE_CONF_DIR.
Below is the exact command line:
$ hbase --config /home/ramya/hbase/conf shell
hbase(main):001:0> create 'newtable','family'
0 row(s) in 0.5140 seconds
hbase(main):002:0>
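A minimal sketch of the HBASE_CONF_DIR alternative mentioned above, reusing the same conf path from this example:

$ export HBASE_CONF_DIR=/home/ramya/hbase/conf
$ hbase shell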
Ramya
On Thu, Oct 13, 2011 at 12:01 PM, Jignesh Patel jign...@websoft.com wrote:
OK, --config worked but it is showing me the same error. How do I resolve this?
http://pastebin.com/UyRBA7vX
On Oct 13, 2011, at 1:34 PM, Ramya Sunil wrote:
Hi Jignesh,
--config (note the two dashes) is the option
HQuorumPeer
38814 SecondaryNameNode
41578 Jps
38878 JobTracker
38726 DataNode
38639 NameNode
38964 TaskTracker
On Oct 13, 2011, at 3:23 PM, Ramya Sunil wrote:
Jignesh,
I don't see zookeeper running on your master. My cluster reads the
following:
$ jps
15315 Jps
13590 HMaster
15235
Hi Jignesh,
I have been running quite a few hbase tests on Hadoop 0.20.205 without any
issues on both secure and non-secure clusters.
I have seen the error you mentioned when one has not specified the hbase
config directory.
Can you please try: hbase --config <path to hbase config directory> shell
Hi John,
How many tasktrackers do you have? Can you check if your tasktrackers are
running and the total available map and reduce capacity in your cluster?
Can you also post the configuration of the scheduler you are using? You
might also want to check the jobtracker logs. It would help in debugging the issue further.
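A few commands that might help gather this information (a sketch assuming a stock 0.20-era MapReduce setup; ports and file names may differ on your cluster):

$ hadoop job -list-active-trackers                                        # tasktrackers the jobtracker can see
$ grep -A1 mapred.tasktracker.map.tasks.maximum conf/mapred-site.xml      # per-node map slots
$ grep -A1 mapred.tasktracker.reduce.tasks.maximum conf/mapred-site.xml   # per-node reduce slots

The jobtracker web UI (usually on port 50030) shows the total map and reduce task capacity, and the scheduler configuration typically lives in capacity-scheduler.xml or fair-scheduler.xml under the conf directory.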
On Fri, Aug 26, 2011 at 11:50 AM, John Armstrong john.armstr...@ccri.com wrote:
On Fri, 26 Aug 2011 11:46:42 -0700, Ramya Sunil ra...@hortonworks.com
wrote:
How many tasktrackers do you have? Can you check if your tasktrackers are
running and the total available map and reduce capacity
Hi Keith,
I have tried the exact use case you have mentioned and it works fine for me.
Below is the command line for the same:
[ramya]$ jar vxf samplelib.jar
created: META-INF/
inflated: META-INF/MANIFEST.MF
inflated: libhdfs.so
[ramya]$ hadoop dfs -put samplelib.jar samplelib.jar
[ramya]$
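The listing above is truncated. As a hedged sketch of a typical next step (the job jar, class name, and paths are hypothetical), a jar already on HDFS can be shipped to the tasks through the distributed cache with the generic -archives option, provided the job parses generic options via ToolRunner:

$ hadoop jar myjob.jar MyJob -archives hdfs:///user/ramya/samplelib.jar#samplelib input output

The archive is unpacked in each task's working directory, so libhdfs.so would be reachable as ./samplelib/libhdfs.so.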