Re: Hbase with Hadoop

2011-10-14 Thread Ramya Sunil
Jignesh,

I have been able to deploy Hbase 0.90.3 and 0.90.4 with hadoop-0.20.205.
Below are the steps I followed:

1. Make sure none of hbasemaster, regionservers or zookeeper are running. As
Matt pointed out, turn on append.
2. hbase-daemon.sh --config $HBASE_CONF_DIR start zookeeper
3. hbase-daemon.sh --config $HBASE_CONF_DIR start master
4. hbase-daemon.sh --config $HBASE_CONF_DIR start regionserver
5. hbase --config $HBASE_CONF_DIR shell
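
A quick way to sanity-check that the sequence above came up cleanly (a sketch, assuming a
local pseudo-distributed setup and that lsof is available on the machine):

$ jps                      # expect HQuorumPeer, HMaster and HRegionServer next to the HDFS/MR daemons
$ lsof -i :2181            # the HQuorumPeer process should be the only listener on 2181
$ echo "status" | hbase --config $HBASE_CONF_DIR shell   # 'status' should report 1 live server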


Hope it helps.
Ramya



On Thu, Oct 13, 2011 at 4:11 PM, Jignesh Patel jign...@websoft.com wrote:

 Is there a way to resolve this weird problem.

  bin/hbase-start.sh is supposed to start zookeeper but it doesn't start.
 But on the other side if zookeeper up and running then it says

  Couldnt start ZK at requested address of 2181, instead got: 2182.
 Aborting. Why? Because clients (eg shell) wont be able to find this ZK
 quorum



 On Oct 13, 2011, at 5:40 PM, Jignesh Patel wrote:

  Ok now the problem is
 
  if I only use bin/hbase-start.sh then it doesn't start zookeeper.
 
  But if I use bin/hbase-daemon.sh start zookeeper before starting
 bin/hbase-start.sh then it will try to start zookeeper at port 2181 and then
 I have following error.
 
  Couldnt start ZK at requested address of 2181, instead got: 2182.
 Aborting. Why? Because clients (eg shell) wont be able to find this ZK
 quorum
 
 
  So I am wondering if bin/hbase-start.sh is trying to start zookeeper then
 while zookeeper is not running it should start the zookeeper. I only get the
 error if zookeeper already running.
 
 
  -Jignesh
 
 
  On Oct 13, 2011, at 4:53 PM, Ramya Sunil wrote:
 
  You already have zookeeper running on 2181 according to your jps output.
  That is the reason, master seems to be complaining.
  Can you please stop zookeeper, verify that no daemons are running on
 2181
  and restart your master?
 
  On Thu, Oct 13, 2011 at 12:37 PM, Jignesh Patel jign...@websoft.com
 wrote:
 
  Ramya,
 
 
  Based on Hbase the definite guide it seems zookeeper being started by
  hbase no need to start it separately(may be this is changed for 0.90.4.
  Anyways now  following is the updated status.
 
  Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/start-hbase.sh
  starting master, logging to
 
 /users/hadoop-user/hadoop-hbase/logs/hbase-hadoop-user-master-Jignesh-MacBookPro.local.out
  Couldnt start ZK at requested address of 2181, instead got: 2182.
 Aborting.
  Why? Because clients (eg shell) wont be able to find this ZK quorum
  Jignesh-MacBookPro:hadoop-hbase hadoop-user$ jps
  41486 HQuorumPeer
  38814 SecondaryNameNode
  41578 Jps
  38878 JobTracker
  38726 DataNode
  38639 NameNode
  38964 TaskTracker
 
  On Oct 13, 2011, at 3:23 PM, Ramya Sunil wrote:
 
  Jignesh,
 
  I dont see zookeeper running on your master. My cluster reads the
  following:
 
  $ jps
  15315 Jps
  13590 HMaster
  15235 HQuorumPeer
 
  Can you please shutdown your Hmaster and run the following first:
  $ hbase-daemon.sh start zookeeper
 
  And then start your hbasemaster and regionservers?
 
  Thanks
  Ramya
 
  On Thu, Oct 13, 2011 at 12:01 PM, Jignesh Patel jign...@websoft.com
  wrote:
 
  ok --config worked but it is showing me same error. How to resolve
 this.
 
  http://pastebin.com/UyRBA7vX
 
  On Oct 13, 2011, at 1:34 PM, Ramya Sunil wrote:
 
  Hi Jignesh,
 
  --config (i.e. - - config) is the option to use and not -config.
  Alternatively you can also set HBASE_CONF_DIR.
 
  Below is the exact command line:
 
  $ hbase --config /home/ramya/hbase/conf shell
  hbase(main):001:0 create 'newtable','family'
  0 row(s) in 0.5140 seconds
 
  hbase(main):002:0 list 'newtable'
  TABLE
  newtable
  1 row(s) in 0.0120 seconds
 
  OR
 
  $ export HBASE_CONF_DIR=/home/ramya/hbase/conf
  $ hbase shell
 
  hbase(main):001:0 list 'newtable'
  TABLE
 
  newtable
 
  1 row(s) in 0.3860 seconds
 
 
  Thanks
  Ramya
 
 
  On Thu, Oct 13, 2011 at 8:30 AM, jigneshmpatel 
  jigneshmpa...@gmail.com
  wrote:
 
  There is no command like -config see below
 
  Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase -config
  ./config
  shell
  Unrecognized option: -config
  Could not create the Java virtual machine.
 
  --
  View this message in context:
 
 
 
 http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3418924.html
  Sent from the Hadoop lucene-users mailing list archive at
 Nabble.com.
 
 
 
 
 
 




Re: Hbase with Hadoop

2011-10-14 Thread Jignesh Patel
Ramya,

I have followed the steps you mentioned, but in these steps I don't see you 
starting hbase.
I have followed steps 1, 2 and 3.
Here is how my hdfs-site.xml looks.

<configuration>
 <property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
   The actual number of replications can be specified when the file is created.
   The default is used if replication is not specified in create time.
  </description>
 </property>

 <property>
  <name>dfs.support.append</name>
  <value>true</value>
 </property>

 <property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
 </property>
</configuration>
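
For reference, the hbase-site.xml side of a pseudo-distributed setup like this would typically
pin the ZooKeeper client port and point hbase.rootdir at HDFS. A minimal sketch - the
hdfs://localhost:9000 address is an assumption and has to match fs.default.name in core-site.xml:

<configuration>
 <property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
 </property>
 <property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
 </property>
</configuration>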


For step 4 I got the following message, which is OK as I am running in pseudo 
mode.

starting regionserver, logging to 
/Users/hadoop-user/hadoop-hbase/bin/../logs/hbase-hadoop-user-regionserver-Jignesh-MacBookPro.local.out
11/10/14 10:25:55 WARN regionserver.HRegionServerCommandLine: Not starting a 
distinct region server because hbase.cluster.distributed is false

Then when I tried to start HBase - bin/start-hbase.sh --config ./config - I 
got the same old error.

Couldnt start ZK at requested address of 2181, instead got: 2182. Aborting. 
Why? Because clients (eg shell) wont be able to find this ZK quorum



-Jignesh
On Oct 14, 2011, at 2:31 AM, Ramya Sunil wrote:

 Jignesh,
 
 I have been able to deploy Hbase 0.90.3 and 0.90.4 with hadoop-0.20.205. 
 Below are the steps I followed:
 
 1. Make sure none of hbasemaster, regionservers or zookeeper are running. As 
 Matt pointed out, turn on append.
 2. hbase-daemon.sh --config $HBASE_CONF_DIR start zookeeper
 3. hbase-daemon.sh --config $HBASE_CONF_DIR start master
 4. hbase-daemon.sh --config $HBASE_CONF_DIR start regionserver
 5. hbase --config $HBASE_CONF_DIR shell
 
 
 Hope it helps.
 Ramya
 
 
 
 On Thu, Oct 13, 2011 at 4:11 PM, Jignesh Patel jign...@websoft.com wrote:
 Is there a way to resolve this weird problem.
 
  bin/hbase-start.sh is supposed to start zookeeper but it doesn't start. But 
  on the other side if zookeeper up and running then it says
 
  Couldnt start ZK at requested address of 2181, instead got: 2182. Aborting. 
  Why? Because clients (eg shell) wont be able to find this ZK quorum
 
 
 
 On Oct 13, 2011, at 5:40 PM, Jignesh Patel wrote:
 
  Ok now the problem is
 
  if I only use bin/hbase-start.sh then it doesn't start zookeeper.
 
  But if I use bin/hbase-daemon.sh start zookeeper before starting 
  bin/hbase-start.sh then it will try to start zookeeper at port 2181 and 
  then I have following error.
 
  Couldnt start ZK at requested address of 2181, instead got: 2182. Aborting. 
  Why? Because clients (eg shell) wont be able to find this ZK quorum
 
 
  So I am wondering if bin/hbase-start.sh is trying to start zookeeper then 
  while zookeeper is not running it should start the zookeeper. I only get 
  the error if zookeeper already running.
 
 
  -Jignesh
 
 
  On Oct 13, 2011, at 4:53 PM, Ramya Sunil wrote:
 
  You already have zookeeper running on 2181 according to your jps output.
  That is the reason, master seems to be complaining.
  Can you please stop zookeeper, verify that no daemons are running on 2181
  and restart your master?
 
  On Thu, Oct 13, 2011 at 12:37 PM, Jignesh Patel jign...@websoft.com 
  wrote:
 
  Ramya,
 
 
  Based on Hbase the definite guide it seems zookeeper being started by
  hbase no need to start it separately(may be this is changed for 0.90.4.
  Anyways now  following is the updated status.
 
  Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/start-hbase.sh
  starting master, logging to
  /users/hadoop-user/hadoop-hbase/logs/hbase-hadoop-user-master-Jignesh-MacBookPro.local.out
  Couldnt start ZK at requested address of 2181, instead got: 2182. 
  Aborting.
  Why? Because clients (eg shell) wont be able to find this ZK quorum
  Jignesh-MacBookPro:hadoop-hbase hadoop-user$ jps
  41486 HQuorumPeer
  38814 SecondaryNameNode
  41578 Jps
  38878 JobTracker
  38726 DataNode
  38639 NameNode
  38964 TaskTracker
 
  On Oct 13, 2011, at 3:23 PM, Ramya Sunil wrote:
 
  Jignesh,
 
  I dont see zookeeper running on your master. My cluster reads the
  following:
 
  $ jps
  15315 Jps
  13590 HMaster
  15235 HQuorumPeer
 
  Can you please shutdown your Hmaster and run the following first:
  $ hbase-daemon.sh start zookeeper
 
  And then start your hbasemaster and regionservers?
 
  Thanks
  Ramya
 
  On Thu, Oct 13, 2011 at 12:01 PM, Jignesh Patel jign...@websoft.com
  wrote:
 
  ok --config worked but it is showing me same error. How to resolve this.
 
  http://pastebin.com/UyRBA7vX
 
  On Oct 13, 2011, at 1:34 PM, Ramya Sunil wrote:
 
  Hi Jignesh,
 
  --config (i.e. - - config) is the option to use and not -config.
  Alternatively you can also set HBASE_CONF_DIR.
 
  Below is the exact command line:
 
  $ hbase --config /home/ramya/hbase/conf shell
  hbase(main):001:0 create 'newtable','family'
  0 row(s

Re: Hbase with Hadoop

2011-10-14 Thread Jignesh Patel

On Oct 14, 2011, at 2:44 PM, Jignesh Patel wrote:

 According to start-hbase.sh, if distributed mode=false then I am supposed to 
 start only the master; it is not required to start zookeeper. See the script 
 below from the file.
 
 if [ $distMode == 'false' ] 
 then
   $bin/hbase-daemon.sh start master
 else
   $bin/hbase-daemons.sh --config ${HBASE_CONF_DIR} start zookeeper
   $bin/hbase-daemon.sh --config ${HBASE_CONF_DIR} start master 
   $bin/hbase-daemons.sh --config ${HBASE_CONF_DIR} \
 --hosts ${HBASE_REGIONSERVERS} start regionserver
   $bin/hbase-daemons.sh --config ${HBASE_CONF_DIR} \
 --hosts ${HBASE_BACKUP_MASTERS} start master-backup
 fi
 
 According to the above script, zookeeper is not required to start as I am not 
 running the server in distributed mode but in pseudo mode. But then it is giving 
 an error that zookeeper is not able to connect.
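
In start-hbase.sh the distMode value is read from the hbase.cluster.distributed property, so
which branch runs is decided by hbase-site.xml. A quick way to check which branch you are
actually hitting (a sketch; HBaseConfTool is the helper the 0.90.x start script itself uses,
but double-check against your copy of start-hbase.sh):

$ bin/hbase --config $HBASE_CONF_DIR org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed

If this prints false, start-hbase.sh starts only a local master, and that local master launches
its own embedded ZooKeeper - which is why a zookeeper started separately beforehand can end up
colliding on port 2181.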

-Jignesh
 
 
 is supposed to start zookeeper and master as per the 
 
 On Fri, Oct 14, 2011 at 2:31 AM, Ramya Sunil [via Lucene] 
 ml-node+s472066n342086...@n3.nabble.com wrote:
 Jignesh, 
 
 I have been able to deploy Hbase 0.90.3 and 0.90.4 with hadoop-0.20.205. 
 Below are the steps I followed: 
 
 1. Make sure none of hbasemaster, regionservers or zookeeper are running. As 
 Matt pointed out, turn on append. 
 2. hbase-daemon.sh --config $HBASE_CONF_DIR start zookeeper 
 3. hbase-daemon.sh --config $HBASE_CONF_DIR start master 
 4. hbase-daemon.sh --config $HBASE_CONF_DIR start regionserver 
 5. hbase --config $HBASE_CONF_DIR shell 
 
 
 Hope it helps. 
 Ramya 
 
 
 
 On Thu, Oct 13, 2011 at 4:11 PM, Jignesh Patel [hidden email] wrote: 
 
  Is there a way to resolve this weird problem. 
  
   bin/hbase-start.sh is supposed to start zookeeper but it doesn't start. 
  But on the other side if zookeeper up and running then it says 
  
   Couldnt start ZK at requested address of 2181, instead got: 2182. 
  Aborting. Why? Because clients (eg shell) wont be able to find this ZK 
  quorum 
  
  
  
  On Oct 13, 2011, at 5:40 PM, Jignesh Patel wrote: 
  
   Ok now the problem is 
   
   if I only use bin/hbase-start.sh then it doesn't start zookeeper. 
   
   But if I use bin/hbase-daemon.sh start zookeeper before starting 
  bin/hbase-start.sh then it will try to start zookeeper at port 2181 and 
  then 
  I have following error. 
   
   Couldnt start ZK at requested address of 2181, instead got: 2182. 
  Aborting. Why? Because clients (eg shell) wont be able to find this ZK 
  quorum 
   
   
   So I am wondering if bin/hbase-start.sh is trying to start zookeeper then 
  while zookeeper is not running it should start the zookeeper. I only get 
  the 
  error if zookeeper already running. 
   
   
   -Jignesh 
   
   
   On Oct 13, 2011, at 4:53 PM, Ramya Sunil wrote: 
   
   You already have zookeeper running on 2181 according to your jps output. 
   That is the reason, master seems to be complaining. 
   Can you please stop zookeeper, verify that no daemons are running on 
  2181 
   and restart your master? 
   
   On Thu, Oct 13, 2011 at 12:37 PM, Jignesh Patel [hidden email] 
  wrote: 
   
   Ramya, 
   
   
   Based on Hbase the definite guide it seems zookeeper being started by 
   hbase no need to start it separately(may be this is changed for 0.90.4. 
   Anyways now  following is the updated status. 
   
   Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/start-hbase.sh 
   starting master, logging to 
   
  /users/hadoop-user/hadoop-hbase/logs/hbase-hadoop-user-master-Jignesh-MacBookPro.local.out
   
   Couldnt start ZK at requested address of 2181, instead got: 2182. 
  Aborting. 
   Why? Because clients (eg shell) wont be able to find this ZK quorum 
   Jignesh-MacBookPro:hadoop-hbase hadoop-user$ jps 
   41486 HQuorumPeer 
   38814 SecondaryNameNode 
   41578 Jps 
   38878 JobTracker 
   38726 DataNode 
   38639 NameNode 
   38964 TaskTracker 
   
   On Oct 13, 2011, at 3:23 PM, Ramya Sunil wrote: 
   
   Jignesh, 
   
   I dont see zookeeper running on your master. My cluster reads the 
   following: 
   
   $ jps 
   15315 Jps 
   13590 HMaster 
   15235 HQuorumPeer 
   
   Can you please shutdown your Hmaster and run the following first: 
   $ hbase-daemon.sh start zookeeper 
   
   And then start your hbasemaster and regionservers? 
   
   Thanks 
   Ramya 
   
   On Thu, Oct 13, 2011 at 12:01 PM, Jignesh Patel [hidden email] 
   wrote: 
   
   ok --config worked but it is showing me same error. How to resolve 
  this. 
   
   http://pastebin.com/UyRBA7vX
   
   On Oct 13, 2011, at 1:34 PM, Ramya Sunil wrote: 
   
   Hi Jignesh, 
   
   --config (i.e. - - config) is the option to use and not -config. 
   Alternatively you can also set HBASE_CONF_DIR. 
   
   Below is the exact command line: 
   
   $ hbase --config /home/ramya/hbase/conf shell 
   hbase(main):001:0 create 'newtable','family' 
   0 row(s) in 0.5140 seconds 
   
   hbase(main):002:0 list 'newtable' 
   TABLE 
   newtable 
   1 row(s) in 0.0120 seconds

Re: Hbase with Hadoop

2011-10-14 Thread Jignesh Patel
Can somebody help me get Hadoop 0.20.205.0 and Hbase 0.90.4 working in pseudo mode? 
This is the third day in a row and I am not able to make it run.

The details are as follows

http://pastebin.com/KrJePt64


If this is not going to work then let me know which version I should use to get 
it to run. 

On Oct 14, 2011, at 2:46 PM, Jignesh Patel wrote:

 
 On Oct 14, 2011, at 2:44 PM, Jignesh Patel wrote:
 
 According to start-hase.sh if distributed mode=flase then I am supposed to 
 start only masters it doesn't required to start zookeeper, see the script 
 below from the file.
 
 if [ $distMode == 'false' ] 
 then
   $bin/hbase-daemon.sh start master
 else
   $bin/hbase-daemons.sh --config ${HBASE_CONF_DIR} start zookeeper
   $bin/hbase-daemon.sh --config ${HBASE_CONF_DIR} start master 
   $bin/hbase-daemons.sh --config ${HBASE_CONF_DIR} \
 --hosts ${HBASE_REGIONSERVERS} start regionserver
   $bin/hbase-daemons.sh --config ${HBASE_CONF_DIR} \
 --hosts ${HBASE_BACKUP_MASTERS} start master-backup
 fi
 
 According to above script the zookeeper is not required to start as I am not 
 running server in distributed mode but in pseudo mode. But then it is giving 
 error for zookeeper is not able to connect.
 
 -Jignesh
 
 
 is supposed to start zookeeper and master as per the 
 
 On Fri, Oct 14, 2011 at 2:31 AM, Ramya Sunil [via Lucene] 
 ml-node+s472066n342086...@n3.nabble.com wrote:
 Jignesh, 
 
 I have been able to deploy Hbase 0.90.3 and 0.90.4 with hadoop-0.20.205. 
 Below are the steps I followed: 
 
 1. Make sure none of hbasemaster, regionservers or zookeeper are running. As 
 Matt pointed out, turn on append. 
 2. hbase-daemon.sh --config $HBASE_CONF_DIR start zookeeper 
 3. hbase-daemon.sh --config $HBASE_CONF_DIR start master 
 4. hbase-daemon.sh --config $HBASE_CONF_DIR start regionserver 
 5. hbase --config $HBASE_CONF_DIR shell 
 
 
 Hope it helps. 
 Ramya 
 
 
 
 On Thu, Oct 13, 2011 at 4:11 PM, Jignesh Patel [hidden email] wrote: 
 
  Is there a way to resolve this weird problem. 
  
   bin/hbase-start.sh is supposed to start zookeeper but it doesn't start. 
  But on the other side if zookeeper up and running then it says 
  
   Couldnt start ZK at requested address of 2181, instead got: 2182. 
  Aborting. Why? Because clients (eg shell) wont be able to find this ZK 
  quorum 
  
  
  
  On Oct 13, 2011, at 5:40 PM, Jignesh Patel wrote: 
  
   Ok now the problem is 
   
   if I only use bin/hbase-start.sh then it doesn't start zookeeper. 
   
   But if I use bin/hbase-daemon.sh start zookeeper before starting 
  bin/hbase-start.sh then it will try to start zookeeper at port 2181 and 
  then 
  I have following error. 
   
   Couldnt start ZK at requested address of 2181, instead got: 2182. 
  Aborting. Why? Because clients (eg shell) wont be able to find this ZK 
  quorum 
   
   
   So I am wondering if bin/hbase-start.sh is trying to start zookeeper 
   then 
  while zookeeper is not running it should start the zookeeper. I only get 
  the 
  error if zookeeper already running. 
   
   
   -Jignesh 
   
   
   On Oct 13, 2011, at 4:53 PM, Ramya Sunil wrote: 
   
   You already have zookeeper running on 2181 according to your jps 
   output. 
   That is the reason, master seems to be complaining. 
   Can you please stop zookeeper, verify that no daemons are running on 
  2181 
   and restart your master? 
   
   On Thu, Oct 13, 2011 at 12:37 PM, Jignesh Patel [hidden email] 
  wrote: 
   
   Ramya, 
   
   
   Based on Hbase the definite guide it seems zookeeper being started 
   by 
   hbase no need to start it separately(may be this is changed for 
   0.90.4. 
   Anyways now  following is the updated status. 
   
   Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/start-hbase.sh 
   starting master, logging to 
   
  /users/hadoop-user/hadoop-hbase/logs/hbase-hadoop-user-master-Jignesh-MacBookPro.local.out
   
   Couldnt start ZK at requested address of 2181, instead got: 2182. 
  Aborting. 
   Why? Because clients (eg shell) wont be able to find this ZK quorum 
   Jignesh-MacBookPro:hadoop-hbase hadoop-user$ jps 
   41486 HQuorumPeer 
   38814 SecondaryNameNode 
   41578 Jps 
   38878 JobTracker 
   38726 DataNode 
   38639 NameNode 
   38964 TaskTracker 
   
   On Oct 13, 2011, at 3:23 PM, Ramya Sunil wrote: 
   
   Jignesh, 
   
   I dont see zookeeper running on your master. My cluster reads the 
   following: 
   
   $ jps 
   15315 Jps 
   13590 HMaster 
   15235 HQuorumPeer 
   
   Can you please shutdown your Hmaster and run the following first: 
   $ hbase-daemon.sh start zookeeper 
   
   And then start your hbasemaster and regionservers? 
   
   Thanks 
   Ramya 
   
   On Thu, Oct 13, 2011 at 12:01 PM, Jignesh Patel [hidden email] 
   wrote: 
   
   ok --config worked but it is showing me same error. How to resolve 
  this. 
   
   http://pastebin.com/UyRBA7vX
   
   On Oct 13, 2011, at 1:34 PM, Ramya Sunil wrote: 
   
   Hi Jignesh, 
   
   --config (i.e. - - config) is the option

Re: Hbase with Hadoop

2011-10-14 Thread Todd Lipcon
On Wed, Oct 12, 2011 at 9:31 AM, Vinod Gupta Tankala
tvi...@readypulse.com wrote:
 its free and open source too.. basically, their releases are ahead of public
 releases of hadoop/hbase - from what i understand, major bug fixes and
 enhancements are checked in to their branch first and then eventually make
 it to public release branches.


You've got it a bit backwards - except for very rare exceptions, we
check our fixes into the public ASF codebase before we commit anything
to CDH releases. Sometimes, it will show up in a CDH release before an
ASF release, but the changes are always done as backports from ASF'[s
subversion. You can see the list of public JIRAs referenced in our
changelists here:
http://archive.cloudera.com/cdh/3/hadoop-0.20.2+923.97.CHANGES.txt

Apologies for the vendor-specific comment: I just wanted to clarify
that Cloudera's aim is to contribute to the community and not any kind
of fork as suggested above.

Back to work on 0.23 for me!

-Todd
-- 
Todd Lipcon
Software Engineer, Cloudera


Re: Hbase with Hadoop

2011-10-14 Thread Jignesh Patel
At last I have moved one step further. It was a problem with the hadoop jar file. I 
needed to replace hadoop-core-xx.jar in hbase/lib with the one from hadoop/lib.
After replacing it I got the following error:

2011-10-14 17:09:12,409 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled 
exception. Starting shutdown.
java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:37)
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.clinit(DefaultMetricsSystem.java:34)
at 
org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
at 
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:196)
at 
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
at 
org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
at 
org.apache.hadoop.security.KerberosName.clinit(KerberosName.java:83)
at 
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:189)
at 
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:159)
at 
org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:216)
at 
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:409)
at 
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:395)
at 
org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:1436)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1337)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:364)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.init(MasterFileSystem.java:81)
at 
org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:346)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:282)
at 
org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.run(HMasterCommandLine.java:193)
at java.lang.Thread.run(Thread.java:680)
Caused by: java.lang.ClassNotFoundException: 
org.apache.commons.configuration.Configuration
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
... 22 more
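
The missing class lives in the commons-configuration jar that ships under the Hadoop lib
directory, so copying it next to the swapped-in hadoop-core jar is one way to clear this
(a sketch; the exact jar version is an assumption - use whichever your Hadoop ships with):

$ cp $HADOOP_HOME/lib/commons-configuration-*.jar $HBASE_HOME/lib/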


On Oct 14, 2011, at 3:35 PM, Jignesh Patel wrote:

 Can somebody help me to work Hadoop 0.20.205.0 and Hbase 0.90.4 in pseudo 
 mode. This is third day in a row and I am not able to make it run.
 
 The details are as follows
 
 http://pastebin.com/KrJePt64
 
 
 If this is not going to work then let me know which version I should use to 
 get it run. 
 
 On Oct 14, 2011, at 2:46 PM, Jignesh Patel wrote:
 
 
 On Oct 14, 2011, at 2:44 PM, Jignesh Patel wrote:
 
 According to start-hase.sh if distributed mode=flase then I am supposed to 
 start only masters it doesn't required to start zookeeper, see the script 
 below from the file.
 
 if [ $distMode == 'false' ] 
 then
   $bin/hbase-daemon.sh start master
 else
   $bin/hbase-daemons.sh --config ${HBASE_CONF_DIR} start zookeeper
   $bin/hbase-daemon.sh --config ${HBASE_CONF_DIR} start master 
   $bin/hbase-daemons.sh --config ${HBASE_CONF_DIR} \
 --hosts ${HBASE_REGIONSERVERS} start regionserver
   $bin/hbase-daemons.sh --config ${HBASE_CONF_DIR} \
 --hosts ${HBASE_BACKUP_MASTERS} start master-backup
 fi
 
 According to above script the zookeeper is not required to start as I am 
 not running server in distributed mode but in pseudo mode. But then it is 
 giving error for zookeeper is not able to connect.
 
 -Jignesh
 
 
 is supposed to start zookeeper and master as per the 
 
 On Fri, Oct 14, 2011 at 2:31 AM, Ramya Sunil [via Lucene] 
 ml-node+s472066n342086...@n3.nabble.com wrote:
 Jignesh, 
 
 I have been able to deploy Hbase 0.90.3 and 0.90.4 with hadoop-0.20.205. 
 Below are the steps I followed: 
 
 1. Make sure none of hbasemaster, regionservers or zookeeper are running. 
 As 
 Matt pointed out, turn on append. 
 2. hbase-daemon.sh --config $HBASE_CONF_DIR start zookeeper 
 3. hbase-daemon.sh --config $HBASE_CONF_DIR start master 
 4. hbase-daemon.sh --config $HBASE_CONF_DIR start regionserver 
 5. hbase --config $HBASE_CONF_DIR shell 
 
 
 Hope it helps. 
 Ramya 
 
 
 
 On Thu, Oct 13, 2011 at 4:11 PM, Jignesh Patel [hidden email] wrote: 
 
  Is there a way to resolve this weird problem. 
  
   bin

Re: Hbase with Hadoop

2011-10-14 Thread Jignesh Patel
Cool! Everything is good now after copying the commons-configuration.jar file.

No need to start zookeeper or master separately. Only start hbase-start.sh and everything 
works. I see my status changed.

On Oct 14, 2011, at 5:16 PM, Jignesh Patel wrote:

 undError: org/apache/commons/configuration/Configuration



Re: Hbase with Hadoop

2011-10-13 Thread giridharan kesavan

Jignesh,

passing --config path_to_hbase_configs would help.

Like:
bin/hbase --config path_to_hbase_configs shell

-Giri

On 10/12/11 4:50 PM, Matt Foley wrote:

Hi Jignesh,
Not clear what's going on with your ZK, but as a starting point, the
hsync/flush feature in 205 was implemented with an on-off switch.  Make sure
you've turned it on by setting dfs.support.append to true in the
hdfs-site.xml config file.
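
For reference, that property goes into hdfs-site.xml roughly like this (a minimal sketch):

<property>
 <name>dfs.support.append</name>
 <value>true</value>
</property>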

Also, are you installing Hadoop with security turned on or off?

I'll gather some other config info that should help.
--Matt


On Wed, Oct 12, 2011 at 1:47 PM, Jignesh Pateljign...@websoft.com  wrote:


When I tried to run Hbase 0.90.4 with hadoop-0.20.205.0 I got the following
error

Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase shell
HBase Shell; enter 'helpRETURN' for list of supported commands.
Type exitRETURN to leave the HBase Shell
Version 0.90.4, r1150278, Sun Jul 24 15:53:29 PDT 2011

hbase(main):001:0  status

ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able
to connect to ZooKeeper but the connection closes immediately. This could be
a sign that the server has too many connections (30 is the default).
Consider inspecting your ZK server logs for that error and then make sure
you are reusing HBaseConfiguration as often as you can. See HTable's javadoc
for more information.


And when I tried to stop Hbase I continuously sees dot being printed and no
sign of stopping it. Not sure why it just simply stop it.

stopping
hbase...….


On Oct 12, 2011, at 3:19 PM, Jignesh Patel wrote:


The new plugin works after deleting eclipse and reinstalling it.
On Oct 12, 2011, at 2:39 PM, Jignesh Patel wrote:


I have installed Hadoop-0.20.205.0 but when I replace the hadoop

0.20.204.0 eclipse plugin with the 0.20.205.0, eclipse is not recognizing
it.

-Jignesh
On Oct 12, 2011, at 12:31 PM, Vinod Gupta Tankala wrote:


its free and open source too.. basically, their releases are ahead of

public

releases of hadoop/hbase - from what i understand, major bug fixes and
enhancements are checked in to their branch first and then eventually

make

it to public release branches.

thanks

On Wed, Oct 12, 2011 at 9:26 AM, Jignesh Pateljign...@websoft.com

wrote:

Sorry to here that.
Is CDH3 is a open source or a paid version?

-jignesh
On Oct 12, 2011, at 11:58 AM, Vinod Gupta Tankala wrote:


for what its worth, i was in a similar situation/dilemma few days ago

and

got frustrated figuring out what version combination of hadoop/hbase

to

use

and how to build hadoop manually to be compatible with hbase. the

build

process didn't work for me either.
eventually, i ended up using cloudera distribution and i think it

saved

me a

lot of headache and time.

thanks

On Tue, Oct 11, 2011 at 8:29 PM, jigneshmpatel

jigneshmpa...@gmail.com

wrote:


Matt,
Thanks a lot. Just wanted to have some more information. If hadoop
0.2.205.0
voted by the community members then will it become major release?

And

what

if it is not approved by community members.

And as you said I do like to use 0.90.3 if it works. If it is ok,

can

you

share the deails of those configuration changes?

-Jignesh

--
View this message in context:


http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3414658.html

Sent from the Hadoop lucene-users mailing list archive at

Nabble.com.







--
-Giri



Re: Hbase with Hadoop

2011-10-13 Thread Jignesh Patel
Actually the real problem is here:

http://pastebin.com/jyvpivt6

Moreover, I didn't find any option like -config.

-Jignesh

On Oct 13, 2011, at 2:02 AM, giridharan kesavan wrote:

 Jignesh,
 
 passing --config path_to_hbase_configs would help.
 
 Like:
 bin/hbase --config path_to_hbase_configs shell
 
 -Giri
 
 On 10/12/11 4:50 PM, Matt Foley wrote:
 Hi Jignesh,
 Not clear what's going on with your ZK, but as a starting point, the
 hsync/flush feature in 205 was implemented with an on-off switch.  Make sure
 you've turned it on by setting dfs.support.append to true in the
 hdfs-site.xml config file.
 
 Also, are you installing Hadoop with security turned on or off?
 
 I'll gather some other config info that should help.
 --Matt
 
 
 On Wed, Oct 12, 2011 at 1:47 PM, Jignesh Pateljign...@websoft.com  wrote:
 
 When I tried to run Hbase 0.90.4 with hadoop-.0.20.205.0 I got following
 error
 
 Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase shell
 HBase Shell; enter 'helpRETURN' for list of supported commands.
 Type exitRETURN to leave the HBase Shell
 Version 0.90.4, r1150278, Sun Jul 24 15:53:29 PDT 2011
 
 hbase(main):001:0  status
 
 ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able
 to connect to ZooKeeper but the connection closes immediately. This could be
 a sign that the server has too many connections (30 is the default).
 Consider inspecting your ZK server logs for that error and then make sure
 you are reusing HBaseConfiguration as often as you can. See HTable's javadoc
 for more information.
 
 
 And when I tried to stop Hbase I continuously sees dot being printed and no
 sign of stopping it. Not sure why it just simply stop it.
 
 stopping
 hbase...….
 
 
 On Oct 12, 2011, at 3:19 PM, Jignesh Patel wrote:
 
 The new plugin works after deleting eclipse and reinstalling it.
 On Oct 12, 2011, at 2:39 PM, Jignesh Patel wrote:
 
 I have installed Hadoop-0.20.205.0 but when I replace the hadoop
 0.20.204.0 eclipse plugin with the 0.20.205.0, eclipse is not recognizing
 it.
 -Jignesh
 On Oct 12, 2011, at 12:31 PM, Vinod Gupta Tankala wrote:
 
 its free and open source too.. basically, their releases are ahead of
 public
 releases of hadoop/hbase - from what i understand, major bug fixes and
 enhancements are checked in to their branch first and then eventually
 make
 it to public release branches.
 
 thanks
 
 On Wed, Oct 12, 2011 at 9:26 AM, Jignesh Pateljign...@websoft.com
 wrote:
 Sorry to here that.
 Is CDH3 is a open source or a paid version?
 
 -jignesh
 On Oct 12, 2011, at 11:58 AM, Vinod Gupta Tankala wrote:
 
 for what its worth, i was in a similar situation/dilemma few days ago
 and
 got frustrated figuring out what version combination of hadoop/hbase
 to
 use
 and how to build hadoop manually to be compatible with hbase. the
 build
 process didn't work for me either.
 eventually, i ended up using cloudera distribution and i think it
 saved
 me a
 lot of headache and time.
 
 thanks
 
 On Tue, Oct 11, 2011 at 8:29 PM, jigneshmpatel
 jigneshmpa...@gmail.com
 wrote:
 
 Matt,
 Thanks a lot. Just wanted to have some more information. If hadoop
 0.2.205.0
 voted by the community members then will it become major release?
 And
 what
 if it is not approved by community members.
 
 And as you said I do like to use 0.90.3 if it works. If it is ok,
 can
 you
 share the deails of those configuration changes?
 
 -Jignesh
 
 --
 View this message in context:
 
 http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3414658.html
 Sent from the Hadoop lucene-users mailing list archive at
 Nabble.com.
 
 
 
 
 -- 
 -Giri
 



Re: Hbase with Hadoop

2011-10-13 Thread jigneshmpatel
Another thing: I am using hadoop as a pseudo single-node server. But even if I
don't start Hbase I will have the same error.

ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able
to connect to ZooKeeper but the connection closes immediately. This could be
a sign that the server has too many connections (30 is the default).
Consider inspecting your ZK server logs for that error and then make sure
you are reusing HBaseConfiguration as often as you can. See HTable's javadoc
for more information.


 

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3418992.html
Sent from the Hadoop lucene-users mailing list archive at Nabble.com.


Re: Hbase with Hadoop

2011-10-13 Thread jigneshmpatel
There is no option like -config; see below

Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase -config ./config
shell
Unrecognized option: -config
Could not create the Java virtual machine.

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3418924.html
Sent from the Hadoop lucene-users mailing list archive at Nabble.com.


Re: Hbase with Hadoop

2011-10-13 Thread Harsh J
You'll need two hyphens before 'config'.

On 13-Oct-2011, at 9:00 PM, jigneshmpatel wrote:

 There is no command like -config see below
 
 Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase -config ./config
 shell
 Unrecognized option: -config
 Could not create the Java virtual machine.
 
 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3418924.html
 Sent from the Hadoop lucene-users mailing list archive at Nabble.com.



Re: Hbase with Hadoop

2011-10-13 Thread Matt Foley
Hi Jignesh,
the option is --config (with a double dash) not -config (with a single
dash).  Please let me know if that works.

--Matt


On Thu, Oct 13, 2011 at 8:30 AM, jigneshmpatel jigneshmpa...@gmail.comwrote:

 There is no command like -config see below

 Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase -config ./config
 shell
 Unrecognized option: -config
 Could not create the Java virtual machine.

 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3418924.html
 Sent from the Hadoop lucene-users mailing list archive at Nabble.com.



Re: Hbase with Hadoop

2011-10-13 Thread Ramya Sunil
Hi Jignesh,

--config (i.e. - - config) is the option to use and not -config.
Alternatively you can also set HBASE_CONF_DIR.

Below is the exact command line:

$ hbase --config /home/ramya/hbase/conf shell
hbase(main):001:0> create 'newtable','family'
0 row(s) in 0.5140 seconds

hbase(main):002:0> list 'newtable'
TABLE
newtable
1 row(s) in 0.0120 seconds

OR

$ export HBASE_CONF_DIR=/home/ramya/hbase/conf
$ hbase shell

hbase(main):001:0> list 'newtable'
TABLE

newtable

1 row(s) in 0.3860 seconds


Thanks
Ramya


On Thu, Oct 13, 2011 at 8:30 AM, jigneshmpatel jigneshmpa...@gmail.comwrote:

 There is no command like -config see below

 Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase -config ./config
 shell
 Unrecognized option: -config
 Could not create the Java virtual machine.

 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3418924.html
 Sent from the Hadoop lucene-users mailing list archive at Nabble.com.



Re: Hbase with Hadoop

2011-10-13 Thread Jignesh Patel
OK, --config worked but it is showing me the same error. How do I resolve this?

http://pastebin.com/UyRBA7vX

On Oct 13, 2011, at 1:34 PM, Ramya Sunil wrote:

 Hi Jignesh,
 
 --config (i.e. - - config) is the option to use and not -config.
 Alternatively you can also set HBASE_CONF_DIR.
 
 Below is the exact command line:
 
 $ hbase --config /home/ramya/hbase/conf shell
 hbase(main):001:0 create 'newtable','family'
 0 row(s) in 0.5140 seconds
 
 hbase(main):002:0 list 'newtable'
 TABLE
 newtable
 1 row(s) in 0.0120 seconds
 
 OR
 
 $ export HBASE_CONF_DIR=/home/ramya/hbase/conf
 $ hbase shell
 
 hbase(main):001:0 list 'newtable'
 TABLE
 
 newtable
 
 1 row(s) in 0.3860 seconds
 
 
 Thanks
 Ramya
 
 
 On Thu, Oct 13, 2011 at 8:30 AM, jigneshmpatel jigneshmpa...@gmail.comwrote:
 
 There is no command like -config see below
 
 Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase -config ./config
 shell
 Unrecognized option: -config
 Could not create the Java virtual machine.
 
 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3418924.html
 Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
 



Re: Hbase with Hadoop

2011-10-13 Thread Ramya Sunil
Jignesh,

I don't see zookeeper running on your master. My cluster reads the following:

$ jps
15315 Jps
13590 HMaster
15235 HQuorumPeer

Can you please shutdown your Hmaster and run the following first:
$ hbase-daemon.sh start zookeeper

And then start your hbasemaster and regionservers?
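
Concretely, the sequence would look something like this (a sketch; it assumes HBASE_CONF_DIR
points at your hbase conf directory, as in the --config examples above):

$ hbase-daemon.sh --config $HBASE_CONF_DIR stop master
$ hbase-daemon.sh --config $HBASE_CONF_DIR start zookeeper
$ hbase-daemon.sh --config $HBASE_CONF_DIR start master
$ hbase-daemon.sh --config $HBASE_CONF_DIR start regionserver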

Thanks
Ramya

On Thu, Oct 13, 2011 at 12:01 PM, Jignesh Patel jign...@websoft.com wrote:

 ok --config worked but it is showing me same error. How to resolve this.

 http://pastebin.com/UyRBA7vX

 On Oct 13, 2011, at 1:34 PM, Ramya Sunil wrote:

  Hi Jignesh,
 
  --config (i.e. - - config) is the option to use and not -config.
  Alternatively you can also set HBASE_CONF_DIR.
 
  Below is the exact command line:
 
  $ hbase --config /home/ramya/hbase/conf shell
  hbase(main):001:0 create 'newtable','family'
  0 row(s) in 0.5140 seconds
 
  hbase(main):002:0 list 'newtable'
  TABLE
  newtable
  1 row(s) in 0.0120 seconds
 
  OR
 
  $ export HBASE_CONF_DIR=/home/ramya/hbase/conf
  $ hbase shell
 
  hbase(main):001:0 list 'newtable'
  TABLE
 
  newtable
 
  1 row(s) in 0.3860 seconds
 
 
  Thanks
  Ramya
 
 
  On Thu, Oct 13, 2011 at 8:30 AM, jigneshmpatel jigneshmpa...@gmail.com
 wrote:
 
  There is no command like -config see below
 
  Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase -config ./config
  shell
  Unrecognized option: -config
  Could not create the Java virtual machine.
 
  --
  View this message in context:
 
 http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3418924.html
  Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
 




Re: Hbase with Hadoop

2011-10-13 Thread Jignesh Patel
Ramya,


Based on "HBase: The Definitive Guide" it seems zookeeper is started by hbase, so there is 
no need to start it separately (maybe this has changed for 0.90.4). Anyway, 
following is the updated status.

Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/start-hbase.sh
starting master, logging to 
/users/hadoop-user/hadoop-hbase/logs/hbase-hadoop-user-master-Jignesh-MacBookPro.local.out
Couldnt start ZK at requested address of 2181, instead got: 2182. Aborting. 
Why? Because clients (eg shell) wont be able to find this ZK quorum
Jignesh-MacBookPro:hadoop-hbase hadoop-user$ jps
41486 HQuorumPeer
38814 SecondaryNameNode
41578 Jps
38878 JobTracker
38726 DataNode
38639 NameNode
38964 TaskTracker

On Oct 13, 2011, at 3:23 PM, Ramya Sunil wrote:

 Jignesh,
 
 I dont see zookeeper running on your master. My cluster reads the following:
 
 $ jps
 15315 Jps
 13590 HMaster
 15235 HQuorumPeer
 
 Can you please shutdown your Hmaster and run the following first:
 $ hbase-daemon.sh start zookeeper
 
 And then start your hbasemaster and regionservers?
 
 Thanks
 Ramya
 
 On Thu, Oct 13, 2011 at 12:01 PM, Jignesh Patel jign...@websoft.com wrote:
 
 ok --config worked but it is showing me same error. How to resolve this.
 
 http://pastebin.com/UyRBA7vX
 
 On Oct 13, 2011, at 1:34 PM, Ramya Sunil wrote:
 
 Hi Jignesh,
 
 --config (i.e. - - config) is the option to use and not -config.
 Alternatively you can also set HBASE_CONF_DIR.
 
 Below is the exact command line:
 
 $ hbase --config /home/ramya/hbase/conf shell
 hbase(main):001:0 create 'newtable','family'
 0 row(s) in 0.5140 seconds
 
 hbase(main):002:0 list 'newtable'
 TABLE
 newtable
 1 row(s) in 0.0120 seconds
 
 OR
 
 $ export HBASE_CONF_DIR=/home/ramya/hbase/conf
 $ hbase shell
 
 hbase(main):001:0 list 'newtable'
 TABLE
 
 newtable
 
 1 row(s) in 0.3860 seconds
 
 
 Thanks
 Ramya
 
 
 On Thu, Oct 13, 2011 at 8:30 AM, jigneshmpatel jigneshmpa...@gmail.com
 wrote:
 
 There is no command like -config see below
 
 Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase -config ./config
 shell
 Unrecognized option: -config
 Could not create the Java virtual machine.
 
 --
 View this message in context:
 
 http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3418924.html
 Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
 
 
 



Re: Hbase with Hadoop

2011-10-13 Thread Ramya Sunil
You already have zookeeper running on 2181 according to your jps output.
That is the reason the master seems to be complaining.
Can you please stop zookeeper, verify that no daemons are running on 2181
and restart your master?
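
One way to check and clear the port (a sketch, assuming lsof is available and that ZooKeeper
was started through the hbase scripts):

$ lsof -i :2181                                           # see what is currently holding the ZooKeeper port
$ hbase-daemon.sh --config $HBASE_CONF_DIR stop zookeeper
$ jps                                                     # confirm HQuorumPeer is gone before restarting the master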

On Thu, Oct 13, 2011 at 12:37 PM, Jignesh Patel jign...@websoft.com wrote:

 Ramya,


 Based on Hbase the definite guide it seems zookeeper being started by
 hbase no need to start it separately(may be this is changed for 0.90.4.
 Anyways now  following is the updated status.

 Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/start-hbase.sh
 starting master, logging to
 /users/hadoop-user/hadoop-hbase/logs/hbase-hadoop-user-master-Jignesh-MacBookPro.local.out
 Couldnt start ZK at requested address of 2181, instead got: 2182. Aborting.
 Why? Because clients (eg shell) wont be able to find this ZK quorum
 Jignesh-MacBookPro:hadoop-hbase hadoop-user$ jps
 41486 HQuorumPeer
 38814 SecondaryNameNode
 41578 Jps
 38878 JobTracker
 38726 DataNode
 38639 NameNode
 38964 TaskTracker

 On Oct 13, 2011, at 3:23 PM, Ramya Sunil wrote:

  Jignesh,
 
  I dont see zookeeper running on your master. My cluster reads the
 following:
 
  $ jps
  15315 Jps
  13590 HMaster
  15235 HQuorumPeer
 
  Can you please shutdown your Hmaster and run the following first:
  $ hbase-daemon.sh start zookeeper
 
  And then start your hbasemaster and regionservers?
 
  Thanks
  Ramya
 
  On Thu, Oct 13, 2011 at 12:01 PM, Jignesh Patel jign...@websoft.com
 wrote:
 
  ok --config worked but it is showing me same error. How to resolve this.
 
  http://pastebin.com/UyRBA7vX
 
  On Oct 13, 2011, at 1:34 PM, Ramya Sunil wrote:
 
  Hi Jignesh,
 
  --config (i.e. - - config) is the option to use and not -config.
  Alternatively you can also set HBASE_CONF_DIR.
 
  Below is the exact command line:
 
  $ hbase --config /home/ramya/hbase/conf shell
  hbase(main):001:0 create 'newtable','family'
  0 row(s) in 0.5140 seconds
 
  hbase(main):002:0 list 'newtable'
  TABLE
  newtable
  1 row(s) in 0.0120 seconds
 
  OR
 
  $ export HBASE_CONF_DIR=/home/ramya/hbase/conf
  $ hbase shell
 
  hbase(main):001:0 list 'newtable'
  TABLE
 
  newtable
 
  1 row(s) in 0.3860 seconds
 
 
  Thanks
  Ramya
 
 
  On Thu, Oct 13, 2011 at 8:30 AM, jigneshmpatel 
 jigneshmpa...@gmail.com
  wrote:
 
  There is no command like -config see below
 
  Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase -config
 ./config
  shell
  Unrecognized option: -config
  Could not create the Java virtual machine.
 
  --
  View this message in context:
 
 
 http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3418924.html
  Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
 
 
 




Re: Hbase with Hadoop

2011-10-13 Thread Jignesh Patel
Ok now the problem is

if I only use bin/hbase-start.sh then it doesn't start zookeeper.

But if I use bin/hbase-daemon.sh start zookeeper before starting 
bin/hbase-start.sh then it will try to start zookeeper at port 2181 and then I 
get the following error.

Couldnt start ZK at requested address of 2181, instead got: 2182. Aborting. 
Why? Because clients (eg shell) wont be able to find this ZK quorum


So I am wondering: if bin/hbase-start.sh is trying to start zookeeper, then when 
zookeeper is not running it should start it. I only get the error if zookeeper is 
already running.


-Jignesh


On Oct 13, 2011, at 4:53 PM, Ramya Sunil wrote:

 You already have zookeeper running on 2181 according to your jps output.
 That is the reason, master seems to be complaining.
 Can you please stop zookeeper, verify that no daemons are running on 2181
 and restart your master?
 
 On Thu, Oct 13, 2011 at 12:37 PM, Jignesh Patel jign...@websoft.com wrote:
 
 Ramya,
 
 
 Based on Hbase the definite guide it seems zookeeper being started by
 hbase no need to start it separately(may be this is changed for 0.90.4.
 Anyways now  following is the updated status.
 
 Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/start-hbase.sh
 starting master, logging to
 /users/hadoop-user/hadoop-hbase/logs/hbase-hadoop-user-master-Jignesh-MacBookPro.local.out
 Couldnt start ZK at requested address of 2181, instead got: 2182. Aborting.
 Why? Because clients (eg shell) wont be able to find this ZK quorum
 Jignesh-MacBookPro:hadoop-hbase hadoop-user$ jps
 41486 HQuorumPeer
 38814 SecondaryNameNode
 41578 Jps
 38878 JobTracker
 38726 DataNode
 38639 NameNode
 38964 TaskTracker
 
 On Oct 13, 2011, at 3:23 PM, Ramya Sunil wrote:
 
 Jignesh,
 
 I dont see zookeeper running on your master. My cluster reads the
 following:
 
 $ jps
 15315 Jps
 13590 HMaster
 15235 HQuorumPeer
 
 Can you please shutdown your Hmaster and run the following first:
 $ hbase-daemon.sh start zookeeper
 
 And then start your hbasemaster and regionservers?
 
 Thanks
 Ramya
 
 On Thu, Oct 13, 2011 at 12:01 PM, Jignesh Patel jign...@websoft.com
 wrote:
 
 ok --config worked but it is showing me same error. How to resolve this.
 
 http://pastebin.com/UyRBA7vX
 
 On Oct 13, 2011, at 1:34 PM, Ramya Sunil wrote:
 
 Hi Jignesh,
 
 --config (i.e. - - config) is the option to use and not -config.
 Alternatively you can also set HBASE_CONF_DIR.
 
 Below is the exact command line:
 
 $ hbase --config /home/ramya/hbase/conf shell
 hbase(main):001:0 create 'newtable','family'
 0 row(s) in 0.5140 seconds
 
 hbase(main):002:0 list 'newtable'
 TABLE
 newtable
 1 row(s) in 0.0120 seconds
 
 OR
 
 $ export HBASE_CONF_DIR=/home/ramya/hbase/conf
 $ hbase shell
 
 hbase(main):001:0 list 'newtable'
 TABLE
 
 newtable
 
 1 row(s) in 0.3860 seconds
 
 
 Thanks
 Ramya
 
 
 On Thu, Oct 13, 2011 at 8:30 AM, jigneshmpatel 
 jigneshmpa...@gmail.com
 wrote:
 
 There is no command like -config see below
 
 Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase -config
 ./config
 shell
 Unrecognized option: -config
 Could not create the Java virtual machine.
 
 --
 View this message in context:
 
 
 http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3418924.html
 Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
 
 
 
 
 



Re: Hbase with Hadoop

2011-10-13 Thread Jignesh Patel
Is there a way to resolve this weird problem?

 bin/hbase-start.sh is supposed to start zookeeper but it doesn't start. But 
 on the other side if zookeeper up and running then it says 

 Couldnt start ZK at requested address of 2181, instead got: 2182. Aborting. 
 Why? Because clients (eg shell) wont be able to find this ZK quorum



On Oct 13, 2011, at 5:40 PM, Jignesh Patel wrote:

 Ok now the problem is
 
 if I only use bin/hbase-start.sh then it doesn't start zookeeper.
 
 But if I use bin/hbase-daemon.sh start zookeeper before starting 
 bin/hbase-start.sh then it will try to start zookeeper at port 2181 and then 
 I have following error.
 
 Couldnt start ZK at requested address of 2181, instead got: 2182. Aborting. 
 Why? Because clients (eg shell) wont be able to find this ZK quorum
 
 
 So I am wondering if bin/hbase-start.sh is trying to start zookeeper then 
 while zookeeper is not running it should start the zookeeper. I only get the 
 error if zookeeper already running.
 
 
 -Jignesh
 
 
 On Oct 13, 2011, at 4:53 PM, Ramya Sunil wrote:
 
 You already have zookeeper running on 2181 according to your jps output.
 That is the reason, master seems to be complaining.
 Can you please stop zookeeper, verify that no daemons are running on 2181
 and restart your master?
 
 On Thu, Oct 13, 2011 at 12:37 PM, Jignesh Patel jign...@websoft.com wrote:
 
 Ramya,
 
 
 Based on Hbase the definite guide it seems zookeeper being started by
 hbase no need to start it separately(may be this is changed for 0.90.4.
 Anyways now  following is the updated status.
 
 Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/start-hbase.sh
 starting master, logging to
 /users/hadoop-user/hadoop-hbase/logs/hbase-hadoop-user-master-Jignesh-MacBookPro.local.out
 Couldnt start ZK at requested address of 2181, instead got: 2182. Aborting.
 Why? Because clients (eg shell) wont be able to find this ZK quorum
 Jignesh-MacBookPro:hadoop-hbase hadoop-user$ jps
 41486 HQuorumPeer
 38814 SecondaryNameNode
 41578 Jps
 38878 JobTracker
 38726 DataNode
 38639 NameNode
 38964 TaskTracker
 
 On Oct 13, 2011, at 3:23 PM, Ramya Sunil wrote:
 
 Jignesh,
 
 I dont see zookeeper running on your master. My cluster reads the
 following:
 
 $ jps
 15315 Jps
 13590 HMaster
 15235 HQuorumPeer
 
 Can you please shutdown your Hmaster and run the following first:
 $ hbase-daemon.sh start zookeeper
 
 And then start your hbasemaster and regionservers?
 
 Thanks
 Ramya
 
 On Thu, Oct 13, 2011 at 12:01 PM, Jignesh Patel jign...@websoft.com
 wrote:
 
 ok --config worked but it is showing me same error. How to resolve this.
 
 http://pastebin.com/UyRBA7vX
 
 On Oct 13, 2011, at 1:34 PM, Ramya Sunil wrote:
 
 Hi Jignesh,
 
 --config (i.e. - - config) is the option to use and not -config.
 Alternatively you can also set HBASE_CONF_DIR.
 
 Below is the exact command line:
 
 $ hbase --config /home/ramya/hbase/conf shell
 hbase(main):001:0 create 'newtable','family'
 0 row(s) in 0.5140 seconds
 
 hbase(main):002:0 list 'newtable'
 TABLE
 newtable
 1 row(s) in 0.0120 seconds
 
 OR
 
 $ export HBASE_CONF_DIR=/home/ramya/hbase/conf
 $ hbase shell
 
 hbase(main):001:0 list 'newtable'
 TABLE
 
 newtable
 
 1 row(s) in 0.3860 seconds
 
 
 Thanks
 Ramya
 
 
 On Thu, Oct 13, 2011 at 8:30 AM, jigneshmpatel 
 jigneshmpa...@gmail.com
 wrote:
 
 There is no command like -config see below
 
 Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase -config
 ./config
 shell
 Unrecognized option: -config
 Could not create the Java virtual machine.
 
 --
 View this message in context:
 
 
 http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3418924.html
 Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
 
 
 
 
 
 



Re: Hbase with Hadoop

2011-10-12 Thread Vinod Gupta Tankala
For what it's worth, I was in a similar situation/dilemma a few days ago and
got frustrated figuring out what version combination of hadoop/hbase to use
and how to build hadoop manually to be compatible with hbase. The build
process didn't work for me either.
Eventually, I ended up using the Cloudera distribution and I think it saved me a
lot of headache and time.

thanks

On Tue, Oct 11, 2011 at 8:29 PM, jigneshmpatel jigneshmpa...@gmail.comwrote:

 Matt,
 Thanks a lot. Just wanted to have some more information. If hadoop
 0.20.205.0 is
 voted in by the community members then will it become a major release? And what
 if it is not approved by community members.

 And as you said, I would like to use 0.90.3 if it works. If it is OK, can you
 share the details of those configuration changes?

 -Jignesh

 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3414658.html
 Sent from the Hadoop lucene-users mailing list archive at Nabble.com.



Re: Hbase with Hadoop

2011-10-12 Thread Jignesh Patel
Sorry to hear that.
Is CDH3 open source or a paid version?

-jignesh
On Oct 12, 2011, at 11:58 AM, Vinod Gupta Tankala wrote:

 for what its worth, i was in a similar situation/dilemma few days ago and
 got frustrated figuring out what version combination of hadoop/hbase to use
 and how to build hadoop manually to be compatible with hbase. the build
 process didn't work for me either.
 eventually, i ended up using cloudera distribution and i think it saved me a
 lot of headache and time.
 
 thanks
 
 On Tue, Oct 11, 2011 at 8:29 PM, jigneshmpatel jigneshmpa...@gmail.comwrote:
 
 Matt,
 Thanks a lot. Just wanted to have some more information. If hadoop
 0.2.205.0
 voted by the community members then will it become major release? And what
 if it is not approved by community members.
 
 And as you said I do like to use 0.90.3 if it works. If it is ok, can you
 share the deails of those configuration changes?
 
 -Jignesh
 
 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3414658.html
 Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
 



Re: Hbase with Hadoop

2011-10-12 Thread Vinod Gupta Tankala
It's free and open source too. Basically, their releases are ahead of public
releases of hadoop/hbase - from what I understand, major bug fixes and
enhancements are checked in to their branch first and then eventually make
it to public release branches.

thanks

On Wed, Oct 12, 2011 at 9:26 AM, Jignesh Patel jign...@websoft.com wrote:

 Sorry to here that.
 Is CDH3 is a open source or a paid version?

 -jignesh
 On Oct 12, 2011, at 11:58 AM, Vinod Gupta Tankala wrote:

  for what its worth, i was in a similar situation/dilemma few days ago and
  got frustrated figuring out what version combination of hadoop/hbase to
 use
  and how to build hadoop manually to be compatible with hbase. the build
  process didn't work for me either.
  eventually, i ended up using cloudera distribution and i think it saved
 me a
  lot of headache and time.
 
  thanks
 
  On Tue, Oct 11, 2011 at 8:29 PM, jigneshmpatel jigneshmpa...@gmail.com
 wrote:
 
  Matt,
  Thanks a lot. Just wanted to have some more information. If hadoop
  0.2.205.0
  voted by the community members then will it become major release? And
 what
  if it is not approved by community members.
 
  And as you said I do like to use 0.90.3 if it works. If it is ok, can
 you
  share the deails of those configuration changes?
 
  -Jignesh
 
  --
  View this message in context:
 
 http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3414658.html
  Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
 




Re: Hbase with Hadoop

2011-10-12 Thread Jignesh Patel
I have installed Hadoop-0.20.205.0, but when I replace the hadoop 0.20.204.0 
eclipse plugin with the 0.20.205.0 one, eclipse is not recognizing it.

-Jignesh



Re: Hbase with Hadoop

2011-10-12 Thread Jignesh Patel
The new plugin works after deleting Eclipse and reinstalling it.



Re: Hbase with Hadoop

2011-10-12 Thread Jignesh Patel
When I tried to run HBase 0.90.4 with Hadoop 0.20.205.0, I got the following error:

Jignesh-MacBookPro:hadoop-hbase hadoop-user$ bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.90.4, r1150278, Sun Jul 24 15:53:29 PDT 2011

hbase(main):001:0> status

ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to 
connect to ZooKeeper but the connection closes immediately. This could be a 
sign that the server has too many connections (30 is the default). Consider 
inspecting your ZK server logs for that error and then make sure you are 
reusing HBaseConfiguration as often as you can. See HTable's javadoc for more 
information.


And when I tried to stop HBase, I continuously saw dots being printed and no
sign of it stopping. Not sure why it does not simply stop.

stopping
hbase............





Re: Hbase with Hadoop

2011-10-12 Thread Matt Foley
Hi Jignesh,
Not clear what's going on with your ZK, but as a starting point, the
hsync/flush feature in 205 was implemented with an on-off switch. Make sure
you've turned it on by setting dfs.support.append to true in the
hdfs-site.xml config file.
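
For reference, a minimal hdfs-site.xml fragment with that switch turned on might
look like the following sketch (only the dfs.support.append property comes from
this thread; the rest is ordinary Hadoop config boilerplate):

<configuration>
  <!-- Hypothetical sketch: enable the append/sync support HBase relies on,
       as described above; restart the HDFS daemons so the setting takes effect. -->
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
</configuration>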

Also, are you installing Hadoop with security turned on or off?

I'll gather some other config info that should help.
--Matt






Re: Hbase with Hadoop

2011-10-12 Thread Ramya Sunil
Hi Jignesh,

I have been running quite a few hbase tests on Hadoop 0.20.205 without any
issues on both secure and non secure clusters.

I have seen the error you mentioned when one has not specified the hbase
config directory.

Can you please try hbase --config <path to hbase config directory> shell
and check if that solves the problem?

Thanks
Ramya


 



Hbase with Hadoop

2011-10-11 Thread Jignesh Patel
Can I integrate HBase 0.90.4 with Hadoop 0.20.204.0?

-Jignesh


Re: Hbase with Hadoop

2011-10-11 Thread Matt Foley
Hi Jignesh,
0.20.204.0 does not have hflush/sync support, but 0.20.205.0 does.
Without HDFS hsync, HBase will still work, but is subject to data loss if
the datanode is restarted.  In 205, this deficiency is fixed.

0.20.205.0-rc2 is up for vote in common-dev@.  Please try it out with HBase
:-)
We've been using it with HBase 0.90.3 and it works, with some config
adjustments.

--Matt, RM for 0.20.205.0





Re: Hbase with Hadoop

2011-10-11 Thread jigneshmpatel
Matt,
Thanks a lot. Just wanted to have some more information. If hadoop 0.20.205.0
is voted in by the community members, will it then become a major release? And
what happens if it is not approved by the community?

And as you said, I would like to use 0.90.3 if it works. If it is ok, can you
share the details of those configuration changes?

-Jignesh

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Hbase-with-Hadoop-tp3413950p3414658.html
Sent from the Hadoop lucene-users mailing list archive at Nabble.com.


Re: Hbase with Hadoop

2011-10-11 Thread Konstantin Boudnik
Matt,

I'd like to reinforce the inquiry about posting (or blogging, perhaps ;) the details
about HBase/0.20.205 coexistence. I am sure a lot of people will benefit from
this.

Thanks in advance,
  Cos



Re: HBASE on Hadoop

2011-04-28 Thread Bennett Andrews
Setup a distributed HBase cluster.

On each node that runs TaskTracker and Datanode, also run a HBase
RegionServer.

On the node that runs JobTracker and Namenode, run the HBase Master.

There is a getting started guide here.
http://hbase.apache.org/book/notsoquick.html

Check out the HBase user list for more.
http://hbase.apache.org/mail-lists.html
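
As a rough sketch of that layout (the hostnames master-node, slave1, and slave2
are placeholders I am assuming here, not values from this thread), the relevant
pieces of configuration might look like:

hbase-site.xml:

<configuration>
  <!-- Hypothetical sketch: point HBase at HDFS on the Namenode/JobTracker host
       and run in fully distributed mode. -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master-node:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>

conf/regionservers (one line per TaskTracker/Datanode host, each of which also
runs a RegionServer):

slave1
slave2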



On Wed, Apr 27, 2011 at 4:58 PM, gaurav garg gaurav.gar...@gmail.com wrote:

 Praveenesh,

 I recommend you read the Google Bigtable paper
 (http://labs.google.com/papers/bigtable.html), which is the foundation for
 HBase.
 The terminology is a little different though:

 Mapping of terms (not exhaustive):
 ***
 Bigtable          HBase
 ***
 Master server     HMaster
 Tablet            region
 Tablet server     regionserver
 Chubby            ZooKeeper (the Apache implementation of a distributed
                   synchronization service)

 HBase stores its data on the Hadoop DFS; HBase is a client of HDFS. Hence Hadoop
 will automatically distribute and replicate the data across your Hadoop
 cluster.
 The HBase master and regionservers format/transform the data and rely on
 Hadoop for storage and retrieval.
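
 As an example of that automatic replication, the factor HDFS uses is controlled
 by the dfs.replication setting in hdfs-site.xml (a hypothetical fragment shown
 below; 3 is the usual default):

 <property>
   <!-- sketch: number of copies HDFS keeps of each block -->
   <name>dfs.replication</name>
   <value>3</value>
 </property>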


 An HBase cluster can run on a separate set of nodes, or it can even share the
 Hadoop nodes.

 Once you have set up the HDFS cluster, the HBase cluster can be set up easily.

 Thanks
 Gaurav





HBASE on Hadoop

2011-04-25 Thread praveenesh kumar
Hello everyone,

Thanks everyone for guiding me every time. I have been able to set up a Hadoop
cluster of 10 nodes.
Now comes HBase!

I am new to all this.
My problem is that I have huge data to analyze,
so should I go for a single-node HBase installation on every node, or for a
distributed HBase installation?

How is a distributed installation different from a single-node installation?
Now suppose I have distributed HBase,
and I design some table on my master node and then store data on it,
say around 100M. How is the data going to be distributed? Will HBase do it
automatically, or do we have to write code to get it distributed?
Is there any good tutorial that tells us more about HBase and how to work on
it?

Thanks,
Praveenesh


Help with Hbase and Hadoop on S3

2009-08-06 Thread Ananth T. Sarathy
I can't seem to get HBase to run using the Hadoop instance I have connected to my S3
bucket.

Running
Hbase 0.19.2
Hadoop  0.19.2

Hadoop-site.xml
<configuration>

<property>
  <name>fs.default.name</name>
  <value>s3://hbase</value>
</property>

<property>
  <name>fs.s3.awsAccessKeyId</name>
  <value>ID</value>
</property>

<property>
  <name>fs.s3.awsSecretAccessKey</name>
  <value>SECRET</value>
</property>
</configuration>

and it seems to start up no problem

my hbase-site.xml

<configuration>
  <property>
    <name>hbase.master</name>
    <value>174.129.15.236:6</value>
    <description>The host and port that the HBase master runs at.
    A value of 'local' runs the master and a regionserver in
    a single process.
    </description>
  </property>

  <property>
    <name>hbase.rootdir</name>
    <value>s3://hbase</value>
    <description>The directory shared by region servers.
    </description>
  </property>

</configuration>


keeps giving me

2009-08-06 17:20:44,526 ERROR org.apache.hadoop.hbase.master.HMaster: Can
not start master
java.lang.NoClassDefFoundError: org/jets3t/service/S3ServiceException
at
org.apache.hadoop.fs.s3.S3FileSystem.createDefaultStore(S3FileSystem.java:84)
at
org.apache.hadoop.fs.s3.S3FileSystem.initialize(S3FileSystem.java:74)
at
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1367)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:56)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1379)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:215)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:120)
at org.apache.hadoop.hbase.master.HMaster.init(HMaster.java:186)
at org.apache.hadoop.hbase.master.HMaster.init(HMaster.java:156)
at
org.apache.hadoop.hbase.LocalHBaseCluster.init(LocalHBaseCluster.java:96)
at
org.apache.hadoop.hbase.LocalHBaseCluster.init(LocalHBaseCluster.java:78)
at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1013)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1057)
Caused by: java.lang.ClassNotFoundException:
org.jets3t.service.S3ServiceException
at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)


What am I doing wrong here?

Ananth T Sarathy