Re: Expert suggestion needed to create table in Hbase - Banking

2012-11-26 Thread Ramasubramanian Narayanan
Hi,
Thanks! Can we use the customer number as the row key for the customer
(client) master table? Please help me understand the advantages and
disadvantages of using the customer number as the row key.

Also, we may need to implement SCD2 (slowly changing dimension, type 2) on
that table. Will that work with such a row key?

Or is SCD2 unnecessary, since we can achieve the same effect by increasing
the number of versions the table holds?

Please suggest...

regards,
Rams

On Mon, Nov 26, 2012 at 1:10 PM, Li, Min m...@microstrategy.com wrote:

 When one column family needs to split, the other 599 CFs will split at the
 same time, so using that many column families produces a lot of fragments.
 In practice, many CFs can be merged into a single CF by embedding a type
 tag in the row key. For example, the row key for a customer address can be
 uid+'AD', and for the customer profile uid+'PR'.

 Min
 -Original Message-
 From: Ramasubramanian Narayanan [mailto:
 ramasubramanian.naraya...@gmail.com]
 Sent: Monday, November 26, 2012 3:05 PM
 To: user@hbase.apache.org
 Subject: Expert suggestion needed to create table in Hbase - Banking

 Hi,

   I have a requirement to physicalise a logical model... I have a
 client model which has 600+ entities...

   I need suggestions on how to go about physicalising it...

   I have a few other doubts:
   1) Is it good to create a single table for all the 600+ columns?
   2) Should different groups have different column families, or can
 everything be under a single column family? For example, can customer
 address be a separate column family?

   Please help on this..


 regards,
 Rams
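Min's tagged-rowkey suggestion above can be sketched in plain Java (a sketch only: the helper name and example ids are hypothetical, and real code would go through the HBase `Put`/`Bytes` APIs rather than raw strings):

```java
// Sketch of the tagged-rowkey idea: one column family, with a record-type
// tag appended to the customer id so related records sort together.
public class TaggedRowKey {
    // Hypothetical helper: uid + two-letter record-type tag
    static String rowKey(String uid, String tag) {
        return uid + tag;
    }

    public static void main(String[] args) {
        System.out.println(rowKey("C100042", "AD")); // address record
        System.out.println(rowKey("C100042", "PR")); // profile record
    }
}
```

Because both keys share the `C100042` prefix, a prefix scan retrieves all record types for one customer in a single pass.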



Re: Expert suggestion needed to create table in Hbase - Banking

2012-11-26 Thread Mohammad Tariq
Hello sir,

You might become a victim of region server (RS) hotspotting, since the
customer IDs will be sequential (I assume). To keep things simple, HBase
puts all rows with similar keys on the same RS, but that becomes a
bottleneck in the long run, as all the data keeps going to the same region.

HTH

Regards,
Mohammad Tariq





Re: Can we insert into Hbase without specifying the column name?

2012-11-26 Thread yonghu
Hi Rams,

Yes, you can. See the following:

hbase(main):001:0> create 'test1','course'
0 row(s) in 1.6760 seconds

hbase(main):002:0> put 'test1','tom','course',90
0 row(s) in 0.1040 seconds

hbase(main):003:0> scan 'test1'
ROW   COLUMN+CELL
 tom  column=course:, timestamp=1353925674312, value=90
1 row(s) in 0.0440 seconds

regards!

Yong

On Mon, Nov 26, 2012 at 11:25 AM, Ramasubramanian Narayanan
ramasubramanian.naraya...@gmail.com wrote:
 Hi,

 Is it possible to insert into HBase without specifying the column name,
 using the column family name alone (assuming there will not be any
 column created under that column family)?

 regards,
 Rams


Re: Can we insert into Hbase without specifying the column name?

2012-11-26 Thread Mohammad Tariq
Just out of curiosity, why would you want to do that? What would you do if
you want to do a quick fetch, say, getting the 'username' from a table called
'users'? Moreover, we do not use the shell for any real-world use case, and
the API doesn't provide a Put.add() that can be used without the key, i.e.,
the qualifier name.

Please, pardon my ignorance.

Regards,
Mohammad Tariq






Re: setup of a standalone HBase on local filesystem

2012-11-26 Thread Mohammad Tariq
Have you changed the line 127.0.1.1 in your /etc/hosts file to
127.0.0.1?

Regards,
Mohammad Tariq



On Mon, Nov 26, 2012 at 3:57 PM, Alok Singh Mahor alokma...@gmail.com wrote:

 Hi all,
 I want to set up HBase in standalone mode on the local filesystem.
 Since I want to use the local filesystem, I guess there is no need to
 install Hadoop and ZooKeeper.
 I followed the instructions from
 http://hbase.apache.org/book/quickstart.html
 I didn't set hbase.rootdir in conf/hbase-site.xml, as it will use the
 default under /tmp.

 I am using Kubuntu 12.10 and
 JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk, as Java 1.6 is required.

 The HBase shell is running fine, but I am unable to create a table.
 I am getting this error:

 hbase(main):001:0> create 'test', 'cf'

 ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times

 Here is some help for this command:
 Create table; pass table name, a dictionary of specifications per
 column family, and optionally a dictionary of table configuration.
 Dictionaries are described below in the GENERAL NOTES section.
 Examples:

   hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
   hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
   hbase> # The above in shorthand would be the following:
   hbase> create 't1', 'f1', 'f2', 'f3'
   hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000,
 BLOCKCACHE => true}
   hbase> create 't1', 'f1', {SPLITS => ['10', '20', '30', '40']}
   hbase> create 't1', 'f1', {SPLITS_FILE => 'splits.txt'}
   hbase> # Optionally pre-split the table into NUMREGIONS, using
   hbase> # SPLITALGO (HexStringSplit, UniformSplit or classname)
   hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO =>
 'HexStringSplit'}


 hbase(main):002:0>

 Could someone tell me where I went wrong?



Re: setup of a standalone HBase on local filesystem

2012-11-26 Thread Alok Singh Mahor
content of my /etc/hosts is

127.0.0.1   localhost
127.0.1.1   alok

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Do I need to change anything in this?
And do I need to install Hadoop and ZooKeeper for a standalone HBase
installation?




-- 
Alok Singh Mahor
http://alokmahor.co.cc
Join the next generation of computing, Open Source and Linux/GNU!!


Re: setup of a standalone HBase on local filesystem

2012-11-26 Thread Mohammad Tariq
Change "127.0.1.1   alok" to "127.0.0.1   alok".

No, Hadoop and ZooKeeper are not required for a local HBase setup. But I would
recommend at least a pseudo-distributed setup in order to get yourself
properly familiar with HBase.
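For reference, a minimal pseudo-distributed hbase-site.xml looks roughly like this (a sketch only: the HDFS URL assumes a NameNode listening on localhost:9000, so adjust it to your own setup):

```xml
<configuration>
  <!-- Store HBase data in HDFS instead of the local filesystem -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <!-- Run master and region server as separate processes -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
```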

HTH
Regards,
Mohammad Tariq



On Mon, Nov 26, 2012 at 4:12 PM, Alok Singh Mahor alokma...@gmail.com wrote:

 127.0.1.1   alok


Re: setup of a standalone HBase on local filesystem

2012-11-26 Thread Mohammad Tariq
You are welcome Alok :)

Yes, you can set that value through hbase-site.xml file.

You can visit this link, if you need any help :
http://cloudfront.blogspot.in/2012/06/how-to-configure-habse-in-pseudo.html

I have outlined the whole process there.

HTH

Regards,
Mohammad Tariq



On Mon, Nov 26, 2012 at 4:24 PM, Alok Singh Mahor alokma...@gmail.com wrote:

 wow :)
 Thanks a lot, my HBase shell commands are working now :)
 I will try to set up pseudo-distributed mode.

 Please tell me one more thing: can I set any directory for hbase.rootdir
 in conf/hbase-site.xml?
 Currently I have not set anything.
 Thanks



Re: Expert suggestion needed to create table in Hbase - Banking

2012-11-26 Thread syed kather
Hello Sir,

To solve RS hotspotting you can also try the approach below:
http://blog.sematext.com/2012/04/09/hbasewd-avoid-regionserver-hotspotting-despite-writing-records-with-sequential-keys/
It works fine.

Regarding column families, you can also try to group similar columns
into one family, based on how you intend to process them.

Thanks and regards,
Syed Abdul Kather


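The HBaseWD approach linked above boils down to prefixing each sequential key with a bucket derived from its hash, so consecutive keys land on different regions. A minimal sketch in plain Java (the bucket count and key format are illustrative assumptions, not the HBaseWD API):

```java
// Sketch of a salted (bucketed) row key: a small numeric prefix spreads
// sequential ids across NUM_BUCKETS regions instead of one hot region.
public class SaltedKey {
    static final int NUM_BUCKETS = 16;

    // Hypothetical helper: "<bucket>-<original key>"
    static String saltedKey(String key) {
        // floorMod keeps the bucket non-negative even for negative hashCodes
        int bucket = Math.floorMod(key.hashCode(), NUM_BUCKETS);
        return String.format("%02d-%s", bucket, key);
    }

    public static void main(String[] args) {
        System.out.println(saltedKey("CUST0000001"));
        System.out.println(saltedKey("CUST0000002"));
    }
}
```

The trade-off is that a plain range scan over original ids now requires one scan per bucket, which is exactly what the HBaseWD library automates.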






Re: Expert suggestion needed to create table in Hbase - Banking

2012-11-26 Thread Michael Segel

Rams, 

I think you need to go back and think about why you want to use Hadoop and 
HBase in the first place. 
Second, you need to think about your data and how you are planning to use it. 

Beyond that, we can only give you somewhat generic answers.

1) You can create a table with 600 columns, however... it depends on what you
are trying to do. There are some limitations that you have to consider in your
design, although for the specific use case you stated they are not
applicable.

2) You can have models with different column families. However, again, it
depends on what you are trying to do.
That said, your example... customer address... is not a good example of
when to use a column family.
I was going to do a schema design course at a Hadoop conference next year, but
it got turned down because it was considered too 'basic'. Maybe I'll propose it
for the Hadoop conference in Amsterdam... sorry, I digressed.

Have you thought about using a schema on top of HBase? At a minimum Avro, or
possibly Wibidata's Kiji? (Not that I'm plugging Aaron's project. ;-)

I am also curious... this isn't the first time this question has come up on the 
lists... class project? 

HTH

-Mike






Re: Expert suggestion needed to create table in Hbase - Banking

2012-11-26 Thread Michael Segel
If the row key is just the customer ID, then a simple MD5 or SHA-1 hash
would suffice.
That would clear up any risk of hotspotting once you have done your initial
load of data.

And that's probably a key point... hotspotting when you're first loading a
very large table is really a moot point. It may be painful, but the pain lasts
for less than an hour.
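A minimal sketch of the hashing idea in plain Java (the helper name and customer ids are hypothetical). Note that hashing the id this way trades away the ability to scan customers in id order:

```java
import java.security.MessageDigest;

public class HashedRowKey {
    // Sketch: replace a sequential customer id with its MD5 digest (hex),
    // so consecutive ids no longer sort into the same region.
    static String md5Key(String customerId) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(customerId.getBytes("UTF-8"))) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(md5Key("CUST0000001"));
        System.out.println(md5Key("CUST0000002"));
    }
}
```

Since the hash is deterministic, point lookups still work: recompute the hash from the customer id and issue a Get against it.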




Re: Expert suggestion needed to create table in Hbase - Banking

2012-11-26 Thread Doug Meil

Hi there, somebody already wisely mentioned the link to the number-of-CFs
entry, but here are a few other entries that can save you some heartburn
if you read them ahead of time:

http://hbase.apache.org/book.html#datamodel

http://hbase.apache.org/book.html#schema

http://hbase.apache.org/book.html#architecture









recommended nodes

2012-11-26 Thread David Charle
hi

what's the recommended number of nodes for the NN, HMaster and ZK for a larger
cluster, let's say 50-100+ nodes?

also, what would be the ideal replication factor for larger clusters when you
have 3-4 racks?

--
David

Re: recommended nodes

2012-11-26 Thread Mohammad Tariq
Hello David,

 Do you mean the recommended specs? IMHO, it depends more on the data
and the kind of processing you are going to perform than on the size
of your cluster.

Regards,
Mohammad Tariq





Re: recommended nodes

2012-11-26 Thread Marcos Ortiz

Are you asking about hardware recommendations?
Eric Sammer, in his Hadoop Operations book, did a great job on this.
For mid-size clusters (up to 300 nodes):
Processor: dual quad-core 2.6 GHz
RAM: 24 GB DDR3
Dual 1 Gb Ethernet NICs
A SAS drive controller
At least two SATA II drives in a JBOD configuration

The replication factor depends heavily on the primary use of your cluster.
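For reference, the HDFS replication factor is a cluster-wide setting in hdfs-site.xml (shown here with the stock default of 3; a sketch only, to be tuned to your own durability and capacity needs):

```xml
<!-- hdfs-site.xml: default number of replicas per HDFS block -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```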


--

Marcos Luis Ortíz Valmaseda
about.me/marcosortiz http://about.me/marcosortiz
@marcosluis2186 http://twitter.com/marcosluis2186



10mo. ANIVERSARIO DE LA CREACION DE LA UNIVERSIDAD DE LAS CIENCIAS 
INFORMATICAS...
CONECTADOS AL FUTURO, CONECTADOS A LA REVOLUCION

http://www.uci.cu
http://www.facebook.com/universidad.uci
http://www.flickr.com/photos/universidad_uci

Re: recommended nodes

2012-11-26 Thread Michael Segel
Uhm, those specs are actually out of date now.

If you're running HBase, or want to also run R on top of Hadoop, you will need
to add more memory.
Also, forget 1 GbE; go 10 GbE, and with 2 SATA drives you will be disk-I/O-bound
way too quickly.





Re: Connecting to standalone HBase from a remote client

2012-11-26 Thread matan
Thanks, but hard-coding the master's IP in my client code doesn't work - I
also don't really understand why it has to be set in the client, as
according to the flow you describe, the client is getting all it needs to
know from zookeeper (?).

Doing some digging on the HBase server side, I found that
conf/regionservers has a single line containing the name 'localhost'. I
changed it to the IP of the server and restarted HBase. However, my HBase
client still thinks it should contact localhost after successfully
connecting to ZooKeeper.

My hbase-site.xml only contains what
http://hbase.apache.org/book.html#quickstart asked for, as seen right
below. Perhaps that's not enough?

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:/usr/local/hbase/hbase-0.94.2/data/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/hbase/hbase-0.94.2/data/zookeeper</value>
  </property>
</configuration>

Kind of hoping there's a straightforward way to configure a solution.
It must be something that's always configured when clustering; otherwise
the same problems would arise in a clustered environment. Yet in my case
I'm still running a standalone instance...


On Sun, Nov 25, 2012 at 10:48 PM, Tariq [via Apache HBase] 
ml-node+s679495n4034365...@n3.nabble.com wrote:

 Also, add the IP and hostname of the machine running Hbase in your
 /etc/hosts file.

 Regards,
 Mohammad Tariq



 On Mon, Nov 26, 2012 at 2:15 AM, Mohammad Tariq [hidden 
 email]http://user/SendEmail.jtp?type=nodenode=4034365i=0
 wrote:

  Sent from handheld, don't mind typos. :)
 
  Regards,
  Mohammad Tariq
 
 
 
  On Mon, Nov 26, 2012 at 2:14 AM, Mohammad Tariq [hidden 
  email]http://user/SendEmail.jtp?type=nodenode=4034365i=1wrote:

 
  Hello Matan,

  The client first contacts ZooKeeper to get the region that holds
  the ROOT table. From ROOT, the client gets the server that holds META,
  and from there it gets the info about the server which actually holds
  the key of the table of interest. Your client seems to get the wrong
  info. Please add these properties in your client code and see if it
  works:

  hbaseConfiguration.set("hbase.zookeeper.quorum", "192.168.2.121");
  hbaseConfiguration.set("hbase.zookeeper.property.clientPort", "2181");
  hbaseConfiguration.set("hbase.master", "192.168.2.121:60000");
 
  Change the ports and addresses as per your config.
 
  HTH
 
  Regards,
  Mohammad Tariq
 
 
 
  On Mon, Nov 26, 2012 at 2:07 AM, matan [hidden 
  email]http://user/SendEmail.jtp?type=nodenode=4034365i=2
 wrote:
 
  Hi,
 
  With gracious help on this forum (from ramkrishna vasudevan) I've
  managed to
  setup HBase 0.94.2 in standalone mode on Ubuntu, and proceeded to
  writing a
  small client. I am trying to run the client from a remote server, not
 the
  one where HBase is running on. It seems pretty obvious looking at both
  server and client side logs, that my client successfully connects to
  zookeeper, but then tries to perform the actual interaction against
 the
  wrong network address. It looks like it is wrongfully trying to
 address
  localhost on the HBase client side rather than addressing the server
  where
  HBase is installed.
 
  In terms of flow, I guess that zookeeper provides my client with how
 to
  interact with HBase, and that it informs my client to that end that
 the
  name
  of the server to contact is 'localhost'. I can guess this may be
 changed,
  presumably by configuring HBase on the server side. Assuming that the
  correct flow should be that my client would get informed of the real
  name of
  the HBase server, by zookeeper. However I failed managing to configure
  just
  that. I tried the hbase.master property, but it had no effect.
 
  local HBase shell works just fine. The logs which led me to this
 analysis
  follow, perhaps you will agree with my analysis. How should I change
 my
  configuration to solve this? (making my client able to communicate
 with
  HBase after making the zookeeper connection...).
 
  *Server side log:*
  2012-11-25 22:25:14,856 INFO
  org.apache.hadoop.hbase.master.AssignmentManager: The master has
 opened
  the
  region test4,,1353836779589.bb29c037092c5d69c9efc8f13c2b2563. that was
  online on localhost,58063,1353875103994
  2012-11-25 22:26:05,670 INFO
  org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket
  connection
  from /my-client-ip:49447
  2012-11-25 22:26:05,672 INFO
 org.apache.zookeeper.server.ZooKeeperServer:
  Client attempting to establish new session at /my-client-ip:49447
  2012-11-25 22:26:05,720 INFO
  org.apache.zookeeper.server.ZooKeeperServer:*
  Established session 0x13b393e9d1d0004 with negotiated timeout 4
 for
  client /my-client-ip:49447*
  2012-11-25 22:27:05,354 WARN
 org.apache.zookeeper.server.NIOServerCnxn:
  Exception causing close of session 0x13b393e9d1d0004 due to
  java.io.IOException: Connection reset by peer
  2012-11-25 22:27:05,355 INFO
 

Re: Connecting to standalone HBase from a remote client

2012-11-26 Thread Nicolas Liochon
Yes, it's not useful to set the master address in the client. I suppose it
was different a long time ago, hence the traces in various pieces of
documentation.
The master registers itself in ZooKeeper. So if the master finds itself to
be "localhost", ZooKeeper will contain "localhost", and clients on
another computer won't be able to connect. The issue lies on the master
host, not the client.
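Concretely, the usual fix is on the master host itself: make sure its hostname resolves to a routable address rather than a loopback one, e.g. in /etc/hosts (the hostname and IP below are placeholders, not taken from this thread):

```
# /etc/hosts on the HBase master host
192.168.2.121   hbase-master    # routable address, not 127.0.1.1
```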

On Mon, Nov 26, 2012 at 4:06 PM, matan ma...@cloudaloe.org wrote:

  also don't really understand why it has to be set in the client, as
 according to the flow you describe, the client is getting all it needs to
 know from zookeeper (?).



Re: setup of a standalone HBase on local filesystem

2012-11-26 Thread Alok Singh Mahor
thank you :)




-- 
Alok Singh Mahor
http://alokmahor.co.cc
Join the next generation of computing, Open Source and Linux/GNU!!
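
On the hbase.rootdir question above: a minimal standalone hbase-site.xml might
look like the fragment below (the local path is only an example). Note that if it
is left unset, standalone HBase defaults to a directory under /tmp, which many
systems clear on reboot.

```xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/alok/hbase-data</value>
  </property>
</configuration>
```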


Re: setup of a standalone HBase on local filesystem

2012-11-26 Thread ramkrishna vasudevan
Setting up HBase on the local filesystem is a frequently asked question on the mailing list :).


Regards
Ram

On Mon, Nov 26, 2012 at 10:01 PM, Alok Singh Mahor alokma...@gmail.comwrote:

 thank you :)

 On Mon, Nov 26, 2012 at 4:28 PM, Mohammad Tariq donta...@gmail.com
 wrote:

  You are welcome Alok :)
 
  Yes, you can set that value through hbase-site.xml file.
 
  You can visit this link, if you need any help :
 
 http://cloudfront.blogspot.in/2012/06/how-to-configure-habse-in-pseudo.html
 
  I have outlined the whole process there.
 
  HTH
 
  Regards,
  Mohammad Tariq
 
 
 
  On Mon, Nov 26, 2012 at 4:24 PM, Alok Singh Mahor alokma...@gmail.com
  wrote:
 
   wow :)
   thanks a lot , my hbase shell commands are working now :)
   I will try to setup pseudo-distributed mode
  
   please tell me one more thing ..can I set any directory for
 hbase.rootdir
   in conf/hbase-site.xml
   currently I have not set anything
   thanks
  
   On Mon, Nov 26, 2012 at 4:16 PM, Mohammad Tariq donta...@gmail.com
   wrote:
  
Change 127.0.1.1   alok to 127.0.0.1   alok.
   
No, Hadoop and ZK are not required for local Hbase setup. But, I
 would
recommend at least a pseudo-distributed setup in order to get
 yourself
 familiar with HBase properly.
   
HTH
Regards,
Mohammad Tariq
   
   
   
On Mon, Nov 26, 2012 at 4:12 PM, Alok Singh Mahor 
 alokma...@gmail.com
wrote:
   
 127.0.1.1   alok
   
  
  
  
   --
   Alok Singh Mahor
   http://alokmahor.co.cc
   Join the next generation of computing, Open Source and Linux/GNU!!
  
 



 --
 Alok Singh Mahor
 http://alokmahor.co.cc
 Join the next generation of computing, Open Source and Linux/GNU!!



Re: setup of a standalone HBase on local filesystem

2012-11-26 Thread Michael Segel
I wouldn't do that unless you're running in a VM. Also, don't lose the
localhost reference. That's the important one.

On Nov 26, 2012, at 4:46 AM, Mohammad Tariq donta...@gmail.com wrote:

 Change 127.0.1.1   alok to 127.0.0.1   alok.
 
 No, Hadoop and ZK are not required for local Hbase setup. But, I would
 recommend at least a pseudo-distributed setup in order to get yourself
  familiar with HBase properly.
 
 HTH
 Regards,
Mohammad Tariq
 
 
 
 On Mon, Nov 26, 2012 at 4:12 PM, Alok Singh Mahor alokma...@gmail.comwrote:
 
 127.0.1.1   alok



Re: Unable to Create Table in Hbase

2012-11-26 Thread Stack
What happens if you put up a shell on your hbase instance and do the
same thing?  Does it succeed?
St.Ack

On Sun, Nov 25, 2012 at 11:45 PM, shyam kumar lakshyam.sh...@gmail.com wrote:
 HI

 I am unable to create a Table in hbase dynamically
 am using the following code

 if (!TABLE_EXISTS) {
 try{
 admin.createTable(htable);
 }catch(Exception e){
 e.printStackTrace();
 }
 TABLE_EXISTS = true;
 }

 But the process is not terminating and the table is not created in the
 hbase.
 No exception is thrown, but the process is not terminating.
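
A hedged sketch of an alternative shape for this code (0.94-era API; the table
name is a placeholder): a client that hangs after a successful createTable() is
often just waiting on non-daemon HBase/ZooKeeper connection threads, so close
them explicitly before exiting.

```java
Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
try {
    // Ask HBase itself whether the table exists, instead of keeping a flag
    if (!admin.tableExists("mytable")) {               // "mytable" is a placeholder
        admin.createTable(new HTableDescriptor("mytable"));
    }
} finally {
    admin.close();                                     // release the admin's connection
    HConnectionManager.deleteAllConnections(true);     // stop cached connection threads (0.94)
}
```

This needs org.apache.hadoop.hbase.client.HConnectionManager in addition to the
imports used elsewhere in this thread; treat it as a sketch, not a verified fix.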




 --
 View this message in context: 
 http://apache-hbase.679495.n3.nabble.com/Unable-to-Create-Table-in-Hbase-tp4034375.html
 Sent from the HBase User mailing list archive at Nabble.com.


Re: Connecting to standalone HBase from a remote client

2012-11-26 Thread Stack
On Mon, Nov 26, 2012 at 7:28 AM, Nicolas Liochon nkey...@gmail.com wrote:
 Yes, it's not useful to set the master address in the client. I suppose it
 was different a long time ago, hence there are some traces on different
 documentation.
 The master references itself in ZooKeeper. So if the master finds itself to
 be "localhost", ZooKeeper will contain "localhost", and the clients on
 another computer won't be able to connect. The issue lies on the master
 host, not the client.


Sounds like something to fix.  If distributed, write other than localhost to zk?
St.Ack


Re: Connecting to standalone HBase from a remote client

2012-11-26 Thread Mohammad Tariq
Hello Matan,

 Did it work?If not, add these properties in your hbase-site.xml file
and see if it works for you.

 <property>
   <name>hbase.zookeeper.quorum</name>
   <value>ZH-HOST_MACHINE</value>
 </property>
 <property>
   <name>hbase.zookeeper.property.clientPort</name>
   <value>2181</value>
 </property>
 <property>
   <name>hbase.zookeeper.property.dataDir</name>
   <value>path_to_your_datadir</value>
 </property>

HTH

Regards,
Mohammad Tariq



On Mon, Nov 26, 2012 at 8:58 PM, Nicolas Liochon nkey...@gmail.com wrote:

 Yes, it's not useful to set the master address in the client. I suppose it
 was different a long time ago, hence there are some traces on different
 documentation.
 The master references itself in ZooKeeper. So if the master finds itself to
 be "localhost", ZooKeeper will contain "localhost", and the clients on
 another computer won't be able to connect. The issue lies on the master
 host, not the client.

 On Mon, Nov 26, 2012 at 4:06 PM, matan ma...@cloudaloe.org wrote:

   also don't really understand why it has to be set in the client, as
  according to the flow you describe, the client is getting all it needs to
  know from zookeeper (?).
 



Re: standalone HBase instance fails to start

2012-11-26 Thread Stack
On Sun, Nov 25, 2012 at 8:28 AM, matan ma...@cloudaloe.org wrote:
 Nothing. Maybe just link to it from
 http://hbase.apache.org/book/quickstart.html such that people for whom the
 quick start doesn't work, will have a direct route to this and other
 prerequisites.


I just added note on loopback to the getting started:
http://hbase.apache.org/book.html#quickstart

I don't want to clutter the getting started w/ a long list of prereqs
that actually are not needed putting up hbase in standalone mode; e.g.
you don't need to make sure ssh to localhost is working when doing
standalone.

Thanks.  Any other suggestions on how to improve the doc. are most welcome.
St.Ack


Re: HBase manager GUI

2012-11-26 Thread Harsh J
What are your exact 'manager GUI' needs though? I mean, what are you
envisioning it will help you perform (over the functionality already
offered by the HBase Web UI)?

On Mon, Nov 26, 2012 at 9:59 PM, Alok Singh Mahor alokma...@gmail.com wrote:
 Hi all,
 I have set up standalone Hbase on my laptop. HBase shell is working fine.
 and I am not using hadoop and zookeeper
 I found one frontend for HBase
 https://sourceforge.net/projects/hbasemanagergui/
 but i am not able to use this

 to set up connection i have to give information
 hbase.zookeeper.quorum:
 hbase.zookeeper.property.clientport:
 hbase.master:

  What values do I have to set in these fields, given that I am not using zookeeper?
 did anyone try this GUI?
 thanks in advance :)


 --
 Alok Singh Mahor
 http://alokmahor.co.cc
 Join the next generation of computing, Open Source and Linux/GNU!!



-- 
Harsh J


Re: Connecting to standalone HBase from a remote client

2012-11-26 Thread Nicolas Liochon
Hi Mohammad,

Your answer was right, just that specifying the master address is not
necessary (anymore I think). But it does no harm.
Changing the /etc/hosts (as you did) is right too.
Lastly, if the cluster is standalone and accessed locally, having localhost
in ZK will not be an issue. However, it's perfectly possible to have a
standalone cluster accessed remotely, so you don't want to have the master
to write "I'm on the server named localhost" in this case. I expect it
won't be an issue for communications between the region servers or hdfs as
they would be all on the same localhost...

Cheers,

Nicolas

On Mon, Nov 26, 2012 at 7:16 PM, Mohammad Tariq donta...@gmail.com wrote:

 what


Re: HBase manager GUI

2012-11-26 Thread Alok Singh Mahor
I need a frontend for the HBase shell, like we have phpMyAdmin for MySQL.

I tried 127.0.0.1:60010 and 127.0.0.1:60030; these just give
information about the master node and the region server respectively. So I tried
to use hbasemanagergui but I am unable to connect to it.

Does the HBase web UI have a feature for using it as an hbase shell GUI alternative?
If yes, how do I run that?

On Tue, Nov 27, 2012 at 12:16 AM, Harsh J ha...@cloudera.com wrote:

 What are your exact 'manager GUI' needs though? I mean, what are you
 envisioning it will help you perform (over the functionality already
 offered by the HBase Web UI)?

 On Mon, Nov 26, 2012 at 9:59 PM, Alok Singh Mahor alokma...@gmail.com
 wrote:
  Hi all,
  I have set up standalone Hbase on my laptop. HBase shell is working fine.
  and I am not using hadoop and zookeeper
  I found one frontend for HBase
  https://sourceforge.net/projects/hbasemanagergui/
  but i am not able to use this
 
  to set up connection i have to give information
  hbase.zookeeper.quorum:
  hbase.zookeeper.property.clientport:
  hbase.master:
 
   What values do I have to set in these fields, given that I am not using zookeeper?
  did anyone try this GUI?
  thanks in advance :)
 
 
  --
  Alok Singh Mahor
  http://alokmahor.co.cc
  Join the next generation of computing, Open Source and Linux/GNU!!



 --
 Harsh J




-- 
Alok Singh Mahor
http://alokmahor.co.cc
Join the next generation of computing, Open Source and Linux/GNU!!


Runs in Eclipse but not from jar

2012-11-26 Thread Ratner, Alan S (IS)
I am running HBase 0.94.2 on 6 servers, with Zookeeper 3.4.5 running on
3.  HBase works from its shell and from within Eclipse, but not as a jar file.
When I run within Eclipse I can see it worked properly by using the HBase shell 
commands (such as scan).

I seem to have 2 separate problems.

Problem 1: when I create a jar file from Eclipse it won't run at all:
ngc@hadoop1:~/hadoop-1.0.4$ bin/hadoop jar ../eclipse/CreateBiTable.jar 
HBase/CreateBiTable -classpath /home/ngc/hbase-0.94.2/*
Exception in thread main java.lang.NoClassDefFoundError: 
org/apache/hadoop/hbase/HBaseConfiguration at 
HBase.CreateBiTable.run(CreateBiTable.java:26) [line 26 is: Configuration conf 
= HBaseConfiguration.create();]

Problem 2: when I create a runnable jar file from Eclipse it communicates 
with Zookeeper but then dies with:
Exception in thread main java.lang.IllegalArgumentException: Not a host:port 
pair: \ufffd
  
5...@hadoop1hadoop1.aj.c2fse.northgrum.com,6,1353949574468

I'd prefer to use a regular jar (5 KB) rather than a runnable jar (100 MB).  But I 
assume that if I fix Problem 1 then it will proceed until it crashes with 
Problem 2.

Thanks in advance for any suggestions --- Alan.

-
CLASSPATH
ngc@hadoop1:~/hadoop-1.0.4$ env | grep CLASSPATH
CLASSPATH=/home/ngc/hadoop-1.0.4:/home/ngc/hbase-0.94.2/bin:/home/ngc/zookeeper-3.4.5/bin:/home/ngc/accumulo-1.3.5-incubating

-
HBASE PROGRAM
package HBase;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class CreateBiTable extends Configured implements Tool {
    public static String TableName = new String("BiIPTable");
    public static String cf = "cf";  // column family
    public static String c1 = "c1";  // column1

    public static void main(String[] args) throws Exception {
        long startTime = System.currentTimeMillis();
        int res = ToolRunner.run(new Configuration(), new CreateBiTable(), args);
        double duration = (System.currentTimeMillis() - startTime) / 1000.0;
        System.out.println(" Job Finished in " + duration + " seconds");
        System.exit(res);
    }

    public int run(String[] arg0) throws Exception {
        Configuration conf = HBaseConfiguration.create();
//      System.out.println("Configuration created");
        System.out.println("\t" + conf.toString());
        HBaseAdmin admin = new HBaseAdmin(conf);
//      System.out.println("\t" + admin.toString());
        if (admin.tableExists(TableName)) {
            // Disable and delete the table if it exists
            admin.disableTable(TableName);
            admin.deleteTable(TableName);
            System.out.println(TableName + " exists so deleted");
        }
        // Create table
        HTableDescriptor htd = new HTableDescriptor(TableName);
        HColumnDescriptor hcd = new HColumnDescriptor(cf);
        htd.addFamily(hcd);
        admin.createTable(htd);
        System.out.println("Table created: " + htd);
        // Does the table exist now?
        if (admin.tableExists(TableName))
            System.out.println(TableName + " creation succeeded");
        else
            System.out.println(TableName + " creation failed");
        return 0;
    }
}

-
OUTPUT FROM RUNNING WITHIN ECLIPSE
Configuration: core-default.xml, core-site.xml, hbase-default.xml, 
hbase-site.xml
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/home/ngc/mahout-distribution-0.7/mahout-examples-0.7-job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/home/ngc/hadoop-1.0.4/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
12/11/26 13:48:54 INFO zookeeper.ZooKeeper: Client 
environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
12/11/26 13:48:54 INFO zookeeper.ZooKeeper: Client 
environment:host.name=hadoop1.aj.c2fse.northgrum.com
12/11/26 13:48:54 INFO zookeeper.ZooKeeper: Client 
environment:java.version=1.6.0_25
12/11/26 13:48:54 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun 
Microsystems Inc.
12/11/26 13:48:54 INFO zookeeper.ZooKeeper: Client 
environment:java.home=/home/ngc/jdk1.6.0_25/jre
12/11/26 13:48:54 INFO zookeeper.ZooKeeper: Client 


Re: Connecting to standalone HBase from a remote client

2012-11-26 Thread Mohammad Tariq
Hello Nicolas,

  You are right. It has been deprecated. Thank you for updating my
knowledge base..:)

Regards,
Mohammad Tariq



On Tue, Nov 27, 2012 at 12:17 AM, Nicolas Liochon nkey...@gmail.com wrote:

 Hi Mohammad,

 Your answer was right, just that specifying the master address is not
 necessary (anymore I think). But it does no harm.
 Changing the /etc/hosts (as you did) is right too.
 Lastly, if the cluster is standalone and accessed locally, having localhost
 in ZK will not be an issue. However, it's perfectly possible to have a
 standalone cluster accessed remotely, so you don't want to have the master
 to write I'm on the server named localhost in this case. I expect it
 won't be an issue for communications between the region servers or hdfs as
 they would be all on the same localhost...

 Cheers,

 Nicolas

 On Mon, Nov 26, 2012 at 7:16 PM, Mohammad Tariq donta...@gmail.com
 wrote:

  what



Re: HBase manager GUI

2012-11-26 Thread Mohammad Tariq
Hello Alok,

I have seen this project. Good work. But let me tell you one thing: the
way HBase is used is slightly different from the way you use traditional
relational databases. People who work on real clusters rarely face
a situation wherein they need to query HBase directly, though it can be
done for a few minor tasks like small gets, scans, puts, etc. For
those, the HBase shell is more than sufficient.

People either use HBase API features like filters or co-processors, or write
MapReduce jobs to query their HBase tables, or map their tables to Hive
warehouse tables. Having said that, I would suggest you get yourself
familiar with the HBase API rather than relying on anything else if you are
planning to adopt HBase as your primary datastore.

The web interface provided by HBase is mainly for visualization and
monitoring, not for performing various table operations. But that
doesn't mean it is useless; the HBase folks have done really great
work, and you can even perform some operations from the web UI.
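
For those small gets, scans and puts, a typical shell session looks like this
(table and column names invented for illustration):

```
hbase(main):001:0> create 'customer', 'cf'
hbase(main):002:0> put 'customer', 'row1', 'cf:c1', 'value1'
hbase(main):003:0> get 'customer', 'row1'
hbase(main):004:0> scan 'customer'
```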

HTH

Regards,
Mohammad Tariq



On Tue, Nov 27, 2012 at 12:55 AM, Alok Singh Mahor alokma...@gmail.comwrote:

 I need a frontend for the HBase shell, like we have phpMyAdmin for MySQL.

 I tried 127.0.0.1:60010 and 127.0.0.1:60030; these just give
 information about the master node and the region server respectively. So I tried
 to use hbasemanagergui but I am unable to connect to it.

 does HBase web UI have feature of using it as hbase shell GUI alternative ?
 if yes how to run that?

 On Tue, Nov 27, 2012 at 12:16 AM, Harsh J ha...@cloudera.com wrote:

  What are your exact 'manager GUI' needs though? I mean, what are you
  envisioning it will help you perform (over the functionality already
  offered by the HBase Web UI)?
 
  On Mon, Nov 26, 2012 at 9:59 PM, Alok Singh Mahor alokma...@gmail.com
  wrote:
   Hi all,
   I have set up standalone Hbase on my laptop. HBase shell is working
 fine.
   and I am not using hadoop and zookeeper
   I found one frontend for HBase
   https://sourceforge.net/projects/hbasemanagergui/
   but i am not able to use this
  
   to set up connection i have to give information
   hbase.zookeeper.quorum:
   hbase.zookeeper.property.clientport:
   hbase.master:
  
    What values do I have to set in these fields, given that I am not using zookeeper?
   did anyone try this GUI?
   thanks in advance :)
  
  
   --
   Alok Singh Mahor
   http://alokmahor.co.cc
   Join the next generation of computing, Open Source and Linux/GNU!!
 
 
 
  --
  Harsh J
 



 --
 Alok Singh Mahor
 http://alokmahor.co.cc
 Join the next generation of computing, Open Source and Linux/GNU!!



Runs in Eclipse but not as a Jar

2012-11-26 Thread Ratner, Alan S (IS)
I am running HBase 0.94.2 on 6 servers, with Zookeeper 3.4.5 running on
3.  HBase works from its shell and from within Eclipse, but not as a jar file.
When I run within Eclipse I can see it worked properly by using the HBase shell 
commands (such as scan).



I seem to have 2 separate problems.



Problem 1: when I create a jar file from Eclipse it won't run at all:

ngc@hadoop1:~/hadoop-1.0.4$ bin/hadoop jar 
../eclipse/CreateBiTable.jar HBase/CreateBiTable -classpath 
/home/ngc/hbase-0.94.2/*

Exception in thread main java.lang.NoClassDefFoundError: 
org/apache/hadoop/hbase/HBaseConfiguration at 
HBase.CreateBiTable.run(CreateBiTable.java:26) [line 26 is: Configuration conf 
= HBaseConfiguration.create();]



Problem 2: when I create a runnable jar file from Eclipse it communicates 
with Zookeeper but then dies with:

Exception in thread main java.lang.IllegalArgumentException: Not a host:port 
pair: \ufffd

  
5...@hadoop1hadoop1.aj.c2fse.northgrum.com,6,1353949574468



I'd prefer to use a regular jar (5 KB) rather than a runnable jar (100 MB).  
But I assume that if I fix Problem 1 then it will proceed until it crashes with 
Problem 2.



Thanks in advance for any suggestions --- Alan.



-

CLASSPATH

ngc@hadoop1:~/hadoop-1.0.4$ env | grep 
CLASSPATH 
CLASSPATH=/home/ngc/hadoop-1.0.4:/home/ngc/hbase-0.94.2/bin:/home/ngc/zookeeper-3.4.5/bin:/home/ngc/accumulo-1.3.5-incubating



-

HBASE PROGRAM

package HBase;



import org.apache.hadoop.conf.Configuration;

import org.apache.hadoop.conf.Configured;

import org.apache.hadoop.hbase.HBaseConfiguration;

import org.apache.hadoop.hbase.HColumnDescriptor;

import org.apache.hadoop.hbase.HTableDescriptor;

import org.apache.hadoop.hbase.client.HBaseAdmin;

import org.apache.hadoop.util.Tool;

import org.apache.hadoop.util.ToolRunner;



public class CreateBiTable extends Configured implements Tool {
    public static String TableName = new String("BiIPTable");
    public static String cf = "cf";  // column family
    public static String c1 = "c1";  // column1

    public static void main(String[] args) throws Exception {
        long startTime = System.currentTimeMillis();
        int res = ToolRunner.run(new Configuration(), new CreateBiTable(), args);
        double duration = (System.currentTimeMillis() - startTime) / 1000.0;
        System.out.println(" Job Finished in " + duration + " seconds");
        System.exit(res);
    }

    public int run(String[] arg0) throws Exception {
        Configuration conf = HBaseConfiguration.create();
//      System.out.println("Configuration created");
        System.out.println("\t" + conf.toString());
        HBaseAdmin admin = new HBaseAdmin(conf);
//      System.out.println("\t" + admin.toString());
        if (admin.tableExists(TableName)) {
            // Disable and delete the table if it exists
            admin.disableTable(TableName);
            admin.deleteTable(TableName);
            System.out.println(TableName + " exists so deleted");
        }
        // Create table
        HTableDescriptor htd = new HTableDescriptor(TableName);
        HColumnDescriptor hcd = new HColumnDescriptor(cf);
        htd.addFamily(hcd);
        admin.createTable(htd);
        System.out.println("Table created: " + htd);
        // Does the table exist now?
        if (admin.tableExists(TableName))
            System.out.println(TableName + " creation succeeded");
        else
            System.out.println(TableName + " creation failed");
        return 0;
    }
}



-

OUTPUT FROM RUNNING WITHIN ECLIPSE

Configuration: core-default.xml, core-site.xml, hbase-default.xml, 
hbase-site.xml

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in 
[jar:file:/home/ngc/mahout-distribution-0.7/mahout-examples-0.7-job.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in 
[jar:file:/home/ngc/hadoop-1.0.4/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

12/11/26 13:48:54 INFO zookeeper.ZooKeeper: Client 
environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT

12/11/26 13:48:54 INFO zookeeper.ZooKeeper: Client 
environment:host.name=hadoop1.aj.c2fse.northgrum.com

12/11/26 13:48:54 INFO zookeeper.ZooKeeper: Client 
environment:java.version=1.6.0_25

12/11/26 13:48:54 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun 
Microsystems Inc.

12/11/26 13:48:54 INFO zookeeper.ZooKeeper: Client 
environment:java.home=/home/ngc/jdk1.6.0_25/jre

12/11/26 13:48:54 INFO zookeeper.ZooKeeper: Client 

Re: Configuration setup

2012-11-26 Thread Stack
On Mon, Nov 26, 2012 at 2:16 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
 I have a need to move hbase-site.xml to an external location. So in order to
 do that I changed my configuration as shown below. But this doesn't seem to
 be working. It picks up the file but I get an error; it seems like it's going to
 the localhost. I checked hbase-site.xml in the directory and the zookeeper
 nodes are correctly listed.


 [11/26/2012 14:09:31,480] INFO apache.zookeeper.ClientCnxn
 [[web-analytics-ci-1.0.0-SNAPSHOT].AsyncFlow.async2.02-SendThread(localhost.localdomain:2181)]():
 Opening socket connection to server localhost.localdomain/127.0.0.1:2181

 -

 changed from

 HBaseConfiguration.create()

 to


 config = new Configuration();

 config.addResource(new Path(CONF_FILE_PROP_NAME));

 log.info("Config location picked from: " + prop);


The above looks basically right but IIRC, this stuff can be tricky
adding in new resources and making sure stuff is applied in order --
and then there are 'final' configs that are applied after yours.

You could try copying the hbase conf dir to wherever, amending it to
suit your needs and then when starting hbase, add '--config
ALTERNATE_CONF_DIR'.

St.Ack


Re: Runs in Eclipse but not as a Jar

2012-11-26 Thread Suraj Varma
The difference is your classpath.
So, for problem 1, you need to add the jars under hbase-0.94.2/lib to
your classpath. You only need a subset, but first, to get past the
problem, set your classpath with all of these jars. I don't think
specifying a wildcard * works, like below:

ngc@hadoop1:~/hadoop-1.0.4$
bin/hadoop jar ../eclipse/CreateBiTable.jar HBase/CreateBiTable
-classpath /home/ngc/hbase-0.94.2/*

You can use "bin/hbase classpath" to print out the full classpath that you
can include in your command line script.

In addition to the jars, you also need to add your hbase-site.xml
(client side) to the classpath. This would be your problem 2.
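
Putting that together, the launch could look like this sketch (paths taken from
the original post; note the main class is given with dots, not a slash):

```
HBASE_HOME=/home/ngc/hbase-0.94.2
export HADOOP_CLASSPATH="$($HBASE_HOME/bin/hbase classpath)"
bin/hadoop jar ../eclipse/CreateBiTable.jar HBase.CreateBiTable
```

HADOOP_CLASSPATH is the standard way to hand extra jars and config directories to
bin/hadoop; treat the exact form as an assumption to verify against your Hadoop
version.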

Hope that helps.
--Suraj

On Mon, Nov 26, 2012 at 1:03 PM, Ratner, Alan S (IS)
alan.rat...@ngc.com wrote:
 I am running HBase 0.94.2 running on 6 servers with Zookeeper 3.4.5 running 
 on 3.  HBase works from its shell and from within Eclipse but not as a jar 
 file.  When I run within Eclipse I can see it worked properly by using the 
 HBase shell commands (such as scan).



 I seem to have 2 separate problems.



 Problem 1: when I create a jar file from Eclipse it won't run at all:

 ngc@hadoop1:~/hadoop-1.0.4$ bin/hadoop 
 jar ../eclipse/CreateBiTable.jar HBase/CreateBiTable -classpath 
 /home/ngc/hbase-0.94.2/*

 Exception in thread main java.lang.NoClassDefFoundError: 
 org/apache/hadoop/hbase/HBaseConfiguration at 
 HBase.CreateBiTable.run(CreateBiTable.java:26) [line 26 is: Configuration 
 conf = HBaseConfiguration.create();]



 Problem 2: when I create a runnable jar file from Eclipse it communicates 
 with Zookeeper but then dies with:

 Exception in thread main java.lang.IllegalArgumentException: Not a 
 host:port pair: \ufffd

   
 5...@hadoop1hadoop1.aj.c2fse.northgrum.com,6,1353949574468



 I'd prefer to use a regular jar (5 KB) rather than a runnable jar (100 MB).  
 But I assume that if I fix Problem 1 then it will proceed until it crashes 
 with Problem 2.



 Thanks in advance for any suggestions --- Alan.



 -

 CLASSPATH

 ngc@hadoop1:~/hadoop-1.0.4$ env | grep 
 CLASSPATH 
 CLASSPATH=/home/ngc/hadoop-1.0.4:/home/ngc/hbase-0.94.2/bin:/home/ngc/zookeeper-3.4.5/bin:/home/ngc/accumulo-1.3.5-incubating



 -

 HBASE PROGRAM

 package HBase;

 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;

 public class CreateBiTable extends Configured implements Tool {

     public static String TableName = "BiIPTable";
     public static String cf = "cf";  // column family
     public static String c1 = "c1";  // column 1

     public static void main(String[] args) throws Exception {
         long startTime = System.currentTimeMillis();
         int res = ToolRunner.run(new Configuration(), new CreateBiTable(), args);
         double duration = (System.currentTimeMillis() - startTime) / 1000.0;
         System.out.println("Job Finished in " + duration + " seconds");
         System.exit(res);
     }

     public int run(String[] arg0) throws Exception {
         Configuration conf = HBaseConfiguration.create();
         // System.out.println("Configuration created");
         // System.out.println("\t" + conf.toString());
         HBaseAdmin admin = new HBaseAdmin(conf);
         // System.out.println("\t" + admin.toString());

         if (admin.tableExists(TableName)) {
             // Disable and delete the table if it exists
             admin.disableTable(TableName);
             admin.deleteTable(TableName);
             System.out.println(TableName + " exists so deleted");
         }

         // Create table
         HTableDescriptor htd = new HTableDescriptor(TableName);
         HColumnDescriptor hcd = new HColumnDescriptor(cf);
         htd.addFamily(hcd);
         admin.createTable(htd);
         System.out.println("Table created: " + htd);

         // Does the table exist now?
         if (admin.tableExists(TableName))
             System.out.println(TableName + " creation succeeded");
         else
             System.out.println(TableName + " creation failed");

         admin.close();  // release the connection so the JVM can exit
         return 0;
     }
 }



 -

 OUTPUT FROM RUNNING WITHIN ECLIPSE

 Configuration: core-default.xml, core-site.xml, 
 hbase-default.xml, hbase-site.xml

 SLF4J: Class path contains multiple SLF4J bindings.

 SLF4J: Found binding in 
 

Re: Configuration setup

2012-11-26 Thread Mohit Anchlia
Thanks! This is the client code I was referring to. The code below doesn't
seem to work. I also tried HBaseConfiguration.addHbaseResources, and that
didn't work either. Is there any other way to make it configurable outside
the resource?

On Mon, Nov 26, 2012 at 2:39 PM, Stack st...@duboce.net wrote:

 On Mon, Nov 26, 2012 at 2:16 PM, Mohit Anchlia mohitanch...@gmail.com
 wrote:
  I have a need to move hbas-site.xml to an external location. So in order
 to
  do that I changed my configuration as shown below. But this doesn't seem
 to
  be working. It picks up the file but I get error, seems like it's going
 to
  the localhost. I checked hbase-site.xml in the directory and the
 zookeeper
  nodes are correctly listed.
 
 
  [11/26/2012 14:09:31,480] INFO apache.zookeeper.ClientCnxn
 
 [[web-analytics-ci-1.0.0-SNAPSHOT].AsyncFlow.async2.02-SendThread(localhost.localdomain:2181)]():
  Opening socket connection to server localhost.localdomain/127.0.0.1:2181
 
  -
 
  changed from
 
  HBaseConfiguration.create()
 
  to
 
 
  config = new Configuration();
 
  config.addResource(new Path(CONF_FILE_PROP_NAME));
 
  log.info("Config location picked from: " + prop);


 The above looks basically right but IIRC, this stuff can be tricky
 adding in new resources and making sure stuff is applied in order --
 and then there is 'final' configs that are applied after yours.

 You could try copying the hbase conf dir to wherever, amending it to
 suit your needs and then when starting hbase, add '--config
 ALTERNATE_CONF_DIR'.

 St.Ack
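
For what it's worth, here is a minimal sketch of the external-file approach. The class name, temp file, and quorum value are made up for illustration. One common pitfall with this API: Configuration.addResource(String) looks the name up on the CLASSPATH, while addResource(Path) reads a local filesystem path, so an external file must be added via a Path:

```java
package HBase;

import java.io.File;
import java.io.PrintWriter;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

// Sketch: load hbase-site.xml from an arbitrary filesystem location.
public class ExternalHBaseConfig {
    public static void main(String[] args) throws Exception {
        // Stand-in for a real external hbase-site.xml; the quorum
        // value below is a placeholder.
        File site = File.createTempFile("hbase-site", ".xml");
        PrintWriter out = new PrintWriter(site, "UTF-8");
        out.println("<?xml version=\"1.0\"?>");
        out.println("<configuration>");
        out.println("  <property>");
        out.println("    <name>hbase.zookeeper.quorum</name>");
        out.println("    <value>zk1.example.com,zk2.example.com</value>");
        out.println("  </property>");
        out.println("</configuration>");
        out.close();

        Configuration conf = new Configuration();
        // addResource(new Path(...)) reads from the local filesystem;
        // addResource("name") would search the CLASSPATH instead.
        conf.addResource(new Path(site.getAbsolutePath()));
        System.out.println(conf.get("hbase.zookeeper.quorum"));
    }
}
```

If the client still connects to localhost, the file is probably not being found at all, or a later-added resource (or a property marked final) is overriding the quorum setting.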



Re: HBase manager GUI

2012-11-26 Thread Alok Singh Mahor
thanks a lot Mohammad for this very complete and mature reply :)
I am very new and just started playing with HBase for my college project
work. I will try to play with the APIs
thanks :)

On Tue, Nov 27, 2012 at 2:31 AM, Mohammad Tariq donta...@gmail.com wrote:

 Hello Alok,

 I have seen this project. Good work. But let me tell you one thing: the
 way HBase is used is slightly different from the way you use traditional
 relational databases. People working on real clusters rarely face
 a situation wherein they need to query HBase directly. Though it can be
 done for a few minor tasks like small gets, scans, puts etc. For
 those, the HBase shell is more than sufficient.

 People either use HBase API features like filters or coprocessors, write
 MapReduce jobs to query their HBase tables, or map their tables to Hive
 warehouse tables. Having said that, I would suggest you get yourself
 familiar with the HBase API rather than relying on anything else if you are
 planning to adopt HBase as your primary datastore.

 The web interface provided by HBase is meant for visualization and
 monitoring, not for performing table operations. But that
 doesn't mean it is useless; the HBase folks have done really great
 work, and you can even perform some operations from the web UI.

 HTH

 Regards,
 Mohammad Tariq



 On Tue, Nov 27, 2012 at 12:55 AM, Alok Singh Mahor alokma...@gmail.com
 wrote:

  I need frontend for HBase shell like we have phpmyadmin for MySql.
 
  I tried 127.0.0.1:60010 and 127.0.0.1:60030; these just give
  information about the master node and the region server respectively, so I
  tried to use hbasemanagergui, but I am unable to connect to it
 
  does the HBase web UI have a feature for using it as a GUI alternative to
  the hbase shell?
  if yes, how do I run that?
 
  On Tue, Nov 27, 2012 at 12:16 AM, Harsh J ha...@cloudera.com wrote:
 
   What are your exact 'manager GUI' needs though? I mean, what are you
   envisioning it will help you perform (over the functionality already
   offered by the HBase Web UI)?
  
   On Mon, Nov 26, 2012 at 9:59 PM, Alok Singh Mahor alokma...@gmail.com
 
   wrote:
Hi all,
I have set up standalone Hbase on my laptop. HBase shell is working
  fine.
and I am not using hadoop and zookeeper
I found one frontend for HBase
https://sourceforge.net/projects/hbasemanagergui/
but i am not able to use this
   
to set up connection i have to give information
hbase.zookeeper.quorum:
hbase.zookeeper.property.clientport:
hbase.master:
   
What values do I have to set in these fields, given that I am not using zookeeper?
did anyone try this GUI?
thanks in advance :)
   
   
--
Alok Singh Mahor
http://alokmahor.co.cc
Join the next generation of computing, Open Source and Linux/GNU!!
  
  
  
   --
   Harsh J
  
 
 
 
  --
  Alok Singh Mahor
  http://alokmahor.co.cc
  Join the next generation of computing, Open Source and Linux/GNU!!
 




-- 
Alok Singh Mahor
http://alokmahor.co.cc
Join the next generation of computing, Open Source and Linux/GNU!!
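
To make the API-first advice in this thread concrete, here is a rough sketch of basic reads with the 0.92-era client API. The table name "demo", family "cf", qualifier "c1", and row key "row1" are made-up placeholders, and this assumes a reachable HBase instance (e.g. a local standalone one):

```java
package HBase;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: a point read (Get) and a family-restricted Scan.
public class ReadDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "demo");  // placeholder table name
        try {
            // Point lookup of one cell.
            Get get = new Get(Bytes.toBytes("row1"));
            get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("c1"));
            Result r = table.get(get);
            byte[] v = r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("c1"));
            System.out.println(v == null ? "(no such cell)" : Bytes.toString(v));

            // Scan all rows, restricted to one column family.
            Scan scan = new Scan();
            scan.addFamily(Bytes.toBytes("cf"));
            ResultScanner scanner = table.getScanner(scan);
            try {
                for (Result row : scanner) {
                    System.out.println(Bytes.toString(row.getRow()));
                }
            } finally {
                scanner.close();  // scanners hold server-side resources
            }
        } finally {
            table.close();
        }
    }
}
```

Filters plug into the same Scan object via scan.setFilter(...), which is usually the next step to try before reaching for a MapReduce job.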


Re: Unable to Create Table in Hbase

2012-11-26 Thread shyam kumar
There is no exception or warning in the log, and the console prints the
following:

12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client
environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client
environment:host.name=localhost
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client
environment:java.version=1.7.0_09
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client
environment:java.vendor=Oracle Corporation
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client
environment:java.home=/home/shyam/jdk1.7.0_09/jre
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client
environment:java.class.path=lib/setooz-ir-core.jar:lib/guava-12.0.jar:lib/carrot2-core-3.7.0-SNAPSHOT.jar:lib/commons-codec-1.4.jar:lib/commons-configuration-1.7.jar:lib/hadoop-core-1.0.2.jar:lib/tika-app-1.0.jar:lib/httpclient-4.0.3.jar:lib/ezmorph.jar:lib/geoip.jar:lib/xercesImpl.jar:lib/attributes-binder-1.0.1.jar:lib/jackson-core-asl-1.7.4.jar:lib/veooz-analysis.jar:lib/log4j-1.2.17.jar:lib/maxent-3.0.0.jar:lib/liblinear-1.7.jar:lib/semantifire-1.0.jar:lib/ritaWN.jar:lib/slf4j-log4j12-1.6.1.jar:lib/commons-logging-1.1.1.jar:lib/slf4j-api-1.6.1.jar:lib/bzip2.jar:lib/langdetect.jar:lib/mahout-math-0.6.jar:lib/zookeeper-3.4.3.jar:lib/commons-lang-2.5.jar:lib/wikixmlj-r43.jar:lib/commons-collections-3.1.jar:lib/hppc-0.4.1.jar:lib/mahout-collections-1.0.jar:lib/jackson-mapper-asl-1.7.4.jar:lib/supportWN.jar:lib/simple-xml-2.6.4.jar:lib/commons-beanutils-1.7.jar:lib/opennlp-tools-1.5.0.jar:lib/setooz-core-3.5-SNAPSHOT.jar:lib/json-lib-2.4-jdk15.jar:lib/gson-2.2.2.jar:lib/jsoup-1.6.0.jar:lib/jsonic-1.2.4.jar:lib/lucene-analyzers-3.6.0.jar:lib/hbase-0.92.1.jar:lib/xml-apis.jar:conf/:dist/Veooz-Core.jar:.
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client
environment:java.library.path=/usr/java/packages/lib/i386:/lib:/usr/lib
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client
environment:java.io.tmpdir=/tmp
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client
environment:java.compiler=NA
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client
environment:os.version=3.2.0-33-generic-pae
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client
environment:user.name=shyam
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client
environment:user.home=/home/shyam
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Client
environment:user.dir=/home/shyam/workspace/Veooz/Veooz-Core
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Initiating client connection,
connectString=localhost:2181 sessionTimeout=18 watcher=hconnection
12/11/27 11:03:42 INFO zookeeper.ClientCnxn: Opening socket connection to
server /127.0.0.1:2181
12/11/27 11:03:42 INFO client.ZooKeeperSaslClient: Client will not
SASL-authenticate because the default JAAS configuration section 'Client'
could not be found. If you are not using SASL, you may ignore this. On the
other hand, if you expected SASL to work, please fix your JAAS
configuration.
12/11/27 11:03:42 INFO zookeeper.RecoverableZooKeeper: The identifier of
this process is 6296@setu-M68MT-S2
12/11/27 11:03:42 INFO zookeeper.ClientCnxn: Socket connection established
to localhost/127.0.0.1:2181, initiating session
12/11/27 11:03:42 INFO zookeeper.ClientCnxn: Session establishment complete
on server localhost/127.0.0.1:2181, sessionid = 0x13b405aac590004,
negotiated timeout = 4
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Initiating client connection,
connectString=localhost:2181 sessionTimeout=18
watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@16f77b6
12/11/27 11:03:42 INFO zookeeper.ClientCnxn: Opening socket connection to
server /127.0.0.1:2181
12/11/27 11:03:42 INFO client.ZooKeeperSaslClient: Client will not
SASL-authenticate because the default JAAS configuration section 'Client'
could not be found. If you are not using SASL, you may ignore this. On the
other hand, if you expected SASL to work, please fix your JAAS
configuration.
12/11/27 11:03:42 INFO zookeeper.ClientCnxn: Socket connection established
to localhost/127.0.0.1:2181, initiating session
12/11/27 11:03:42 INFO zookeeper.RecoverableZooKeeper: The identifier of
this process is 6296@setu-M68MT-S2
12/11/27 11:03:42 INFO zookeeper.ClientCnxn: Session establishment complete
on server localhost/127.0.0.1:2181, sessionid = 0x13b405aac590005,
negotiated timeout = 4
12/11/27 11:03:42 INFO zookeeper.ClientCnxn: EventThread shut down
12/11/27 11:03:42 INFO zookeeper.ZooKeeper: Session: 0x13b405aac590005
closed
Creating HBase Table: Posts



and finally the process does not terminate. It stays at this line, and only
after one hour does the process end with the following error:

ERROR: org.apache.hadoop.hbase.client.NoServerForRegionException: No server
address listed in .META. for region