Re: HTableDescriptor class is deprecated

2017-11-06 Thread beeshma r
Thanks Ted, got it :)







On Mon, Nov 6, 2017 at 8:33 PM, Ted Yu  wrote:

> Please take a look
> at hbase-client/src/main/java/org/apache/hadoop/hbase/client/
> TableDescriptorBuilder.java
>
> On Mon, Nov 6, 2017 at 7:29 PM, beeshma r  wrote:
>
> > Hi folks
> >
> > The HTableDescriptor class is deprecated in the latest client API. Is there
> > any alternative way to create an HTable for testing?
> >
> > cheers
> > Beeshma
> >
> > --
> >
>
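[Editor's note] For readers landing here from the archive: a minimal sketch of creating a test table with the builder API Ted points at (assuming an HBase 2.x client on the classpath; the table name, column family, and Connection setup are illustrative, not from the thread):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestTable {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // TableDescriptorBuilder replaces the deprecated HTableDescriptor
            TableDescriptor desc = TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("test_table"))
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("cf")))
                    .build();
            admin.createTable(desc);
        }
    }
}
```

Running this requires a live HBase cluster reachable from the client configuration.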



--


HTableDescriptor class is deprecated

2017-11-06 Thread beeshma r
Hi folks

The HTableDescriptor class is deprecated in the latest client API. Is there any
alternative way to create an HTable for testing?

cheers
Beeshma

--


Re: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray error in newly built HBase

2016-04-03 Thread beeshma r
Hi Ted,

Is any configuration change needed to solve this issue, or do I need to
upgrade the Hadoop version?

Please advise :)

Thanks
Beeshma

On Sat, Apr 2, 2016 at 4:05 PM, beeshma r  wrote:

> Hi Ted/Jeremy,
>
> My HBase version is HBase 2.0.0-SNAPSHOT.
> My Hadoop version is hadoop-2.5.1.
>
> Is this Hadoop version fine?
>
> And I configured it like this:
>
> <property>
>   <name>hbase.hstore.checksum.algorithm</name>
>   <value>CRC32</value>
> </property>
>
> This did not work; the master is still failing with the same error.
>
> Note: I built HBase from source.
>
>
>
>
> On Mon, Mar 28, 2016 at 12:36 PM, Jeremy Carroll 
> wrote:
>
>> Check your Native Library path. If you do not want to use Native
>> Checksumming, you can also turn that off. (
>> https://blogs.apache.org/hbase/entry/saving_cpu_using_native_hadoop)
>>
>> hbase.hstore.checksum.algorithm
>>
>> Change to CRC32 instead of CRC32C
>>
>> On Mon, Mar 28, 2016 at 10:30 AM, beeshma r  wrote:
>>
>> > Hi,
>> > I am testing with a newly built HBase. Initially the table was created
>> > and I was able to insert data in standalone mode, but suddenly I am
>> > getting an error like the one in the log below:
>> >
>> > http://pastebin.com/e6HW0zbu
>> >
>> > This is my hbase-site.xml:
>> >
>> > <configuration>
>> >   <property>
>> >     <name>hbase.rootdir</name>
>> >     <value>file:///home/beeshma/Hbase_9556/Build_hbase/root</value>
>> >   </property>
>> >   <property>
>> >     <name>hbase.zookeeper.property.dataDir</name>
>> >     <value>/home/beeshma/Hbase_9556/Build_hbase/zk</value>
>> >   </property>
>> > </configuration>
>> >
>> > I haven't changed any other settings. Can anyone suggest what the
>> > issue could be?
>> >
>> > Cheers
>> > Beesh
>> >
>>
>
>
>
> --
>
>
>
>
>
>


--


Re: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray error in newly built HBase

2016-04-02 Thread beeshma r
Hi Ted/Jeremy,

My HBase version is HBase 2.0.0-SNAPSHOT.
My Hadoop version is hadoop-2.5.1.

Is this Hadoop version fine?

And I configured it like this:

<property>
  <name>hbase.hstore.checksum.algorithm</name>
  <value>CRC32</value>
</property>

This did not work; the master is still failing with the same error.

Note: I built HBase from source.




On Mon, Mar 28, 2016 at 12:36 PM, Jeremy Carroll 
wrote:

> Check your Native Library path. If you do not want to use Native
> Checksumming, you can also turn that off. (
> https://blogs.apache.org/hbase/entry/saving_cpu_using_native_hadoop)
>
> hbase.hstore.checksum.algorithm
>
> Change to CRC32 instead of CRC32C
>
> On Mon, Mar 28, 2016 at 10:30 AM, beeshma r  wrote:
>
> > Hi,
> > I am testing with a newly built HBase. Initially the table was created
> > and I was able to insert data in standalone mode, but suddenly I am
> > getting an error like the one in the log below:
> >
> > http://pastebin.com/e6HW0zbu
> >
> > This is my hbase-site.xml:
> >
> > <configuration>
> >   <property>
> >     <name>hbase.rootdir</name>
> >     <value>file:///home/beeshma/Hbase_9556/Build_hbase/root</value>
> >   </property>
> >   <property>
> >     <name>hbase.zookeeper.property.dataDir</name>
> >     <value>/home/beeshma/Hbase_9556/Build_hbase/zk</value>
> >   </property>
> > </configuration>
> >
> > I haven't changed any other settings. Can anyone suggest what the
> > issue could be?
> >
> > Cheers
> > Beesh
> >
>



--


org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray error in newly built HBase

2016-03-28 Thread beeshma r
Hi,
I am testing with a newly built HBase. Initially the table was created
and I was able to insert data in standalone mode, but suddenly I am
getting an error like the one in the log below:

http://pastebin.com/e6HW0zbu

This is my hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/beeshma/Hbase_9556/Build_hbase/root</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/beeshma/Hbase_9556/Build_hbase/zk</value>
  </property>
</configuration>

I haven't changed any other settings. Can anyone suggest what the
issue could be?

Cheers
Beesh
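[Editor's note] Jeremy's suggestion earlier in the thread, expressed as an hbase-site.xml fragment (a sketch: `hbase.hstore.checksum.algorithm` is the property named in the thread; `hbase.regionserver.checksum.verify` is, to my understanding, the switch for HBase-level checksums — verify both against your version's documentation):

```xml
<!-- Use CRC32 for HBase block checksums; unlike CRC32C it does not
     depend on the native Hadoop library -->
<property>
  <name>hbase.hstore.checksum.algorithm</name>
  <value>CRC32</value>
</property>

<!-- Alternatively (assumption, check your version's defaults): disable
     HBase-level checksum verification and fall back to HDFS checksums -->
<property>
  <name>hbase.regionserver.checksum.verify</name>
  <value>false</value>
</property>
```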


Getting Htable first key and last key

2016-02-16 Thread beeshma r
Hi,

I want to get each region's first key and last key for an HTable. I wrote
the code below; please suggest whether I am doing it the right way.

// "people" is the table name
// con is the HBase configuration
HTable ht = new HTable(con, "people");
NavigableMap<HRegionInfo, ServerName> np = ht.getRegionLocations();
List<HRegionInfo> lis = new ArrayList<HRegionInfo>(np.keySet());

for (HRegionInfo h : lis) {
    System.out.println(h.getRegionId() + " getRegionId");

    String s = new String(h.getStartKey());
    System.out.println(s + " ---start key");
    // System.out.println(new String(h.getEndKey()) + " ---end key");
}

This returns an empty start key (s.length() is 0), though I am able to
get the region id.

Have I misunderstood something?

cheers
Beeshma
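[Editor's note] On the empty start key: the first region of a table has an empty byte array as its start key (and the last region an empty end key), so a zero-length start key on a single-region table is expected, not a bug. A sketch of the same listing against the newer client API (assuming an HBase 1.x+ client on the classpath; the Connection setup and table name are illustrative):

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionKeys {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("people"))) {
            List<HRegionLocation> locations = locator.getAllRegionLocations();
            for (HRegionLocation loc : locations) {
                HRegionInfo info = loc.getRegionInfo();
                // toStringBinary renders empty and binary keys readably,
                // which makes the empty first/last boundary keys visible
                System.out.println("start=" + Bytes.toStringBinary(info.getStartKey())
                        + " end=" + Bytes.toStringBinary(info.getEndKey()));
            }
        }
    }
}
```

As with the original snippet, this needs a running cluster with the "people" table present.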
--


Getting Htable first key and last key

2016-02-15 Thread beeshma r
Hi,

I want to get each region's first key and last key for an HTable.

I wrote the code below; please suggest whether I am doing it the right way.

// "people" is the table name
HTable ht = new HTable(con, "people");
NavigableMap<HRegionInfo, ServerName> np = ht.getRegionLocations();
List<HRegionInfo> lis = new ArrayList<HRegionInfo>(np.keySet());

for (HRegionInfo h : lis) {
    System.out.println(h.getRegionId() + " getRegionId");

    String s = new String(h.getStartKey());
    System.out.println(s + " ---start key");
    // System.out.println(new String(h.getEndKey()) + " ---end key");
}

--


RE: Type of Scan to be used for real time analysis

2015-12-18 Thread beeshma r
Hi Rajesh,

Why not index all rows using Solr?
Check out the HBase Indexer (NGDATA).

Regards
Beeshma Ramakrishnan

-Original Message-
From: Rajeshkumar J
Sent: 18-12-2015 PM 05:59
To: user@hbase.apache.org
Subject: Re: Type of Scan to be used for real time analysis

Hi Anil,

   I have about 10 million rows, each with more than 10k columns. I need
 to query this table by row key; which would be the apt query approach
 for this?

Thanks

On Fri, Dec 18, 2015 at 5:43 PM, anil gupta  wrote:

> Hi RajeshKumar,
 >
 > IMO, the type of scan is not decided on the basis of response time. It's
 > decided on the basis of your query logic and data model.
 > Also, response time cannot be directly correlated to any filter or scan;
 > it is more about how much data needs to be read, CPU, network IO, etc.
 > to satisfy the requirements of your query.
 > So, you will need to look at your data model and pick the best query.
 >
 > HTH,
 > Anil
 >
 > On Thu, Dec 17, 2015 at 10:17 PM, Rajeshkumar J <
 > rajeshkumarit8...@gmail.com
 > > wrote:
 >
 > > Hi,
 > >
 > >My HBase table holds 10 million rows and I need to query it; I want
 > > HBase to return results within one or two seconds. Help me choose which
 > > type of scan to use for this - a range scan or a RowFilter scan.
 > >
 > > Thanks
 > >
 >
 >
 >
 > --
 > Thanks & Regards,
 > Anil Gupta
 >
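[Editor's note] As a footnote to Anil's point: when the access pattern is an exact row key, a point Get (rather than any scan) is the natural fit. A minimal sketch, assuming an HBase 1.x+ client on the classpath; the table, family, and key names are illustrative:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PointGet {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("mytable"))) {
            Get get = new Get(Bytes.toBytes("rowkey-1"));
            // With 10k+ columns per row, restrict the Get to what you need
            get.addFamily(Bytes.toBytes("cf"));
            Result result = table.get(get);
            System.out.println("found row: " + !result.isEmpty());
        }
    }
}
```

Narrowing the Get with addFamily/addColumn matters here because reading all 10k columns of a row can dominate the response time even for a single-row lookup.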


Re: Hbase Row Key Scan

2015-11-23 Thread beeshma r
Hi,

You can use the PrefixFilter class while scanning:

byte[] prefix = Bytes.toBytes("key");
Scan scan = new Scan();
Filter prefixFilter = new PrefixFilter(prefix);
scan.setFilter(prefixFilter);
ResultScanner resultScanner = Main_Table.getScanner(scan);




Thanks

Beeshma


On Tue, Nov 24, 2015 at 9:34 AM, beeshma r  wrote:

> Hi,
>
> You can use the PrefixFilter class while scanning.
>
>
>
>
> On Thu, Nov 19, 2015 at 10:25 AM, dheeraj kavalur <
> dheerajkavalu...@gmail.com> wrote:
>
>> Hi,
>>
>> Can someone help with how to query on a partial rowkey?
>>
>> *Table Name :* URLdata
>>
>>
>> *Column Family:* BaseID
>>
>>
>> *Columns:*1.   userId
>>
>> 2.   userIdType
>>
>> 3.   username
>>
>> 4.   country
>>
>> *RowKey Design :*
>>
>>userId | useridType|country  (Pipe
>> separated
>> columns concatenated)
>>
>>
>> *Requirement*: :
>>
>> ·   Count Distinct
>> (userId| useridType) combination from the table.
>> Have to do partial scan (read ) on the
>> composite rowkey.
>>
>> The count has to been done on Map-side only by
>> reading partial key and actual userid from column in hbase and count the
>> distinct.
>>
>
>
>
> --
>
>
>
>
>
>
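[Editor's note] The PrefixFilter snippet above, fleshed out into a self-contained sketch (assuming an HBase 1.x+ client on the classpath; the table name "URLdata" and the prefix value are illustrative):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixScan {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // For a composite key like userId|userIdType|country, scanning with
        // the "userId|userIdType|" prefix reads only the matching key range.
        byte[] prefix = Bytes.toBytes("user123|email|");
        Scan scan = new Scan();
        scan.setStartRow(prefix);                 // jump straight to the prefix
        scan.setFilter(new PrefixFilter(prefix)); // keep only rows starting with it
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("URLdata"));
             ResultScanner rs = table.getScanner(scan)) {
            for (Result r : rs) {
                System.out.println(Bytes.toStringBinary(r.getRow()));
            }
        }
    }
}
```

Setting the start row in addition to the filter avoids scanning from the beginning of the table just to discard non-matching rows.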


--


Re: Hbase Row Key Scan

2015-11-23 Thread beeshma r
Hi,

You can use the PrefixFilter class while scanning.




On Thu, Nov 19, 2015 at 10:25 AM, dheeraj kavalur <
dheerajkavalu...@gmail.com> wrote:

> Hi,
>
> Can someone help with how to query on a partial rowkey?
>
> *Table Name :* URLdata
>
>
> *Column Family:* BaseID
>
>
> *Columns:*1.   userId
>
> 2.   userIdType
>
> 3.   username
>
> 4.   country
>
> *RowKey Design :*
>
>userId | useridType|country  (Pipe separated
> columns concatenated)
>
>
> *Requirement*: :
>
> ·   Count Distinct
> (userId| useridType) combination from the table.
> Have to do partial scan (read ) on the
> composite rowkey.
>
> The count has to been done on Map-side only by
> reading partial key and actual userid from column in hbase and count the
> distinct.
>



--


Re: Start hbase with replication mode

2015-10-23 Thread beeshma r
Hi Ted,

Can you please advise what changes I need in HBase? HBase still starts
its own ZooKeeper, but I need HBase to run with an external ZooKeeper.

Thanks
Beeshma

On Wed, Oct 21, 2015 at 9:51 AM, beeshma r  wrote:

> Hi
>
> I just want to run HBase in replication mode. As per the documentation,
> ZooKeeper must not be managed by HBase,
>
> so I created the settings below:
>
> *zookeeper zoo.cfg(/home/beeshma/zookeeper-3.4.6/cfg)*
>
> tickTime=2000
> dataDir=/home/beeshma/zookeeper
> clientPort=2181
> initLimit=5
> syncLimit=2
>
> *hbase-site.xml*
>
> <configuration>
>   <property>
>     <name>hbase.master</name>
>     <value>master:9000</value>
>   </property>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://localhost:9000/hbase</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.property.dataDir</name>
>     <value>/home/beeshma/zookeeper-3.4.6/conf</value>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.property.clientPort</name>
>     <value>2181</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>hbase.replication</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>replication.source.ratio</name>
>     <value>1.0</value>
>   </property>
>   <property>
>     <name>replication.source.nb.capacity</name>
>     <value>1000</value>
>   </property>
>   <property>
>     <name>replication.replicationsource.implementation</name>
>     <value>com.ngdata.sep.impl.SepReplicationSource</value>
>   </property>
> </configuration>
>
>
> *in hbase-env.sh: export HBASE_MANAGES_ZK=false*
>
> When I start ZooKeeper and HBase, I see the following confusing behavior.
> ZooKeeper started with the following specifications:
> 2015-10-21 04:22:13,810 [myid:] - INFO  [main:Environment@100] - Server
> environment:java.io.tmpdir=/tmp
> 2015-10-21 04:22:13,810 [myid:] - INFO  [main:Environment@100] - Server
> environment:java.compiler=
> 2015-10-21 04:22:13,813 [myid:] - INFO  [main:Environment@100] - Server
> environment:os.name=Linux
> 2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
> environment:os.arch=amd64
> 2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
> environment:os.version=3.11.0-12-generic
> 2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
> environment:user.name=beeshma
> 2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
> environment:user.home=/home/beeshma
> 2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
> environment:user.dir=/home/beeshma/zookeeper-3.4.6/bin
> 2015-10-21 04:22:13,827 [myid:] - INFO  [main:ZooKeeperServer@755] -
> tickTime set to 2000
> 2015-10-21 04:22:13,827 [myid:] - INFO  [main:ZooKeeperServer@764] -
> minSessionTimeout set to -1
> 2015-10-21 04:22:13,827 [myid:] - INFO  [main:ZooKeeperServer@773] -
> maxSessionTimeout set to -1
> 2015-10-21 04:22:13,893 [myid:] - INFO  [main:NIOServerCnxnFactory@94] -
> binding to port 0.0.0.0/0.0.0.0:2181
>
> But HBase starts its own ZooKeeper.
> In the HBase ZooKeeper log:
> 2015-10-21 04:25:12,345 INFO  [main] server.ZooKeeperServer: Server
> environment:java.io.tmpdir=/tmp
> 2015-10-21 04:25:12,345 INFO  [main] server.ZooKeeperServer: Server
> environment:java.compiler=
> 2015-10-21 04:25:12,345 INFO  [main] server.ZooKeeperServer: Server
> environment:os.name=Linux
> 2015-10-21 04:25:12,345 INFO  [main] server.ZooKeeperServer: Server
> environment:os.arch=amd64
> 2015-10-21 04:25:12,357 INFO  [main] server.ZooKeeperServer: Server
> environment:os.version=3.11.0-12-generic
> 2015-10-21 04:25:12,358 INFO  [main] server.ZooKeeperServer: Server
> environment:user.name=beeshma
> 2015-10-21 04:25:12,358 INFO  [main] server.ZooKeeperServer: Server
> environment:user.home=/home/beeshma
> 2015-10-21 04:25:12,358 INFO  [main] server.ZooKeeperServer: Server
> environment:user.dir=/home/beeshma/hbase-0.98.6.1-hadoop2
> 2015-10-21 04:25:12,423 INFO  [main] server.ZooKeeperServer: tickTime set
> to 3000
> 2015-10-21 04:25:12,423 INFO  [main] server.ZooKeeperServer:
> minSessionTimeout set to -1
> 2015-10-21 04:25:12,423 INFO  [main] server.ZooKeeperServer:
> maxSessionTimeout set to 9
> 2015-10-21 04:25:12,493 INFO  [main] server.NIOServerCnxnFactory: binding
> to port 0.0.0.0/0.0.0.0:2181
>
> Also, port 0.0.0.0/0.0.0.0:2181 is already bound by ZooKeeper, so the
> following error occurs in HBase:
>
> *hbase-beeshma-zookeeper-ubuntu.out*
>
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:444)
> at sun.nio.ch.Net.bind(Net.java:436)
> at
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at
> org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
> at
> org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:111)
> at
> org.apache.hadoop.hbase.zookeeper.HQuorumPeer.runZKServer(HQuorumPeer.java:91)
> at
> org.apache.hadoop.hbase.zookeeper.HQuorumPeer.main(HQuorumPeer.java:76)
>
>
> So what settings do I need to change?
>
>
> Thanks
> Beeshma
>
>



--


Re: org.apache.hadoop.hbase.exceptions.DeserializationException: Missing pb magic PBUF prefix

2015-10-23 Thread beeshma r
Hi Pankil,

Are you sure your HBase is running with an external ZooKeeper ensemble?

As per the documentation on HBase replication,

http://www.cloudera.com/content/www/en-us/documentation/archive/cdh/4-x/4-2-0/CDH4-Installation-Guide/cdh4ig_topic_20_11.html

ZooKeeper must not be managed by HBase. But I haven't tried this.

On Fri, Oct 23, 2015 at 9:55 AM, Ashish Singhi <
ashish.singhi.apa...@gmail.com> wrote:

> Hi Pankil.
>
> A similar issue was reported a few days back (
>
> http://search-hadoop.com/m/YGbbknQt52rKBDS1&subj=HRegionServer+failed+due+to+replication
> ).
>
> Maybe this is due to the hbase-indexer code?
> One more Q: did you upgrade HBase from 0.94 and then see this issue?
>
> Regards,
> Ashish Singhi
>
> On Fri, Oct 23, 2015 at 2:47 AM, Pankil Doshi  wrote:
>
> > Hi,
> >
> > I am using hbase-0.98.15-hadoop2 and hbase-indexer from lily (
> > http://ngdata.github.io/hbase-indexer/).
> >
> > I am seeing below error when I add my indexer:
> >
> >
> > 2015-10-22 14:08:27,468 INFO  [regionserver60020-EventThread]
> > replication.ReplicationTrackerZKImpl: /hbase/replication/peers znode
> > expired, triggering peerListChanged event
> >
> > 2015-10-22 14:08:27,473 ERROR [regionserver60020-EventThread]
> > regionserver.ReplicationSourceManager: Error while adding a new peer
> >
> > org.apache.hadoop.hbase.replication.ReplicationException: Error adding
> peer
> > with id=Indexer_newtest2
> >
> > at
> >
> >
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.createAndAddPeer(ReplicationPeersZKImpl.java:386)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.peerAdded(ReplicationPeersZKImpl.java:358)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.peerListChanged(ReplicationSourceManager.java:514)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl$PeersWatcher.nodeChildrenChanged(ReplicationTrackerZKImpl.java:189)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:468)
> >
> > at
> >
> >
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> >
> > at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> >
> > Caused by: org.apache.hadoop.hbase.replication.ReplicationException:
> Error
> > starting the peer state tracker for peerId=Indexer_newtest2
> >
> > at
> >
> >
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.createPeer(ReplicationPeersZKImpl.java:454)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.createAndAddPeer(ReplicationPeersZKImpl.java:384)
> >
> > ... 6 more
> >
> > Caused by:
> org.apache.zookeeper.KeeperException$DataInconsistencyException:
> > KeeperErrorCode = DataInconsistency
> >
> > at org.apache.hadoop.hbase.zookeeper.ZKUtil.convert(ZKUtil.java:2063)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.replication.ReplicationPeerZKImpl.startStateTracker(ReplicationPeerZKImpl.java:85)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.createPeer(ReplicationPeersZKImpl.java:452)
> >
> > ... 7 more
> >
> > Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException:
> > Missing pb magic PBUF prefix
> >
> > at
> >
> >
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.expectPBMagicPrefix(ProtobufUtil.java:270)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.replication.ReplicationPeerZKImpl.parseStateFrom(ReplicationPeerZKImpl.java:243)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.replication.ReplicationPeerZKImpl.isStateEnabled(ReplicationPeerZKImpl.java:232)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.replication.ReplicationPeerZKImpl.readPeerStateZnode(ReplicationPeerZKImpl.java:90)
> >
> > at
> >
> >
> org.apache.hadoop.hbase.replication.ReplicationPeerZKImpl.startStateTracker(ReplicationPeerZKImpl.java:83)
> >
> > ... 8 more
> >
> >
> >
> > My hbase-site.xml:
> >
> > <property>
> >   <name>hbase.cluster.distributed</name>
> >   <value>true</value>
> > </property>
> >
> > <!-- Here you have to set the path where you want HBase to store its files. -->
> > <property>
> >   <name>hbase.rootdir</name>
> >   <value>file:/tmp/HBase/HFiles</value>
> > </property>
> >
> > <property>
> >   <name>hbase.zookeeper.property.clientPort</name>
> >   <value>2181</value>
> >   <description>Property from ZooKeeper's config zoo.cfg.
> >   The port at which the clients will connect.</description>
> > </property>
> >
> > <property>
> >   <name>hbase.zookeeper.quorum</name>
> >   <value>localhost</value>
> >   <description>Comma separated list of servers in the ZooKeeper Quorum.
> >   For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
> >   By default this is set to localhost for local and pseudo-distributed modes
> >   of operation. For a fully-distributed setup, this should be set to a full
> >   list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-

Start hbase with replication mode

2015-10-21 Thread beeshma r
Hi

I just want to run HBase in replication mode. As per the documentation,
ZooKeeper must not be managed by HBase,

so I created the settings below:

*zookeeper zoo.cfg(/home/beeshma/zookeeper-3.4.6/cfg)*

tickTime=2000
dataDir=/home/beeshma/zookeeper
clientPort=2181
initLimit=5
syncLimit=2

*hbase-site.xml*

<configuration>
  <property>
    <name>hbase.master</name>
    <value>master:9000</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/beeshma/zookeeper-3.4.6/conf</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.replication</name>
    <value>true</value>
  </property>
  <property>
    <name>replication.source.ratio</name>
    <value>1.0</value>
  </property>
  <property>
    <name>replication.source.nb.capacity</name>
    <value>1000</value>
  </property>
  <property>
    <name>replication.replicationsource.implementation</name>
    <value>com.ngdata.sep.impl.SepReplicationSource</value>
  </property>
</configuration>


*in hbase-env.sh: export HBASE_MANAGES_ZK=false*

When I start ZooKeeper and HBase, I see the following confusing behavior.
ZooKeeper started with the following specifications:
2015-10-21 04:22:13,810 [myid:] - INFO  [main:Environment@100] - Server
environment:java.io.tmpdir=/tmp
2015-10-21 04:22:13,810 [myid:] - INFO  [main:Environment@100] - Server
environment:java.compiler=
2015-10-21 04:22:13,813 [myid:] - INFO  [main:Environment@100] - Server
environment:os.name=Linux
2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
environment:os.arch=amd64
2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
environment:os.version=3.11.0-12-generic
2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
environment:user.name=beeshma
2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
environment:user.home=/home/beeshma
2015-10-21 04:22:13,814 [myid:] - INFO  [main:Environment@100] - Server
environment:user.dir=/home/beeshma/zookeeper-3.4.6/bin
2015-10-21 04:22:13,827 [myid:] - INFO  [main:ZooKeeperServer@755] -
tickTime set to 2000
2015-10-21 04:22:13,827 [myid:] - INFO  [main:ZooKeeperServer@764] -
minSessionTimeout set to -1
2015-10-21 04:22:13,827 [myid:] - INFO  [main:ZooKeeperServer@773] -
maxSessionTimeout set to -1
2015-10-21 04:22:13,893 [myid:] - INFO  [main:NIOServerCnxnFactory@94] -
binding to port 0.0.0.0/0.0.0.0:2181

But HBase starts its own ZooKeeper.
In the HBase ZooKeeper log:
2015-10-21 04:25:12,345 INFO  [main] server.ZooKeeperServer: Server
environment:java.io.tmpdir=/tmp
2015-10-21 04:25:12,345 INFO  [main] server.ZooKeeperServer: Server
environment:java.compiler=
2015-10-21 04:25:12,345 INFO  [main] server.ZooKeeperServer: Server
environment:os.name=Linux
2015-10-21 04:25:12,345 INFO  [main] server.ZooKeeperServer: Server
environment:os.arch=amd64
2015-10-21 04:25:12,357 INFO  [main] server.ZooKeeperServer: Server
environment:os.version=3.11.0-12-generic
2015-10-21 04:25:12,358 INFO  [main] server.ZooKeeperServer: Server
environment:user.name=beeshma
2015-10-21 04:25:12,358 INFO  [main] server.ZooKeeperServer: Server
environment:user.home=/home/beeshma
2015-10-21 04:25:12,358 INFO  [main] server.ZooKeeperServer: Server
environment:user.dir=/home/beeshma/hbase-0.98.6.1-hadoop2
2015-10-21 04:25:12,423 INFO  [main] server.ZooKeeperServer: tickTime set
to 3000
2015-10-21 04:25:12,423 INFO  [main] server.ZooKeeperServer:
minSessionTimeout set to -1
2015-10-21 04:25:12,423 INFO  [main] server.ZooKeeperServer:
maxSessionTimeout set to 9
2015-10-21 04:25:12,493 INFO  [main] server.NIOServerCnxnFactory: binding
to port 0.0.0.0/0.0.0.0:2181

Also, port 0.0.0.0/0.0.0.0:2181 is already bound by ZooKeeper, so the
following error occurs in HBase:

*hbase-beeshma-zookeeper-ubuntu.out*

java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at
org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:111)
at
org.apache.hadoop.hbase.zookeeper.HQuorumPeer.runZKServer(HQuorumPeer.java:91)
at
org.apache.hadoop.hbase.zookeeper.HQuorumPeer.main(HQuorumPeer.java:76)


So what settings do I need to change?


Thanks
Beeshma


Re: HRegionServer failed due to replication

2015-10-11 Thread beeshma r
Hi Ted,

I am using:
HBase version: hbase-0.98.6.1-hadoop2
Hadoop version: hadoop-2.5.1

The actual error is:

hbase(main):001:0>* list_peers*
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/home/beeshma/hbase-0.98.6.1-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/home/beeshma/hadoop-2.5.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.

*ERROR: Missing pb magic PBUF prefix*

Here is some help for this command:
List all replication peer clusters.

hbase> list_peers


Thanks
Beeshma




On Sat, Oct 10, 2015 at 5:15 AM, Ted Yu  wrote:

> The exception was due to un-protobuf'ed data in peer state znode.
>
> Which release of hbase are you using ?
>
> Consider posting the question on ngdata forum.
>
> Cheers
>
> > On Oct 10, 2015, at 3:24 AM, beeshma r  wrote:
> >
> > Hi
> >
> > I created a Solr index using *HBase-indexer (NGDATA/hbase-indexer)*
> > <https://github.com/NGDATA/hbase-indexer/wiki>. After that, the
> > RegionServer failed due to the error below:
> >
> > 2015-10-08 09:33:17,115 INFO
> > [regionserver60020-SendThread(localhost:2181)] zookeeper.ClientCnxn:
> > Session establishment complete on server localhost/127.0.0.1:2181,
> > sessionid = 0x15048410a180007, negotiated timeout = 9
> > 2015-10-08 09:33:17,120 INFO  [regionserver60020]
> > regionserver.HRegionServer: STOPPED: Failed initialization
> > 2015-10-08 09:33:17,122 ERROR [regionserver60020]
> > regionserver.HRegionServer: Failed init
> > java.io.IOException: Failed replication handler create
> >at
> >
> org.apache.hadoop.hbase.replication.regionserver.Replication.initialize(Replication.java:125)
> >at
> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.newReplicationInstance(HRegionServer.java:2427)
> >at
> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.createNewReplicationInstance(HRegionServer.java:2397)
> >at
> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1529)
> >at
> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1286)
> >at
> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:862)
> >at java.lang.Thread.run(Thread.java:724)
> > Caused by: org.apache.hadoop.hbase.replication.ReplicationException:
> Error
> > connecting to peer with id=Indexer_myindexer1
> >at
> >
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.connectToPeer(ReplicationPeersZKImpl.java:248)
> >at
> >
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.connectExistingPeers(ReplicationPeersZKImpl.java:416)
> >at
> >
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.init(ReplicationPeersZKImpl.java:103)
> >at
> >
> org.apache.hadoop.hbase.replication.regionserver.Replication.initialize(Replication.java:120)
> >... 6 more
> > Caused by: org.apache.hadoop.hbase.replication.ReplicationException:
> Error
> > starting the peer state tracker for peerId=Indexer_myindexer1
> >at
> >
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getPeer(ReplicationPeersZKImpl.java:513)
> >at
> >
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.connectToPeer(ReplicationPeersZKImpl.java:246)
> >... 9 more
> > Caused by:
> org.apache.zookeeper.KeeperException$DataInconsistencyException:
> > KeeperErrorCode = DataInconsistency
> >at org.apache.hadoop.hbase.zookeeper.ZKUtil.convert(ZKUtil.java:1859)
> >at
> >
> org.apache.hadoop.hbase.replication.ReplicationPeer.startStateTracker(ReplicationPeer.java:102)
> >at
> >
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getPeer(ReplicationPeersZKImpl.java:511)
> >... 10 more
> > Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException:
> > Missing pb magic PBUF prefix
> >at
> >
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.expectPBMagicPrefix(ProtobufUtil.java:256)
> >at
> >
> org.apache.hadoop.hbase.replication.ReplicationPeer.parseStateFrom(ReplicationPeer.java:304)
> >at
> >
> org.apache.hadoop.hbase.replication.ReplicationPeer.isStateEnabled(ReplicationPeer.java:293)
> >at
> >
> org.apache.hadoop.hbase.replication.ReplicationPeer.readPeerStateZnode(ReplicationPeer.java:107)
> >at
> >
> org.apache.hadoop.hbas

HRegionServer failed due to replication

2015-10-10 Thread beeshma r
Hi

I created a Solr index using *HBase-indexer (NGDATA/hbase-indexer)*
<https://github.com/NGDATA/hbase-indexer/wiki>. After that, the
RegionServer failed due to the error below:

2015-10-08 09:33:17,115 INFO
[regionserver60020-SendThread(localhost:2181)] zookeeper.ClientCnxn:
Session establishment complete on server localhost/127.0.0.1:2181,
sessionid = 0x15048410a180007, negotiated timeout = 9
2015-10-08 09:33:17,120 INFO  [regionserver60020]
regionserver.HRegionServer: STOPPED: Failed initialization
2015-10-08 09:33:17,122 ERROR [regionserver60020]
regionserver.HRegionServer: Failed init
java.io.IOException: Failed replication handler create
at
org.apache.hadoop.hbase.replication.regionserver.Replication.initialize(Replication.java:125)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.newReplicationInstance(HRegionServer.java:2427)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.createNewReplicationInstance(HRegionServer.java:2397)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1529)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1286)
at
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:862)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.hadoop.hbase.replication.ReplicationException: Error
connecting to peer with id=Indexer_myindexer1
at
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.connectToPeer(ReplicationPeersZKImpl.java:248)
at
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.connectExistingPeers(ReplicationPeersZKImpl.java:416)
at
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.init(ReplicationPeersZKImpl.java:103)
at
org.apache.hadoop.hbase.replication.regionserver.Replication.initialize(Replication.java:120)
... 6 more
Caused by: org.apache.hadoop.hbase.replication.ReplicationException: Error
starting the peer state tracker for peerId=Indexer_myindexer1
at
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getPeer(ReplicationPeersZKImpl.java:513)
at
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.connectToPeer(ReplicationPeersZKImpl.java:246)
... 9 more
Caused by: org.apache.zookeeper.KeeperException$DataInconsistencyException:
KeeperErrorCode = DataInconsistency
at org.apache.hadoop.hbase.zookeeper.ZKUtil.convert(ZKUtil.java:1859)
at
org.apache.hadoop.hbase.replication.ReplicationPeer.startStateTracker(ReplicationPeer.java:102)
at
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getPeer(ReplicationPeersZKImpl.java:511)
... 10 more
Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException:
Missing pb magic PBUF prefix
at
org.apache.hadoop.hbase.protobuf.ProtobufUtil.expectPBMagicPrefix(ProtobufUtil.java:256)
at
org.apache.hadoop.hbase.replication.ReplicationPeer.parseStateFrom(ReplicationPeer.java:304)
at
org.apache.hadoop.hbase.replication.ReplicationPeer.isStateEnabled(ReplicationPeer.java:293)
at
org.apache.hadoop.hbase.replication.ReplicationPeer.readPeerStateZnode(ReplicationPeer.java:107)
at
org.apache.hadoop.hbase.replication.ReplicationPeer.startStateTracker(ReplicationPeer.java:100)
... 11 more


*This is my hbase-site.xml configuration:*

<configuration>
  <property>
    <name>hbase.master</name>
    <value>master:9000</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/beeshma/zookeeper</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.replication</name>
    <value>true</value>
  </property>
  <property>
    <name>replication.source.ratio</name>
    <value>1.0</value>
  </property>
  <property>
    <name>replication.source.nb.capacity</name>
    <value>1000</value>
  </property>
  <property>
    <name>replication.replicationsource.implementation</name>
    <value>com.ngdata.sep.impl.SepReplicationSource</value>
  </property>
</configuration>

May I know what could be the remedy for this?

Thanks
Beeshma
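[Editor's note] The "Missing pb magic PBUF prefix" error in this thread means a replication-peer znode holds data that was not written in HBase's protobuf format (HBase 0.98 expects the 4-byte "PBUF" magic at the start). A hedged sketch for inspecting the peer znodes with the ZooKeeper CLI; the peer id "Indexer_myindexer1" comes from the log above, and the znode paths follow the HBase 0.98 default layout, so verify them on your own cluster:

```shell
# List the configured replication peers
./bin/zkCli.sh -server localhost:2181 ls /hbase/replication/peers

# Dump the suspect peer's state znode; data without the "PBUF" magic
# prefix is what triggers the DeserializationException
./bin/zkCli.sh -server localhost:2181 get /hbase/replication/peers/Indexer_myindexer1/peer-state
```

If the znode contents turn out to be stale raw data from an old indexer, removing the peer (via the hbase shell's remove_peer, or as a last resort deleting the znode) and re-adding it is one way to recover; back up the znodes first.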


Re: Hbase Master error

2015-09-01 Thread beeshma r
Hi Ted,

At http://localhost:60010/master-status I get:

Problem accessing /master-status. Reason:

Master not ready

Finally this command worked:

bin/hadoop dfsadmin -safemode leave

Now I am able to list tables and see the data.

May I know why this sudden change into safe mode happened?



On Tue, Sep 1, 2015 at 10:38 AM, Ted Yu  wrote:

> Dropping dev@
>
> You can check namenode Web UI, namenode log, etc
>
> You can also use command line, e.g.:
>
> hdfs dfs -ls 
>
> On Tue, Sep 1, 2015 at 10:34 AM, beeshma r  wrote:
>
> > Hi Ted,
> >
> > in hadoop i couldn't find any issue with logs and i havn't change any
> > change configuration in hadoop set up
> >
> > beeshma@ubuntu:~/hadoop-2.5.1/sbin$ jps
> > 3287 SecondaryNameNode
> > 3599 NodeManager
> > 3478 ResourceManager
> > 3897 Jps
> > 3133 DataNode
> > 3014 NameNode
> >
> >
> > is that any way or command to check hdfs is working fine?
> >
> > On Tue, Sep 1, 2015 at 9:52 AM, Ted Yu  wrote:
> >
> > > Have you checked hdfs ?
> > >
> > > Master was waiting for namenode to exit safe mode.
> > >
> > >
> > >
> > > > On Sep 1, 2015, at 9:44 AM, beeshma r  wrote:
> > > >
> > > > HI
> > > >
> > > > i have issue with Hbase master
> > > >
> > > > Below is actual error
> > > >
> > > >  hbase(main):001:0> list
> > > > TABLE
> > > >
> > > > SLF4J: Class path contains multiple SLF4J bindings.
> > > > SLF4J: Found binding in
> > > >
> > >
> >
> [jar:file:/home/beeshma/hbase-0.98.6.1-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > > > SLF4J: Found binding in
> > > >
> > >
> >
> [jar:file:/home/beeshma/hadoop-2.5.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > > > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > > > explanation.
> > > >
> > > > ERROR: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException:
> Server
> > > is
> > > > not running yet
> > > >at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:100)
> > > >at
> > > >
> > >
> >
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
> > > >at
> > > >
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > > >at
> > java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> > > >at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> > > >at
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > > >at
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > > >at java.lang.Thread.run(Thread.java:724)
> > > >
> > > >
> > > >
> > > > But i able to see my Master is running
> > > >
> > > > 2913 NameNode
> > > > 3419 ResourceManager
> > > > 4105 HQuorumPeer
> > > > 3224 SecondaryNameNode
> > > > 3040 DataNode
> > > > 3548 NodeManager
> > > > 4325 HRegionServer
> > > > 4170 HMaster
> > > > 4703 Jps
> > > >
> > > >
> > > > In HMaster log i able to see  below issue
> > > >
> > > >
> > > > 2015-09-01 09:34:46,481 INFO  [master:ubuntu:6] mortbay.log:
> > > > jetty-6.1.26
> > > > 2015-09-01 09:34:48,293 INFO  [master:ubuntu:6] mortbay.log:
> > Started
> > > > SelectChannelConnector@0.0.0.0:60010
> > > > 2015-09-01 09:34:48,644 INFO  [master:ubuntu:6]
> > > > zookeeper.RecoverableZooKeeper: Node /hbase/master already exists and
> > > this
> > > > is not a retry
> > > > 2015-09-01 09:34:48,645 INFO  [master:ubuntu:6]
> > > > master.ActiveMasterManager: Adding ZNode for
> > > > /hbase/backup-masters/ubuntu.ubuntu-domain,6,1441125282273 in
> > backup
> > > > master directory
> > > > 2015-09-01 09:34:48,737 INFO  [master:ubuntu:6]
> > > > master.ActiveMasterManager: Current master has this master's address,
> > > > ubuntu.ubuntu-domain,6,1441124623841; master was restarted?
> > Deleting

Re: Hbase Master error

2015-09-01 Thread beeshma r
Hi Ted,

Below is the report for HDFS:


beeshma@ubuntu:~$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Safe mode is ON
Configured Capacity: 24473477120 (22.79 GB)
Present Capacity: 3870871552 (3.61 GB)
DFS Remaining: 3867914240 (3.60 GB)
DFS Used: 2957312 (2.82 MB)
DFS Used%: 0.08%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-
Live datanodes (1):

Name: 127.0.0.1:50010 (localhost)
Hostname: ubuntu.ubuntu-domain
Decommission Status : Normal
Configured Capacity: 24473477120 (22.79 GB)
DFS Used: 2957312 (2.82 MB)
Non DFS Used: 20602605568 (19.19 GB)
DFS Remaining: 3867914240 (3.60 GB)
DFS Used%: 0.01%
DFS Remaining%: 15.80%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Sep 01 10:37:52 PDT 2015


On Tue, Sep 1, 2015 at 10:34 AM, beeshma r  wrote:

> Hi Ted,
>
> in hadoop i couldn't find any issue with logs and i havn't change any
> change configuration in hadoop set up
>
> beeshma@ubuntu:~/hadoop-2.5.1/sbin$ jps
> 3287 SecondaryNameNode
> 3599 NodeManager
> 3478 ResourceManager
> 3897 Jps
> 3133 DataNode
> 3014 NameNode
>
>
> is that any way or command to check hdfs is working fine?
>
> On Tue, Sep 1, 2015 at 9:52 AM, Ted Yu  wrote:
>
>> Have you checked hdfs ?
>>
>> Master was waiting for namenode to exit safe mode.
>>
>>
>>
>> > On Sep 1, 2015, at 9:44 AM, beeshma r  wrote:
>> >
>> > HI
>> >
>> > i have issue with Hbase master
>> >
>> > Below is actual error
>> >
>> >  hbase(main):001:0> list
>> > TABLE
>> >
>> > SLF4J: Class path contains multiple SLF4J bindings.
>> > SLF4J: Found binding in
>> >
>> [jar:file:/home/beeshma/hbase-0.98.6.1-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> >
>> [jar:file:/home/beeshma/hadoop-2.5.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> > explanation.
>> >
>> > ERROR: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server
>> is
>> > not running yet
>> >at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:100)
>> >at
>> >
>> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
>> >at
>> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>> >at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>> >at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>> >at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >at java.lang.Thread.run(Thread.java:724)
>> >
>> >
>> >
>> > But i able to see my Master is running
>> >
>> > 2913 NameNode
>> > 3419 ResourceManager
>> > 4105 HQuorumPeer
>> > 3224 SecondaryNameNode
>> > 3040 DataNode
>> > 3548 NodeManager
>> > 4325 HRegionServer
>> > 4170 HMaster
>> > 4703 Jps
>> >
>> >
>> > In HMaster log i able to see  below issue
>> >
>> >
>> > 2015-09-01 09:34:46,481 INFO  [master:ubuntu:6] mortbay.log:
>> > jetty-6.1.26
>> > 2015-09-01 09:34:48,293 INFO  [master:ubuntu:6] mortbay.log: Started
>> > SelectChannelConnector@0.0.0.0:60010
>> > 2015-09-01 09:34:48,644 INFO  [master:ubuntu:6]
>> > zookeeper.RecoverableZooKeeper: Node /hbase/master already exists and
>> this
>> > is not a retry
>> > 2015-09-01 09:34:48,645 INFO  [master:ubuntu:6]
>> > master.ActiveMasterManager: Adding ZNode for
>> > /hbase/backup-masters/ubuntu.ubuntu-domain,6,1441125282273 in backup
>> > master directory
>> > 2015-09-01 09:34:48,737 INFO  [master:ubuntu:6]
>> > master.ActiveMasterManager: Current master has this master's address,
>> > ubuntu.ubuntu-domain,6,1441124623841; master was restarted? Deleting
>> > node.
>> > 2015-09-01 09:34:48,745 DEBUG [main-EventThread]
>> > master.ActiveMasterManager: No master available. Notifying waiting
>> threads
>> > 2015-09-01 09:34:48

Re: Hbase Master error

2015-09-01 Thread beeshma r
Hi Ted,

In the Hadoop logs I couldn't find any issue, and I haven't changed any
configuration in the Hadoop setup.

beeshma@ubuntu:~/hadoop-2.5.1/sbin$ jps
3287 SecondaryNameNode
3599 NodeManager
3478 ResourceManager
3897 Jps
3133 DataNode
3014 NameNode


Is there any way or command to check that HDFS is working fine?

On Tue, Sep 1, 2015 at 9:52 AM, Ted Yu  wrote:

> Have you checked hdfs ?
>
> Master was waiting for namenode to exit safe mode.
>
>
>
> > On Sep 1, 2015, at 9:44 AM, beeshma r  wrote:
> >
> > HI
> >
> > i have issue with Hbase master
> >
> > Below is actual error
> >
> >  hbase(main):001:0> list
> > TABLE
> >
> > SLF4J: Class path contains multiple SLF4J bindings.
> > SLF4J: Found binding in
> >
> [jar:file:/home/beeshma/hbase-0.98.6.1-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> >
> [jar:file:/home/beeshma/hadoop-2.5.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> > explanation.
> >
> > ERROR: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server
> is
> > not running yet
> >at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:100)
> >at
> >
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
> >at
> > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> >at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> >at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> >at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >at java.lang.Thread.run(Thread.java:724)
> >
> >
> >
> > But i able to see my Master is running
> >
> > 2913 NameNode
> > 3419 ResourceManager
> > 4105 HQuorumPeer
> > 3224 SecondaryNameNode
> > 3040 DataNode
> > 3548 NodeManager
> > 4325 HRegionServer
> > 4170 HMaster
> > 4703 Jps
> >
> >
> > In HMaster log i able to see  below issue
> >
> >
> > 2015-09-01 09:34:46,481 INFO  [master:ubuntu:6] mortbay.log:
> > jetty-6.1.26
> > 2015-09-01 09:34:48,293 INFO  [master:ubuntu:6] mortbay.log: Started
> > SelectChannelConnector@0.0.0.0:60010
> > 2015-09-01 09:34:48,644 INFO  [master:ubuntu:6]
> > zookeeper.RecoverableZooKeeper: Node /hbase/master already exists and
> this
> > is not a retry
> > 2015-09-01 09:34:48,645 INFO  [master:ubuntu:6]
> > master.ActiveMasterManager: Adding ZNode for
> > /hbase/backup-masters/ubuntu.ubuntu-domain,6,1441125282273 in backup
> > master directory
> > 2015-09-01 09:34:48,737 INFO  [master:ubuntu:6]
> > master.ActiveMasterManager: Current master has this master's address,
> > ubuntu.ubuntu-domain,6,1441124623841; master was restarted? Deleting
> > node.
> > 2015-09-01 09:34:48,745 DEBUG [main-EventThread]
> > master.ActiveMasterManager: No master available. Notifying waiting
> threads
> > 2015-09-01 09:34:48,789 INFO  [master:ubuntu:6]
> > master.ActiveMasterManager: Deleting ZNode for
> > /hbase/backup-masters/ubuntu.ubuntu-domain,6,1441125282273 from
> backup
> > master directory
> > 2015-09-01 09:34:48,791 DEBUG [main-EventThread]
> > master.ActiveMasterManager: A master is now available
> > 2015-09-01 09:34:48,833 INFO  [master:ubuntu:6]
> > master.ActiveMasterManager: Registered Active
> > Master=ubuntu.ubuntu-domain,6,1441125282273
> > 2015-09-01 09:34:48,863 INFO  [master:ubuntu:6]
> > Configuration.deprecation: fs.default.name is deprecated. Instead, use
> > fs.defaultFS
> > 2015-09-01 09:34:49,072 INFO  [master:ubuntu:6] util.FSUtils: Waiting
> > for dfs to exit safe mode...
> > 2015-09-01 09:34:59,079 INFO  [master:ubuntu:6] util.FSUtils: Waiting
> > for dfs to exit safe mode...
> > 2015-09-01 09:35:09,083 INFO  [master:ubuntu:6] util.FSUtils: Waiting
> > for dfs to exit safe mode...
> > 2015-09-01 09:35:19,091 INFO  [master:ubuntu:6] util.FSUtils: Waiting
> > for dfs to exit safe mode...
> >
> >
> > Very recently i am getting this error .Please suggest if any changes need
>



--


Re: Iterate hbase resultscanner

2015-06-10 Thread beeshma r
Hi Devaraj,

Thanks for your suggestion.

Yes, I coded it like this per your suggestion:

public static void put_result(ResultScanner input) throws IOException {

    Iterator<Result> iterator = input.iterator();
    while (iterator.hasNext()) {

        Result next = iterator.next();

        Listclass.add(Conver(next));
    }
}

But I still have the same problem :( Can you please suggest any changes to
this, or how I can overcome it?

Thanks
Beeshma











On Tue, Jun 9, 2015 at 10:31 AM, Devaraja Swami 
wrote:

> Beeshma,
>
> HBase recycles the same Result instance in the ResultScanner iterator, to
> save on memory allocation costs.
> With each iteration, you get the same Result object reference, re-populated
> internally by HBase with the new values for each iteration.
> If you add the Result loop variable instance to your list during the
> iteration, you are adding the same instance each time to your list, but
> internally the values change. At the end of your loop, all the elements
> will therefore be the same, and the values will be that of the last
> iteration.
> The correct way to use the ResultScanner iteration is to extract the data
> you want from the Result loop variable within the iteration and collect the
> extracted data in your list, or alternately to create a new Result instance
> from the Result loop variable, and add the new instance to your list.
>
>
> On Mon, Jun 8, 2015 at 10:03 AM, beeshma r  wrote:
>
> > Hi Ted
> >
> > I declared Listclass as
> > public static List map_list_main=new ArrayList();
> >
> > i know my logic is correct .only issue is adding my result to this
> > Listclass.Also my conversion works perfectly .i checked  this based on
> > print out put results.
> >
> > only issue is why final element of Listclass updated for all elements in
> > list
> >
> > I am using hbase version hbase-0.98.6.1
> > Hadoop -2.5.1
> >
> > Also i using finagle client ,server module.So can u advise  How do i
> debug
> > this?
> >
> > Thanks
> > Beeshma
> >
> >
> >
> >
> >
> > On Mon, Jun 8, 2015 at 9:24 AM, Ted Yu  wrote:
> >
> > > From your description, the conversion inside for(Result
> rs:ListofResult)
> > > loop was correct.
> > >
> > > Since Listclass is custom, probably you need to show us how it is
> > > implemented.
> > >
> > > Which hbase release are you using ?
> > >
> > > On Mon, Jun 8, 2015 at 9:19 AM, beeshma r  wrote:
> > >
> > > > HI
> > > >
> > > > I have weired issue with Hbase Result Scanner
> > > >
> > > > This is my scenario
> > > >
> > > > i have a list of Resultscanner(ListofScanner)
> > > > from this Resultscanner list i want extract all results as list
> of
> > > > result(ListofResult)
> > > > and from result list i want iterate all cell values add to custom
> > > class
> > > > list (Listclass)
> > > >
> > > > So i coded like this
> > > >
> > > > for(ResultScanner resca:ListofScanner)
> > > > {
> > > > for(Result Res:resca)
> > > > {
> > > >
> > > > ListofResult.add(Res);
> > > >
> > > >
> > > > }
> > > > }
> > > >
> > > >
> > > > for(Result rs:ListofResult)
> > > > {
> > > >
> > > >Listclass.add(Conver(rs));//Conver is function that converts
> results
> > > and
> > > > return as a my class object
> > > >
> > > > }
> > > >
> > > > Here is the O/p
> > > >
> > > > suppose i expect this result form Listclass if a print a all values
> > > >
> > > > gattner
> > > > lisa
> > > > Miely
> > > > luzz
> > > >
> > > > But actual list i got
> > > >
> > > > luzz
> > > > luzz
> > > > luzz
> > > > luzz
> > > >
> > > > The last element of Listclass is got updated to all values
> > > >
> > > > I checked for each Result output after conversion ( Conver(rs) ) it
> > > returns
> > > > as expected. But only issue adding Listofclass.
> > > >
> > > > Also i run with maven exec:java  command(org.codehaus.mojo) .Break
> > point
> > > > also not working for me  :(
> > > > Please give me advice how to debug this.
> > > >
> > > >
> > > >
> > > > Thanks
> > > > Beeshma
> > > >
> > >
> >
> >
> >
> > --
> >
>



--
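If the scanner really does hand back one reused mutable object per iteration, as described in this thread, the symptom can be reproduced without HBase at all. Below is a minimal plain-Java sketch; `FakeResult` and `fakeScanner` are hypothetical stand-ins for `Result` and `ResultScanner`. It shows why storing the loop reference yields a list full of the last value, while copying the data out inside the loop preserves each row:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class RecycledIteratorDemo {

    // Stand-in for HBase's Result: one mutable holder that the
    // "scanner" repopulates on every next() call.
    static class FakeResult {
        byte[] row;
    }

    // Stand-in for ResultScanner: always hands back the same instance.
    static Iterable<FakeResult> fakeScanner(final String... rows) {
        final FakeResult shared = new FakeResult();
        return () -> new Iterator<FakeResult>() {
            int i = 0;
            public boolean hasNext() { return i < rows.length; }
            public FakeResult next() {
                shared.row = rows[i++].getBytes();  // repopulate in place
                return shared;                      // same reference each time
            }
        };
    }

    public static void main(String[] args) {
        List<FakeResult> wrong = new ArrayList<>();
        List<String> right = new ArrayList<>();
        for (FakeResult r : fakeScanner("gattner", "lisa", "miely", "luzz")) {
            wrong.add(r);                  // stores the shared reference
            right.add(new String(r.row));  // copies the data out instead
        }
        // Every element of 'wrong' now shows the last row scanned.
        for (FakeResult r : wrong) System.out.println(new String(r.row)); // luzz, four times
        // 'right' kept the extracted values.
        for (String s : right) System.out.println(s); // gattner lisa miely luzz
    }
}
```

The same pattern applies to the HBase code above: extract what you need from each Result (or deep-copy it) inside the scanner loop, before the next call to the iterator.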


Re: Iterate hbase resultscanner

2015-06-08 Thread beeshma r
Hi Ted,

I declared Listclass as
public static List map_list_main=new ArrayList();

I know my logic is correct; the only issue is adding my results to this
Listclass. My conversion also works correctly; I verified this from the
printed output.

The only question is why the final element of Listclass ends up repeated
for all elements in the list.

I am using HBase version 0.98.6.1 and Hadoop 2.5.1.

Also, I am using a Finagle client/server module. Can you advise how I can
debug this?

Thanks
Beeshma





On Mon, Jun 8, 2015 at 9:24 AM, Ted Yu  wrote:

> From your description, the conversion inside for(Result rs:ListofResult)
> loop was correct.
>
> Since Listclass is custom, probably you need to show us how it is
> implemented.
>
> Which hbase release are you using ?
>
> On Mon, Jun 8, 2015 at 9:19 AM, beeshma r  wrote:
>
> > HI
> >
> > I have weired issue with Hbase Result Scanner
> >
> > This is my scenario
> >
> > i have a list of Resultscanner(ListofScanner)
> > from this Resultscanner list i want extract all results as list of
> > result(ListofResult)
> > and from result list i want iterate all cell values add to custom
> class
> > list (Listclass)
> >
> > So i coded like this
> >
> > for(ResultScanner resca:ListofScanner)
> > {
> > for(Result Res:resca)
> > {
> >
> > ListofResult.add(Res);
> >
> >
> > }
> > }
> >
> >
> > for(Result rs:ListofResult)
> > {
> >
> >Listclass.add(Conver(rs));//Conver is function that converts results
> and
> > return as a my class object
> >
> > }
> >
> > Here is the O/p
> >
> > suppose i expect this result form Listclass if a print a all values
> >
> > gattner
> > lisa
> > Miely
> > luzz
> >
> > But actual list i got
> >
> > luzz
> > luzz
> > luzz
> > luzz
> >
> > The last element of Listclass is got updated to all values
> >
> > I checked for each Result output after conversion ( Conver(rs) ) it
> returns
> > as expected. But only issue adding Listofclass.
> >
> > Also i run with maven exec:java  command(org.codehaus.mojo) .Break point
> > also not working for me  :(
> > Please give me advice how to debug this.
> >
> >
> >
> > Thanks
> > Beeshma
> >
>



--


Iterate hbase resultscanner

2015-06-08 Thread beeshma r
Hi,

I have a weird issue with the HBase ResultScanner.

This is my scenario:

I have a list of ResultScanners (ListofScanner). From this list I want to
extract all results into a list of Results (ListofResult), and from that
list I want to iterate over all cell values and add them to a custom class
list (Listclass).

So I coded it like this:

for (ResultScanner resca : ListofScanner) {
    for (Result Res : resca) {
        ListofResult.add(Res);
    }
}

for (Result rs : ListofResult) {
    // Conver is a function that converts a Result and returns my class object
    Listclass.add(Conver(rs));
}

Here is the output. I expect this result from Listclass if I print all
values:

gattner
lisa
Miely
luzz

But the actual list I got is:

luzz
luzz
luzz
luzz

The last element of Listclass got repeated for all entries.

I checked the output of each Result after conversion (Conver(rs)) and it
returns what I expect; the only issue is adding to Listclass.

Also, I run this with the Maven exec:java command (org.codehaus.mojo), and
breakpoints are not working for me :(
Please give me advice on how to debug this.



Thanks
Beeshma


to get all column qualifiers

2014-12-09 Thread beeshma r
Hi,

I want to get all column qualifiers and the corresponding cell values for a
row key. For example, below is the table structure:

hbase(main):002:0> scan "people"
ROW
COLUMN+CELL
 ana...@hotmail.com   column=colmn_fam:ana...@hotmail.com,
timestamp=14160315498
  33,
value=1
 bees...@gmail.comcolumn=colmn_fam:bees...@gmail.com,
timestamp=141590081652
  2,
value=\x00\x00\x00\x01
 e...@gmail.com column=colmn_fam:e...@gmail.com,
timestamp=1415900817028, va

lue=\x00\x00\x00\x01
 gar...@gmail.com column=colmn_fam:gar...@gmail.com,
timestamp=1416031549845
  ,
value=1
 ja...@gmail.com  column=colmn_fam:ja...@gmail.com,
timestamp=1415900817017,

value=\x00\x00\x00\x01
 kalee...@gmail.com   column=colmn_fam:kalee...@gmail.com,
timestamp=14160315498
  40,
value=1
 kr...@gmail.com  column=colmn_fam:kr...@gmail.com,
timestamp=1416148432981,

value=1
 p...@gmail.com   column=colmn_fam:p...@gmail.com,
timestamp=1416031549850,

value=1
 r...@gmail.comcolumn=colmn_fam:r...@gmail.com,
timestamp=1416722434614, v

alue=1
 y...@gmail.com   column=colmn_fam:y...@gmail.com,
timestamp=1415900817022,

value=\x00\x00\x00\x01
 y...@gmail.comcolumn=colmn_fam:y...@gmail.com,
timestamp=1415900817035, v
  alue=\x00\x00\x00\x01

For the key ana...@hotmail.com I have to get all column qualifiers in the
"colmn_fam" column family.

What is the way to do that? I tried the samples below, but I didn't get
what I expected. Please suggest the best way.

Here is what I tried:

Get get_colums = new Get(ROWKEY);

Result result_of_coumns = testTable.get(get_colums);

NavigableMap<byte[], byte[]> family = result_of_coumns.getFamilyMap(colmnfamily);

for (Map.Entry<byte[], byte[]> entry : family.entrySet()) {
    System.out.println(entry.getKey());
    System.out.println(entry.getValue());
}


O/P

[B@35e023f0
[B@e577d32


LIST METHOD

List<String> mail_list = new ArrayList<String>();

Result result_of_coumns = testTable.get(get_colums);

for (Byte kv : result_of_coumns.getRow()) {
    System.out.println(kv.toString());
    mail_list.add(kv.toString());
}

for (String temp : mail_list) {
    System.out.println(temp);
}



 O/P

97
121
121
111
64
103
109
97
105
108
46
99
111
109

Thanks
Beesh
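The `[B@35e023f0` output above is not corrupted data: it is the default `toString()` of a `byte[]`, which prints the array type plus an identity hash. Below is a minimal plain-Java sketch (no HBase dependency; the map stands in for the `NavigableMap<byte[], byte[]>` that `Result.getFamilyMap()` returns, and the qualifier string is made up) of decoding each key and value before printing:

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class ByteArrayPrintDemo {

    // Decode a byte[] qualifier or value into readable text.
    static String decode(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Stand-in for the qualifier -> value map of one column family.
        Map<byte[], byte[]> family = new LinkedHashMap<>();
        family.put("someone@example.com".getBytes(StandardCharsets.UTF_8),
                   "1".getBytes(StandardCharsets.UTF_8));

        for (Map.Entry<byte[], byte[]> entry : family.entrySet()) {
            System.out.println(entry.getKey());           // prints something like [B@35e023f0
            System.out.println(decode(entry.getKey()));   // prints someone@example.com
            System.out.println(decode(entry.getValue())); // prints 1
        }
    }
}
```

In HBase client code the equivalent helper is `org.apache.hadoop.hbase.util.Bytes.toString(entry.getKey())`.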


Re: scan column qualifiers in column family

2014-11-20 Thread beeshma r
Hi Anoop,

Thanks for the idea :)

It worked:

Scan scan_col = new Scan();
scan_col.addColumn(colmnfamily, email_b);
scan_col.setCaching(1);

ResultScanner results = testTable.getScanner(scan_col);

for (Result result = results.next(); result != null; result = results.next()) {

    String mail_id = new String(result.getRow());
    System.out.println(mail_id);

    if (mail_id.equals(mail)) {
        ret = true;
        System.out.println("column is present");
        break;
    } else {
        System.out.println("column is not present");
        ret = false;
    }
}

On Thu, Nov 20, 2014 at 4:11 AM, Anoop John  wrote:

> byte[] email_b=Bytes.toBytes(mail);//column qulifier
> byte[] colmnfamily=Bytes.toBytes("colmn_fam");//column family
> Scan scan_col=new Scan (Bytes.toBytes("colmn_fam"),email_b);
>
> Scan constructor is taking start and stop rows (rks). You seem to pass a cf
> and q names.
>
> Scan s = new Scan();
> s.addColumn(byte [] family, byte [] qualifier)
> s.setCaching(1)
>
> Just open scanner and call next() once. If you get a not null result means
> u have the give q in the cf.
>
>
> -Anoop-
>
>
>
> On Wed, Nov 19, 2014 at 11:24 PM, Ted Yu  wrote:
>
> > bq.
> > org.freinds_rep.java.Insert_friend.search_column(Insert_friend.java:106)
> >
> > Does line 106 correspond to result.containsColumn() call ?
> > If so, result was null.
> >
> > On Wed, Nov 19, 2014 at 9:47 AM, beeshma r  wrote:
> >
> > > Hi
> > >
> > > i need to find whether particular column qualifier present in column
> > family
> > >  so i did code like this
> > >
> > > As per document
> > >
> > > public boolean containsColumn(byte[] family,
> > >  byte[] qualifier)
> > >
> > > Checks for existence of a value for the specified column (empty or
> not).
> > > Parameters:family - family namequalifier - column qualifierReturns:true
> > if
> > > at least one value exists in the result, false if not
> > >
> > > // //my code
> > >
> > > public static boolean search_column(String mail) throws IOException
> > > {
> > >
> > > HTable testTable = new HTable(frinds_util.get_config(),
> > > "people");//configuration
> > > byte[] email_b=Bytes.toBytes(mail);//column qulifier
> > > byte[] colmnfamily=Bytes.toBytes("colmn_fam");//column family
> > > Scan scan_col=new Scan (Bytes.toBytes("colmn_fam"),email_b);
> > > ResultScanner results = testTable.getScanner(scan_col);
> > > Result result = results.next();
> > >
> > > if(result.containsColumn(colmnfamily, email_b))//check whether
> > > column presernt
> > >
> > > {
> > > System.out.println("column is present");
> > > ret=true;
> > >
> > > }
> > > return ret;
> > >
> > > }
> > >
> > > my build is failed with below o/p
> > >
> > >
> > >
> > > java.lang.reflect.InvocationTargetException
> > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > at
> > >
> > >
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > at
> > >
> > >
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > at java.lang.reflect.Method.invoke(Method.java:606)
> > > at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
> > > at java.lang.Thread.run(Thread.java:724)
> > > Caused by: java.lang.NullPointerException
> > > at
> > >
> org.freinds_rep.java.Insert_friend.search_column(Insert_friend.java:106)
> > > at org.freinds_rep.java.Insert_friend.main(Insert_friend.java:156)
> > > ... 6 more
> > > [WARNING] thread Thread[org.freinds_rep.java.Insert_friend.main(
> > > 127.0.0.1:2181),5,org.freinds_rep.java.Insert_friend] was interrupted
> > but
> > > is still alive after waiting at least 15000msecs
> > > [WARNING] thread Thread[org.freinds_rep.java.Insert_friend.main(
> > > 127.0.0.1:2181),5,org.freinds_rep.java.I

scan column qualifiers in column family

2014-11-19 Thread beeshma r
Hi,

I need to find whether a particular column qualifier is present in a column
family, so I coded it like this.

As per the documentation:

public boolean containsColumn(byte[] family,
                              byte[] qualifier)

Checks for existence of a value for the specified column (empty or not).
Parameters: family - family name; qualifier - column qualifier
Returns: true if at least one value exists in the result, false if not

// my code

public static boolean search_column(String mail) throws IOException {

    HTable testTable = new HTable(frinds_util.get_config(), "people"); // configuration
    byte[] email_b = Bytes.toBytes(mail); // column qualifier
    byte[] colmnfamily = Bytes.toBytes("colmn_fam"); // column family
    Scan scan_col = new Scan(Bytes.toBytes("colmn_fam"), email_b);
    ResultScanner results = testTable.getScanner(scan_col);
    Result result = results.next();

    if (result.containsColumn(colmnfamily, email_b)) { // check whether column is present
        System.out.println("column is present");
        ret = true;
    }
    return ret;
}

My build failed with the output below:



java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.NullPointerException
at
org.freinds_rep.java.Insert_friend.search_column(Insert_friend.java:106)
at org.freinds_rep.java.Insert_friend.main(Insert_friend.java:156)
... 6 more
[WARNING] thread Thread[org.freinds_rep.java.Insert_friend.main(
127.0.0.1:2181),5,org.freinds_rep.java.Insert_friend] was interrupted but
is still alive after waiting at least 15000msecs
[WARNING] thread Thread[org.freinds_rep.java.Insert_friend.main(
127.0.0.1:2181),5,org.freinds_rep.java.Insert_friend] will linger despite
being asked to die via interruption
[WARNING] NOTE: 1 thread(s) did not finish despite being asked to  via
interruption. This is not a problem with exec:java, it is a problem with
the running code. Although not serious, it should be remedied.
[WARNING] Couldn't destroy threadgroup
org.codehaus.mojo.exec.ExecJavaMojo$IsolatedThreadGroup[name=org.freinds_rep.java.Insert_friend,maxpri=10]
java.lang.IllegalThreadStateException
at java.lang.ThreadGroup.destroy(ThreadGroup.java:775)
at org.codehaus.mojo.exec.ExecJavaMojo.execute(ExecJavaMojo.java:328)
at
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
[INFO]

[INFO] BUILD FAILURE
[INFO]

[INFO] Total time: 1:23.294s
[INFO] Finished at: Wed Nov 19 09:08:48 PST 2014
[INFO] Final Memory: 10M/137M





Any idea how to solve this?
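The NullPointerException at Insert_friend.java:106 is consistent with `results.next()` returning null: the `Scan` above was constructed with the `new Scan(startRow, stopRow)` form but was handed a column family and qualifier instead, so the scan range matched nothing, `next()` returned null, and `containsColumn()` was called on a null reference. Below is a minimal plain-Java sketch of the missing null guard (names are hypothetical; a `String` stands in for the `Result`):

```java
public class NullResultDemo {

    // Stand-in for ResultScanner.next(): null signals "no row matched".
    static String next(boolean anyRowMatched) {
        return anyRowMatched ? "row1" : null;
    }

    // The shape search_column() needs: guard against a null result
    // before calling any method on it.
    static boolean searchColumn(boolean anyRowMatched) {
        String result = next(anyRowMatched);
        if (result == null) {
            return false;  // scan matched nothing, so the qualifier is absent
        }
        return result.contains("row");
    }

    public static void main(String[] args) {
        System.out.println(searchColumn(true));   // true
        System.out.println(searchColumn(false));  // false, and no NPE
    }
}
```

In the real code this means checking `result != null` before `result.containsColumn(colmnfamily, email_b)`, in addition to fixing the Scan construction as suggested in the replies.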


Fwd: error in starting hbase

2014-11-03 Thread beeshma r
Hi Ted,

Any update on this error? I tried pseudo-distributed mode, but I still have
the error:

hbase(main):001:0> create 't1','c1'

ERROR: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V

Here is some help for this command:
Creates a table. Pass a table name, and a set of column family
specifications (at least one), and, optionally, table configuration.
Column specification can be a simple string (name), or a dictionary
(dictionaries are described below in main help output), necessarily
including NAME attribute.
Examples:



-- Forwarded message ------
From: beeshma r 
Date: Sun, Nov 2, 2014 at 7:22 AM
Subject: Re: error in starting hbase
To: user@hbase.apache.org


Hi Ted,

Thanks for your reply. Yes, I am running standalone mode.
After changing my ZooKeeper property that was resolved, but now I have
another two issues.

2014-11-02 07:06:32,948 DEBUG [main] master.HMaster:
master/ubuntu.ubuntu-domain/127.0.1.1:0 HConnection server-to-server
retries=350
2014-11-02 07:06:33,458 INFO  [main] ipc.RpcServer:
master/ubuntu.ubuntu-domain/127.0.1.1:0: started 10 reader(s).
2014-11-02 07:06:33,670 INFO  [main] impl.MetricsConfig: loaded properties
from hadoop-metrics2-hbase.properties
2014-11-02 07:06:33,766 INFO  [main] impl.MetricsSystemImpl: Scheduled
snapshot period at 10 second(s).
2014-11-02 07:06:33,766 INFO  [main] impl.MetricsSystemImpl: HBase metrics
system started
2014-11-02 07:06:34,592 ERROR [main] master.HMasterCommandLine: Master
exiting
java.lang.RuntimeException: Failed construction of Master: class
org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMasternull
at
org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:140)
at
org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:202)
at
org.apache.hadoop.hbase.LocalHBaseCluster.(LocalHBaseCluster.java:152)
at
org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:179)
at
org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at
org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2785)
Caused by: java.lang.RuntimeException:
java.lang.reflect.InvocationTargetException
at
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
at org.apache.hadoop.security.Groups.(Groups.java:55)
at
org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:182)
at
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:235)
at
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:214)
at
org.apache.hadoop.security.UserGroupInformation.isAuthenticationMethodEnabled(UserGroupInformation.java:275)

--

And when i create table

hbase(main):001:0> create 't1','e1'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/home/beeshma/hbase-0.98.6.1-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/home/beeshma/hadoop-1.2.1/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.

ERROR: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V

-

 hbase(main):002:0> list
TABLE


ERROR: Could not initialize class
org.apache.hadoop.security.JniBasedUnixGroupsMapping

Here is some help for this command:
List all tables in hbase. Optional regular expression parameter could
be used to filter the output. Examples:

  hbase> list
  hbase> list 'abc.*'
  hbase> list 'ns:abc.*'
  hbase> list 'ns:.*'


hbase(main):003:0> beeshma@ubuntu:~/hbase-0.98.6.1-hadoop2/bin$



On Sun, Nov 2, 2014 at 7:01 AM, Ted Yu  wrote:

> Are you running hbase in standalone mode ?
>
> See http://hbase.apache.org/book.html#zookeeper
>
> bq. To toggle HBase management of ZooKeeper, use the HBASE_MANAGES_ZK
> variable
> in conf/hbase-env.sh.
>
> Cheers
>
> On Sun, Nov 2, 2014 at 6:41 AM, beeshma r  wrote:
>
> > HI
> >
> > When i start hbase fallowing error is occurred .How to  solve this? i
> > haven't add any zokeeper path anywhere?
> >
> > Please suggest this.
> >
> > 2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
> > environment:java.io.tmpdir=/tmp
> > 2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
> > environment:java.compiler=
> > 2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
> > environment:os.name=Lin

Re: error in starting hbase

2014-11-02 Thread beeshma r
Hi Ted,

Thanks for your reply. Yes, I am running in standalone mode.
After changing my ZooKeeper property it was resolved, and now I have
another two issues.

2014-11-02 07:06:32,948 DEBUG [main] master.HMaster:
master/ubuntu.ubuntu-domain/127.0.1.1:0 HConnection server-to-server
retries=350
2014-11-02 07:06:33,458 INFO  [main] ipc.RpcServer:
master/ubuntu.ubuntu-domain/127.0.1.1:0: started 10 reader(s).
2014-11-02 07:06:33,670 INFO  [main] impl.MetricsConfig: loaded properties
from hadoop-metrics2-hbase.properties
2014-11-02 07:06:33,766 INFO  [main] impl.MetricsSystemImpl: Scheduled
snapshot period at 10 second(s).
2014-11-02 07:06:33,766 INFO  [main] impl.MetricsSystemImpl: HBase metrics
system started
2014-11-02 07:06:34,592 ERROR [main] master.HMasterCommandLine: Master
exiting
java.lang.RuntimeException: Failed construction of Master: class
org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMasternull
at
org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:140)
at
org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:202)
at
org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:152)
at
org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:179)
at
org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at
org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2785)
Caused by: java.lang.RuntimeException:
java.lang.reflect.InvocationTargetException
at
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
at org.apache.hadoop.security.Groups.<init>(Groups.java:55)
at
org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:182)
at
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:235)
at
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:214)
at
org.apache.hadoop.security.UserGroupInformation.isAuthenticationMethodEnabled(UserGroupInformation.java:275)

--

And when I create a table:

hbase(main):001:0> create 't1','e1'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/home/beeshma/hbase-0.98.6.1-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/home/beeshma/hadoop-1.2.1/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.

ERROR: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V

-

 hbase(main):002:0> list
TABLE


ERROR: Could not initialize class
org.apache.hadoop.security.JniBasedUnixGroupsMapping

Here is some help for this command:
List all tables in hbase. Optional regular expression parameter could
be used to filter the output. Examples:

  hbase> list
  hbase> list 'abc.*'
  hbase> list 'ns:abc.*'
  hbase> list 'ns:.*'


hbase(main):003:0> beeshma@ubuntu:~/hbase-0.98.6.1-hadoop2/bin$
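A hedged aside for later readers of the JniBasedUnixGroupsMapping failures above: the SLF4J warning in this transcript shows jars loaded from both hadoop-1.2.1 and an hbase-0.98.6.1-hadoop2 build, and mixing Hadoop major versions on the classpath is a common cause of native-method link errors like this one. A small sketch of how one might spot duplicate artifact versions in a lib directory (the scratch directory and jar names below are illustrative, not taken from the poster's machine):

```shell
# Create a scratch lib dir containing two versions of the same artifact,
# mimicking the slf4j-log4j12 duplication visible in the SLF4J warning.
libdir="$(mktemp -d)"
touch "$libdir/slf4j-log4j12-1.6.4.jar" "$libdir/slf4j-log4j12-1.4.3.jar"

# Strip the version suffix and print artifact names that occur more than
# once -- each hit is a likely classpath conflict.
ls "$libdir" | sed 's/-[0-9][0-9.]*\.jar$//' | sort | uniq -d   # -> slf4j-log4j12
```

Running the same `find`/`sed`/`uniq -d` pipeline over the real `$HBASE_HOME/lib` and any Hadoop lib directories on the classpath would show whether two Hadoop or SLF4J versions are colliding.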



On Sun, Nov 2, 2014 at 7:01 AM, Ted Yu  wrote:

> Are you running hbase in standalone mode ?
>
> See http://hbase.apache.org/book.html#zookeeper
>
> bq. To toggle HBase management of ZooKeeper, use the HBASE_MANAGES_ZK
> variable
> in conf/hbase-env.sh.
>
> Cheers
>
> On Sun, Nov 2, 2014 at 6:41 AM, beeshma r  wrote:
>
> > HI
> >
> > When i start hbase fallowing error is occurred .How to  solve this? i
> > haven't add any zokeeper path anywhere?
> >
> > Please suggest this.
> >
> > 2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
> > environment:java.io.tmpdir=/tmp
> > 2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
> > environment:java.compiler=<NA>
> > 2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
> > environment:os.name=Linux
> > 2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
> > environment:os.arch=amd64
> > 2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
> > environment:os.version=3.11.0-12-generic
> > 2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
> > environment:user.name=beeshma
> > 2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
> > environment:user.home=/home/beeshma
> > 2014-11-01 20:01:51,197 INFO  [main] server.ZooKeeperServer: Server
> > environment:user.dir=/home/beeshma/hbase-0.98.6.1-hadoop2/bin
> > 2014-11-01 20:01:51,202 ERROR [main] master.HMaster
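To make the toggle Ted points at concrete, a minimal sketch (in a real setup the line belongs in `$HBASE_HOME/conf/hbase-env.sh`; a temp file stands in for it here so the snippet touches nothing on disk):

```shell
# HBASE_MANAGES_ZK controls whether start-hbase.sh also starts the bundled
# ZooKeeper: true = HBase manages it (standalone/pseudo-distributed mode),
# false = you run an external ZooKeeper ensemble yourself.
envfile="$(mktemp)"   # stand-in for conf/hbase-env.sh
echo 'export HBASE_MANAGES_ZK=true' >> "$envfile"
grep -c 'HBASE_MANAGES_ZK' "$envfile"   # -> 1
```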

error in starting hbase

2014-11-02 Thread beeshma r
Hi,

When I start HBase, the following error occurs. How do I solve this? I
have not added any ZooKeeper path anywhere.

Please advise.

2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
environment:java.io.tmpdir=/tmp
2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
environment:java.compiler=<NA>
2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
environment:os.name=Linux
2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
environment:os.arch=amd64
2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
environment:os.version=3.11.0-12-generic
2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
environment:user.name=beeshma
2014-11-01 20:01:51,196 INFO  [main] server.ZooKeeperServer: Server
environment:user.home=/home/beeshma
2014-11-01 20:01:51,197 INFO  [main] server.ZooKeeperServer: Server
environment:user.dir=/home/beeshma/hbase-0.98.6.1-hadoop2/bin
2014-11-01 20:01:51,202 ERROR [main] master.HMasterCommandLine: Master
exiting
java.io.IOException: Unable to create data directory
/home/beesh_hadoop2/zookeeper/zookeeper_0/version-2
at
org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:85)
at
org.apache.zookeeper.server.ZooKeeperServer.<init>(ZooKeeperServer.java:213)
at
org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:162)
at
org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:131)
at
org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:165)
at
org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at
org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2785)
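For later readers of this "Unable to create data directory" trace: the usual cause is that the user starting HBase cannot create the configured `hbase.zookeeper.property.dataDir` tree. A sketch of the check, using a scratch directory in place of the real `/home/beesh_hadoop2/zookeeper` path from the log:

```shell
# Stand-in for the configured dataDir; in the log above it is
# /home/beesh_hadoop2/zookeeper/zookeeper_0/version-2.
base="$(mktemp -d)"
datadir="$base/zookeeper/zookeeper_0/version-2"

# This mkdir is effectively what ZooKeeper's FileTxnSnapLog needs to be
# able to do; if a parent is missing or not writable by the current user,
# you get the IOException above instead.
mkdir -p "$datadir"
test -d "$datadir" && test -w "$datadir" && echo "data dir is usable"   # -> data dir is usable
```

On a real machine the equivalent fix is typically `mkdir -p` on the configured dataDir followed by `chown -R` to the user that runs HBase.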


Re: error in installing and starting hbase

2014-10-26 Thread beeshma r
Hi Sean,

It's resolved :) The issue was with my HBase home directory.


But now there is a new issue :(


I am trying to create a simple table; how do I proceed?
hbase(main):006:0> create 'df','ff'

ERROR: Could not initialize class
org.apache.hadoop.security.JniBasedUnixGroupsMapping

Here is some help for this command:
Creates a table. Pass a table name, and a set of column family
specifications (at least one), and, optionally, table configuration.
Column specification can be a simple string (name), or a dictionary
(dictionaries are described below in main help output), necessarily
including NAME attribute.
Examples:

Create a table with namespace=ns1 and table qualifier=t1
  hbase> create 'ns1:t1', {NAME => 'f1', VERSIONS => 5}

Create a table with namespace=default and table qualifier=t1
  hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
  hbase> # The above in shorthand would be the following:
  hbase> create 't1', 'f1', 'f2', 'f3'
  hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, BLOCKCACHE => true}
  hbase> create 't1', {NAME => 'f1', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '10'}}

Table configuration options can be put at the end.
Examples:

  hbase> create 'ns1:t1', 'f1', SPLITS => ['10', '20', '30', '40']
  hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
  hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
  hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 'myvalue' }
  hbase> # Optionally pre-split the table into NUMREGIONS, using
  hbase> # SPLITALGO ("HexStringSplit", "UniformSplit" or classname)
  hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
  hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit', CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}

You can also keep around a reference to the created table:

  hbase> t1 = create 't1', 'f1'

Which gives you a reference to the table named 't1', on which you can then
call methods.





On Sun, Oct 26, 2014 at 6:58 AM, beeshma r  wrote:

> hi sean
>
> Actually i started habse  from correct path
> *beeshma@ubuntu:~/hbase-0.98.6.1-hadoop2/bin$ ./start-hbase.sh*
>
> Error: Could not find or load main class
> org.apache.hadoop.hbase.util.HBaseConfTool
> Error: Could not find or load main class
> org.apache.hadoop.hbase.zookeeper.ZKServerTool
> starting master, logging to
> /home/beeshma/hbase/logs/hbase-beeshma-master-ubuntu.out
> Error: Could not find or load main class
> org.apache.hadoop.hbase.master.HMaster
> localhost: starting regionserver, logging to
> /home/beeshma/hbase-0.98.6.1-hadoop2/bin/../logs/hbase-beeshma-regionserver-ubuntu.out
>
>
> *i fallowed standalone mode*
> https://hbase.apache.org/book/quickstart.html
>
>
> *beeshma@ubuntu:~/hbase-0.98.6.1-hadoop2/bin$ ./hbase shell*
> /usr/lib/jruby/lib/ruby/site_ruby/shared/builtin/javasupport/java.rb:51:in
> `method_missing': cannot load Java class org.apache.hadoop.hbase.HConstants
> (NameError)
> from /home/beeshma/hbase/hbase-shell/src/main/ruby/hbase.rb:39
> from /home/beeshma/hbase/hbase-shell/src/main/ruby/hbase.rb:105:in
> `require'
> from /home/beeshma/hbase/bin/hirb.rb:105
>
>  Thanks
> Beeshma
>
>
>
> On Sun, Oct 26, 2014 at 6:21 AM, Sean Busbey  wrote:
>
>> It looks like you're in a home directory, but start-hbase.sh is in your
>> path.
>>
>> What manner of installation did you use?
>>
>> --
>> Sean
>> On Oct 26, 2014 6:45 AM, "beeshma r"  wrote:
>>
>> > Hi Ted
>> >
>> > i now trying to install hbase in my ubuntu.i struck here with this
>> problem
>> >
>> >  $start-hbase.sh
>> >
>> > > Error: Could not find or load main class org.apache.hadoop.hbase.util.
>> > > HBaseConfTool
>> > > Error: Could not find or load main class org.apache.hadoop.hbase.
>> > > zookeeper.ZKServerTool
>> > > Error: Could not find or load main class org.apache.hadoop.hbase.
>> > > master.HMaster
>> >
>> > I have tried all latest versions but no use
>> >
>> > This is *hbase-site.xml*
>> >
>> > <configuration>
>> >   <property>
>> >     <name>hbase.rootdir</name>
>> >     <value>file:///home/beesh_hadoop2/hbase</value>
>> >   </property>
>> >   <property>
>> >     <name>hbase.zookeeper.property.dataDir</name>
>> >     <value>/home/beesh_hadoop2/zookeeper</value>
>> >   </property>
>> > </configuration>
>> >
>> >
>> >
>> > Here i attched  log
>> >
>>
>
>
>
> --
>
>
>
>
>
>


--


Re: error in installing and starting hbase

2014-10-26 Thread beeshma r
Hi Sean,

Actually, I started HBase from the correct path:
*beeshma@ubuntu:~/hbase-0.98.6.1-hadoop2/bin$ ./start-hbase.sh*

Error: Could not find or load main class
org.apache.hadoop.hbase.util.HBaseConfTool
Error: Could not find or load main class
org.apache.hadoop.hbase.zookeeper.ZKServerTool
starting master, logging to
/home/beeshma/hbase/logs/hbase-beeshma-master-ubuntu.out
Error: Could not find or load main class
org.apache.hadoop.hbase.master.HMaster
localhost: starting regionserver, logging to
/home/beeshma/hbase-0.98.6.1-hadoop2/bin/../logs/hbase-beeshma-regionserver-ubuntu.out


*I followed the standalone-mode quickstart:*
https://hbase.apache.org/book/quickstart.html


*beeshma@ubuntu:~/hbase-0.98.6.1-hadoop2/bin$ ./hbase shell*
/usr/lib/jruby/lib/ruby/site_ruby/shared/builtin/javasupport/java.rb:51:in
`method_missing': cannot load Java class org.apache.hadoop.hbase.HConstants
(NameError)
from /home/beeshma/hbase/hbase-shell/src/main/ruby/hbase.rb:39
from /home/beeshma/hbase/hbase-shell/src/main/ruby/hbase.rb:105:in
`require'
from /home/beeshma/hbase/bin/hirb.rb:105
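Note for later readers: the NameError above is loading `hirb.rb` from `/home/beeshma/hbase` (a source checkout) rather than from the extracted release in `/home/beeshma/hbase-0.98.6.1-hadoop2`, which matches the resolution reported later in this thread ("issue with my hbase home directory"). A hedged environment-setup fragment, with paths taken from this transcript, of one plausible fix:

```shell
# Point HBASE_HOME at the binary release, not the source tree, and put its
# bin/ first on PATH before invoking any hbase scripts, so start-hbase.sh
# and the shell resolve classes and ruby files from the same installation.
export HBASE_HOME=/home/beeshma/hbase-0.98.6.1-hadoop2
export PATH="$HBASE_HOME/bin:$PATH"
"$HBASE_HOME/bin/start-hbase.sh"
```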

 Thanks
Beeshma



On Sun, Oct 26, 2014 at 6:21 AM, Sean Busbey  wrote:

> It looks like you're in a home directory, but start-hbase.sh is in your
> path.
>
> What manner of installation did you use?
>
> --
> Sean
> On Oct 26, 2014 6:45 AM, "beeshma r"  wrote:
>
> > Hi Ted
> >
> > i now trying to install hbase in my ubuntu.i struck here with this
> problem
> >
> >  $start-hbase.sh
> >
> > > Error: Could not find or load main class org.apache.hadoop.hbase.util.
> > > HBaseConfTool
> > > Error: Could not find or load main class org.apache.hadoop.hbase.
> > > zookeeper.ZKServerTool
> > > Error: Could not find or load main class org.apache.hadoop.hbase.
> > > master.HMaster
> >
> > I have tried all latest versions but no use
> >
> > This is *hbase-site.xml*
> >
> > <configuration>
> >   <property>
> >     <name>hbase.rootdir</name>
> >     <value>file:///home/beesh_hadoop2/hbase</value>
> >   </property>
> >   <property>
> >     <name>hbase.zookeeper.property.dataDir</name>
> >     <value>/home/beesh_hadoop2/zookeeper</value>
> >   </property>
> > </configuration>
> >
> >
> >
> > Here i attched  log
> >
>



--