Re: zookeeper.znode.parent mismatch exception

2013-12-12 Thread Bharath Vissapragada
Can you check whether your ZK servers are actually forming a quorum or are
just standalone instances? If they don't form a quorum, znodes are created
only on whichever ZK server the master first connects to, so the apps/RS
connecting to the other instances don't see the znode and thus raise this
error.
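
One way to test that is to connect to each ZK server individually and check
whether the parent znode exists on it. A minimal sketch, assuming the
ZooKeeper 3.4.x Java client and placeholder quorum hosts zk1..zk3:

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.Watcher.Event.KeeperState;
    import org.apache.zookeeper.ZooKeeper;

    public class ZnodeQuorumCheck {
        public static void main(String[] args) throws Exception {
            String parent = "/hbase";  // your zookeeper.znode.parent
            String[] servers = {"zk1:2181", "zk2:2181", "zk3:2181"};
            for (String server : servers) {
                final CountDownLatch connected = new CountDownLatch(1);
                ZooKeeper zk = new ZooKeeper(server, 30000, event -> {
                    if (event.getState() == KeeperState.SyncConnected) {
                        connected.countDown();
                    }
                });
                connected.await();  // wait for the session to establish
                boolean exists = zk.exists(parent, false) != null;
                System.out.println(server + " sees " + parent + ": " + exists);
                zk.close();
            }
        }
    }

If some servers report false while the one the master connected to reports
true, the quorum is not actually formed.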


On Fri, Dec 13, 2013 at 1:03 PM, Sandeep L wrote:

> No, we didn't upgrade. We are using HBase 0.94.1.
> We are not facing this issue all the time, only once in a while (2 or 3
> times a day). The rest of the time it works without any issue.
>
> Thanks, Sandeep.
>
> > Date: Thu, 12 Dec 2013 23:23:18 -0800
> > Subject: Re: zookeeper.znode.parent mismatch exception
> > From: pradeep...@gmail.com
> > To: user@hbase.apache.org
> >
> > Did you recently upgrade to 0.96? This is a problem I faced with
> > mismatched clients connecting to a 0.96 cluster. Starting in that
> > version, the root node for zookeeper changed from /hbase to
> > /hbase-unsecure (if in unsecure mode).
> >
> >
> > > On Thu, Dec 12, 2013 at 10:47 PM, Sandeep L wrote:
> >
> > > Hi,
> > > Our production cluster is in distributed mode with 5 servers as the
> > > zookeeper quorum.
> > > While accessing the production HBase server from our application we are
> > > getting a zookeeper.znode.parent mismatch exception.
> > > The following is the exception in our log files:
> > > from org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker in
> > > pool-67467-thread-1: Check the value configured in
> > > 'zookeeper.znode.parent'. There could be a mismatch with the one
> > > configured in the master.
> > >
> > > We checked all hbase-site.xml files to find any mismatches in the
> > > configurations, but no luck; all configuration files are the same. We
> > > don't have any configuration named zookeeper.znode.parent in
> > > hbase-site.xml, but I have one configuration with the same name in
> > > hbase-default.xml with the value /hbase.
> > > On our application server we don't have any file named
> > > hbase-default.xml, but we do have hbase-site.xml.
> > >
> > > Can someone help us resolve this issue?
> > >
> > > Thanks, Sandeep.
>
>



-- 
Bharath Vissapragada



RE: zookeeper.znode.parent mismatch exception

2013-12-12 Thread Sandeep L
No, we didn't upgrade. We are using HBase 0.94.1.
We are not facing this issue all the time, only once in a while (2 or 3
times a day). The rest of the time it works without any issue.

Thanks, Sandeep.

> Date: Thu, 12 Dec 2013 23:23:18 -0800
> Subject: Re: zookeeper.znode.parent mismatch exception
> From: pradeep...@gmail.com
> To: user@hbase.apache.org
> 
> Did you recently upgrade to 0.96? This is a problem I faced with mismatched
> clients connecting to a 0.96 cluster. Starting in that version, the root
> node for zookeeper changed from /hbase to /hbase-unsecure (if in unsecure
> mode).
> 
> 
> On Thu, Dec 12, 2013 at 10:47 PM, Sandeep L wrote:
> 
> > Hi,
> > Our production cluster is in distributed mode with 5 servers as the
> > zookeeper quorum.
> > While accessing the production HBase server from our application we are
> > getting a zookeeper.znode.parent mismatch exception.
> > The following is the exception in our log files:
> > from org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker in
> > pool-67467-thread-1: Check the value configured in 'zookeeper.znode.parent'.
> > There could be a mismatch with the one configured in the master.
> >
> > We checked all hbase-site.xml files to find any mismatches in the
> > configurations, but no luck; all configuration files are the same. We don't
> > have any configuration named zookeeper.znode.parent in hbase-site.xml, but
> > I have one configuration with the same name in hbase-default.xml with the
> > value /hbase.
> > On our application server we don't have any file named hbase-default.xml,
> > but we do have hbase-site.xml.
> >
> > Can someone help us resolve this issue?
> >
> > Thanks, Sandeep.
  

Re: zookeeper.znode.parent mismatch exception

2013-12-12 Thread Pradeep Gollakota
Did you recently upgrade to 0.96? This is a problem I faced with mismatched
clients connecting to a 0.96 cluster. Starting in that version, the root
node for zookeeper changed from /hbase to /hbase-unsecure (if in unsecure
mode).
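
If this is the cause, the client needs a matching hbase-site.xml, or can set
the parent programmatically. A minimal sketch of the latter (standard client
API; the value shown is only an example and must match the cluster):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ClientConf {
        public static Configuration create() {
            Configuration conf = HBaseConfiguration.create();
            // Must match zookeeper.znode.parent on the cluster: /hbase on a
            // stock install, /hbase-unsecure on some distributions.
            conf.set("zookeeper.znode.parent", "/hbase-unsecure");
            return conf;
        }
    }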


On Thu, Dec 12, 2013 at 10:47 PM, Sandeep L wrote:

> Hi,
> Our production cluster is in distributed mode with 5 servers as the
> zookeeper quorum.
> While accessing the production HBase server from our application we are
> getting a zookeeper.znode.parent mismatch exception.
> The following is the exception in our log files:
> from org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker in
> pool-67467-thread-1: Check the value configured in 'zookeeper.znode.parent'.
> There could be a mismatch with the one configured in the master.
>
> We checked all hbase-site.xml files to find any mismatches in the
> configurations, but no luck; all configuration files are the same. We don't
> have any configuration named zookeeper.znode.parent in hbase-site.xml, but
> I have one configuration with the same name in hbase-default.xml with the
> value /hbase.
> On our application server we don't have any file named hbase-default.xml,
> but we do have hbase-site.xml.
>
> Can someone help us resolve this issue?
>
> Thanks, Sandeep.


zookeeper.znode.parent mismatch exception

2013-12-12 Thread Sandeep L
Hi,
Our production cluster is in distributed mode with 5 servers as the
zookeeper quorum.
While accessing the production HBase server from our application we are
getting a zookeeper.znode.parent mismatch exception.
The following is the exception in our log files:
from org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker in
pool-67467-thread-1: Check the value configured in 'zookeeper.znode.parent'.
There could be a mismatch with the one configured in the master.

We checked all hbase-site.xml files to find any mismatches in the
configurations, but no luck; all configuration files are the same. We don't
have any configuration named zookeeper.znode.parent in hbase-site.xml, but I
have one configuration with the same name in hbase-default.xml with the
value /hbase.
On our application server we don't have any file named hbase-default.xml,
but we do have hbase-site.xml.

Can someone help us resolve this issue?

Thanks, Sandeep.
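
For anyone debugging a similar mismatch, a minimal sketch that prints the
value the client actually resolves, run with the application's exact
classpath (standard 0.94 client API):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class PrintZnodeParent {
        public static void main(String[] args) {
            // create() layers hbase-site.xml on top of the hbase-default.xml
            // bundled in the HBase jar, so the effective value can differ
            // from what either file alone suggests.
            Configuration conf = HBaseConfiguration.create();
            System.out.println("zookeeper.znode.parent = "
                    + conf.get("zookeeper.znode.parent"));
            System.out.println("hbase.zookeeper.quorum = "
                    + conf.get("hbase.zookeeper.quorum"));
        }
    }

Running it on each application server and on the master's host should show
quickly whether everyone agrees on the parent znode.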

Re: HBase Client: How can I merge 2 results into 1?

2013-12-12 Thread lars hofhansl
Hi Saiph,

no need to apologize.
Please keep asking questions as you encounter issues. :)

-- Lars




 From: Saiph Kappa 
To: user@hbase.apache.org 
Sent: Thursday, December 12, 2013 12:59 PM
Subject: Re: HBase Client: How can I merge 2 results into 1?
 

I apologize, but I made a mistake. I was merging KeyValues from different
rows into a single Result (there was a problem in the way I was comparing
and decoding the byte[] of the rows), which naturally is not possible.

Therefore, I ask you to delete this thread.

Thanks, and sorry for any inconvenience.



On Thu, Dec 12, 2013 at 6:27 PM, Saiph Kappa wrote:

> Hi,
>
> I am using version 0.94.14 and I am trying to merge the KeyValues of 2
> Results into 1 Result.
>
> Basically, at some point I cache a KeyValue from a Result:
>
> ### Putting in cache KeyValue with key:
> \x00\x09\x00\x00\x00\x04\x00\x00\x00\x00\x07\x011c\x00\x00\x01B\xE2\xE0\x0C\xEF\x04,
> tableName: STOCK, Family: 1, Qualifier: c, Value: ^@^@^@3
>
> Later I get that KeyValue from my cache and add it to a list
> (List<KeyValue> kvs). Then I add some more KeyValues from a Result
> instance to that list:
>
>             kvs.addAll(partialResult.list());
>             Collections.sort(kvs, KeyValue.COMPARATOR);
>
> Finally, I build a new result with all those KeyValues ( new Result(kvs) ).
>
> When I iterate through the map of that result I can read all values
> correctly:
>
>             for (byte[] f : result.getMap().keySet()) {
>                 System.out.print("### Result2 Family: " +
> Bytes.toString(f));
>                 for (byte[] q : result.getFamilyMap(f).keySet()) {
>                     System.out.print(" : Qualifier: " + Bytes.toString(q)
> + " :: Value: "
>                             +
> Bytes.toString(result.getFamilyMap(f).get(q)));
>                 }
>                 System.out.println();
>             }
>
> But if I try to get values using result.getValue(...):
>
>  for (byte[] f : result.getMap().keySet()) {
>                 System.out.print("### Result Family: " +
> Bytes.toString(f));
>                 for (byte[] q : result.getMap().get(f).keySet()) {
>                     System.out.print(" : Qualifier: " + Bytes.toString(q)
> + " :: Value: " + Bytes.toString(result.getValue(f, q)));
>                 }
>   }
>
> I only get the first 2 values; the others are returned as null.
>
> Here is the kvs list (where the first KeyValue was merged with the other 5
> KeyValues):
>
> ### KVS Key:
> \x00\x09\x00\x00\x00\x04\x00\x00\x00\x00\x07\x011c\x00\x00\x01B\xE2\xE0\x0C\xEF\x04,
> Family: 1, Qualifier: c, Value: ^@^@^@3
> ### KVS Key:
> \x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011j\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
> Family: 1, Qualifier: j, Value: niutpzbmnewnfsowrlauxeda
> ### KVS Key:
> \x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011n\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
> Family: 1, Qualifier: n, Value: ^@^@^@^@
> ### KVS Key:
> \x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011o\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
> Family: 1, Qualifier: o, Value: ^@^@^@^@
> ### KVS Key:
> \x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011p\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
> Family: 1, Qualifier: p, Value: ^@^@^@^@
> ### KVS Key:
> \x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011q\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
> Family: 1, Qualifier: q, Value:
> nbdpcwgwriaysaiqgkfozcacvalqymmuybsaoodidnbbltioqh
> kvs length: 6
>
>
> I tried to look further inside the code of Result.getValue and I noticed
> that this line:
> int pos = Arrays.binarySearch(kvs, searchTerm, KeyValue.COMPARATOR);
> is returning -2 for the last 4 KeyValues in the kvs list, and that's why I
> was getting those values as null.
>
> (If I do not merge in the first KeyValue, pos is returned correctly.)
>
> Why does this happen? What am I doing wrong?
>
> Thanks.
>

Re: HBase Client: How can I merge 2 results into 1?

2013-12-12 Thread Saiph Kappa
I apologize, but I made a mistake. I was merging KeyValues from different
rows into a single Result (there was a problem in the way I was comparing
and decoding the byte[] of the rows), which naturally is not possible.

Therefore, I ask you to delete this thread.

Thanks, and sorry for any inconvenience.


On Thu, Dec 12, 2013 at 6:27 PM, Saiph Kappa wrote:

> Hi,
>
> I am using version 0.94.14 and I am trying to merge the KeyValues of 2
> Results into 1 Result.
>
> Basically, at some point I cache a KeyValue from a Result:
>
> ### Putting in cache KeyValue with key:
> \x00\x09\x00\x00\x00\x04\x00\x00\x00\x00\x07\x011c\x00\x00\x01B\xE2\xE0\x0C\xEF\x04,
> tableName: STOCK, Family: 1, Qualifier: c, Value: ^@^@^@3
>
> Later I get that KeyValue from my cache and add it to a list
> (List<KeyValue> kvs). Then I add some more KeyValues from a Result
> instance to that list:
>
> kvs.addAll(partialResult.list());
> Collections.sort(kvs, KeyValue.COMPARATOR);
>
> Finally, I build a new result with all those KeyValues ( new Result(kvs) ).
>
> When I iterate through the map of that result I can read all values
> correctly:
>
> for (byte[] f : result.getMap().keySet()) {
>     System.out.print("### Result2 Family: " + Bytes.toString(f));
>     for (byte[] q : result.getFamilyMap(f).keySet()) {
>         System.out.print(" : Qualifier: " + Bytes.toString(q)
>                 + " :: Value: " + Bytes.toString(result.getFamilyMap(f).get(q)));
>     }
>     System.out.println();
> }
>
> But if I try to get values using result.getValue(...):
>
> for (byte[] f : result.getMap().keySet()) {
>     System.out.print("### Result Family: " + Bytes.toString(f));
>     for (byte[] q : result.getMap().get(f).keySet()) {
>         System.out.print(" : Qualifier: " + Bytes.toString(q)
>                 + " :: Value: " + Bytes.toString(result.getValue(f, q)));
>     }
> }
>
> I only get the first 2 values; the others are returned as null.
>
> Here is the kvs list (where the first KeyValue was merged with the other 5
> KeyValues):
>
> ### KVS Key:
> \x00\x09\x00\x00\x00\x04\x00\x00\x00\x00\x07\x011c\x00\x00\x01B\xE2\xE0\x0C\xEF\x04,
> Family: 1, Qualifier: c, Value: ^@^@^@3
> ### KVS Key:
> \x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011j\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
> Family: 1, Qualifier: j, Value: niutpzbmnewnfsowrlauxeda
> ### KVS Key:
> \x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011n\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
> Family: 1, Qualifier: n, Value: ^@^@^@^@
> ### KVS Key:
> \x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011o\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
> Family: 1, Qualifier: o, Value: ^@^@^@^@
> ### KVS Key:
> \x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011p\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
> Family: 1, Qualifier: p, Value: ^@^@^@^@
> ### KVS Key:
> \x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011q\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
> Family: 1, Qualifier: q, Value:
> nbdpcwgwriaysaiqgkfozcacvalqymmuybsaoodidnbbltioqh
> kvs length: 6
>
>
> I tried to look further inside the code of Result.getValue and I noticed
> that this line:
> int pos = Arrays.binarySearch(kvs, searchTerm, KeyValue.COMPARATOR);
> is returning -2 for the last 4 KeyValues in the kvs list, and that's why I
> was getting those values as null.
>
> (If I do not merge in the first KeyValue, pos is returned correctly.)
>
> Why does this happen? What am I doing wrong?
>
> Thanks.
>


HBase Client: How can I merge 2 results into 1?

2013-12-12 Thread Saiph Kappa
Hi,

I am using version 0.94.14 and I am trying to merge the KeyValues of 2
Results into 1 Result.

Basically, at some point I cache a KeyValue from a Result:

### Putting in cache KeyValue with key:
\x00\x09\x00\x00\x00\x04\x00\x00\x00\x00\x07\x011c\x00\x00\x01B\xE2\xE0\x0C\xEF\x04,
tableName: STOCK, Family: 1, Qualifier: c, Value: ^@^@^@3

Later I get that KeyValue from my cache and add it to a list
(List<KeyValue> kvs). Then I add some more KeyValues from a Result
instance to that list:

    kvs.addAll(partialResult.list());
    Collections.sort(kvs, KeyValue.COMPARATOR);

Finally, I build a new result with all those KeyValues ( new Result(kvs) ).

When I iterate through the map of that result I can read all values
correctly:

    for (byte[] f : result.getMap().keySet()) {
        System.out.print("### Result2 Family: " + Bytes.toString(f));
        for (byte[] q : result.getFamilyMap(f).keySet()) {
            System.out.print(" : Qualifier: " + Bytes.toString(q)
                    + " :: Value: " + Bytes.toString(result.getFamilyMap(f).get(q)));
        }
        System.out.println();
    }

But if I try to get values using result.getValue(...):

    for (byte[] f : result.getMap().keySet()) {
        System.out.print("### Result Family: " + Bytes.toString(f));
        for (byte[] q : result.getMap().get(f).keySet()) {
            System.out.print(" : Qualifier: " + Bytes.toString(q)
                    + " :: Value: " + Bytes.toString(result.getValue(f, q)));
        }
    }

I only get the first 2 values; the others are returned as null.

Here is the kvs list (where the first KeyValue was merged with the other 5
KeyValues):

### KVS Key:
\x00\x09\x00\x00\x00\x04\x00\x00\x00\x00\x07\x011c\x00\x00\x01B\xE2\xE0\x0C\xEF\x04,
Family: 1, Qualifier: c, Value: ^@^@^@3
### KVS Key:
\x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011j\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
Family: 1, Qualifier: j, Value: niutpzbmnewnfsowrlauxeda
### KVS Key:
\x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011n\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
Family: 1, Qualifier: n, Value: ^@^@^@^@
### KVS Key:
\x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011o\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
Family: 1, Qualifier: o, Value: ^@^@^@^@
### KVS Key:
\x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011p\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
Family: 1, Qualifier: p, Value: ^@^@^@^@
### KVS Key:
\x00\x09\x00\x00\x00\x04\x00\x00\x01a\x97\x011q\x00\x00\x01B\xE2\xE0\xF0\x9D\x04,
Family: 1, Qualifier: q, Value:
nbdpcwgwriaysaiqgkfozcacvalqymmuybsaoodidnbbltioqh
kvs length: 6


I tried to look further inside the code of Result.getValue and I noticed
that this line:
int pos = Arrays.binarySearch(kvs, searchTerm, KeyValue.COMPARATOR);
is returning -2 for the last 4 KeyValues in the kvs list, and that's why I
was getting those values as null.

(If I do not merge in the first KeyValue, pos is returned correctly.)

Why does this happen? What am I doing wrong?

Thanks.
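
For readers hitting the same symptom: in the 0.94 code, Result.getValue's
binary search likely builds its search key from the row of the Result's
first KeyValue, so KeyValues belonging to a different row can never match —
which fits both the -2 above and the resolution elsewhere in the thread
(the cached KeyValue came from another row). A minimal sketch of a merge
that does work, assuming both Results cover the same row (mergeSameRow,
resultA, and resultB are illustrative names; HBase 0.94 client API):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Result;

    public class ResultMerge {
        // Merge two Results that share the SAME row key; mixing rows breaks
        // Result.getValue(), whose binary search is anchored to the row of
        // the first KeyValue in the backing array.
        public static Result mergeSameRow(Result resultA, Result resultB) {
            List<KeyValue> kvs = new ArrayList<KeyValue>(resultA.list());
            kvs.addAll(resultB.list());
            Collections.sort(kvs, KeyValue.COMPARATOR);  // restore KeyValue order
            return new Result(kvs);  // 0.94 constructor taking List<KeyValue>
        }
    }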


directly export hbase snapshots to local fs

2013-12-12 Thread oc tsdb
Hi,

We are using HBase 0.94.14 and have only one cluster, with 1 NN and 4 DNs.

We are trying to export a snapshot directly to the local file system (e.g.
local fs path: /tmp/hbase_backup) as shown below. It only exports/copies
the snapshot metadata (.hbase-snapshot) but not the actual data (.archive).
Why is the command below not copying the actual data?

 hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot
 hbase_tbl_snapshot_name -copy-to file:///tmp/hbase_backup -mappers 16


Is it always necessary to first export to HDFS and then copy to the local
file system using the hadoop get command?

Thanks in advance.

-OC


Re: Why hadoop/hbase uses DNS/hosts/hostname in such a strange way?

2013-12-12 Thread Geovanie Marquez
This may not answer why it is designed this way, but it should give you
more insight into how it is done.

Here is how the network resolution happens; the complication may be
improved over time, but this is what it is today.
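
For the self-identification side of this, a small probe can show what a
node will report about itself (a sketch using plain JDK calls; the
Hadoop-side resolution mentioned in the comment is configurable via the
*.dns.interface / *.dns.nameserver settings):

    import java.net.InetAddress;

    public class HostnameProbe {
        public static void main(String[] args) throws Exception {
            // With a misconfigured /etc/hosts this can come back as
            // localhost/127.0.0.1, and that is then the name the daemon
            // registers in ZooKeeper for other nodes to connect to.
            InetAddress self = InetAddress.getLocalHost();
            System.out.println("hostname:  " + self.getHostName());
            System.out.println("canonical: " + self.getCanonicalHostName());
            System.out.println("address:   " + self.getHostAddress());
            // Hadoop/HBase daemons resolve their own name through
            // org.apache.hadoop.net.DNS rather than this call alone.
        }
    }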


On Mon, Dec 9, 2013 at 8:31 PM, Rural Hunter wrote:

> Hi,
>
> I have configured a hadoop/hbase cluster recently and found it's really a
> mess with all those DNS, hostname and /etc/hosts configurations. There are
> many questions related to this all over the internet. So I'm wondering why
> hadoop/hbase is designed in such a strange way, which is very abnormal
> compared with other networked/distributed applications. In normal
> applications, DNS is used to identify other servers (logical or physical),
> not the server itself. But I'm seeing this weird behavior in hadoop/hbase.
>
> Say we have server1 and server2 configured this way:
>
> server1 (ip 192.168.1.2)
> hostname: server1
> /etc/hosts:
> 127.0.0.1    localhost server1
> 192.168.1.3  server2
>
> server2 (ip 192.168.1.3)
> hostname: server2
> /etc/hosts:
> 127.0.0.1    localhost server2
> 192.168.1.2  server1
>
> With the configuration above, I'm seeing many cases of hadoop/hbase trying
> to connect to localhost when it actually should connect to another server.
> I believe this is because server1 reported its hostname as 'localhost' to
> server2, and server2 tries to use 'localhost' to connect to server1. But it
> shouldn't work that way. In normal network applications, server2 shouldn't
> try to connect to server1 with whatever server1 reported. If server2
> initiates the connection, it should use DNS or /etc/hosts to resolve
> server1. If server1 initiates the connection, server2 should use the ip it
> gets from the already established connection from server1. There shouldn't
> be any confusion or mess.
>
> I don't see why hadoop/hbase cannot use the same logic to handle the
> DNS/hosts/hostname mess. Can anyone resolve my confusion?
>


Re: Get all columns in a column family

2013-12-12 Thread Kevin O'dell
Hey JC,

  Is this what you are looking for?
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#addFamily(byte[])
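
For completeness, a minimal sketch of using that call (0.94-era client API;
the table name and time bounds are placeholder values):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;

    public class WholeFamilyGet {
        public static Result wholeFamily(Configuration conf, byte[] row,
                byte[] family, long minStamp, long maxStamp) throws IOException {
            HTable table = new HTable(conf, "mytable");  // placeholder table name
            try {
                Get get = new Get(row);
                get.addFamily(family);                 // every column in the family
                get.setTimeRange(minStamp, maxStamp);  // optional: window of the change
                return table.get(get);
            } finally {
                table.close();
            }
        }
    }

A single Get with addFamily returns the whole family in one round trip, so
no second pass over the data should be needed.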


On Thu, Dec 12, 2013 at 3:02 AM, JC wrote:

> I have a use case where, if one column of a column family changes, I would
> like to bring back all the columns in that column family. I can use the
> timestamp to identify the column that changed, but it only returns that one
> column. Is there a way I can get all the columns of the column family back
> with one pass of the data? What are some other options here?
>
> Thanks in advance
>
>
>
>
>
> --
> View this message in context:
> http://apache-hbase.679495.n3.nabble.com/Get-all-columns-in-a-column-family-tp4053696.html
> Sent from the HBase User mailing list archive at Nabble.com.
>



-- 
Kevin O'Dell
Systems Engineer, Cloudera


Get all columns in a column family

2013-12-12 Thread JC
I have a use case where, if one column of a column family changes, I would
like to bring back all the columns in that column family. I can use the
timestamp to identify the column that changed, but it only returns that one
column. Is there a way I can get all the columns of the column family back
with one pass of the data? What are some other options here?

Thanks in advance





--
View this message in context: 
http://apache-hbase.679495.n3.nabble.com/Get-all-columns-in-a-column-family-tp4053696.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Handling of Hbase Blob

2013-12-12 Thread Jean-Marc Spaggiari
Hi Suni,

I'm not sure I get what you mean by "we want the clob/blob to go under the
_lobs directory". You want sqoop to take the blob field content, create a
file under _lobs, and put its reference into the _lob HBase column family?
JM


2013/12/12 Suni K 

> Hi,
>
> I created a table with blob fields in Oracle, and I imported the data into
> HBase using sqoop with the command below:
>
> sqoop import --connect jdbc:oracle:@... --username root --password
> root --table emp4 -hbase-table emp4 --column-family cf --columns
> id,desription,name --hbase-create-table --as-sequencefile
> --inline-lob-limit 0
>
> My requirement is:
>
> For HBase, we want the clob/blob to go under the _lobs directory and the
> reference id to the _lob to be in the column family.
>
>
> Thanks & Regards,
> suneetha
>