[jira] [Commented] (HBASE-10426) user_permission in security.rb calls non-existent UserPermission#getTable method

2014-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950482#comment-13950482
 ] 

Hudson commented on HBASE-10426:


FAILURE: Integrated in hbase-0.96 #370 (See 
[https://builds.apache.org/job/hbase-0.96/370/])
HBASE-10426 user_permission in security.rb calls non-existent 
UserPermission#getTable method (stack: rev 1582587)
* /hbase/branches/0.96/hbase-shell/src/main/ruby/hbase/security.rb


> user_permission in security.rb calls non-existent UserPermission#getTable 
> method
> 
>
> Key: HBASE-10426
> URL: https://issues.apache.org/jira/browse/HBASE-10426
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 10426-v1.txt, 10426-v2.txt
>
>
> On the user mailing list, Alex reported this error under the thread 
> "user_permission error - undefined method 'getTable'":
> {code}
> hbase(main):010:0> create 'foo','bar'
> 0 row(s) in 0.5780 seconds
> => Hbase::Table - foo
> hbase(main):011:0> user_permission 'foo'
> User Table,Family,Qualifier:Permission
> *ERROR: undefined method `getTable' for
> #*
> {code}
> The UserPermission#getTable method doesn't exist.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6651) Improve thread safety of HTablePool

2014-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950489#comment-13950489
 ] 

Hadoop QA commented on HBASE-6651:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12637342/HBASE-6651-V15-trunk.patch
  against trunk revision .
  ATTACHMENT ID: 12637342

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 19 new 
or modified tests.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
16 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9124//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9124//console

This message is automatically generated.

> Improve thread safety of HTablePool
> ---
>
> Key: HBASE-6651
> URL: https://issues.apache.org/jira/browse/HBASE-6651
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.94.1
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
> Attachments: HBASE-6651-V10.patch, HBASE-6651-V11.patch, 
> HBASE-6651-V12.patch, HBASE-6651-V13.patch, HBASE-6651-V14-0.96.patch, 
> HBASE-6651-V14-0.98.patch, HBASE-6651-V14-trunk.patch, 
> HBASE-6651-V15-0.96.patch, HBASE-6651-V15-0.98.patch, 
> HBASE-6651-V15-trunk.patch, HBASE-6651-V2.patch, HBASE-6651-V3.patch, 
> HBASE-6651-V4.patch, HBASE-6651-V5.patch, HBASE-6651-V6.patch, 
> HBASE-6651-V7.patch, HBASE-6651-V8.patch, HBASE-6651-V9.patch, 
> HBASE-6651.patch, sample.zip, sample.zip, sharedmap_for_hbaseclient.zip
>
>
> There are some operations in HTablePool that access PoolMap in multiple places 
> without any explicit synchronization. 
> For example, HTablePool.closeTablePool() calls PoolMap.values() and then calls 
> PoolMap.remove(). If other threads add new instances to the pool in the 
> middle of the calls, the newly added instances might be dropped. 
> (HTablePool.closeTablePool() also has another problem: calling it from 
> multiple threads causes HTable to be accessed by multiple threads.)
> Moreover, PoolMap is not thread-safe for the same reason.
> For example, PoolMap.put() calls ConcurrentMap.get() and then calls 
> ConcurrentMap.put(). If other threads add a new instance to the concurrent map 
> in the middle of the calls, the new instance might be dropped.
> The implementations of Pool have the same problems as well.
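
A minimal sketch (plain Java, not HBase code) of the check-then-act race described 
above, and one way to close it with ConcurrentMap#putIfAbsent:
{code}
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;

public class PoolSketch<K, V> {
  private final ConcurrentMap<K, Queue<V>> pools = new ConcurrentHashMap<K, Queue<V>>();

  // Racy: between get() and put(), another thread may insert its own queue,
  // and one of the two queues (plus anything already added to it) is lost.
  public void putRacy(K key, V value) {
    Queue<V> q = pools.get(key);
    if (q == null) {
      q = new ConcurrentLinkedQueue<V>();
      pools.put(key, q); // may overwrite a queue inserted concurrently
    }
    q.add(value);
  }

  // Safe: putIfAbsent makes the insert atomic, so all threads agree on one queue.
  public void putSafe(K key, V value) {
    Queue<V> q = pools.get(key);
    if (q == null) {
      Queue<V> fresh = new ConcurrentLinkedQueue<V>();
      Queue<V> existing = pools.putIfAbsent(key, fresh);
      q = (existing != null) ? existing : fresh;
    }
    q.add(value);
  }
}
{code}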



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10788) Add 99th percentile of latency in PE

2014-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950497#comment-13950497
 ] 

Hudson commented on HBASE-10788:


FAILURE: Integrated in HBase-TRUNK #5047 (See 
[https://builds.apache.org/job/HBase-TRUNK/5047/])
HBASE-10788 Add 99th percentile of latency in PE (Liu Shaohui) (liangxie: rev 
1582583)
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java


> Add 99th percentile of latency in PE
> 
>
> Key: HBASE-10788
> URL: https://issues.apache.org/jira/browse/HBASE-10788
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10788-trunk-v1.diff, HBASE-10788-trunk-v2.diff, 
> HBASE-10788-trunk-v3.diff
>
>
> In a production environment, the 99th percentile of latency is more important 
> than the average. The 99th percentile helps measure the influence of GC pauses 
> and slow HDFS reads/writes.
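
A small self-contained illustration (not the PE patch itself) of why the 99th 
percentile is reported alongside the average: a few slow operations barely move 
the mean but dominate the p99.
{code}
import java.util.Arrays;

public class LatencyStats {
  // Nearest-rank percentile: the value below which p percent of samples fall.
  public static double percentile(long[] latenciesMs, double p) {
    long[] sorted = latenciesMs.clone();
    Arrays.sort(sorted);
    int rank = (int) Math.ceil(p / 100.0 * sorted.length);
    return sorted[Math.max(0, rank - 1)];
  }

  public static void main(String[] args) {
    // Mostly-fast latencies with two outliers (e.g. a GC pause, a slow HDFS read).
    long[] latencies = {12, 15, 11, 480, 13, 14, 16, 12, 11, 950};
    double avg = Arrays.stream(latencies).average().orElse(0);
    System.out.printf("avg=%.1f ms, p99=%.1f ms%n", avg, percentile(latencies, 99.0));
  }
}
{code}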



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10676) Removing ThreadLocal of PrefetchedHeader in HFileBlock.FSReaderV2 improves scan performance

2014-03-28 Thread hongliang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950565#comment-13950565
 ] 

hongliang commented on HBASE-10676:
---

it's great

> Removing ThreadLocal of PrefetchedHeader in HFileBlock.FSReaderV2 improves scan 
> performance
> 
>
> Key: HBASE-10676
> URL: https://issues.apache.org/jira/browse/HBASE-10676
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.99.0
>Reporter: zhaojianbo
>Assignee: zhaojianbo
> Attachments: HBASE-10676-0.98-branch-AtomicReferenceV2.patch, 
> HBASE-10676-0.98-branchV2.patch
>
>
> PrefetchedHeader variable in HFileBlock.FSReaderV2 is used for avoiding 
> backward seek operation as the comment said:
> {quote}
> we will not incur a backward seek operation if we have already read this 
> block's header as part of the previous read's look-ahead. And we also want to 
> skip reading the header again if it has already been read.
> {quote}
> But that is not the case. In the 0.98 code, prefetchedHeader is a 
> ThreadLocal for one storefile reader, and in the RegionScanner 
> lifecycle, different rpc handlers will serve scan requests of the same 
> scanner. Even though one handler of the previous scan call prefetched the next 
> block header, the other handlers of the current scan call will still trigger a 
> backward seek operation. The process is like this:
> # rs handler1 serves the scan call, reads block1 and prefetches the header of 
> block2
> # rs handler2 serves the same scanner's next scan call; because rs handler2 
> doesn't know the header of block2 was already prefetched by rs handler1, it 
> triggers a backward seek, reads block2, and prefetches the header of block3.
> So it is not a sequential read. I think the ThreadLocal is useless and should 
> be abandoned. I did the work and evaluated the performance of one client, two 
> clients and four clients scanning the same region with one storefile. The test 
> environment is
> # A hdfs cluster with a namenode, a secondary namenode and a datanode in one 
> machine
> # A hbase cluster with a zk, a master and a regionserver in the same machine
> # Clients are also in the same machine.
> So all the data is local. The storefile is about 22.7GB from our online data, 
> 18995949 kvs. Caching is set to 1000, and setCacheBlocks(false) is used.
> With the improvement, the client total scan time decreases 21% for the one 
> client case and 11% for the two clients case, but the four clients case is 
> almost the same. The detailed test data is the following:
> ||case||client||time(ms)||
> | original | 1 | 306222 |
> | new | 1 | 241313 |
> | original | 2 | 416390 |
> | new | 2 | 369064 |
> | original | 4 | 555986 |
> | new | 4 | 562152 |
> With some modification (see the comments below), the newest result is 
> ||case||client||time(ms)||case||client||time(ms)||case||client||time(ms)||
> |original|1|306222|new with synchronized|1|239510|new with AtomicReference|1|241243|
> |original|2|416390|new with synchronized|2|365367|new with AtomicReference|2|368952|
> |original|4|555986|new with synchronized|4|540642|new with AtomicReference|4|545715|
> |original|8|854029|new with synchronized|8|852137|new with AtomicReference|8|850401|
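
An illustrative sketch (assumed class and field names, not the actual HFileBlock 
code) of the change under discussion: one AtomicReference shared by all handlers 
instead of a per-thread (ThreadLocal) prefetched header.
{code}
import java.util.concurrent.atomic.AtomicReference;

class PrefetchedHeaderCache {
  static class PrefetchedHeader {
    long offset = -1;
    byte[] header;
  }

  private final AtomicReference<PrefetchedHeader> prefetchedHeader =
      new AtomicReference<PrefetchedHeader>(new PrefetchedHeader());

  // Any handler that reads ahead publishes the next block's header...
  void publish(long nextBlockOffset, byte[] headerBytes) {
    PrefetchedHeader h = new PrefetchedHeader();
    h.offset = nextBlockOffset;
    h.header = headerBytes;
    prefetchedHeader.set(h);
  }

  // ...and any other handler serving the same scanner can reuse it,
  // avoiding the backward seek described above.
  byte[] lookup(long blockOffset) {
    PrefetchedHeader h = prefetchedHeader.get();
    return (h.offset == blockOffset) ? h.header : null;
  }
}
{code}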



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10860) Insufficient AccessController covering permission check

2014-03-28 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-10860:
--

 Summary: Insufficient AccessController covering permission check
 Key: HBASE-10860
 URL: https://issues.apache.org/jira/browse/HBASE-10860
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.99.0, 0.98.2


{code}
List<Cell> list = (List<Cell>) entry.getValue();
if (list == null || list.isEmpty()) {
  get.addFamily(col);
} else {
  for (Cell cell : list) {
    get.addColumn(col, CellUtil.cloneQualifier(cell));
  }
}
{code}
When a delete family Mutation comes in, a Cell will be added into the list with 
the qualifier as null (see Delete#deleteFamily(byte[])). So the family misses 
getting added via the list == null || list.isEmpty() check, and we will fail to 
fetch the cells under this CF for the covering permission check.
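
A minimal sketch of one possible handling (illustrative only, not the attached 
patch): treat a Cell with an empty qualifier as the delete-family marker and fall 
back to fetching the whole family for the covering check.
{code}
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Get;

public class CoveringCheckSketch {
  // Hypothetical helper: add either the whole family or the individual
  // qualifiers to the Get used for the covering permission check.
  static void addFamilyOrColumns(Get get, byte[] col, List<Cell> cells) {
    if (cells == null || cells.isEmpty()) {
      get.addFamily(col);
      return;
    }
    for (Cell cell : cells) {
      // Delete#deleteFamily(byte[]) adds a Cell with an empty qualifier;
      // in that case the whole family must be fetched.
      if (cell.getQualifierLength() == 0) {
        get.addFamily(col);
        return;
      }
      get.addColumn(col, CellUtil.cloneQualifier(cell));
    }
  }
}
{code}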




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10860) Insufficient AccessController covering permission check

2014-03-28 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10860:
---

Attachment: HBASE-10860.patch

> Insufficient AccessController covering permission check
> ---
>
> Key: HBASE-10860
> URL: https://issues.apache.org/jira/browse/HBASE-10860
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10860.patch
>
>
> {code}
>   List<Cell> list = (List<Cell>) entry.getValue();
>   if (list == null || list.isEmpty()) {
>     get.addFamily(col);
>   } else {
>     for (Cell cell : list) {
>       get.addColumn(col, CellUtil.cloneQualifier(cell));
>     }
>   }
> {code}
> When a delete family Mutation comes, a Cell will be added into the list with 
> Qualifier as null. (See Delete#deleteFamily(byte[])). So it will miss getting 
> added against the check list == null || list.isEmpty().  We will fail getting 
> the cells under this cf for covering permission check.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10860) Insufficient AccessController covering permission check

2014-03-28 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10860:
---

Status: Patch Available  (was: Open)

> Insufficient AccessController covering permission check
> ---
>
> Key: HBASE-10860
> URL: https://issues.apache.org/jira/browse/HBASE-10860
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10860.patch
>
>
> {code}
>   List<Cell> list = (List<Cell>) entry.getValue();
>   if (list == null || list.isEmpty()) {
>     get.addFamily(col);
>   } else {
>     for (Cell cell : list) {
>       get.addColumn(col, CellUtil.cloneQualifier(cell));
>     }
>   }
> {code}
> When a delete family Mutation comes, a Cell will be added into the list with 
> Qualifier as null. (See Delete#deleteFamily(byte[])). So it will miss getting 
> added against the check list == null || list.isEmpty().  We will fail getting 
> the cells under this cf for covering permission check.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10860) Insufficient AccessController covering permission check

2014-03-28 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950588#comment-13950588
 ] 

ramkrishna.s.vasudevan commented on HBASE-10860:


So we will get an exception here?
{code}
get.addColumn(col, CellUtil.cloneQualifier(cell));
{code}
Because the cell has a null qualifier? Patch looks good anyway. +1

> Insufficient AccessController covering permission check
> ---
>
> Key: HBASE-10860
> URL: https://issues.apache.org/jira/browse/HBASE-10860
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10860.patch
>
>
> {code}
>   List<Cell> list = (List<Cell>) entry.getValue();
>   if (list == null || list.isEmpty()) {
>     get.addFamily(col);
>   } else {
>     for (Cell cell : list) {
>       get.addColumn(col, CellUtil.cloneQualifier(cell));
>     }
>   }
> {code}
> When a delete family Mutation comes, a Cell will be added into the list with 
> Qualifier as null. (See Delete#deleteFamily(byte[])). So it will miss getting 
> added against the check list == null || list.isEmpty().  We will fail getting 
> the cells under this cf for covering permission check.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10860) Insufficient AccessController covering permission check

2014-03-28 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950593#comment-13950593
 ] 

Anoop Sam John commented on HBASE-10860:


No exceptions in the mentioned line, Ram. The thing is that it adds a column with 
an empty qualifier (family:) into the Get. Actually we have to get all the cells 
under the family (for the given row), as the delete is going to mask them all.

> Insufficient AccessController covering permission check
> ---
>
> Key: HBASE-10860
> URL: https://issues.apache.org/jira/browse/HBASE-10860
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10860.patch
>
>
> {code}
>   List<Cell> list = (List<Cell>) entry.getValue();
>   if (list == null || list.isEmpty()) {
>     get.addFamily(col);
>   } else {
>     for (Cell cell : list) {
>       get.addColumn(col, CellUtil.cloneQualifier(cell));
>     }
>   }
> {code}
> When a delete family Mutation comes, a Cell will be added into the list with 
> Qualifier as null. (See Delete#deleteFamily(byte[])). So it will miss getting 
> added against the check list == null || list.isEmpty().  We will fail getting 
> the cells under this cf for covering permission check.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10861) Supporting API in ByteRange

2014-03-28 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-10861:
--

 Summary: Supporting API in ByteRange
 Key: HBASE-10861
 URL: https://issues.apache.org/jira/browse/HBASE-10861
 Project: HBase
  Issue Type: Improvement
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan


We would need APIs such as 
setLimit(int limit)
getLimit()
asReadOnly()
These APIs would help in implementations that have buffers off-heap (for now, BRs 
backed by DBB).
Anything more that is needed could be added when needed.
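
A hypothetical sketch of the proposed additions (method names taken from the 
description above; not the committed API):
{code}
public interface LimitedByteRange {
  /** Restrict the readable/writable window of this range to 'limit' bytes. */
  LimitedByteRange setLimit(int limit);

  /** @return the current limit of this range. */
  int getLimit();

  /** @return a view over the same bytes that rejects mutations. */
  LimitedByteRange asReadOnly();
}
{code}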



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10860) Insufficient AccessController covering permission check

2014-03-28 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950597#comment-13950597
 ] 

ramkrishna.s.vasudevan commented on HBASE-10860:


Got it. Internally KV handles a null qualifier, so no exception.  +1 on patch.

> Insufficient AccessController covering permission check
> ---
>
> Key: HBASE-10860
> URL: https://issues.apache.org/jira/browse/HBASE-10860
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10860.patch
>
>
> {code}
>   List<Cell> list = (List<Cell>) entry.getValue();
>   if (list == null || list.isEmpty()) {
>     get.addFamily(col);
>   } else {
>     for (Cell cell : list) {
>       get.addColumn(col, CellUtil.cloneQualifier(cell));
>     }
>   }
> {code}
> When a delete family Mutation comes, a Cell will be added into the list with 
> Qualifier as null. (See Delete#deleteFamily(byte[])). So it will miss getting 
> added against the check list == null || list.isEmpty().  We will fail getting 
> the cells under this cf for covering permission check.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10771) Primitive type put/get APIs in ByteRange

2014-03-28 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950599#comment-13950599
 ] 

Anoop Sam John commented on HBASE-10771:


What do you say about the above reply [~stack]? Sounds fine?
Can I get some +1s please :) (Latest patch includes more APIs)

> Primitive type put/get APIs in ByteRange 
> -
>
> Key: HBASE-10771
> URL: https://issues.apache.org/jira/browse/HBASE-10771
> Project: HBase
>  Issue Type: Improvement
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.99.0
>
> Attachments: HBASE-10771.patch, HBASE-10771_V2.patch
>
>
> While doing HBASE-10713 I came across the need to write int/long (and read 
> also) from a ByteRange.  CellBlocks are backed by ByteRange. So we can add 
> such APIs.
> Also as per HBASE-10750 we return a ByteRange from MSLAB, and discussion 
> under HBASE-10191 suggests we can have BR-backed HFileBlocks etc.  
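
An illustrative sketch (assumed class name, not the committed API) of what a 
primitive putInt/getInt pair on a byte-range abstraction looks like: a big-endian 
encode/decode at a given index within the backing array.
{code}
public class SimpleRangeSketch {
  private final byte[] bytes;
  private final int offset;

  public SimpleRangeSketch(byte[] bytes, int offset) {
    this.bytes = bytes;
    this.offset = offset;
  }

  // Write a big-endian int at 'index' relative to the range's offset.
  public SimpleRangeSketch putInt(int index, int val) {
    int pos = offset + index;
    bytes[pos]     = (byte) (val >>> 24);
    bytes[pos + 1] = (byte) (val >>> 16);
    bytes[pos + 2] = (byte) (val >>> 8);
    bytes[pos + 3] = (byte) val;
    return this;
  }

  // Read the big-endian int back from the same position.
  public int getInt(int index) {
    int pos = offset + index;
    return ((bytes[pos] & 0xff) << 24)
        | ((bytes[pos + 1] & 0xff) << 16)
        | ((bytes[pos + 2] & 0xff) << 8)
        | (bytes[pos + 3] & 0xff);
  }
}
{code}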



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10771) Primitive type put/get APIs in ByteRange

2014-03-28 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950613#comment-13950613
 ] 

ramkrishna.s.vasudevan commented on HBASE-10771:


+1 on patch. 

> Primitive type put/get APIs in ByteRange 
> -
>
> Key: HBASE-10771
> URL: https://issues.apache.org/jira/browse/HBASE-10771
> Project: HBase
>  Issue Type: Improvement
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.99.0
>
> Attachments: HBASE-10771.patch, HBASE-10771_V2.patch
>
>
> While doing HBASE-10713 I came across the need to write int/long (and read 
> also) from a ByteRange.  CellBlocks are backed by ByteRange. So we can add 
> such APIs.
> Also as per HBASE-10750  we return a ByteRange from MSLAB and also discussion 
> under HBASE-10191 suggest we can have BR backed HFileBlocks etc.  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-7319) Extend Cell usage through read path

2014-03-28 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950614#comment-13950614
 ] 

ramkrishna.s.vasudevan commented on HBASE-7319:
---

Once HBASE-10531 is in, will start with this.  Replacing BRs in place of 
ByteBuffers preferably needs this change first.

> Extend Cell usage through read path
> ---
>
> Key: HBASE-7319
> URL: https://issues.apache.org/jira/browse/HBASE-7319
> Project: HBase
>  Issue Type: Umbrella
>  Components: Compaction, Performance, regionserver, Scanners
>Reporter: Matt Corgan
>
> Umbrella issue for eliminating Cell copying.
> The Cell interface allows us to work with a reference to underlying bytes in 
> the block cache without copying each Cell into consecutive bytes in an array 
> (KeyValue).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10850) Unexpected behavior when using filter SingleColumnValueFilter

2014-03-28 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950619#comment-13950619
 ] 

Anoop Sam John commented on HBASE-10850:


{code}
// save that the row was empty before filters applied to it.
final boolean isEmptyRow = results.isEmpty();
{code}

This is needed only in this place, I guess. When there were kvs in the result and, 
after applying filterRowCells(List), all of them got removed, we still have to go 
ahead with fetching kvs from the non-essential families. Only when filterRow() says 
to filter this row can we avoid these reads.

So this has become very tricky now!!

Can we separate out filterRowCells(List) and filterRow()?  Now both are done in 
FilterWrapper. Seems this is the only way!!

> Unexpected behavior when using filter SingleColumnValueFilter
> -
>
> Key: HBASE-10850
> URL: https://issues.apache.org/jira/browse/HBASE-10850
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: haosdent
>Priority: Critical
> Attachments: HBASE-10850-96.patch, HBASE-10850.patch, 
> HBaseSingleColumnValueFilterTest.java
>
>
> When using the filter SingleColumnValueFilter, and depending on the columns 
> specified in the scan (the filtering column always being specified), the results 
> can be different.
> Here is an example.
> Suppose the following table:
> ||key||a:foo||a:bar||b:foo||b:bar||
> |1|false|_flag_|_flag_|_flag_|
> |2|true|_flag_|_flag_|_flag_|
> |3| |_flag_|_flag_|_flag_|
> With this filter:
> {code}
> SingleColumnValueFilter filter = new 
> SingleColumnValueFilter(Bytes.toBytes("a"), Bytes.toBytes("foo"), 
> CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("false")));
> filter.setFilterIfMissing(true);
> {code}
> Depending on how I specify the list of columns to add in the scan, the result 
> is different. Yet, all examples below should always return only the first row 
> (key '1'):
> OK:
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> scan.addFamily(Bytes.toBytes("b"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("foo"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("bar"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> {code}
> This is a regression as it was working properly on HBase 0.92.
> You will find in the attachment the unit tests reproducing the issue.
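
A self-contained version of the reported KO combination (class name, table name 
and connection setup here are placeholders, not from the report); with 
filterIfMissing set to true, only row '1' is expected back, whichever columns are 
added to the Scan:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ScvfRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "testtable"); // 0.96-era client API

    SingleColumnValueFilter filter = new SingleColumnValueFilter(
        Bytes.toBytes("a"), Bytes.toBytes("foo"),
        CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("false")));
    filter.setFilterIfMissing(true);

    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("a"));
    scan.addFamily(Bytes.toBytes("b")); // the combination reported as KO
    scan.setFilter(filter);

    ResultScanner scanner = table.getScanner(scan);
    for (Result r : scanner) {
      System.out.println(Bytes.toString(r.getRow()));
    }
    scanner.close();
    table.close();
  }
}
{code}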



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10861) Supporting API in ByteRange

2014-03-28 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10861:
---

Status: Patch Available  (was: Open)

> Supporting API in ByteRange
> ---
>
> Key: HBASE-10861
> URL: https://issues.apache.org/jira/browse/HBASE-10861
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-10861.patch
>
>
> We would need APIs that would 
> setLimit(int limit)
> getLimt()
> asReadOnly()
> These APIs would help in implementations that have Buffers offheap (for now 
> BRs backed by DBB).
> If anything more is needed could be added when needed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10850) Unexpected behavior when using filter SingleColumnValueFilter

2014-03-28 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10850:
---

Attachment: HBASE-10850_V2.patch

Attaching test patch to see what QA says.

> Unexpected behavior when using filter SingleColumnValueFilter
> -
>
> Key: HBASE-10850
> URL: https://issues.apache.org/jira/browse/HBASE-10850
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: haosdent
>Priority: Critical
> Attachments: HBASE-10850-96.patch, HBASE-10850.patch, 
> HBASE-10850_V2.patch, HBaseSingleColumnValueFilterTest.java
>
>
> When using the filter SingleColumnValueFilter, and depending of the columns 
> specified in the scan (filtering column always specified), the results can be 
> different.
> Here is an example.
> Suppose the following table:
> ||key||a:foo||a:bar||b:foo||b:bar||
> |1|false|_flag_|_flag_|_flag_|
> |2|true|_flag_|_flag_|_flag_|
> |3| |_flag_|_flag_|_flag_|
> With this filter:
> {code}
> SingleColumnValueFilter filter = new 
> SingleColumnValueFilter(Bytes.toBytes("a"), Bytes.toBytes("foo"), 
> CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("false")));
> filter.setFilterIfMissing(true);
> {code}
> Depending of how I specify the list of columns to add in the scan, the result 
> is different. Yet, all examples below should always return only the first row 
> (key '1'):
> OK:
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> scan.addFamily(Bytes.toBytes("b"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("foo"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("bar"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> {code}
> This is a regression as it was working properly on HBase 0.92.
> You will find in attachement the unit tests reproducing the issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10850) Unexpected behavior when using filter SingleColumnValueFilter

2014-03-28 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950626#comment-13950626
 ] 

Anoop Sam John commented on HBASE-10850:


With this change, all the tests in the attached test class work correctly 
when filterIfMissing is true/false.

> Unexpected behavior when using filter SingleColumnValueFilter
> -
>
> Key: HBASE-10850
> URL: https://issues.apache.org/jira/browse/HBASE-10850
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: haosdent
>Priority: Critical
> Attachments: HBASE-10850-96.patch, HBASE-10850.patch, 
> HBASE-10850_V2.patch, HBaseSingleColumnValueFilterTest.java
>
>
> When using the filter SingleColumnValueFilter, and depending of the columns 
> specified in the scan (filtering column always specified), the results can be 
> different.
> Here is an example.
> Suppose the following table:
> ||key||a:foo||a:bar||b:foo||b:bar||
> |1|false|_flag_|_flag_|_flag_|
> |2|true|_flag_|_flag_|_flag_|
> |3| |_flag_|_flag_|_flag_|
> With this filter:
> {code}
> SingleColumnValueFilter filter = new 
> SingleColumnValueFilter(Bytes.toBytes("a"), Bytes.toBytes("foo"), 
> CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("false")));
> filter.setFilterIfMissing(true);
> {code}
> Depending of how I specify the list of columns to add in the scan, the result 
> is different. Yet, all examples below should always return only the first row 
> (key '1'):
> OK:
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> scan.addFamily(Bytes.toBytes("b"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("foo"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("bar"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> {code}
> This is a regression as it was working properly on HBase 0.92.
> You will find in attachement the unit tests reproducing the issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10861) Supporting API in ByteRange

2014-03-28 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10861:
---

Attachment: HBASE-10861.patch

> Supporting API in ByteRange
> ---
>
> Key: HBASE-10861
> URL: https://issues.apache.org/jira/browse/HBASE-10861
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-10861.patch
>
>
> We would need APIs that would 
> setLimit(int limit)
> getLimt()
> asReadOnly()
> These APIs would help in implementations that have Buffers offheap (for now 
> BRs backed by DBB).
> If anything more is needed could be added when needed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10854) Multiple Row/VisibilityLabels visible while in the memstore

2014-03-28 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950663#comment-13950663
 ] 

Anoop Sam John commented on HBASE-10854:


This is not the case with MemStore items alone. Consider the case of a cell 
(with a label) being written; after this a flush happens, so there is one cell in 
that HFile. A different version of the same cell is written again (with a 
different label) and this is flushed too. Now there are 2 cells in 2 HFiles; make 
sure no compaction happens. The scenario described here can then occur as well. 
After a compaction the behaviour will change.
By default the max versions for a CF is 1, and so flushes and compactions will 
make sure to write only 1 cell version in these cases.
During a scan, even if we specify some max-versions count in the Scan, what we 
take is the min of both these version numbers, which comes out as 1 here. 
The visibility-based evaluation and cell filtering happen at the Filter level, 
while on a layer above (after this filtering) the filtering based on the number 
of max versions happens (in SQM).
So to fix this problem, we have to consider the min version number used in SQM 
at the lower layers also (the Readers).

bq.Second, we should agree on what is the correct behavior for schemas 
supporting multiple versions, with multiple cell versions with differing 
visibility expressions among the versions
IMO in this case we have to consider all cells, and we have to return whichever 
version's visibility supports viewing by the user.



> Multiple Row/VisibilityLabels visible while in the memstore
> ---
>
> Key: HBASE-10854
> URL: https://issues.apache.org/jira/browse/HBASE-10854
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.98.1
>Reporter: Matteo Bertozzi
>
> If we update the row multiple times with different visibility labels,
> we are able to get the "old version" of the row until it is flushed.
> {code}
> $ sudo -u hbase hbase shell
> hbase> add_labels 'A'
> hbase> add_labels 'B'
> hbase> create 'tb', 'f1'
> hbase> put 'tb', 'row', 'f1:q', 'v1', {VISIBILITY=>'A'}
> hbase> put 'tb', 'row', 'f1:q', 'v1all'
> hbase> put 'tb', 'row', 'f1:q', 'v1aOrB', {VISIBILITY=>'A|B'}
> hbase> put 'tb', 'row', 'f1:q', 'v1aAndB', {VISIBILITY=>'A&B'}
> hbase> scan 'tb'
> row column=f1:q, timestamp=1395948168154, value=v1aAndB
> 1 row
> $ sudo -u testuser hbase shell
> hbase> scan 'tb'
> row column=f1:q, timestamp=1395948168102, value=v1all
> 1 row
> {code}
> When we flush the memstore we get a single row (the last one inserted),
> so the testuser gets 0 rows now.
> {code}
> $ sudo -u hbase hbase shell
> hbase> flush 'tb'
> hbase> scan 'tb'
> row column=f1:q, timestamp=1395948168154, value=v1aAndB
> 1 row
> $ sudo -u testuser hbase shell
> hbase> scan 'tb'
> 0 row
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10426) user_permission in security.rb calls non-existent UserPermission#getTable method

2014-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950670#comment-13950670
 ] 

Hudson commented on HBASE-10426:


SUCCESS: Integrated in hbase-0.96-hadoop2 #254 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/254/])
HBASE-10426 user_permission in security.rb calls non-existent 
UserPermission#getTable method (stack: rev 1582587)
* /hbase/branches/0.96/hbase-shell/src/main/ruby/hbase/security.rb


> user_permission in security.rb calls non-existent UserPermission#getTable 
> method
> 
>
> Key: HBASE-10426
> URL: https://issues.apache.org/jira/browse/HBASE-10426
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 10426-v1.txt, 10426-v2.txt
>
>
> In user mailing list, Alex reported this error under the thread 
> "user_permission error - undefined method 'getTable'":
> {code}
> hbase(main):010:0> create 'foo','bar'
> 0 row(s) in 0.5780 seconds
> => Hbase::Table - foo
> hbase(main):011:0> user_permission 'foo'
> User Table,Family,Qualifier:Permission
> *ERROR: undefined method `getTable' for
> #*
> {code}
> UserPermission#getTable method doesn't exist



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10854) Multiple Row/VisibilityLabels visible while in the memstore

2014-03-28 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950665#comment-13950665
 ] 

Anoop Sam John commented on HBASE-10854:


A similar case of inconsistency can come with an SCVF with latestVersionOnly set 
to false. Based on the value of the older versions we might get back a row in 
cases when the older versions are in the memstore, or in different HFiles (no 
compaction).

Ping [~lhofhansl]

> Multiple Row/VisibilityLabels visible while in the memstore
> ---
>
> Key: HBASE-10854
> URL: https://issues.apache.org/jira/browse/HBASE-10854
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.98.1
>Reporter: Matteo Bertozzi
>
> If we update the row multiple times with different visibility labels
> we are able to get the "old version" of the row until is flushed
> {code}
> $ sudo -u hbase hbase shell
> hbase> add_labels 'A'
> hbase> add_labels 'B'
> hbase> create 'tb', 'f1'
> hbase> put 'tb', 'row', 'f1:q', 'v1', {VISIBILITY=>'A'}
> hbase> put 'tb', 'row', 'f1:q', 'v1all'
> hbase> put 'tb', 'row', 'f1:q', 'v1aOrB', {VISIBILITY=>'A|B'}
> hbase> put 'tb', 'row', 'f1:q', 'v1aAndB', {VISIBILITY=>'A&B'}
> hbase> scan 'tb'
> row column=f1:q, timestamp=1395948168154, value=v1aAndB
> 1 row
> $ sudo -u testuser hbase shell
> hbase> scan 'tb'
> row column=f1:q, timestamp=1395948168102, value=v1all
> 1 row
> {code}
> When we flush the memstore we get a single row (the last one inserted)
> so the testuser get 0 rows now.
> {code}
> $ sudo -u hbase hbase shell
> hbase> flush 'tb'
> hbase> scan 'tb'
> row column=f1:q, timestamp=1395948168154, value=v1aAndB
> 1 row
> $ sudo -u testuser hbase shell
> hbase> scan 'tb'
> 0 row
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10848) Filter SingleColumnValueFilter combined with NullComparator does not work

2014-03-28 Thread haosdent (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950682#comment-13950682
 ] 

haosdent commented on HBASE-10848:
--

For this issue, I think just testing NullComparator#compareTo would be enough. It 
is not necessary to test it by starting a miniCluster. Just my opinion.
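
A sketch of the kind of direct unit test suggested above (the expected return 
values are assumptions about NullComparator's contract -- 0 meaning "matches a 
null value" -- and this is not the committed test):
{code}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.hbase.filter.NullComparator;
import org.junit.Test;

public class TestNullComparatorSketch {

  @Test
  public void testNullValueMatches() {
    NullComparator comparator = new NullComparator();
    // Assumed contract: a null value compares as equal (0).
    assertEquals(0, comparator.compareTo((byte[]) null));
  }

  @Test
  public void testNonNullValueDoesNotMatch() {
    NullComparator comparator = new NullComparator();
    // Assumed contract: any non-null value compares as non-equal.
    assertEquals(1, comparator.compareTo(new byte[] { 1, 2, 3 }));
  }
}
{code}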

> Filter SingleColumnValueFilter combined with NullComparator does not work
> -
>
> Key: HBASE-10848
> URL: https://issues.apache.org/jira/browse/HBASE-10848
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
> Attachments: HBASE-10848.patch, HBASE_10848-v2.patch, 
> HBaseRegression.java, TestScanWithNullComparable.java
>
>
> I want to filter out from the scan the rows that does not have a specific 
> column qualifier. For this purpose I use the filter SingleColumnValueFilter 
> combined with the NullComparator.
> But every time I use this in a scan, I get the following exception:
> {code}
> java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: 
> Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:47)
> at 
> com.xxx.xxx.test.HBaseRegression.nullComparator(HBaseRegression.java:92)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
> at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry 
> of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:391)
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:44)
> ... 25 more
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 7998309028985532303 number_of_rows: 100 close_scanner: false next_call_seq: 0
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3011)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26929)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
> at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorI

[jira] [Commented] (HBASE-10861) Supporting API in ByteRange

2014-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950877#comment-13950877
 ] 

Hadoop QA commented on HBASE-10861:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637379/HBASE-10861.patch
  against trunk revision .
  ATTACHMENT ID: 12637379

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 9 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestRegionPlacement

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestImportExport.testImport94Table(TestImportExport.java:230)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9126//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9126//console

This message is automatically generated.

> Supporting API in ByteRange
> ---
>
> Key: HBASE-10861
> URL: https://issues.apache.org/jira/browse/HBASE-10861
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-10861.patch
>
>
> We would need APIs that would 
> setLimit(int limit)
> getLimt()
> asReadOnly()
> These APIs would help in implementations that have Buffers offheap (for now 
> BRs backed by DBB).
> If anything more is needed could be added when needed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10849) Fix increased javadoc warns

2014-03-28 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950945#comment-13950945
 ] 

Jimmy Xiang commented on HBASE-10849:
-

+1

> Fix increased javadoc warns 
> 
>
> Key: HBASE-10849
> URL: https://issues.apache.org/jira/browse/HBASE-10849
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10849.patch
>
>
> {code}
> 6 warnings
> [WARNING] Javadoc Warnings
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:338:
>  warning - Tag @link: can't find isa in 
> org.apache.hadoop.hbase.regionserver.HRegionServer
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find openServer() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find startThreads() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find openServer() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find startThreads() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:721:
>  warning - @param argument "controller" is not a parameter name.
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10854) Multiple Row/VisibilityLabels visible while in the memstore

2014-03-28 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950977#comment-13950977
 ] 

ramkrishna.s.vasudevan commented on HBASE-10854:


Filtering happens first, and versions are later checked by SQM using the column 
trackers. Similar issues were raised some time back; I don't remember which one. 
I missed this issue during my daytime here.

> Multiple Row/VisibilityLabels visible while in the memstore
> ---
>
> Key: HBASE-10854
> URL: https://issues.apache.org/jira/browse/HBASE-10854
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.98.1
>Reporter: Matteo Bertozzi
>
> If we update the row multiple times with different visibility labels
> we are able to get the "old version" of the row until is flushed
> {code}
> $ sudo -u hbase hbase shell
> hbase> add_labels 'A'
> hbase> add_labels 'B'
> hbase> create 'tb', 'f1'
> hbase> put 'tb', 'row', 'f1:q', 'v1', {VISIBILITY=>'A'}
> hbase> put 'tb', 'row', 'f1:q', 'v1all'
> hbase> put 'tb', 'row', 'f1:q', 'v1aOrB', {VISIBILITY=>'A|B'}
> hbase> put 'tb', 'row', 'f1:q', 'v1aAndB', {VISIBILITY=>'A&B'}
> hbase> scan 'tb'
> row column=f1:q, timestamp=1395948168154, value=v1aAndB
> 1 row
> $ sudo -u testuser hbase shell
> hbase> scan 'tb'
> row column=f1:q, timestamp=1395948168102, value=v1all
> 1 row
> {code}
> When we flush the memstore we get a single row (the last one inserted)
> so the testuser get 0 rows now.
> {code}
> $ sudo -u hbase hbase shell
> hbase> flush 'tb'
> hbase> scan 'tb'
> row column=f1:q, timestamp=1395948168154, value=v1aAndB
> 1 row
> $ sudo -u testuser hbase shell
> hbase> scan 'tb'
> 0 row
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10857) clear_auths command gives exception on existing label and user

2014-03-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10857:
---

Affects Version/s: (was: 0.98.0)
   0.98.1

Deployed tip of 0.98 and saw the same problem.

> clear_auths command gives exception on existing label and user
> --
>
> Key: HBASE-10857
> URL: https://issues.apache.org/jira/browse/HBASE-10857
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.1
>Reporter: Ted Yu
>
> As user hbase, I performed the following:
> {code}
> hbase(main):001:0> set_auths 'oozie', [ 'TOP_SECRET' ]
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> 2014-03-27 22:35:44,312 WARN  [main] conf.Configuration: hbase-site.xml:an 
> attempt to override final parameter: dfs.support.append;  Ignoring.
> 0 row(s) in 2.6000 seconds
> hbase(main):002:0> scan 'hbase:labels'
> ROW  COLUMN+CELL
>  \x00\x00\x00\x01column=f:\x00, 
> timestamp=1395944796030, value=system
>  \x00\x00\x00\x01column=f:hbase, 
> timestamp=1395944796030, value=
>  \x00\x00\x00\x02column=f:\x00, 
> timestamp=1395951045442, value=TOP_SECRET
>  \x00\x00\x00\x02column=f:hrt_qa, 
> timestamp=1395951229682, value=
>  \x00\x00\x00\x02column=f:hrt_qa1, 
> timestamp=1395951270297, value=
>  \x00\x00\x00\x02column=f:mapred, 
> timestamp=1395958442326, value=
>  \x00\x00\x00\x02column=f:oozie, 
> timestamp=1395959745422, value=
>  \x00\x00\x00\x03column=f:\x00, 
> timestamp=1395952069731, value=TOP_TOP_SECRET
>  \x00\x00\x00\x03column=f:mapred, 
> timestamp=1395956032141, value=
> 3 row(s) in 0.0620 seconds
> {code}
> However, clear_auths command gave me:
> {code}
> hbase(main):003:0> clear_auths 'oozie', [ 'TOP_SECRET' ]
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> ERROR: org.apache.hadoop.hbase.security.visibility.InvalidLabelException: 
> Label 'TOP_SECRET' is not set for the user oozie
>   at 
> org.apache.hadoop.hbase.security.visibility.VisibilityController.clearAuths(VisibilityController.java:1304)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.clearAuths(VisibilityLabelsProtos.java:5030)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService.callMethod(VisibilityLabelsProtos.java:5188)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5518)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3299)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28865)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10849) Fix increased javadoc warns

2014-03-28 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951048#comment-13951048
 ] 

Jonathan Hsieh commented on HBASE-10849:


OK, sounds good.  I didn't realize the methods were actually removed -- I 
thought it was a deletion just to get rid of the javadoc warns.

> Fix increased javadoc warns 
> 
>
> Key: HBASE-10849
> URL: https://issues.apache.org/jira/browse/HBASE-10849
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10849.patch
>
>
> {code}
> 6 warnings
> [WARNING] Javadoc Warnings
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:338:
>  warning - Tag @link: can't find isa in 
> org.apache.hadoop.hbase.regionserver.HRegionServer
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find openServer() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find startThreads() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find openServer() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find startThreads() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:721:
>  warning - @param argument "controller" is not a parameter name.
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10849) Fix increased javadoc warns

2014-03-28 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10849:
---

  Resolution: Fixed
Assignee: Anoop Sam John
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk.  Thanks for the reviews.

> Fix increased javadoc warns 
> 
>
> Key: HBASE-10849
> URL: https://issues.apache.org/jira/browse/HBASE-10849
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10849.patch
>
>
> {code}
> 6 warnings
> [WARNING] Javadoc Warnings
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:338:
>  warning - Tag @link: can't find isa in 
> org.apache.hadoop.hbase.regionserver.HRegionServer
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find openServer() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find startThreads() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find openServer() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find startThreads() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:721:
>  warning - @param argument "controller" is not a parameter name.
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-10863) Scan doesn't return rows for user who has authorization by visibility label

2014-03-28 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan reassigned HBASE-10863:
--

Assignee: ramkrishna.s.vasudevan

> Scan doesn't return rows for user who has authorization by visibility label
> ---
>
> Key: HBASE-10863
> URL: https://issues.apache.org/jira/browse/HBASE-10863
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.1
>Reporter: Ted Yu
>Assignee: ramkrishna.s.vasudevan
>
> In secure deployment of 0.98 tip, I did:
> as user hbase:
> {code}
> add_labels 'A'
> create 'tb', 'f1'
> put 'tb', 'row', 'f1:q', 'v1', {VISIBILITY=>'A'}
> set_auths 'oozie', ['A']
> {code}
> as user oozie:
> {code}
> hbase(main):001:0> scan 'tb', { AUTHORIZATIONS => ['A']}
> ROW  COLUMN+CELL
> 0 row(s) in 0.1030 seconds
> {code}
> Here is my config:
> {code}
> <property>
>   <name>hfile.format.version</name>
>   <value>3</value>
> </property>
> <property>
>   <name>hbase.coprocessor.master.classes</name>
>   <value>org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
> </property>
> <property>
>   <name>hbase.coprocessor.region.classes</name>
>   <value>org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
> </property>
> <property>
>   <name>hbase.regionserver.scan.visibility.label.generator.class</name>
>   <value>org.apache.hadoop.hbase.security.visibility.DefaultScanLabelGenerator</value>
> </property>
> {code}
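
For reference, the same repro can be driven from the Java client; a minimal 
sketch, assuming the standard 0.98 visibility-labels client API (the table name 
and label match the repro above, everything else is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.security.visibility.Authorizations;

public class VisibilityScanRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Run as user 'oozie', who has been granted the 'A' auth via set_auths.
    HTable table = new HTable(conf, "tb");
    try {
      Scan scan = new Scan();
      // Equivalent of AUTHORIZATIONS => ['A'] in the shell.
      scan.setAuthorizations(new Authorizations("A"));
      ResultScanner scanner = table.getScanner(scan);
      for (Result r : scanner) {
        System.out.println(r);   // expected: the row written with VISIBILITY=>'A'
      }
      scanner.close();
    } finally {
      table.close();
    }
  }
}
{code}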



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10815) Master regionserver should be rolling-upgradable

2014-03-28 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10815:


Status: Open  (was: Patch Available)

> Master regionserver should be rolling-upgradable
> 
>
> Key: HBASE-10815
> URL: https://issues.apache.org/jira/browse/HBASE-10815
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, Region Assignment
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.99.0
>
> Attachments: hbase-10815.patch
>
>
> In HBASE-10569, two things could affect the rolling upgrade from a 0.96+ 
> release:
> * The master doesn't have its own info server anymore. It shares the same info 
> server with the regionserver. We can have a setting so that we can start two 
> info servers, one for the master on the original port, and one for the 
> regionserver.
> * The backup master is a regionserver now, so it could hold regions. This could 
> affect some deployments. We can have a setting so that we can prevent the 
> backup master from serving any regions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10859) HStore.openStoreFiles() should pass the StoreFileInfo object to createStoreFileAndReader()

2014-03-28 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951011#comment-13951011
 ] 

Nick Dimiduk commented on HBASE-10859:
--

The patch looks good overall. This comment catches my attention, though:

{noformat}
+// else create a store file link. The link file does not exists on 
filesystem though.
{noformat}

I think this could come as a surprise to other parts of the code, assuming the 
link does indeed exist.
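
For context, the change the summary asks for amounts to handing the 
already-resolved StoreFileInfo down instead of rebuilding it from a Path. A 
rough sketch of that shape -- not the actual hbase-10859_v1.patch, with 
signatures approximated from the stack trace in the description:

{code}
// Sketch only: reuse the StoreFileInfo that openStoreFiles() already holds, so we
// don't stat the file a second time (which fails if the primary just compacted it away).
private StoreFile createStoreFileAndReader(final StoreFileInfo info) throws IOException {
  StoreFile storeFile = new StoreFile(this.getFileSystem(), info, this.conf,
      this.cacheConf, this.family.getBloomFilterType());
  storeFile.createReader();
  return storeFile;
}

// Callers that only have a Path resolve the info once and delegate:
private StoreFile createStoreFileAndReader(final Path p) throws IOException {
  return createStoreFileAndReader(new StoreFileInfo(this.conf, this.getFileSystem(), p));
}
{code}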

> HStore.openStoreFiles() should pass the StoreFileInfo object to 
> createStoreFileAndReader()
> --
>
> Key: HBASE-10859
> URL: https://issues.apache.org/jira/browse/HBASE-10859
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: hbase-10070
>
> Attachments: hbase-10859_v1.patch
>
>
> We sometimes see the following stack trace on test logs (TestReplicasClient), 
> but this is not test-specific:
> {code}
> 2014-03-26 21:44:18,662 ERROR [RS_OPEN_REGION-c64-s12:35852-2] 
> handler.OpenRegionHandler(481): Failed open of 
> region=TestReplicasClient,,1395895445056_0001.5f8b8db27e36d2dde781193d92a05730.,
>  starting to roll back the global memstore size.
> java.io.IOException: java.io.IOException: java.io.FileNotFoundException: File 
> does not exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:739)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:646)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:617)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4447)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4417)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4389)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4345)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4296)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:722)
> Caused by: java.io.IOException: java.io.FileNotFoundException: File does not 
> exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:531)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:486)
>   at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:254)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3357)
>   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:710)
>   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:707)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   ... 3 more
> Caused by: java.io.FileNotFoundException: File does not exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1128)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:397)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.<init>(StoreFileInfo.java:95)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:600)
>   at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:121)
>   at org.apache.hadoop.hbase.re

[jira] [Updated] (HBASE-10863) Scan doesn't return rows for user who has authorization by visibility label

2014-03-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10863:
---

Description: 
In secure deployment of 0.98 tip, I did:
as user hbase:
{code}
add_labels 'A'
create 'tb', 'f1'
put 'tb', 'row', 'f1:q', 'v1', {VISIBILITY=>'A'}
set_auths 'oozie', ['A']
{code}
as user oozie:
{code}
hbase(main):001:0> scan 'tb', { AUTHORIZATIONS => ['A']}
ROW  COLUMN+CELL
0 row(s) in 0.1030 seconds
{code}
Here is my config:
{code}
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
</property>
<property>
  <name>hbase.regionserver.scan.visibility.label.generator.class</name>
  <value>org.apache.hadoop.hbase.security.visibility.DefaultScanLabelGenerator</value>
</property>
{code}

  was:
In secure deployment of 0.98 tip, I did:
as user hbase:
{code}
add_labels 'A'
create 'tb', 'f1'
put 'tb', 'row', 'f1:q', 'v1', {VISIBILITY=>'A'}
set_auths 'oozie', ['A']
{code}
as user oozie:
{code}
hbase(main):001:0> scan 'tb'
ROW  COLUMN+CELL
0 row(s) in 0.1030 seconds
{code}
Here is my config:
{code}
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
</property>
<property>
  <name>hbase.regionserver.scan.visibility.label.generator.class</name>
  <value>org.apache.hadoop.hbase.security.visibility.DefaultScanLabelGenerator</value>
</property>
{code}


> Scan doesn't return rows for user who has authorization by visibility label
> ---
>
> Key: HBASE-10863
> URL: https://issues.apache.org/jira/browse/HBASE-10863
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.1
>Reporter: Ted Yu
>
> In secure deployment of 0.98 tip, I did:
> as user hbase:
> {code}
> add_labels 'A'
> create 'tb', 'f1'
> put 'tb', 'row', 'f1:q', 'v1', {VISIBILITY=>'A'}
> set_auths 'oozie', ['A']
> {code}
> as user oozie:
> {code}
> hbase(main):001:0> scan 'tb', { AUTHORIZATIONS => ['A']}
> ROW  COLUMN+CELL
> 0 row(s) in 0.1030 seconds
> {code}
> Here is my config:
> {code}
> <property>
>   <name>hfile.format.version</name>
>   <value>3</value>
> </property>
> <property>
>   <name>hbase.coprocessor.master.classes</name>
>   <value>org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
> </property>
> <property>
>   <name>hbase.coprocessor.region.classes</name>
>   <value>org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
> </property>
> <property>
>   <name>hbase.regionserver.scan.visibility.label.generator.class</name>
>   <value>org.apache.hadoop.hbase.security.visibility.DefaultScanLabelGenerator</value>
> </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10859) HStore.openStoreFiles() should pass the StoreFileInfo object to createStoreFileAndReader()

2014-03-28 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951098#comment-13951098
 ] 

Devaraj Das commented on HBASE-10859:
-

The patch looks good to me.
[~ndimiduk], on the topic of in-memory vs disk, we probably could abstract away 
some functionality from HFileLink to something like InMemoryFileLink. Have 
HFileLink extend InMemoryFileLink and implement the "real" file-link stuff...
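
A rough sketch of the split being suggested here; the class and method names are 
hypothetical, purely to illustrate the idea, and this is not existing HBase code:

{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical base class: a "link" that is purely an in-memory notion -- a set of
// candidate locations -- with no backing link file on the filesystem.
class InMemoryFileLink {
  protected final Path[] locations;

  InMemoryFileLink(Path... locations) {
    this.locations = locations;
  }

  /** Return the first candidate location that actually exists on the filesystem. */
  public Path getAvailablePath(FileSystem fs) throws IOException {
    for (Path p : locations) {
      if (fs.exists(p)) {
        return p;
      }
    }
    throw new FileNotFoundException("None of the candidate locations exist");
  }
}

// HFileLink would then extend it and keep only the "real" link-file handling
// (back-reference files, archive/tmp directories, etc.).
class HFileLink extends InMemoryFileLink {
  HFileLink(Path originPath, Path tmpPath, Path archivePath) {
    super(originPath, tmpPath, archivePath);
  }
}
{code}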

> HStore.openStoreFiles() should pass the StoreFileInfo object to 
> createStoreFileAndReader()
> --
>
> Key: HBASE-10859
> URL: https://issues.apache.org/jira/browse/HBASE-10859
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: hbase-10070
>
> Attachments: hbase-10859_v1.patch
>
>
> We sometimes see the following stack trace on test logs (TestReplicasClient), 
> but this is not test-specific:
> {code}
> 2014-03-26 21:44:18,662 ERROR [RS_OPEN_REGION-c64-s12:35852-2] 
> handler.OpenRegionHandler(481): Failed open of 
> region=TestReplicasClient,,1395895445056_0001.5f8b8db27e36d2dde781193d92a05730.,
>  starting to roll back the global memstore size.
> java.io.IOException: java.io.IOException: java.io.FileNotFoundException: File 
> does not exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:739)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:646)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:617)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4447)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4417)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4389)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4345)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4296)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:722)
> Caused by: java.io.IOException: java.io.FileNotFoundException: File does not 
> exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:531)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:486)
>   at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:254)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3357)
>   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:710)
>   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:707)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   ... 3 more
> Caused by: java.io.FileNotFoundException: File does not exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1128)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:397)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.<init>(StoreFileInfo.java:95)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:600)
>   at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:121)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStor

[jira] [Commented] (HBASE-10771) Primitive type put/get APIs in ByteRange

2014-03-28 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951100#comment-13951100
 ] 

stack commented on HBASE-10771:
---

Otherwise, the patch seems fine.

bq. Adding of this kind of an API will allow some one to have 2 code paths if 
needed..

Sounds like we will have two code paths in places (smile).  Probably 
unavoidable.

> Primitive type put/get APIs in ByteRange 
> -
>
> Key: HBASE-10771
> URL: https://issues.apache.org/jira/browse/HBASE-10771
> Project: HBase
>  Issue Type: Improvement
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.99.0
>
> Attachments: HBASE-10771.patch, HBASE-10771_V2.patch
>
>
> While doing HBASE-10713 I came across the need to write int/long (and read 
> also) from a ByteRange.  CellBlocks are backed by ByteRange. So we can add 
> such APIs.
> Also as per HBASE-10750  we return a ByteRange from MSLAB and also discussion 
> under HBASE-10191 suggest we can have BR backed HFileBlocks etc.  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10864) Spelling nit

2014-03-28 Thread Alex Newman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Newman updated HBASE-10864:


Attachment: 0001-HBASE-10715.-TimeRange-has-a-poorly-formatted-error-.patch

> Spelling nit
> 
>
> Key: HBASE-10864
> URL: https://issues.apache.org/jira/browse/HBASE-10864
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Newman
>Assignee: Alex Newman
> Attachments: 
> 0001-HBASE-10715.-TimeRange-has-a-poorly-formatted-error-.patch
>
>
> We should really be more careful about spelling qualifier



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10864) Spelling nit

2014-03-28 Thread Alex Newman (JIRA)
Alex Newman created HBASE-10864:
---

 Summary: Spelling nit
 Key: HBASE-10864
 URL: https://issues.apache.org/jira/browse/HBASE-10864
 Project: HBase
  Issue Type: Bug
Reporter: Alex Newman


We should really be more careful about spelling qualifier



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10864) Spelling nit

2014-03-28 Thread Alex Newman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Newman updated HBASE-10864:


Assignee: Alex Newman
  Status: Patch Available  (was: Open)

> Spelling nit
> 
>
> Key: HBASE-10864
> URL: https://issues.apache.org/jira/browse/HBASE-10864
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Newman
>Assignee: Alex Newman
>
> We should really be more careful about spelling qualifier



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-9043) Message Codec has unneeded imports

2014-03-28 Thread Alex Newman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Newman resolved HBASE-9043.


Resolution: Won't Fix

> Message Codec has unneeded imports
> --
>
> Key: HBASE-9043
> URL: https://issues.apache.org/jira/browse/HBASE-9043
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Newman
>Assignee: Alex Newman
>Priority: Trivial
> Attachments: HBASE-9043.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10771) Primitive type put/get APIs in ByteRange

2014-03-28 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951095#comment-13951095
 ] 

stack commented on HBASE-10771:
---

Why again is it that we don't just do ByteBuffer?  ByteRange doc says:

{code}
 * This interface differs from ByteBuffer:
 * On-heap bytes only
 * Raw {@code byte} access only; does not encode other primitives.
 * Implements {@code equals(Object)}, {@code #hashCode()}, and
 * {@code #compareTo(ByteRange)} so that it can be used in standard java
 * Collections. Comparison operations are lexicographic, which is native to
 * HBase.
 * Allows the addition of simple core methods like the deep and shallow
 * copy methods.
 * Can be reused in tight loops like a major compaction which can save
 * significant amounts of garbage. (Without reuse, we throw off garbage like
 * <a href="http://www.youtube.com/watch?v=lkmBH-MjZF4">this thing</a>.)
 * 
 * 
 * Mutable, and always evaluates {@code #equals(Object)}, {@code #hashCode()},
 * and {@code #compareTo(ByteRange)} based on the current contents.
 * 
 * 
 * Can contain convenience methods for comparing, printing, cloning, spawning
 * new arrays, copying to other arrays, etc. Please place non-core methods into
 * {@link ByteRangeUtils}.
{code}

So, we are violating at least the first two items in the Interface with these 
changes, right?

Will we want to evolve to this 
http://netty.io/4.0/api/io/netty/buffer/ByteBuf.html eventually?  Or pull in 
some of this functionality too?  (If we did ByteBuf, then we'd have other 
facilities available to us from Netty.)

Have we written up an end-to-end for ByteRange anywhere, going in and out?  
Pardon me if we have and I've just not kept up.

> Primitive type put/get APIs in ByteRange 
> -
>
> Key: HBASE-10771
> URL: https://issues.apache.org/jira/browse/HBASE-10771
> Project: HBase
>  Issue Type: Improvement
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.99.0
>
> Attachments: HBASE-10771.patch, HBASE-10771_V2.patch
>
>
> While doing HBASE-10713 I came across the need to write int/long (and read 
> also) from a ByteRange.  CellBlocks are backed by ByteRange. So we can add 
> such APIs.
> Also as per HBASE-10750  we return a ByteRange from MSLAB and also discussion 
> under HBASE-10191 suggest we can have BR backed HFileBlocks etc.  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10809) HBaseAdmin#deleteTable fails when META region happen to move around same time

2014-03-28 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-10809:
--

   Resolution: Fixed
Fix Version/s: 0.96.3
   0.98.2
   0.99.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks [~devaraj] & [~saint@gmail.com] for the reviews! I've integrated the 
patch into the 0.96, 0.98 and trunk branches.

> HBaseAdmin#deleteTable fails when META region happen to move around same time
> -
>
> Key: HBASE-10809
> URL: https://issues.apache.org/jira/browse/HBASE-10809
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.96.1, 0.99.0
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.99.0, 0.98.2, 0.96.3
>
> Attachments: hbase-10809-v1.patch, hbase-10809.patch, 
> hbase-10809.patch
>
>
> The issue is that in the retry loop, we never refetch the latest META 
> location. So if META starts to move right after the function gets the META 
> server location, the delete table call may eventually fail with 
> {code}org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 
> is not online{code}
> Below is the stack trace:
> {code}
> 2014-01-31 04:02:41,943|beaver.machine|INFO|Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.NotServingRegionException):
>  org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is 
> not online
> 2014-01-31 04:02:41,943|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2585)
> 2014-01-31 04:02:41,943|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3952)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:2977)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26929)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
> 2014-01-31 04:02:41,945|beaver.machine|INFO|
> 2014-01-31 04:02:41,945|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1449)
> 2014-01-31 04:02:41,945|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:27332)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.client.HBaseAdmin.deleteTable(HBaseAdmin.java:648)
> {code}
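
For what it's worth, the shape of the fix the description calls for -- 
re-resolving the hbase:meta location on every retry -- is roughly the following. 
This is not the committed patch; scanMetaForTableRegions() is a hypothetical 
placeholder, and connection, tableName, numRetries and pause are assumed to be 
in scope:

{code}
// Sketch of a retry loop that re-locates hbase:meta on every attempt instead of
// reusing the server looked up once before the loop started.
for (int tries = 0; tries < numRetries; tries++) {
  try {
    HRegionLocation metaLocation = connection.locateRegion(
        TableName.META_TABLE_NAME, HConstants.EMPTY_START_ROW);
    if (scanMetaForTableRegions(metaLocation, tableName).isEmpty()) {
      return;   // no rows left for the table: the delete has completed
    }
  } catch (NotServingRegionException e) {
    // META moved between the lookup and the scan; loop around and re-locate it.
  }
  Thread.sleep(pause);
}
{code}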



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10864) Spelling nit

2014-03-28 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951131#comment-13951131
 ] 

stack commented on HBASE-10864:
---

Alex, this and the next issue are too trivial to warrant their own JIRAs.  Contribute 
something more substantial, like moving our pb to protostuff in a critical 
location.

> Spelling nit
> 
>
> Key: HBASE-10864
> URL: https://issues.apache.org/jira/browse/HBASE-10864
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Newman
>Assignee: Alex Newman
> Attachments: 
> 0001-HBASE-10715.-TimeRange-has-a-poorly-formatted-error-.patch
>
>
> We should really be more careful about spelling qualifier



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10864) Spelling nit

2014-03-28 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951132#comment-13951132
 ] 

stack commented on HBASE-10864:
---

Or, do a bunch at a time rather than piecemeal them in?

> Spelling nit
> 
>
> Key: HBASE-10864
> URL: https://issues.apache.org/jira/browse/HBASE-10864
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Newman
>Assignee: Alex Newman
> Attachments: 
> 0001-HBASE-10715.-TimeRange-has-a-poorly-formatted-error-.patch
>
>
> We should really be more careful about spelling qualifier



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10864) Spelling nit

2014-03-28 Thread Alex Newman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951135#comment-13951135
 ] 

Alex Newman commented on HBASE-10864:
-

Ah no problem! Close her out. Sorry for the spam.

> Spelling nit
> 
>
> Key: HBASE-10864
> URL: https://issues.apache.org/jira/browse/HBASE-10864
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Newman
>Assignee: Alex Newman
> Attachments: 
> 0001-HBASE-10715.-TimeRange-has-a-poorly-formatted-error-.patch
>
>
> We should really be more careful about spelling qualifier



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10531) Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

2014-03-28 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951138#comment-13951138
 ] 

stack commented on HBASE-10531:
---

Are the findbugs and javadoc warnings yours, [~ram_krish]? Fine to fix them in a 
follow-on, I'd say.

> Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo
> 
>
> Key: HBASE-10531
> URL: https://issues.apache.org/jira/browse/HBASE-10531
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.99.0
>
> Attachments: HBASE-10531.patch, HBASE-10531_1.patch, 
> HBASE-10531_12.patch, HBASE-10531_2.patch, HBASE-10531_3.patch, 
> HBASE-10531_4.patch, HBASE-10531_5.patch, HBASE-10531_6.patch, 
> HBASE-10531_7.patch, HBASE-10531_8.patch, HBASE-10531_9.patch
>
>
> Currently the byte[] key passed to HFileScanner.seekTo and 
> HFileScanner.reseekTo, is a combination of row, cf, qual, type and ts.  And 
> the caller forms this by using kv.getBuffer, which is actually deprecated.  
> So see how this can be achieved considering kv.getBuffer is removed.
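
One hypothetical direction -- not necessarily where this issue will land -- is to 
seek on a Cell so callers never need the flattened key buffer at all; the 
interface name below is made up purely for illustration:

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.Cell;

// Illustrative only: a Cell-based seek API would let callers drop kv.getBuffer().
public interface CellSeekableScanner {
  /** Seek to the given key, expressed as a Cell rather than a flattened byte[]. */
  int seekTo(Cell key) throws IOException;

  /** Reseek forward to the given key; the scanner must already be positioned. */
  int reseekTo(Cell key) throws IOException;
}

// A caller that today builds the key via kv.getBuffer()/getKeyOffset()/getKeyLength()
// would simply pass the KeyValue (which is a Cell) directly:
//   scanner.seekTo(kv);
{code}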



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10864) Spelling nit

2014-03-28 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-10864:
--

Priority: Trivial  (was: Major)

Well, it is something to fix.

> Spelling nit
> 
>
> Key: HBASE-10864
> URL: https://issues.apache.org/jira/browse/HBASE-10864
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Newman
>Assignee: Alex Newman
>Priority: Trivial
> Attachments: 
> 0001-HBASE-10715.-TimeRange-has-a-poorly-formatted-error-.patch
>
>
> We should really be more careful about spelling qualifier



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10771) Primitive type put/get APIs in ByteRange

2014-03-28 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951143#comment-13951143
 ] 

Nick Dimiduk commented on HBASE-10771:
--

IMO, we're better off making use of an upstream project's types than 
reinventing our own. We'll enjoy better compatibility with ecosystem tools. 
Trouble is, ByteBuffer is inflexible and we haven't reached consensus on adopting 
Netty.

The primary concerns of ByteRange are: reusable instances within a tight loop 
of compactions, a simple(r) interface -- something I myself violated in 
introducing PositionedByteRange -- and a more relevant Comparable implementation. 
Can we not resolve these difficulties using subclasses, helper/utility methods, 
and/or reflection?

> Primitive type put/get APIs in ByteRange 
> -
>
> Key: HBASE-10771
> URL: https://issues.apache.org/jira/browse/HBASE-10771
> Project: HBase
>  Issue Type: Improvement
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.99.0
>
> Attachments: HBASE-10771.patch, HBASE-10771_V2.patch
>
>
> While doing HBASE-10713 I came across the need to write int/long (and read 
> also) from a ByteRange.  CellBlocks are backed by ByteRange. So we can add 
> such APIs.
> Also as per HBASE-10750  we return a ByteRange from MSLAB and also discussion 
> under HBASE-10191 suggest we can have BR backed HFileBlocks etc.  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10864) Spelling nit

2014-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951142#comment-13951142
 ] 

Hadoop QA commented on HBASE-10864:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12637456/0001-HBASE-10715.-TimeRange-has-a-poorly-formatted-error-.patch
  against trunk revision .
  ATTACHMENT ID: 12637456

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9129//console

This message is automatically generated.

> Spelling nit
> 
>
> Key: HBASE-10864
> URL: https://issues.apache.org/jira/browse/HBASE-10864
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Newman
>Assignee: Alex Newman
>Priority: Trivial
> Attachments: 
> 0001-HBASE-10715.-TimeRange-has-a-poorly-formatted-error-.patch
>
>
> We should really be more careful about spelling qualifier



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10859) HStore.openStoreFiles() should pass the StoreFileInfo object to createStoreFileAndReader()

2014-03-28 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951146#comment-13951146
 ] 

Sergey Shelukhin commented on HBASE-10859:
--

+1

> HStore.openStoreFiles() should pass the StoreFileInfo object to 
> createStoreFileAndReader()
> --
>
> Key: HBASE-10859
> URL: https://issues.apache.org/jira/browse/HBASE-10859
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: hbase-10070
>
> Attachments: hbase-10859_v1.patch
>
>
> We sometimes see the following stack trace on test logs (TestReplicasClient), 
> but this is not test-specific:
> {code}
> 2014-03-26 21:44:18,662 ERROR [RS_OPEN_REGION-c64-s12:35852-2] 
> handler.OpenRegionHandler(481): Failed open of 
> region=TestReplicasClient,,1395895445056_0001.5f8b8db27e36d2dde781193d92a05730.,
>  starting to roll back the global memstore size.
> java.io.IOException: java.io.IOException: java.io.FileNotFoundException: File 
> does not exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:739)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:646)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:617)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4447)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4417)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4389)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4345)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4296)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:722)
> Caused by: java.io.IOException: java.io.FileNotFoundException: File does not 
> exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:531)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:486)
>   at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:254)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3357)
>   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:710)
>   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:707)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   ... 3 more
> Caused by: java.io.FileNotFoundException: File does not exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1128)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:397)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.<init>(StoreFileInfo.java:95)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:600)
>   at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:121)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:506)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:503)
>   ... 8 more
> {code}
> The region fails to open for the region replica, because at this time, the 
> primary region is performing a compaction. The file is move

[jira] [Commented] (HBASE-10531) Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

2014-03-28 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951152#comment-13951152
 ] 

ramkrishna.s.vasudevan commented on HBASE-10531:


The reports are missing. Maybe I will give it one more run, get the latest 
reports, and fix them before commit. Anoop has given some nice comments over in 
RB.  Checking them out.

> Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo
> 
>
> Key: HBASE-10531
> URL: https://issues.apache.org/jira/browse/HBASE-10531
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.99.0
>
> Attachments: HBASE-10531.patch, HBASE-10531_1.patch, 
> HBASE-10531_12.patch, HBASE-10531_2.patch, HBASE-10531_3.patch, 
> HBASE-10531_4.patch, HBASE-10531_5.patch, HBASE-10531_6.patch, 
> HBASE-10531_7.patch, HBASE-10531_8.patch, HBASE-10531_9.patch
>
>
> Currently the byte[] key passed to HFileScanner.seekTo and 
> HFileScanner.reseekTo, is a combination of row, cf, qual, type and ts.  And 
> the caller forms this by using kv.getBuffer, which is actually deprecated.  
> So see how this can be achieved considering kv.getBuffer is removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10771) Primitive type put/get APIs in ByteRange

2014-03-28 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951166#comment-13951166
 ] 

Anoop Sam John commented on HBASE-10771:


Oh yes, Stack, we are violating that. My bad, I hadn't read that javadoc yet.  I would 
like to go with BR instead of BB, as it leaves us the freedom of moving to Netty or 
others later.
I don't want to make this object fat; that is why I was not thinking of adding 
mark or reset kind of methods either.  The put and get methods are added because they 
can abstract the implementation detail: on-heap and off-heap can implement them differently.
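
For the on-heap case, those primitives can be thin wrappers over 
org.apache.hadoop.hbase.util.Bytes; a minimal sketch, assuming the usual 
bytes/offset fields of an array-backed ByteRange implementation (an off-heap 
implementation could back the same methods with a direct ByteBuffer instead):

{code}
// Illustrative on-heap versions of the proposed put/get primitives; 'bytes' and
// 'offset' are assumed to be the backing array and start offset of the range.
public ByteRange putInt(int index, int val) {
  Bytes.putInt(bytes, offset + index, val);
  return this;
}

public int getInt(int index) {
  return Bytes.toInt(bytes, offset + index);
}
{code}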

> Primitive type put/get APIs in ByteRange 
> -
>
> Key: HBASE-10771
> URL: https://issues.apache.org/jira/browse/HBASE-10771
> Project: HBase
>  Issue Type: Improvement
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.99.0
>
> Attachments: HBASE-10771.patch, HBASE-10771_V2.patch
>
>
> While doing HBASE-10713 I came across the need to write int/long (and read 
> also) from a ByteRange.  CellBlocks are backed by ByteRange. So we can add 
> such APIs.
> Also as per HBASE-10750  we return a ByteRange from MSLAB and also discussion 
> under HBASE-10191 suggest we can have BR backed HFileBlocks etc.  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10531) Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

2014-03-28 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951173#comment-13951173
 ] 

Anoop Sam John commented on HBASE-10531:


bq.Are the findbugs and javadocs yours
No. These are fixed in trunk now.

> Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo
> 
>
> Key: HBASE-10531
> URL: https://issues.apache.org/jira/browse/HBASE-10531
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.99.0
>
> Attachments: HBASE-10531.patch, HBASE-10531_1.patch, 
> HBASE-10531_12.patch, HBASE-10531_2.patch, HBASE-10531_3.patch, 
> HBASE-10531_4.patch, HBASE-10531_5.patch, HBASE-10531_6.patch, 
> HBASE-10531_7.patch, HBASE-10531_8.patch, HBASE-10531_9.patch
>
>
> Currently the byte[] key passed to HFileScanner.seekTo and 
> HFileScanner.reseekTo, is a combination of row, cf, qual, type and ts.  And 
> the caller forms this by using kv.getBuffer, which is actually deprecated.  
> So see how this can be achieved considering kv.getBuffer is removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10849) Fix increased javadoc warns

2014-03-28 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950940#comment-13950940
 ] 

Jimmy Xiang commented on HBASE-10849:
-

[~anoopsamjohn], you are right. The doc should be updated. Thanks.

> Fix increased javadoc warns 
> 
>
> Key: HBASE-10849
> URL: https://issues.apache.org/jira/browse/HBASE-10849
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10849.patch
>
>
> {code}
> 6 warnings
> [WARNING] Javadoc Warnings
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:338:
>  warning - Tag @link: can't find isa in 
> org.apache.hadoop.hbase.regionserver.HRegionServer
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find openServer() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find startThreads() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find openServer() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find startThreads() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:721:
>  warning - @param argument "controller" is not a parameter name.
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10859) HStore.openStoreFiles() should pass the StoreFileInfo object to createStoreFileAndReader()

2014-03-28 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951015#comment-13951015
 ] 

Nick Dimiduk commented on HBASE-10859:
--

Further, it establishes a divergent model -- what's understood in the in-memory 
model vs the reality of what's on disk.

> HStore.openStoreFiles() should pass the StoreFileInfo object to 
> createStoreFileAndReader()
> --
>
> Key: HBASE-10859
> URL: https://issues.apache.org/jira/browse/HBASE-10859
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: hbase-10070
>
> Attachments: hbase-10859_v1.patch
>
>
> We sometimes see the following stack trace on test logs (TestReplicasClient), 
> but this is not test-specific:
> {code}
> 2014-03-26 21:44:18,662 ERROR [RS_OPEN_REGION-c64-s12:35852-2] 
> handler.OpenRegionHandler(481): Failed open of 
> region=TestReplicasClient,,1395895445056_0001.5f8b8db27e36d2dde781193d92a05730.,
>  starting to roll back the global memstore size.
> java.io.IOException: java.io.IOException: java.io.FileNotFoundException: File 
> does not exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:739)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:646)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:617)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4447)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4417)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4389)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4345)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4296)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:722)
> Caused by: java.io.IOException: java.io.FileNotFoundException: File does not 
> exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:531)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:486)
>   at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:254)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3357)
>   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:710)
>   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:707)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   ... 3 more
> Caused by: java.io.FileNotFoundException: File does not exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1128)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:397)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.<init>(StoreFileInfo.java:95)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:600)
>   at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:121)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:506)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:503)
>   ... 8 more
> {code}
> The region fails to open fo

[jira] [Commented] (HBASE-10849) Fix increased javadoc warns

2014-03-28 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950973#comment-13950973
 ] 

Anoop Sam John commented on HBASE-10849:


[~jmhsieh], are you fine with the above explanation? If so, we can commit.

> Fix increased javadoc warns 
> 
>
> Key: HBASE-10849
> URL: https://issues.apache.org/jira/browse/HBASE-10849
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10849.patch
>
>
> {code}
> 6 warnings
> [WARNING] Javadoc Warnings
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:338:
>  warning - Tag @link: can't find isa in 
> org.apache.hadoop.hbase.regionserver.HRegionServer
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find openServer() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find startThreads() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find openServer() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find startThreads() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:721:
>  warning - @param argument "controller" is not a parameter name.
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10860) Insufficient AccessController covering permission check

2014-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13950833#comment-13950833
 ] 

Hadoop QA commented on HBASE-10860:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637373/HBASE-10860.patch
  against trunk revision .
  ATTACHMENT ID: 12637373

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 6 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9125//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9125//console

This message is automatically generated.

> Insufficient AccessController covering permission check
> ---
>
> Key: HBASE-10860
> URL: https://issues.apache.org/jira/browse/HBASE-10860
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10860.patch
>
>
> {code}
>   List list = (List)entry.getValue();
>   if (list == null || list.isEmpty()) {
>   get.addFamily(col);
>   } else {
>   for (Cell cell : list) {
> get.addColumn(col, CellUtil.cloneQualifier(cell));
>   }
>   }
> {code}
> When a delete-family Mutation comes in, a Cell is added to the list with a null 
> qualifier (see Delete#deleteFamily(byte[])). Such a list is neither null nor empty, 
> so the check list == null || list.isEmpty() does not add the whole family, and we 
> fail to fetch the cells under this column family for the covering permission check.
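For illustration, here is a minimal sketch of how the covering Get could handle the empty-qualifier delete-family marker. This is not the committed patch; the helper name addCoveringColumns is made up, and the map type is simplified from Mutation's family cell map:
{code}
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Get;

// Hypothetical helper, not the committed patch: an empty-qualifier cell
// (a delete-family marker) is treated as a request for the whole family.
static void addCoveringColumns(Get get, Map<byte[], List<Cell>> familyMap) {
  for (Map.Entry<byte[], List<Cell>> entry : familyMap.entrySet()) {
    byte[] col = entry.getKey();
    List<Cell> list = entry.getValue();
    if (list == null || list.isEmpty()) {
      get.addFamily(col);                       // whole-family mutation
      continue;
    }
    for (Cell cell : list) {
      if (cell.getQualifierLength() == 0) {
        get.addFamily(col);                     // delete-family marker has no qualifier
      } else {
        get.addColumn(col, CellUtil.cloneQualifier(cell));
      }
    }
  }
}
{code}
The real fix may also need to look at the cell type rather than only the qualifier length, since an empty qualifier is also legal for ordinary puts.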



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10862) Update config field names in hbase-default.xml description for hbase.hregion.memstore.block.multiplier

2014-03-28 Thread Albert Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Albert Chu updated HBASE-10862:
---

Status: Patch Available  (was: Open)

No tests added, trivial patch.

> Update config field names in hbase-default.xml description for 
> hbase.hregion.memstore.block.multiplier
> --
>
> Key: HBASE-10862
> URL: https://issues.apache.org/jira/browse/HBASE-10862
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.96.1.1
>Reporter: Albert Chu
>Priority: Trivial
> Attachments: HBASE-10862.patch
>
>
> I noticed several field names in the description for 
> hbase.hregion.memstore.block.multiplier were the old names and not the 
> current ones.  Patch attached.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10850) Unexpected behavior when using filter SingleColumnValueFilter

2014-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951024#comment-13951024
 ] 

Hadoop QA commented on HBASE-10850:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637380/HBASE-10850_V2.patch
  against trunk revision .
  ATTACHMENT ID: 12637380

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 6 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.filter.TestFilterWrapper

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9127//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9127//console

This message is automatically generated.

> Unexpected behavior when using filter SingleColumnValueFilter
> -
>
> Key: HBASE-10850
> URL: https://issues.apache.org/jira/browse/HBASE-10850
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: haosdent
>Priority: Critical
> Attachments: HBASE-10850-96.patch, HBASE-10850.patch, 
> HBASE-10850_V2.patch, HBaseSingleColumnValueFilterTest.java
>
>
> When using the SingleColumnValueFilter, the results can differ depending on which 
> columns are specified in the scan (the filtered column is always specified).
> Here is an example.
> Suppose the following table:
> ||key||a:foo||a:bar||b:foo||b:bar||
> |1|false|_flag_|_flag_|_flag_|
> |2|true|_flag_|_flag_|_flag_|
> |3| |_flag_|_flag_|_flag_|
> With this filter:
> {code}
> SingleColumnValueFilter filter = new 
> SingleColumnValueFilter(Bytes.toBytes("a"), Bytes.toBytes("foo"), 
> CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("false")));
> filter.setFilterIfMissing(true);
> {code}
> Depending on how I specify the list of columns to add to the scan, the result 
> differs. Yet all of the examples below should return only the first row 
> (key '1'):
> OK:
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> scan.addFamily(Bytes.toBytes("b"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> scan.addCo
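The last quoted example is cut off above. For reference, a hedged sketch of the reported scenario in plain Java (the buildScan wrapper is illustrative only); per the report, the scan built this way should return only row '1' even though both families are added:
{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

// Repro sketch based on the report; only row '1' should come back.
static Scan buildScan() {
  SingleColumnValueFilter filter = new SingleColumnValueFilter(
      Bytes.toBytes("a"), Bytes.toBytes("foo"),
      CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("false")));
  filter.setFilterIfMissing(true);     // rows without a:foo must be dropped

  Scan scan = new Scan();
  scan.addFamily(Bytes.toBytes("a"));
  scan.addFamily(Bytes.toBytes("b"));  // adding the second family is what exposes the bug
  scan.setFilter(filter);
  return scan;
}
{code}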

[jira] [Created] (HBASE-10862) Update config field names in hbase-default.xml description for hbase.hregion.memstore.block.multiplier

2014-03-28 Thread Albert Chu (JIRA)
Albert Chu created HBASE-10862:
--

 Summary: Update config field names in hbase-default.xml 
description for hbase.hregion.memstore.block.multiplier
 Key: HBASE-10862
 URL: https://issues.apache.org/jira/browse/HBASE-10862
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.96.1.1
Reporter: Albert Chu
Priority: Trivial


I noticed several field names in the description for 
hbase.hregion.memstore.block.multiplier were the old names and not the current 
ones.  Patch attached.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10863) Scan doesn't return rows for user who has authorization by visibility label

2014-03-28 Thread Ted Yu (JIRA)
Ted Yu created HBASE-10863:
--

 Summary: Scan doesn't return rows for user who has authorization 
by visibility label
 Key: HBASE-10863
 URL: https://issues.apache.org/jira/browse/HBASE-10863
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Ted Yu


In a secure deployment of the 0.98 tip, I did:
as user hbase:
{code}
add_labels 'A'
create 'tb', 'f1'
put 'tb', 'row', 'f1:q', 'v1', {VISIBILITY=>'A'}
set_auths 'oozie', ['A']
{code}
as user oozie:
{code}
hbase(main):001:0> scan 'tb'
ROW  COLUMN+CELL
0 row(s) in 0.1030 seconds
{code}
Here is my config:
{code}
  <property>
    <name>hfile.format.version</name>
    <value>3</value>
  </property>
  <property>
    <name>hbase.coprocessor.master.classes</name>
    <value>org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
  </property>
  <property>
    <name>hbase.coprocessor.region.classes</name>
    <value>org.apache.hadoop.hbase.security.visibility.VisibilityController</value>
  </property>
  <property>
    <name>hbase.regionserver.scan.visibility.label.generator.class</name>
    <value>org.apache.hadoop.hbase.security.visibility.DefaultScanLabelGenerator</value>
  </property>
{code}
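For reference, the equivalent scan issued from the Java client as the granted user might look like the sketch below (the method name and resource handling are illustrative only). With label 'A' granted to the user, the row written with VISIBILITY=>'A' is expected to come back:
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.security.visibility.Authorizations;

// Sketch only: scan 'tb' as the user that was granted label 'A'.
static void scanAsAuthorizedUser() throws IOException {
  Configuration conf = HBaseConfiguration.create();
  HTable table = new HTable(conf, "tb");
  try {
    Scan scan = new Scan();
    scan.setAuthorizations(new Authorizations("A"));  // request label 'A' for this scan
    ResultScanner scanner = table.getScanner(scan);
    for (Result r : scanner) {
      System.out.println(r);                          // expect the row with f1:q = v1
    }
    scanner.close();
  } finally {
    table.close();
  }
}
{code}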



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10862) Update config field names in hbase-default.xml description for hbase.hregion.memstore.block.multiplier

2014-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951145#comment-13951145
 ] 

Hadoop QA commented on HBASE-10862:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637434/HBASE-10862.patch
  against trunk revision .
  ATTACHMENT ID: 12637434

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 6 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestRegionPlacement

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9128//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9128//console

This message is automatically generated.

> Update config field names in hbase-default.xml description for 
> hbase.hregion.memstore.block.multiplier
> --
>
> Key: HBASE-10862
> URL: https://issues.apache.org/jira/browse/HBASE-10862
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.96.1.1
>Reporter: Albert Chu
>Priority: Trivial
> Attachments: HBASE-10862.patch
>
>
> I noticed several field names in the description for 
> hbase.hregion.memstore.block.multiplier were the old names and not the 
> current ones.  Patch attached.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10859) HStore.openStoreFiles() should pass the StoreFileInfo object to createStoreFileAndReader()

2014-03-28 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951201#comment-13951201
 ] 

Enis Soztutar commented on HBASE-10859:
---

Thanks for the reviews. I have a test with a writer, a flush-and-compaction 
requester, and a thread that reads against the secondary. I am trying to get this 
test to fail before the patch to demonstrate the problem, but it seems more robust 
than expected. I'll try to adjust the test.
bq. Further, it establishes a divergent model – what's understood in the 
in-memory model vs the reality of what's on disk.
That is a valid concern. I've thought about making the hfile links concrete. We 
could create region directories for the secondary replicas and save the hfile 
links to the primary replica in those directories. We opted not to do so because 
they would not be required and would put an unnecessary burden on the namenode. 
We can revisit that decision if it becomes a problem.
bq. InMemoryFileLink
Let me see whether I can do a subclass of HFileLink.

> HStore.openStoreFiles() should pass the StoreFileInfo object to 
> createStoreFileAndReader()
> --
>
> Key: HBASE-10859
> URL: https://issues.apache.org/jira/browse/HBASE-10859
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: hbase-10070
>
> Attachments: hbase-10859_v1.patch
>
>
> We sometimes see the following stack trace in test logs (TestReplicasClient), 
> but the issue is not test-specific:
> {code}
> 2014-03-26 21:44:18,662 ERROR [RS_OPEN_REGION-c64-s12:35852-2] 
> handler.OpenRegionHandler(481): Failed open of 
> region=TestReplicasClient,,1395895445056_0001.5f8b8db27e36d2dde781193d92a05730.,
>  starting to roll back the global memstore size.
> java.io.IOException: java.io.IOException: java.io.FileNotFoundException: File 
> does not exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:739)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:646)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:617)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4447)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4417)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4389)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4345)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4296)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:722)
> Caused by: java.io.IOException: java.io.FileNotFoundException: File does not 
> exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:531)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:486)
>   at org.apache.hadoop.hbase.regionserver.HStore.(HStore.java:254)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3357)
>   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:710)
>   at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:707)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   ... 3 more
> Caused by: java.io.FileNotFoundException: File does not exist: 
> hdfs://localhost:56276/user/jenkins/hbase/data/default/TestReplicasClient/856934fb87781c9030975706b66137a5/info/589000f197b048e0897e1d81dd7e3a90
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1128)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(Distribute

[jira] [Updated] (HBASE-10862) Update config field names in hbase-default.xml description for hbase.hregion.memstore.block.multiplier

2014-03-28 Thread Albert Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Albert Chu updated HBASE-10862:
---

Attachment: HBASE-10862.patch

> Update config field names in hbase-default.xml description for 
> hbase.hregion.memstore.block.multiplier
> --
>
> Key: HBASE-10862
> URL: https://issues.apache.org/jira/browse/HBASE-10862
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.96.1.1
>Reporter: Albert Chu
>Priority: Trivial
> Attachments: HBASE-10862.patch
>
>
> I noticed several field names in the description for 
> hbase.hregion.memstore.block.multiplier were the old names and not the 
> current ones.  Patch attached.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10865) Store per table server assignment preference

2014-03-28 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-10865:
-

 Summary: Store per table server assignment preference
 Key: HBASE-10865
 URL: https://issues.apache.org/jira/browse/HBASE-10865
 Project: HBase
  Issue Type: Improvement
  Components: Admin, master, Region Assignment
Affects Versions: 0.89-fb
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 0.89-fb


Storing per table assignment preference in HTD will allow it to be used after 
table creation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10794) multi-get should handle missing replica location from cache

2014-03-28 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-10794:
-

Attachment: HBASE-10794.02.addendum.patch

addendum to the 2nd patch to fix a bug; when upstream patches are committed 
I'll figure out how to combine all this stuff, it will be committed together

> multi-get should handle missing replica location from cache
> ---
>
> Key: HBASE-10794
> URL: https://issues.apache.org/jira/browse/HBASE-10794
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: hbase-10070
>
> Attachments: HBASE-10794.01.patch, HBASE-10794.02.addendum.patch, 
> HBASE-10794.02.patch, HBASE-10794.patch, HBASE-10794.patch
>
>
> Currently the way cache works is that the meta row is stored together for all 
> replicas of a region, so if some replicas are in recovery, getting locations 
> for a region will still go to cache only and return null locations for these. 
> Multi-get currently ignores such replicas. It should instead try to get 
> location again from meta if any replica is null.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-28 Thread Mikhail Antonov (JIRA)
Mikhail Antonov created HBASE-10866:
---

 Summary: Decouple HLogSplitterHandler from ZooKeeper
 Key: HBASE-10866
 URL: https://issues.apache.org/jira/browse/HBASE-10866
 Project: HBase
  Issue Type: Improvement
  Components: regionserver, Zookeeper
Reporter: Mikhail Antonov


As some sort of follow-up or initial step towards HBASE-10296...

Whatever consensus algorithm/library ends up being chosen, one of the first 
practical steps towards this goal would be to better abstract the ZK-related API 
and details, which are now spread throughout the codebase (mostly leaked through 
ZkUtil, ZooKeeperWatcher and listeners).

I'd like to propose a series of patches to help better abstract out zookeeper 
(and then help develop consensus APIs). 
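
As a rough illustration of the kind of abstraction being proposed (all names below are hypothetical, not the API in the attached patch), the handler would depend on a small coordination interface instead of on ZooKeeperWatcher/ZkUtil directly:
{code}
// Hypothetical names, for illustration only.
/** What a log-split worker needs from the coordination layer. */
interface SplitTaskCoordination {
  void taskDone(String taskId);                 // mark the split task finished
  void taskFailed(String taskId, Exception e);  // report failure so the task can be retried
  void heartbeat(String taskId);                // keep ownership of the task alive
}

/** The handler talks only to the interface, so ZooKeeper can be swapped out. */
class LogSplitHandler {
  private final SplitTaskCoordination coordination;

  LogSplitHandler(SplitTaskCoordination coordination) {
    this.coordination = coordination;
  }

  void process(String taskId) {
    try {
      // ... actually split the log here ...
      coordination.taskDone(taskId);
    } catch (Exception e) {
      coordination.taskFailed(taskId, e);
    }
  }
}
{code}
A ZK-backed implementation and an in-memory one for tests could then both implement the same interface.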



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-28 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10866:


Attachment: HBASE-10866.patch

First version for review.

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library ends up being chosen, one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out zookeeper 
> (and then help develop consensus APIs). 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails

2014-03-28 Thread Ted Yu (JIRA)
Ted Yu created HBASE-10867:
--

 Summary: TestRegionPlacement#testRegionPlacement occasionally fails
 Key: HBASE-10867
 URL: https://issues.apache.org/jira/browse/HBASE-10867
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: 10867-v1.txt

From 
https://builds.apache.org/job/HBase-TRUNK/5047/testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/
 :
{code}
java.lang.ArrayIndexOutOfBoundsException: 10
at 
java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:368)
at 
java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:377)
at 
org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224)
at 
org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609)
at 
org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303)
at 
org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270)
{code}
In the setup:
{code}
TEST_UTIL.startMiniCluster(SLAVES);
{code}
where SLAVES is 10, so valid region server indexes are 0..9.
When 10 was passed as killIndex to TEST_UTIL.getHBaseCluster().getRegionServer(killIndex), 
we got an ArrayIndexOutOfBoundsException.
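A minimal sketch of the kind of bound the test needs (not the attached patch; TEST_UTIL and SLAVES are the fields from the test class referenced above):
{code}
// Keep the index of the server to kill inside [0, SLAVES), e.g. inside the test:
int killIndex = new java.util.Random().nextInt(SLAVES);               // never equals SLAVES
HRegionServer serverToKill = TEST_UTIL.getHBaseCluster().getRegionServer(killIndex);
{code}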



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails

2014-03-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10867:
---

Attachment: 10867-v1.txt

> TestRegionPlacement#testRegionPlacement occasionally fails
> --
>
> Key: HBASE-10867
> URL: https://issues.apache.org/jira/browse/HBASE-10867
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 10867-v1.txt
>
>
> From 
> https://builds.apache.org/job/HBase-TRUNK/5047/testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/
>  :
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 10
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:368)
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:377)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270)
> {code}
> In the setup:
> {code}
> TEST_UTIL.startMiniCluster(SLAVES);
> {code}
> where SLAVES is 10.
> So when 10 was used in 
> TEST_UTIL.getHBaseCluster().getRegionServer(killIndex), we would get 
> ArrayIndexOutOfBoundsException.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails

2014-03-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10867:
---

Status: Patch Available  (was: Open)

> TestRegionPlacement#testRegionPlacement occasionally fails
> --
>
> Key: HBASE-10867
> URL: https://issues.apache.org/jira/browse/HBASE-10867
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 10867-v1.txt
>
>
> From 
> https://builds.apache.org/job/HBase-TRUNK/5047/testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/
>  :
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 10
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:368)
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:377)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270)
> {code}
> In the setup:
> {code}
> TEST_UTIL.startMiniCluster(SLAVES);
> {code}
> where SLAVES is 10.
> So when 10 was used in 
> TEST_UTIL.getHBaseCluster().getRegionServer(killIndex), we would get 
> ArrayIndexOutOfBoundsException.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10847) 0.94: drop non-secure builds, make security the default

2014-03-28 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951443#comment-13951443
 ] 

Lars Hofhansl commented on HBASE-10847:
---

Talked to Mr. [~jesse_yates]. I will just move the add-source portion of the build 
from under the secure profile into the global build. Running all tests now; so far 
so good.
Will post a patch once I've confirmed the test suite passes.

> 0.94: drop non-secure builds, make security the default
> ---
>
> Key: HBASE-10847
> URL: https://issues.apache.org/jira/browse/HBASE-10847
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 0.94.19
>
>
> I would like to only create a single 0.94 tarball/release that contains the 
> security code - and drop the non-secure tarballs and releases.
> Let's discuss...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-28 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10866:


Description: 
As some sort of follow-up or initial step towards HBASE-10296...

Whatever consensus algorithm/library ends up being chosen, one of the first 
practical steps towards this goal would be to better abstract the ZK-related API 
and details, which are now spread throughout the codebase (mostly leaked through 
ZkUtil, ZooKeeperWatcher and listeners).

I'd like to propose a series of patches to help better abstract out zookeeper 
(and then help develop consensus APIs). 

Here is the first version of the patch for initial review (then I'm planning to 
work on other handlers in the regionserver, and then perhaps start working on 
abstracting listeners).

  was:
As some sort of follow-up or initial step towards HBASE-10296...

Whatever consensus algorithm/library ends up being chosen, one of the first 
practical steps towards this goal would be to better abstract the ZK-related API 
and details, which are now spread throughout the codebase (mostly leaked through 
ZkUtil, ZooKeeperWatcher and listeners).

I'd like to propose a series of patches to help better abstract out zookeeper 
(and then help develop consensus APIs). 


> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library ends up being chosen, one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out zookeeper 
> (and then help develop consensus APIs). 
> Here is the first version of the patch for initial review (then I'm planning to 
> work on other handlers in the regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10868) TestAtomicOperation should close HRegion instance after each subtest

2014-03-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10868:
---

Status: Patch Available  (was: Open)

> TestAtomicOperation should close HRegion instance after each subtest
> 
>
> Key: HBASE-10868
> URL: https://issues.apache.org/jira/browse/HBASE-10868
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10868-v1.txt
>
>
> Jeff Bowles noted a high number of open file handles after TestAtomicOperation 
> was run.
> TestAtomicOperation uses a single class variable to hold the HRegion instance, 
> which is initialized at the beginning of every sub-test but never closed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10868) TestAtomicOperation should close HRegion instance after each subtest

2014-03-28 Thread Ted Yu (JIRA)
Ted Yu created HBASE-10868:
--

 Summary: TestAtomicOperation should close HRegion instance after 
each subtest
 Key: HBASE-10868
 URL: https://issues.apache.org/jira/browse/HBASE-10868
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10868-v1.txt

Jeff Bowles noted a high number of open file handles after TestAtomicOperation 
was run.

TestAtomicOperation uses a single class variable to hold the HRegion instance, 
which is initialized at the beginning of every sub-test but never closed.
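
A minimal sketch of the cleanup the summary calls for (not necessarily the attached patch; it assumes the HRegion is held in a field named region):
{code}
import org.junit.After;

// Close the region held in the class field after every subtest so its
// store file handles and other resources are released.
@After
public void closeRegion() throws Exception {
  if (region != null) {
    region.close();
    region = null;
  }
}
{code}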



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10868) TestAtomicOperation should close HRegion instance after each subtest

2014-03-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10868:
---

Attachment: 10868-v1.txt

> TestAtomicOperation should close HRegion instance after each subtest
> 
>
> Key: HBASE-10868
> URL: https://issues.apache.org/jira/browse/HBASE-10868
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10868-v1.txt
>
>
> Jeff Bowles noted a high number of open file handles after TestAtomicOperation 
> was run.
> TestAtomicOperation uses a single class variable to hold the HRegion instance, 
> which is initialized at the beginning of every sub-test but never closed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10809) HBaseAdmin#deleteTable fails when META region happen to move around same time

2014-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951491#comment-13951491
 ] 

Hudson commented on HBASE-10809:


FAILURE: Integrated in HBase-0.98 #255 (See 
[https://builds.apache.org/job/HBase-0.98/255/])
HBASE-10809: HBaseAdmin#deleteTable fails when META region happen to move 
around same time (jeffreyz: rev 1582842)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRowProcessorEndpoint.java


> HBaseAdmin#deleteTable fails when META region happen to move around same time
> -
>
> Key: HBASE-10809
> URL: https://issues.apache.org/jira/browse/HBASE-10809
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.96.1, 0.99.0
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.99.0, 0.98.2, 0.96.3
>
> Attachments: hbase-10809-v1.patch, hbase-10809.patch, 
> hbase-10809.patch
>
>
> The issue is that in the retry loop we never refetch the latest meta 
> location. So if meta starts to move right after the function gets the meta server 
> location, the delete table call may eventually fail with 
> {code}org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 
> is not online{code}
> Below is the stack trace:
> {code}
> 2014-01-31 04:02:41,943|beaver.machine|INFO|Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.NotServingRegionException):
>  org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is 
> not online
> 2014-01-31 04:02:41,943|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2585)
> 2014-01-31 04:02:41,943|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3952)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:2977)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26929)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
> 2014-01-31 04:02:41,945|beaver.machine|INFO|
> 2014-01-31 04:02:41,945|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1449)
> 2014-01-31 04:02:41,945|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:27332)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.client.HBaseAdmin.deleteTable(HBaseAdmin.java:648)
> {code}
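A hypothetical sketch of the shape of the fix described in the report above; locateMetaServer() and scanMetaForTable() are illustrative placeholders, not HBaseAdmin methods:
{code}
import org.apache.hadoop.hbase.NotServingRegionException;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;

// Illustration only: relocate meta inside the retry loop instead of reusing
// the location that was fetched once before the loop started.
static void deleteTableWithRetries(TableName tableName, int numRetries) throws Exception {
  for (int tries = 0; tries < numRetries; tries++) {
    ServerName metaServer = locateMetaServer();   // refetch the location on every attempt
    try {
      scanMetaForTable(metaServer, tableName);    // the call that used to hit a stale location
      return;
    } catch (NotServingRegionException e) {
      // meta moved between attempts; loop and look up its location again
    }
  }
  throw new Exception("meta kept moving; gave up after " + numRetries + " attempts");
}
{code}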



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10809) HBaseAdmin#deleteTable fails when META region happen to move around same time

2014-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951512#comment-13951512
 ] 

Hudson commented on HBASE-10809:


FAILURE: Integrated in HBase-TRUNK #5048 (See 
[https://builds.apache.org/job/HBase-TRUNK/5048/])
HBASE-10809: HBaseAdmin#deleteTable fails when META region happen to move 
around same time (jeffreyz: rev 1582841)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRowProcessorEndpoint.java


> HBaseAdmin#deleteTable fails when META region happen to move around same time
> -
>
> Key: HBASE-10809
> URL: https://issues.apache.org/jira/browse/HBASE-10809
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.96.1, 0.99.0
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.99.0, 0.98.2, 0.96.3
>
> Attachments: hbase-10809-v1.patch, hbase-10809.patch, 
> hbase-10809.patch
>
>
> The issue is that in the retry loop we never refetch the latest meta 
> location. So if meta starts to move right after the function gets the meta server 
> location, the delete table call may eventually fail with 
> {code}org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 
> is not online{code}
> Below is the stack trace:
> {code}
> 2014-01-31 04:02:41,943|beaver.machine|INFO|Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.NotServingRegionException):
>  org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is 
> not online
> 2014-01-31 04:02:41,943|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2585)
> 2014-01-31 04:02:41,943|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3952)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:2977)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26929)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
> 2014-01-31 04:02:41,945|beaver.machine|INFO|
> 2014-01-31 04:02:41,945|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1449)
> 2014-01-31 04:02:41,945|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:27332)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.client.HBaseAdmin.deleteTable(HBaseAdmin.java:648)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10849) Fix increased javadoc warns

2014-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951511#comment-13951511
 ] 

Hudson commented on HBASE-10849:


FAILURE: Integrated in HBase-TRUNK #5048 (See 
[https://builds.apache.org/job/HBase-TRUNK/5048/])
HBASE-10849 Fix increased javadoc warns.(Anoop) (anoopsamjohn: rev 1582832)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java


> Fix increased javadoc warns 
> 
>
> Key: HBASE-10849
> URL: https://issues.apache.org/jira/browse/HBASE-10849
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10849.patch
>
>
> {code}
> 6 warnings
> [WARNING] Javadoc Warnings
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:338:
>  warning - Tag @link: can't find isa in 
> org.apache.hadoop.hbase.regionserver.HRegionServer
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find openServer() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find startThreads() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find openServer() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java:45:
>  warning - Tag @link: can't find startThreads() in 
> org.apache.hadoop.hbase.ipc.RpcServerInterface
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:721:
>  warning - @param argument "controller" is not a parameter name.
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-28 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-10866:
--

Status: Patch Available  (was: Open)

Trying it against Hadoop QA. So far it looks good [~mantonov].

Packaging-wise, I'd think a consensus package would live at o.a.h.h rather than at 
o.a.h.h.regionserver.consensus... won't consensus have facilities more general 
than the regionserver? Or do you think there will be a master consensus subpackage 
too, and that master and regionserver will never share Interfaces?

Nit: See other Interfaces in hbase.  They leave off the 'public'.

Is this all to address log splitting?  Is there not a master side too?

Looks good so far.

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library ends up being chosen, one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out zookeeper 
> (and then help develop consensus APIs). 
> Here is the first version of the patch for initial review (then I'm planning to 
> work on other handlers in the regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10809) HBaseAdmin#deleteTable fails when META region happen to move around same time

2014-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951516#comment-13951516
 ] 

Hudson commented on HBASE-10809:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #239 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/239/])
HBASE-10809: HBaseAdmin#deleteTable fails when META region happen to move 
around same time (jeffreyz: rev 1582842)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRowProcessorEndpoint.java


> HBaseAdmin#deleteTable fails when META region happen to move around same time
> -
>
> Key: HBASE-10809
> URL: https://issues.apache.org/jira/browse/HBASE-10809
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.96.1, 0.99.0
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.99.0, 0.98.2, 0.96.3
>
> Attachments: hbase-10809-v1.patch, hbase-10809.patch, 
> hbase-10809.patch
>
>
> The issue is that in the retry loop we never refetch the latest meta 
> location. So if meta starts to move right after the function gets the meta server 
> location, the delete table call may eventually fail with 
> {code}org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 
> is not online{code}
> Below is the stack trace:
> {code}
> 2014-01-31 04:02:41,943|beaver.machine|INFO|Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.NotServingRegionException):
>  org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is 
> not online
> 2014-01-31 04:02:41,943|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2585)
> 2014-01-31 04:02:41,943|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3952)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:2977)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26929)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
> 2014-01-31 04:02:41,945|beaver.machine|INFO|
> 2014-01-31 04:02:41,945|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1449)
> 2014-01-31 04:02:41,945|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:27332)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.client.HBaseAdmin.deleteTable(HBaseAdmin.java:648)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-8963) Add configuration option to skip HFile archiving

2014-03-28 Thread bharath v (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951534#comment-13951534
 ] 

bharath v commented on HBASE-8963:
--

[~lhofhansl] Sorry for the delay here. I was traveling this whole week and didn't 
get a chance to work on this jira. That sounds nice, but it seems to require a bit 
of code refactoring. With the current code base it can be achieved by configuring 
TimeToLiveHFileCleaner and setting "hbase.master.hfilecleaner.ttl" accordingly, 
though that is a global setting.
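
For reference, a sketch of that global workaround (assuming the standard hbase.master.hfilecleaner.plugins key; in a real deployment these would go into the master's hbase-site.xml, and the TTL value below is only an example):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Illustration only: register the TTL-based HFile cleaner and set its TTL.
static Configuration hfileCleanerTtlConf() {
  Configuration conf = HBaseConfiguration.create();
  conf.set("hbase.master.hfilecleaner.plugins",
      "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner");
  conf.setLong("hbase.master.hfilecleaner.ttl", 60000L);  // example value only
  return conf;
}
{code}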

> Add configuration option to skip HFile archiving
> 
>
> Key: HBASE-8963
> URL: https://issues.apache.org/jira/browse/HBASE-8963
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: bharath v
> Fix For: 0.99.0
>
> Attachments: HBASE-8963.trunk.v1.patch, HBASE-8963.trunk.v2.patch, 
> HBASE-8963.trunk.v3.patch, HBASE-8963.trunk.v4.patch, 
> HBASE-8963.trunk.v5.patch, HBASE-8963.trunk.v6.patch, 
> HBASE-8963.trunk.v7.patch
>
>
> Currently HFileArchiver is always called when a table is dropped.
> A configuration option (either global or per table) should be provided so 
> that archiving can be skipped when a table is deleted.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-03-28 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951541#comment-13951541
 ] 

Konstantin Boudnik commented on HBASE-10866:


Thanks for the initial feedback, Stack! I think the main reason for doing this 
separation one handler at a time, and just on the RS side, is to limit the scope 
of the patches and make them easier to digest. Would you suggest otherwise?

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
> Attachments: HBASE-10866.patch
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library ends up being chosen, one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZkUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out zookeeper 
> (and then help develop consensus APIs). 
> Here is the first version of the patch for initial review (then I'm planning to 
> work on other handlers in the regionserver, and then perhaps start working on 
> abstracting listeners).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10809) HBaseAdmin#deleteTable fails when META region happen to move around same time

2014-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951544#comment-13951544
 ] 

Hudson commented on HBASE-10809:


FAILURE: Integrated in hbase-0.96 #371 (See 
[https://builds.apache.org/job/hbase-0.96/371/])
HBASE-10809: HBaseAdmin#deleteTable fails when META region happen to move 
around same time (jeffreyz: rev 1582844)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRowProcessorEndpoint.java


> HBaseAdmin#deleteTable fails when META region happen to move around same time
> -
>
> Key: HBASE-10809
> URL: https://issues.apache.org/jira/browse/HBASE-10809
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.96.1, 0.99.0
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.99.0, 0.98.2, 0.96.3
>
> Attachments: hbase-10809-v1.patch, hbase-10809.patch, 
> hbase-10809.patch
>
>
> The issue is that in the retry loop we never refetch the latest meta 
> location. So if meta starts to move right after the function gets the meta server 
> location, the delete table call may eventually fail with 
> {code}org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 
> is not online{code}
> Below is the stack trace:
> {code}
> 2014-01-31 04:02:41,943|beaver.machine|INFO|Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.NotServingRegionException):
>  org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is 
> not online
> 2014-01-31 04:02:41,943|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2585)
> 2014-01-31 04:02:41,943|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3952)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:2977)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26929)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
> 2014-01-31 04:02:41,944|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
> 2014-01-31 04:02:41,945|beaver.machine|INFO|
> 2014-01-31 04:02:41,945|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1449)
> 2014-01-31 04:02:41,945|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:27332)
> 2014-01-31 04:02:41,946|beaver.machine|INFO|at 
> org.apache.hadoop.hbase.client.HBaseAdmin.deleteTable(HBaseAdmin.java:648)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6990) Pretty print TTL

2014-03-28 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951550#comment-13951550
 ] 

Esteban Gutierrez commented on HBASE-6990:
--

[~kevin.odell] Ping, are you working on this?

> Pretty print TTL
> 
>
> Key: HBASE-6990
> URL: https://issues.apache.org/jira/browse/HBASE-6990
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.6, 0.95.0
>Reporter: Jean-Daniel Cryans
>Assignee: Kevin Odell
>Priority: Minor
>
> I've seen a lot of users getting confused by the TTL configuration and I 
> think that if we just pretty-printed it, that would solve most of the issues. 
> For example, let's say a user wanted to set a TTL of 90 days. That would be 
> 7776000. But let's say that it was typo'd to 77760000 instead: that gives you 
> 900 days!
> So when we print the TTL we could do something like "x days, x hours, x 
> minutes, x seconds (real_ttl_value)". This would also help people when they 
> use ms instead of seconds, as they would then see really big values in there.
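A rough sketch of that kind of pretty-printing (the helper name is made up), assuming the TTL is stored in seconds:
{code}
// 7776000 -> "90 days 0 hours 0 minutes 0 seconds (7776000)"
static String prettyPrintTtl(long ttlSeconds) {
  long days = ttlSeconds / 86400;
  long hours = (ttlSeconds % 86400) / 3600;
  long minutes = (ttlSeconds % 3600) / 60;
  long seconds = ttlSeconds % 60;
  return days + " days " + hours + " hours " + minutes + " minutes "
      + seconds + " seconds (" + ttlSeconds + ")";
}
{code}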



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-8889) TestIOFencing#testFencingAroundCompaction occasionally fails

2014-03-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8889:
--

   Resolution: Fixed
Fix Version/s: (was: 1.0.0)
   0.99.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

> TestIOFencing#testFencingAroundCompaction occasionally fails
> 
>
> Key: HBASE-8889
> URL: https://issues.apache.org/jira/browse/HBASE-8889
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 0.99.0
>
> Attachments: 8889-v1.txt, TestIOFencing-#8362.tar.gz, 
> TestIOFencing.tar.gz
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/6232//testReport/org.apache.hadoop.hbase/TestIOFencing/testFencingAroundCompaction/
>  :
> {code}
> java.lang.AssertionError: Timed out waiting for new server to open region
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hbase.TestIOFencing.doTest(TestIOFencing.java:269)
>   at 
> org.apache.hadoop.hbase.TestIOFencing.testFencingAroundCompaction(TestIOFencing.java:205)
> {code}
> {code}
> 2013-07-06 23:13:53,120 INFO  [pool-1-thread-1] hbase.TestIOFencing(266): 
> Waiting for the new server to pick up the region 
> tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03.
> 2013-07-06 23:13:54,120 INFO  [pool-1-thread-1] hbase.TestIOFencing(266): 
> Waiting for the new server to pick up the region 
> tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03.
> 2013-07-06 23:13:55,121 DEBUG [pool-1-thread-1] 
> hbase.TestIOFencing$CompactionBlockerRegion(102): allowing compactions
> 2013-07-06 23:13:55,121 INFO  [pool-1-thread-1] 
> hbase.HBaseTestingUtility(911): Shutting down minicluster
> 2013-07-06 23:13:55,121 DEBUG [pool-1-thread-1] util.JVMClusterUtil(237): 
> Shutting down HBase Cluster
> 2013-07-06 23:13:55,121 INFO  
> [RS:0;asf002:39065-smallCompactions-1373152134716] regionserver.HStore(951): 
> Starting compaction of 2 file(s) in family of 
> tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03. into 
> tmpdir=hdfs://localhost:50140/user/jenkins/hbase/tabletest/6e62d3b24ea23160931362b60359ff03/.tmp,
>  totalSize=108.4k
> ...
> 2013-07-06 23:13:55,155 INFO  [RS:0;asf002:39065] 
> regionserver.HRegionServer(2476): Received CLOSE for the region: 
> 6e62d3b24ea23160931362b60359ff03 ,which we are already trying to CLOSE
> 2013-07-06 23:13:55,157 WARN  [RS:0;asf002:39065] 
> regionserver.HRegionServer(2414): Failed to close 
> tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03. - ignoring and 
> continuing
> org.apache.hadoop.hbase.exceptions.NotServingRegionException: The region 
> 6e62d3b24ea23160931362b60359ff03 was already closing. New CLOSE request is 
> ignored.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:2479)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegionIgnoreErrors(HRegionServer.java:2409)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.closeUserRegions(HRegionServer.java:2011)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:903)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:158)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:142)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:337)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1131)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41)
>   at org.apache.hadoop.hbase.security.User.call(User.java:420)
>   at org.apache.hadoop.hbase.security.User.access$300(User.java:51)
>   at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:260)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:140)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10815) Master regionserver should be rolling-upgradable

2014-03-28 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10815:


Attachment: hbase-10815_v2.patch

Attached v2, which uses a jetty server to redirect requests from the old master 
info server port to the new port.

> Master regionserver should be rolling-upgradable
> 
>
> Key: HBASE-10815
> URL: https://issues.apache.org/jira/browse/HBASE-10815
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, Region Assignment
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.99.0
>
> Attachments: hbase-10815.patch, hbase-10815_v2.patch
>
>
> In HBASE-10569, two things could affect the rolling-upgrade from a 0.96+ 
> release:
> * The master doesn't have its own info server any more. It shares the same 
> info server with the regionserver. We can have a setting so that we can start 
> two info servers, one for the master on the original port, and one for the 
> regionserver.
> * The backup master is a regionserver now, so it could hold regions. This could 
> affect some deployments. We can have a setting so that we can prevent the 
> backup master from serving any region.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10815) Master regionserver should be rolling-upgradable

2014-03-28 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10815:


Status: Patch Available  (was: Open)

> Master regionserver should be rolling-upgradable
> 
>
> Key: HBASE-10815
> URL: https://issues.apache.org/jira/browse/HBASE-10815
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, Region Assignment
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.99.0
>
> Attachments: hbase-10815.patch, hbase-10815_v2.patch
>
>
> In HBASE-10569, two things could affect the rolling-upgrade from a 0.96+ 
> release:
> * The master doesn't have its own info server any more. It shares the same 
> info server with the regionserver. We can have a setting so that we can start 
> two info servers, one for the master on the original port, and one for the 
> regionserver.
> * The backup master is a regionserver now, so it could hold regions. This could 
> affect some deployments. We can have a setting so that we can prevent the 
> backup master from serving any region.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10867) TestRegionPlacement#testRegionPlacement occasionally fails

2014-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951557#comment-13951557
 ] 

Hadoop QA commented on HBASE-10867:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12637551/10867-v1.txt
  against trunk revision .
  ATTACHMENT ID: 12637551

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testLogRollOnDatanodeDeath(TestLogRolling.java:368)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9130//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9130//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9130//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9130//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9130//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9130//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9130//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9130//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9130//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9130//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9130//console

This message is automatically generated.

> TestRegionPlacement#testRegionPlacement occasionally fails
> --
>
> Key: HBASE-10867
> URL: https://issues.apache.org/jira/browse/HBASE-10867
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 10867-v1.txt
>
>
> From 
> https://builds.apache.org/job/HBase-TRUNK/5047/testReport/org.apache.hadoop.hbase.master/TestRegionPlacement/testRegionPlacement/
>  :
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 10
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:368)
>   at 
> java.util.concurrent.CopyOnWriteArrayList.get(CopyOnWriteArrayList.java:377)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.getRegionServer(LocalHBaseCluster.java:224)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.getRegionServer(MiniHBaseCluster.java:609)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.killRandomServerAndVerifyAssignment(TestRegionPlacement.java:303)
>   at 
> org.apache.hadoop.hbase.master.TestRegionPlacement.testRegionPlacement(TestRegionPlacement.java:270)
> {code}
> In the setup:
> {code}
> TEST_UTIL.startMiniCluster(SLAVES);
> {code}
> where SLAVES is 10.
> So when 10 was used as killIndex in 
> TEST_UTIL.getHBaseCluster().getRegionServer(killIndex), we would get an 
> ArrayIndexOutOfBoundsException, since valid indices are 0 through 9.
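
As a sketch of the kind of guard the test needs (names below are assumptions, not 
the contents of 10867-v1.txt), the kill index should be drawn from 0..SLAVES-1 so 
getRegionServer() is never called with an out-of-range index:

{code}
import java.util.Random;

public class KillIndexSketch {
  private static final int SLAVES = 10;

  public static void main(String[] args) {
    Random rand = new Random();
    // Valid region server indices are 0..SLAVES-1; nextInt(SLAVES) never returns
    // SLAVES itself, so the out-of-bounds access above cannot happen.
    int killIndex = rand.nextInt(SLAVES);
    System.out.println("Would kill region server at index " + killIndex);
  }
}
{code}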



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10847) 0.94: drop non-secure builds, make security the default

2014-03-28 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951565#comment-13951565
 ] 

Lars Hofhansl commented on HBASE-10847:
---

Ran all tests: 
{code}
Results :
Tests run: 728, Failures: 0, Errors: 0, Skipped: 0
...
Results :
Tests run: 1482, Failures: 0, Errors: 0, Skipped: 13

[INFO] 
[INFO] BUILD SUCCESS
[INFO] -
{code}

Note that we now run 1482 tests (instead of 1422); the additional ones are the 
security-related tests.

> 0.94: drop non-secure builds, make security the default
> ---
>
> Key: HBASE-10847
> URL: https://issues.apache.org/jira/browse/HBASE-10847
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 0.94.19
>
>
> I would like to only create a single 0.94 tarball/release that contains the 
> security code - and drop the non-secure tarballs and releases.
> Let's discuss...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10847) 0.94: drop non-secure builds, make security the default

2014-03-28 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10847:
--

Attachment: 10847.txt

Here's the simple patch. (As described above, I opted for the maven trickery 
rather than moving all the files into place under the src directory.)

[~stack], [~apurtell], [~ghelmling], [~jesse_yates], please have a quick look. 
Thanks.

> 0.94: drop non-secure builds, make security the default
> ---
>
> Key: HBASE-10847
> URL: https://issues.apache.org/jira/browse/HBASE-10847
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 0.94.19
>
> Attachments: 10847.txt
>
>
> I would like to only create a single 0.94 tarball/release that contains the 
> security code - and drop the non-secure tarballs and releases.
> Let's discuss...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10864) Spelling nit

2014-03-28 Thread Alex Newman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Newman updated HBASE-10864:


Attachment: 0001-Spelling-fix.patch

> Spelling nit
> 
>
> Key: HBASE-10864
> URL: https://issues.apache.org/jira/browse/HBASE-10864
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Newman
>Assignee: Alex Newman
>Priority: Trivial
> Attachments: 0001-Spelling-fix.patch
>
>
> We should really be more careful about how we spell "qualifier".



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10864) Spelling nit

2014-03-28 Thread Alex Newman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Newman updated HBASE-10864:


Attachment: (was: 
0001-HBASE-10715.-TimeRange-has-a-poorly-formatted-error-.patch)

> Spelling nit
> 
>
> Key: HBASE-10864
> URL: https://issues.apache.org/jira/browse/HBASE-10864
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Newman
>Assignee: Alex Newman
>Priority: Trivial
> Attachments: 0001-Spelling-fix.patch
>
>
> We should really be more careful about how we spell "qualifier".



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10864) Spelling nit

2014-03-28 Thread Alex Newman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951572#comment-13951572
 ] 

Alex Newman commented on HBASE-10864:
-

Oh dear, I realized what the problem was: I plopped the wrong patch file in. 
The other one fixes something like 30 spelling mistakes. It may still be too 
minor.

> Spelling nit
> 
>
> Key: HBASE-10864
> URL: https://issues.apache.org/jira/browse/HBASE-10864
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Newman
>Assignee: Alex Newman
>Priority: Trivial
> Attachments: 0001-Spelling-fix.patch
>
>
> We should really be more careful about how we spell "qualifier".



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-10847) 0.94: drop non-secure builds, make security the default

2014-03-28 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned HBASE-10847:
-

Assignee: Lars Hofhansl

> 0.94: drop non-secure builds, make security the default
> ---
>
> Key: HBASE-10847
> URL: https://issues.apache.org/jira/browse/HBASE-10847
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.19
>
> Attachments: 10847.txt
>
>
> I would like to only create a single 0.94 tarball/release that contains the 
> security code - and drop the non-secure tarballs and releases.
> Let's discuss...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10815) Master regionserver should be rolling-upgradable

2014-03-28 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951575#comment-13951575
 ] 

stack commented on HBASE-10815:
---

Are we unable to add a new connector to our infoserver instance to listen on 
another port?  E.g. 
http://stackoverflow.com/questions/6905098/how-to-configure-jetty-to-listen-to-multiple-ports
  Can we not add this connector:

{code}
+masterJettyServer.addConnector(connector);
{code}

to the existing instance?

I'd say add "hbase.master.infoserver.redirect" to hbase-default.xml and to the 
release notes.  Turn it off in tests?

This is good: +masterJettyServer.setStopAtShutdown(true);

Is it turned off for tests?

This is good too:

-  conf.getInt("hbase.regionservers", 1), LocalHMaster.class, 
HRegionServer.class);
+  conf.getInt("hbase.regionservers", 0), LocalHMaster.class, 
HRegionServer.class);



I'd think that this would be true by default?

+usingBackupMasters = conf.getBoolean("hbase.balancer.use-backupmaster", 
false);

...and is the new master server on by default?  I'd hope so.

Is this right?

+if (st == null || usingBackupMasters) return;

We return if we are using backup masters?... or am I misreading what public 
void setClusterStatus(ClusterStatus st) { is about?

What is happening here?

 masterServerName = masterServices.getServerName();
+excludedServers.remove(masterServerName);

What will the default setup be after this patch goes in?  Are backup masters 
used or not?  Will the master regionserver be up by default or not?

Should we be getting this from zk?

+int masterInfoPort = conf.getInt(HConstants.MASTER_INFO_PORT,
+  HConstants.DEFAULT_MASTER_INFOPORT);
+conf.setInt("hbase.master.info.port.orig", masterInfoPort);

Patch looks good.





> Master regionserver should be rolling-upgradable
> 
>
> Key: HBASE-10815
> URL: https://issues.apache.org/jira/browse/HBASE-10815
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, Region Assignment
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.99.0
>
> Attachments: hbase-10815.patch, hbase-10815_v2.patch
>
>
> In HBASE-10569, two things could affect the rolling-upgrade from a 0.96+ 
> release:
> * The master doesn't have its own info server any more. It shares the same 
> info server with the regionserver. We can have a setting so that we can start 
> two info servers, one for the master on the original port, and one for the 
> regionserver.
> * The backup master is a regionserver now, so it could hold regions. This could 
> affect some deployments. We can have a setting so that we can prevent the 
> backup master from serving any region.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10847) 0.94: drop non-secure builds, make security the default

2014-03-28 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951602#comment-13951602
 ] 

Jesse Yates commented on HBASE-10847:
-

Wouldn't using this: 
http://maven.apache.org/plugins/maven-resources-plugin/examples/include-exclude.html
 be cleaner?

> 0.94: drop non-secure builds, make security the default
> ---
>
> Key: HBASE-10847
> URL: https://issues.apache.org/jira/browse/HBASE-10847
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.19
>
> Attachments: 10847.txt
>
>
> I would like to only create a single 0.94 tarball/release that contains the 
> security code - and drop the non-secure tarballs and releases.
> Let's discuss...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10847) 0.94: drop non-secure builds, make security the default

2014-03-28 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951612#comment-13951612
 ] 

Jesse Yates commented on HBASE-10847:
-

Nm, I was wrong. Thanks [~lhofhansl] for calling me on it.

+1. Maybe add a comment, on commit, explaining why they are separate.

> 0.94: drop non-secure builds, make security the default
> ---
>
> Key: HBASE-10847
> URL: https://issues.apache.org/jira/browse/HBASE-10847
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.19
>
> Attachments: 10847.txt
>
>
> I would like to only create a single 0.94 tarball/release that contains the 
> security code - and drop the non-secure tarballs and releases.
> Let's discuss...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10815) Master regionserver should be rolling-upgradable

2014-03-28 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951614#comment-13951614
 ] 

Jimmy Xiang commented on HBASE-10815:
-

I uploaded the patch to RB, which is convenient when it works. 
bq. Are we unable to add a new connector to our infoserver instance to listen on 
another port?
Our infoserver extends the hadoop HttpServer. I could not find a way to add a 
new connector. With a vanilla jetty server, it's very easy to do.
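
For illustration only, here is what adding a second listening port to a plain 
Jetty server looks like (Jetty 9-style API assumed; this is a sketch, not the 
infoserver/HttpServer code discussed above):

{code}
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class MultiPortJettySketch {
  public static void main(String[] args) throws Exception {
    // Example ports only, not HBase's actual defaults.
    Server server = new Server(8080);

    // Second connector so the same server also answers on another port.
    ServerConnector extra = new ServerConnector(server);
    extra.setPort(8081);
    server.addConnector(extra);

    server.start();
    server.join();
  }
}
{code}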
bq. I'd say add "hbase.master.infoserver.redirect" to hbase-default.xml and to 
the release notes. Turn it off in tests?
Sure, will add it to hbase-default.xml. We don't need to turn it off; both the 
master and regionserver web UIs are already off for tests.
{quote}
I'd think that this would be true by default?
+ usingBackupMasters = conf.getBoolean("hbase.balancer.use-backupmaster", 
false);
...and is the new master server on by default? I'd hope so.
{quote}
By default, the new master server is on and backup masters don't host any 
region, so there won't be any surprises for users.
{quote}
Is this right?
+ if (st == null || usingBackupMasters) return;
We return if we are using backup masters?... or am I misreading what public 
void setClusterStatus(ClusterStatus st) { is about?
{quote}
Yes, this is right. If using backup masters, we don't put them in the excluded 
server list.  Originally, this method does nothing in the base class.
{quote}
What is happening here?
masterServerName = masterServices.getServerName();
+ excludedServers.remove(masterServerName);
{quote}
Since the master now starts as a backup master at first, to be safe we make 
sure here that the active master is not on the excluded server list. 
The balancer logic will make sure only meta/namespace/.. are assigned to it.

bq. What will the default setup be after this patch goes in? Are backup masters 
used or not? Will the master regionserver be up by default or not?
With this patch, by default, backup masters are not used and the redirect jetty 
server in the master+regionserver is up.
{quote}
Should we be getting this from zk?
+ int masterInfoPort = conf.getInt(HConstants.MASTER_INFO_PORT,
+ HConstants.DEFAULT_MASTER_INFOPORT);
+ conf.setInt("hbase.master.info.port.orig", masterInfoPort);
{quote}
The infoport is not in ZK, right?

> Master regionserver should be rolling-upgradable
> 
>
> Key: HBASE-10815
> URL: https://issues.apache.org/jira/browse/HBASE-10815
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, Region Assignment
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.99.0
>
> Attachments: hbase-10815.patch, hbase-10815_v2.patch
>
>
> In HBASE-10569, two things could affect the rolling-upgrade from a 0.96+ 
> release:
> * The master doesn't have its own info server any more. It shares the same 
> info server with the regionserver. We can have a setting so that we can start 
> two info servers, one for the master on the original port, and one for the 
> regionserver.
> * The backup master is a regionserver now, so it could hold regions. This could 
> affect some deployments. We can have a setting so that we can prevent the 
> backup master from serving any region.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10847) 0.94: drop non-secure builds, make security the default

2014-03-28 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951616#comment-13951616
 ] 

Gary Helmling commented on HBASE-10847:
---

I don't think we actually need {{security/src/test/resources/hbase-site.xml}} 
without the security profile.  The only thing it was doing was setting 
{{SecureRpcEngine}} to be used for tests.  Which raises the question: should 
{{SecureRpcEngine}} or {{WritableRpcEngine}} be used for running the tests if 
security is integrated into the main build?

> 0.94: drop non-secure builds, make security the default
> ---
>
> Key: HBASE-10847
> URL: https://issues.apache.org/jira/browse/HBASE-10847
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.19
>
> Attachments: 10847.txt
>
>
> I would like to only create a single 0.94 tarball/release that contains the 
> security code - and drop the non-secure tarballs and releases.
> Let's discuss...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9045) Support Dictionary based Tag compression in HFiles

2014-03-28 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13951619#comment-13951619
 ] 

stack commented on HBASE-9045:
--

This patch adds a method 'shouldCompressTags' that looks like it should have 
been named 'isCompressTags' or 'getCompressTags'.  If so, I'll open a new issue 
to fix it.  [~anoop.hbase]

> Support Dictionary based Tag compression in HFiles
> --
>
> Key: HBASE-9045
> URL: https://issues.apache.org/jira/browse/HBASE-9045
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.98.0
>
> Attachments: HBASE-9045.patch, HBASE-9045_V2.patch, 
> HBASE-9045_V3.patch
>
>
> Along with the DataBlockEncoding algorithms, Dictionary based Tag compression 
> can be done



--
This message was sent by Atlassian JIRA
(v6.2#6252)

