[jira] [Updated] (HBASE-12075) Preemptive Fast Fail

2014-10-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12075:
--
   Resolution: Fixed
Fix Version/s: 0.99.2
   2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to branch-1+.  Thanks for the nice patch and persistence, [~manukranthk].

 Preemptive Fast Fail
 

 Key: HBASE-12075
 URL: https://issues.apache.org/jira/browse/HBASE-12075
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.99.0, 2.0.0, 0.98.6.1
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
 Fix For: 2.0.0, 0.99.2

 Attachments: 0001-Add-a-test-case-for-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 HBASE-12075-Preemptive-Fast-Fail-V15.patch


 In multi-threaded clients, we use a feature developed on the 0.89-fb branch 
 called Preemptive Fast Fail. It lets client threads that would potentially 
 fail, fail fast. The idea behind this feature is that, among the hundreds of 
 client threads, we allow one thread to try to establish a connection with the 
 regionserver, and if that succeeds, we mark the server as a live node again. 
 Meanwhile, the other threads trying to establish a connection to the same 
 server would otherwise sit out timeouts, which is effectively unfruitful. In 
 those cases we can return appropriate exceptions to those clients instead of 
 letting them retry.
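 The gating described above can be sketched as follows. Class and method names
 here are hypothetical, a minimal illustration of the idea rather than the
 actual HBASE-12075 implementation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of preemptive-fast-fail gating: per suspected-dead
// server, at most one thread wins the right to probe; the rest fail fast.
public class FastFailTracker {
    // Per-server failure state; the flag records whether some thread is
    // already probing the server.
    static class FailureInfo {
        final AtomicBoolean probeInProgress = new AtomicBoolean(false);
    }

    private final ConcurrentHashMap<String, FailureInfo> failing =
        new ConcurrentHashMap<String, FailureInfo>();

    // Called when an operation against the server fails with a connection error.
    public void markFailed(String server) {
        failing.putIfAbsent(server, new FailureInfo());
    }

    // Called by the probe thread once the server answers again.
    public void markAlive(String server) {
        failing.remove(server);
    }

    // true  -> this thread should attempt the connection (it became the probe
    //          thread, or the server is not in fast-fail mode at all)
    // false -> fail fast with an exception instead of waiting out the timeout
    public boolean shouldAttempt(String server) {
        FailureInfo info = failing.get(server);
        if (info == null) {
            return true; // server not in fast-fail mode
        }
        // Exactly one caller flips the flag and becomes the probe thread.
        return info.probeInProgress.compareAndSet(false, true);
    }
}
```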



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12358) Create ByteBuffer backed Cell

2014-10-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188051#comment-14188051
 ] 

stack commented on HBASE-12358:
---

bq. So the doc in some time.

Suggest sooner rather than later.  Doesn't have to be fancy and can change as 
we learn stuff.  It's just handy to have a reference doc because it's hard to 
track across JIRAs.  Thanks.

bq. Some of Andy's experiments shown that the position/limit checks in nio BB 
impls makes the perf bad and not getting inlining also. 

Yeah. Can look at what others have done too to get the speed up.  Netty good 
because has refcounting but downside is we are not going to get netty bufs from 
dfsclient; we can deal.

bq. I would feel before doing that we should have our subtasks helping us out 
to achieve this.

Any POC'ing to do in here first? Will help figure the tasks.

bq. Can we do this way?

Would be good to try it first and be prepared to throw it away if it is awkward 
(I know it's a bunch of work).

bq. Compression case am checking some way we can avoid. 

Yeah, hopefully no or minimal copying when compressing.

[~ram_krish]
bq. I would feel before doing that we should have our subtasks helping us out 
to achieve this.

Would be good to have the general direction decided before the subtasks?  A bit 
of POC'ing and a bit of spec on how we are to proceed?  Then it is easy making 
up the subtasks.

bq.  APIs are in the read and which one in write then may be it may make sense 
to extend Cell only for the new BB based cell.

Hopefully we can avoid one way to write and another to read.

Thanks lads.  Let me look at HBASE-12282


 Create ByteBuffer backed Cell
 -

 Key: HBASE-12358
 URL: https://issues.apache.org/jira/browse/HBASE-12358
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12358.patch


 As part of HBASE-12224 and HBASE-12282 we wanted a Cell that is backed by BB. 
  Changing the core Cell impl would not be needed, as it is used server-side 
  only.  So we will create a BB-backed Cell and use it in the server-side read 
  path. This JIRA just creates an interface that extends Cell and adds the 
  needed API.
  The getTimestamp() and getTypeByte() methods can still refer to the original 
  Cell API only.  The getXXXOffset() and getXXXLength() methods can also refer 
  to the original Cell only.
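 A rough sketch of the shape such an interface could take (the names here are 
 illustrative, not the committed HBase API):

```java
import java.nio.ByteBuffer;

// Hypothetical sketch: offsets, lengths, timestamp and type stay on the base
// Cell; a sub-interface adds ByteBuffer-returning getters so a direct-buffer
// implementation can avoid copying onheap.
interface SketchCell {
    int getRowOffset();
    int getRowLength();
    long getTimestamp();
    byte getTypeByte();
}

interface ByteBufferedSketchCell extends SketchCell {
    // Buffer-returning variant of the byte[]-returning getter.
    ByteBuffer getRowBuffer();
}

// Minimal on-heap implementation that wraps a byte[] without copying.
class OnHeapSketchCell implements ByteBufferedSketchCell {
    private final byte[] row;
    OnHeapSketchCell(byte[] row) { this.row = row; }
    public int getRowOffset() { return 0; }
    public int getRowLength() { return row.length; }
    public long getTimestamp() { return 0L; }
    public byte getTypeByte() { return 0; } // placeholder type code
    public ByteBuffer getRowBuffer() { return ByteBuffer.wrap(row); }
}
```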





[jira] [Created] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-29 Thread Ashish Singhi (JIRA)
Ashish Singhi created HBASE-12375:
-

 Summary: LoadIncrementalHFiles fails to load data in table when CF 
name starts with '_'
 Key: HBASE-12375
 URL: https://issues.apache.org/jira/browse/HBASE-12375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor


We do not restrict users from creating a table with a column family name 
starting with '_'.
So when a user creates such a table, LoadIncrementalHFiles will skip that 
family's data when loading into the table.
{code}
// Skip _logs, etc
if (familyDir.getName().startsWith("_")) continue;
{code}

I think we should remove that check, as I do not see any _logs directory being 
created by the bulkload tool in the output directory.





[jira] [Commented] (HBASE-12282) Ensure Cells and its implementations work with Buffers also

2014-10-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188062#comment-14188062
 ] 

stack commented on HBASE-12282:
---

I took a look at this already up at 
https://issues.apache.org/jira/browse/HBASE-12282?focusedCommentId=14178619&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14178619

Here are some more comments:

Looking again at the Cell changes: rather than change Cell, could we do 
something like hasArray from BB in Cell, and if it does not have an array 
(because it is a DBB), then we do stuff like Anoop is doing to compare DBBs via 
unsafe or by offset into the DBB rather than array references?

CellComparator is still byte [] based in this patch?  Still gets the arrays 
from Cell.

Why would we do this:

+  ByteBuffer wrapBuf = ByteBuffer.wrap(buf);
+  return ByteBufferUtils.equals(left.getValueBuffer(), 
left.getValueOffset(),
+  left.getValueLength(), wrapBuf, 0, wrapBuf.capacity());

Why not pass in two BBs into BBU and let it figure it out?
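For illustration, an equals that takes two BBs and chooses the comparison path 
itself might look like this (a sketch; this is not the actual ByteBufferUtils 
signature):

```java
import java.nio.ByteBuffer;

// Hypothetical sketch: compare two ByteBuffer ranges, letting the method pick
// the path. Heap buffers compare via their backing arrays; direct buffers
// fall back to absolute gets.
public class BBCompare {
    public static boolean equals(ByteBuffer l, int lOff, int lLen,
                                 ByteBuffer r, int rOff, int rLen) {
        if (lLen != rLen) {
            return false;
        }
        if (l.hasArray() && r.hasArray()) {
            // Fast path: both heap buffers, compare the backing arrays.
            byte[] la = l.array();
            byte[] ra = r.array();
            int li = l.arrayOffset() + lOff;
            int ri = r.arrayOffset() + rOff;
            for (int i = 0; i < lLen; i++) {
                if (la[li + i] != ra[ri + i]) {
                    return false;
                }
            }
            return true;
        }
        // Slow path (at least one direct buffer): absolute gets, so the
        // buffers' position/limit are never mutated.
        for (int i = 0; i < lLen; i++) {
            if (l.get(lOff + i) != r.get(rOff + i)) {
                return false;
            }
        }
        return true;
    }
}
```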



 Ensure Cells and its implementations work with Buffers also
 ---

 Key: HBASE-12282
 URL: https://issues.apache.org/jira/browse/HBASE-12282
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12224_2.patch


 This issue can be used to brainstorm and then do the necessary changes for 
  the offheap work.  All Cell impls deal with byte[], but when we change the 
  HFileBlocks/Readers to work purely with Buffers, the byte[] usage would mean 
  the data is always copied onheap.  Cell may need some interface change to 
  implement this.





[jira] [Commented] (HBASE-12358) Create ByteBuffer backed Cell

2014-10-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188064#comment-14188064
 ] 

ramkrishna.s.vasudevan commented on HBASE-12358:


I will write a doc and share it here to get the comments and what we will be 
working on.

 Create ByteBuffer backed Cell
 -

 Key: HBASE-12358
 URL: https://issues.apache.org/jira/browse/HBASE-12358
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12358.patch


 As part of HBASE-12224 and HBASE-12282 we wanted a Cell that is backed by BB. 
  Changing the core Cell impl would not be needed, as it is used server-side 
  only.  So we will create a BB-backed Cell and use it in the server-side read 
  path. This JIRA just creates an interface that extends Cell and adds the 
  needed API.
  The getTimestamp() and getTypeByte() methods can still refer to the original 
  Cell API only.  The getXXXOffset() and getXXXLength() methods can also refer 
  to the original Cell only.





[jira] [Commented] (HBASE-10201) Port 'Make flush decisions per column family' to trunk

2014-10-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188070#comment-14188070
 ] 

stack commented on HBASE-10201:
---

bq.  I think it is better to open another issue to handle the duplication.

OK.

bq.  getEarliestFlushTimeForAllStore should be public because TestIOFencing use 
it(which in another package). 

FYI, we mark these with the @VisibleForTesting annotation.  I can do it on commit.

bq. but I see lots of other similar methods declared as public...

Yeah, sorry about that; we ain't always consistent trying.

bq. Does this meet the requirement?

Yes. Out of interest, are you using the hbase formatter?

bq. I tried but failed to make dev-support/test-patch.sh work properly...

Yeah, this stuff is focused on the master.  Unit tests passing on branch-1 
would be great. Just note it here in the issue.

You going to try hbase-it?

Thanks.

 Port 'Make flush decisions per column family' to trunk
 --

 Key: HBASE-10201
 URL: https://issues.apache.org/jira/browse/HBASE-10201
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Ted Yu
Assignee: zhangduo
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: 3149-trunk-v1.txt, HBASE-10201-0.98.patch, 
 HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch, HBASE-10201-0.99.patch, 
 HBASE-10201.patch, HBASE-10201_1.patch, HBASE-10201_2.patch, 
 HBASE-10201_3.patch


 Currently the flush decision is made using the aggregate size of all column 
 families. When large and small column families co-exist, this causes many 
 small flushes of the smaller CF. We need to make per-CF flush decisions.
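 A per-CF flush decision could be sketched like so (hypothetical names and 
 thresholds for illustration; not the patch's actual FlushPolicy code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a per-column-family flush decision: instead of
// flushing every store when the aggregate memstore size crosses the region
// limit, select only the stores above a per-family threshold, so a small CF
// co-located with a big CF is not flushed into many tiny files.
public class PerFamilyFlushPolicy {
    private final long regionFlushSize;  // aggregate trigger, e.g. 128 MB
    private final long familyFlushSize;  // per-CF threshold, e.g. 16 MB

    public PerFamilyFlushPolicy(long regionFlushSize, long familyFlushSize) {
        this.regionFlushSize = regionFlushSize;
        this.familyFlushSize = familyFlushSize;
    }

    // Given memstore sizes per family, return the families to flush.
    public List<String> selectStoresToFlush(Map<String, Long> memstoreSizes) {
        long total = memstoreSizes.values().stream()
            .mapToLong(Long::longValue).sum();
        List<String> toFlush = new ArrayList<String>();
        if (total < regionFlushSize) {
            return toFlush; // aggregate below the trigger, nothing to do
        }
        for (Map.Entry<String, Long> e : memstoreSizes.entrySet()) {
            if (e.getValue() >= familyFlushSize) {
                toFlush.add(e.getKey()); // big CF flushes; small CF is spared
            }
        }
        if (toFlush.isEmpty()) {
            // Degenerate case: everything is small, fall back to flushing all.
            toFlush.addAll(memstoreSizes.keySet());
        }
        return toFlush;
    }
}
```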





[jira] [Commented] (HBASE-12282) Ensure Cells and its implementations work with Buffers also

2014-10-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188074#comment-14188074
 ] 

ramkrishna.s.vasudevan commented on HBASE-12282:


bq. then we do stuff like Anoop is doing to do compares of DBBs where unsafe or 
by offset into the DBB rather than array references?
These changes are all based on the BB that we currently have as reference.  On 
the BB we try to use unsafe, or go without unsafe.  The util methods will check 
for hasArray or not.
bq.CellComparator is still byte [] based in this patch? Still gets the arrays 
from Cell.
The comparators are changed in a few cases to make some specific cases work, 
just to show how we need the changes and where we need them.
bq.ByteBuffer wrapBuf = ByteBuffer.wrap(buf);
They are all just hacks to get a buffer on a KV so that the comparators can be 
changed and made to work with buffers.
bq. rather than change Cell, could we do something like hasArray from BB in 
Cell and if it does not have array (because it DBB)
How can we do this now? The Cell API already accepts a byte[] and returns a 
byte[].  So currently all Cells effectively have hasArray as true, right?  
Unless Cell returns a BB or something other than byte[], we cannot use hasArray 
inside a Cell.


 Ensure Cells and its implementations work with Buffers also
 ---

 Key: HBASE-12282
 URL: https://issues.apache.org/jira/browse/HBASE-12282
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12224_2.patch


 This issue can be used to brainstorm and then do the necessary changes for 
  the offheap work.  All Cell impls deal with byte[], but when we change the 
  HFileBlocks/Readers to work purely with Buffers, the byte[] usage would mean 
  the data is always copied onheap.  Cell may need some interface change to 
  implement this.





[jira] [Commented] (HBASE-12312) Another couple of createTable race conditions

2014-10-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188084#comment-14188084
 ] 

Hudson commented on HBASE-12312:


SUCCESS: Integrated in HBase-1.0 #377 (See 
[https://builds.apache.org/job/HBase-1.0/377/])
HBASE-12312 Another couple of createTable race conditions (stack: rev 
a973fd514f153572182a822e6bf930b11be1a81d)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestScanEarlyTermination.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationBase.java


 Another couple of createTable race conditions
 -

 Key: HBASE-12312
 URL: https://issues.apache.org/jira/browse/HBASE-12312
 Project: HBase
  Issue Type: Bug
Reporter: Dima Spivak
Assignee: Dima Spivak
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12312_master_v1.patch, 
 HBASE-12312_master_v2.patch, HBASE-12312_master_v3 (1).patch, 
 HBASE-12312_master_v3.patch, HBASE-12312_master_v3.patch, 
 HBASE-12312_master_v3.patch, HBASE-12312_master_v4.patch


 Found a couple more failing tests in TestAccessController and 
 TestScanEarlyTermination caused by my favorite race condition. :) Will post a 
 patch in a second.





[jira] [Commented] (HBASE-12346) Scan's default auths behavior under Visibility labels

2014-10-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188085#comment-14188085
 ] 

ramkrishna.s.vasudevan commented on HBASE-12346:


Does this mean we have EnforcingScanLabelGenerator and 
DefaultScanLabelGenerator both doing the same work?

 Scan's default auths behavior under Visibility labels
 -

 Key: HBASE-12346
 URL: https://issues.apache.org/jira/browse/HBASE-12346
 Project: HBase
  Issue Type: Bug
  Components: API, security
Affects Versions: 0.98.7, 0.99.1
Reporter: Jerry He
 Fix For: 0.98.8, 0.99.2

 Attachments: HBASE-12346-master-v2.patch, HBASE-12346-master.patch


 In Visibility Labels security, a set of labels (auths) is administered and 
  associated with a user.
  A user can normally only see cell data during a scan that is part of the 
  user's label set (auths).
  Scan uses setAuthorizations to indicate it wants to use those auths to access 
  the cells.
  Similarly in the shell:
  {code}
  scan 'table1', AUTHORIZATIONS => ['private']
  {code}
  But it is a surprise to find that setAuthorizations seems to be 'mandatory' 
  in the default visibility label security setting.  Every scan needs to call 
  setAuthorizations before it can get any cells, even if the cells are under 
  labels the requesting user is part of.
 The following steps will illustrate the issue:
 Run as superuser.
 {code}
 1. create a visibility label called 'private'
 2. create 'table1'
 3. put into 'table1' data and label the data as 'private'
 4. set_auths 'user1', 'private'
 5. grant 'user1', 'RW', 'table1'
 {code}
 Run as 'user1':
 {code}
 1. scan 'table1'
 This shows no cells.
 2. scan 'table1', AUTHORIZATIONS => ['private']
 This will show all the data.
 {code}
 I am not sure if this is expected by design or a bug.
 But a more reasonable, more backward-compatible, and less surprising default 
 behavior would probably look like this:
 A scan's default auths, if its Authorizations attribute is not set 
 explicitly, should be all the auths the requesting user is administered and 
 allowed on the server.
 If scan.setAuthorizations is used, then the server further filters the auths 
 during the scan: use the input auths minus whatever is not in the user's 
 label set on the server.
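 That proposed default can be sketched as a simple set computation (a 
 hypothetical helper, not the actual VisibilityController code):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the proposed default-auths behavior: if the scan
// sets no authorizations, use all auths administered to the user; otherwise
// intersect the requested auths with the user's administered set.
public class EffectiveAuths {
    public static Set<String> effectiveAuths(Set<String> requested,
                                             Set<String> userAuths) {
        if (requested == null || requested.isEmpty()) {
            // Default: everything the user is administered on the server.
            return new HashSet<String>(userAuths);
        }
        Set<String> result = new HashSet<String>(requested);
        result.retainAll(userAuths); // drop auths the user does not hold
        return result;
    }
}
```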





[jira] [Commented] (HBASE-12336) RegionServer failed to shutdown for NodeFailoverWorker thread

2014-10-29 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188087#comment-14188087
 ] 

Liu Shaohui commented on HBASE-12336:
-

[~tianq]
Sorry, the log has been deleted automatically. 
We encountered this problem when we upgraded the zk cluster from 3 nodes to 5 
and the zk cluster was restarted.


 RegionServer failed to shutdown for NodeFailoverWorker thread
 -

 Key: HBASE-12336
 URL: https://issues.apache.org/jira/browse/HBASE-12336
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Attachments: stack


 After enabling hbase.zookeeper.useMulti in hbase cluster, we found that 
 regionserver failed to shutdown. Other threads have exited except a 
 NodeFailoverWorker thread.
 {code}
 "ReplicationExecutor-0" prio=10 tid=0x7f0d40195ad0 nid=0x73a in 
 Object.wait() [0x7f0dc8fe6000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:485)
 at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1309)
 - locked <0x0005a16df080> (a 
 org.apache.zookeeper.ClientCnxn$Packet)
 at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:930)
 at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:912)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.multi(RecoverableZooKeeper.java:531)
 at 
 org.apache.hadoop.hbase.zookeeper.ZKUtil.multiOrSequential(ZKUtil.java:1518)
 at 
 org.apache.hadoop.hbase.replication.ReplicationZookeeper.copyQueuesFromRSUsingMulti(ReplicationZookeeper.java:804)
 at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager$NodeFailoverWorker.run(ReplicationSourceManager.java:612)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 The shutdown method of the executor is definitely called in 
 ReplicationSourceManager#join.
  
 I am looking for the root cause; suggestions are welcome. Thanks





[jira] [Commented] (HBASE-12075) Preemptive Fast Fail

2014-10-29 Thread Manukranth Kolloju (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188088#comment-14188088
 ] 

Manukranth Kolloju commented on HBASE-12075:


Thanks [~stack], [~tedyu], [~eclark] for patiently going through my patch.

 Preemptive Fast Fail
 

 Key: HBASE-12075
 URL: https://issues.apache.org/jira/browse/HBASE-12075
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.99.0, 2.0.0, 0.98.6.1
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
 Fix For: 2.0.0, 0.99.2

 Attachments: 0001-Add-a-test-case-for-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 HBASE-12075-Preemptive-Fast-Fail-V15.patch


 In multi-threaded clients, we use a feature developed on the 0.89-fb branch 
 called Preemptive Fast Fail. It lets client threads that would potentially 
 fail, fail fast. The idea behind this feature is that, among the hundreds of 
 client threads, we allow one thread to try to establish a connection with the 
 regionserver, and if that succeeds, we mark the server as a live node again. 
 Meanwhile, the other threads trying to establish a connection to the same 
 server would otherwise sit out timeouts, which is effectively unfruitful. In 
 those cases we can return appropriate exceptions to those clients instead of 
 letting them retry.





[jira] [Resolved] (HBASE-10856) Prep for 1.0

2014-10-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-10856.
---
Resolution: Fixed

Resolving. The only outstanding subtasks are updating jars in mvn and our 
dependencies in time for the 1.0 release.  That is ongoing.  No other dependent 
issues remain (those not being worked on were moved out).  I believe I covered 
all the outstanding doc issues in here and pushed the doc out.

 Prep for 1.0
 

 Key: HBASE-10856
 URL: https://issues.apache.org/jira/browse/HBASE-10856
 Project: HBase
  Issue Type: Umbrella
Reporter: stack
 Fix For: 0.99.2


 Tasks for 1.0 copied here from our '1.0.0' mailing list discussion.  Idea is 
 to file subtasks off this one.





[jira] [Commented] (HBASE-12336) RegionServer failed to shutdown for NodeFailoverWorker thread

2014-10-29 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188091#comment-14188091
 ] 

Liu Shaohui commented on HBASE-12336:
-

[~stack]
Yes, calling setDaemon on the ThreadFactoryBuilder will fix this problem.
But I am wondering why this thread did not exit even though we called shutdown 
on the executor during shutdown.


 RegionServer failed to shutdown for NodeFailoverWorker thread
 -

 Key: HBASE-12336
 URL: https://issues.apache.org/jira/browse/HBASE-12336
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Attachments: stack


 After enabling hbase.zookeeper.useMulti in hbase cluster, we found that 
 regionserver failed to shutdown. Other threads have exited except a 
 NodeFailoverWorker thread.
 {code}
 "ReplicationExecutor-0" prio=10 tid=0x7f0d40195ad0 nid=0x73a in 
 Object.wait() [0x7f0dc8fe6000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:485)
 at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1309)
 - locked <0x0005a16df080> (a 
 org.apache.zookeeper.ClientCnxn$Packet)
 at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:930)
 at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:912)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.multi(RecoverableZooKeeper.java:531)
 at 
 org.apache.hadoop.hbase.zookeeper.ZKUtil.multiOrSequential(ZKUtil.java:1518)
 at 
 org.apache.hadoop.hbase.replication.ReplicationZookeeper.copyQueuesFromRSUsingMulti(ReplicationZookeeper.java:804)
 at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager$NodeFailoverWorker.run(ReplicationSourceManager.java:612)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 The shutdown method of the executor is definitely called in 
 ReplicationSourceManager#join.
  
 I am looking for the root cause; suggestions are welcome. Thanks





[jira] [Updated] (HBASE-12336) RegionServer failed to shutdown for NodeFailoverWorker thread

2014-10-29 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-12336:

Attachment: HBASE-12336-trunk-v1.diff

Call setDaemon on ThreadFactoryBuilder to fix this problem.
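For illustration, the same effect with a plain ThreadFactory (Guava's 
ThreadFactoryBuilder provides this via its setDaemon(true) builder method); a 
sketch of the idea, not the attached patch:

```java
import java.util.concurrent.ThreadFactory;

// Sketch of the fix: daemon worker threads cannot keep the JVM alive, so a
// NodeFailoverWorker stuck in ZooKeeper I/O no longer blocks regionserver
// shutdown.
public class DaemonThreads {
    public static ThreadFactory newDaemonThreadFactory(final String namePrefix) {
        return new ThreadFactory() {
            private int count = 0;
            public synchronized Thread newThread(Runnable r) {
                Thread t = new Thread(r, namePrefix + "-" + count++);
                t.setDaemon(true); // the fix: do not block JVM exit
                return t;
            }
        };
    }
}
```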

 RegionServer failed to shutdown for NodeFailoverWorker thread
 -

 Key: HBASE-12336
 URL: https://issues.apache.org/jira/browse/HBASE-12336
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.11
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Attachments: HBASE-12336-trunk-v1.diff, stack


 After enabling hbase.zookeeper.useMulti in hbase cluster, we found that 
 regionserver failed to shutdown. Other threads have exited except a 
 NodeFailoverWorker thread.
 {code}
 "ReplicationExecutor-0" prio=10 tid=0x7f0d40195ad0 nid=0x73a in 
 Object.wait() [0x7f0dc8fe6000]
java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Object.wait(Object.java:485)
 at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1309)
 - locked <0x0005a16df080> (a 
 org.apache.zookeeper.ClientCnxn$Packet)
 at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:930)
 at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:912)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.multi(RecoverableZooKeeper.java:531)
 at 
 org.apache.hadoop.hbase.zookeeper.ZKUtil.multiOrSequential(ZKUtil.java:1518)
 at 
 org.apache.hadoop.hbase.replication.ReplicationZookeeper.copyQueuesFromRSUsingMulti(ReplicationZookeeper.java:804)
 at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager$NodeFailoverWorker.run(ReplicationSourceManager.java:612)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 The shutdown method of the executor is definitely called in 
 ReplicationSourceManager#join.
  
 I am looking for the root cause; suggestions are welcome. Thanks





[jira] [Commented] (HBASE-10201) Port 'Make flush decisions per column family' to trunk

2014-10-29 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188093#comment-14188093
 ] 

zhangduo commented on HBASE-10201:
--

{quote}
Out of interest, are you using the hbase formatter?
{quote}
No, I just use the default formatter with the indent and max length changed. 
Only new code is formatted; old code is formatted manually to keep the patch 
clean...
I will try the hbase formatter later. I found it when looking for 
test-patch.sh, thanks.

{quote}
You going to try hbase-it?
{quote}
Yes, I have run it with 'mvn verify' under hbase-it. There are some failures 
and errors; I need to look at the source code to identify the reason.

Thanks.


 Port 'Make flush decisions per column family' to trunk
 --

 Key: HBASE-10201
 URL: https://issues.apache.org/jira/browse/HBASE-10201
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Ted Yu
Assignee: zhangduo
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: 3149-trunk-v1.txt, HBASE-10201-0.98.patch, 
 HBASE-10201-0.98_1.patch, HBASE-10201-0.98_2.patch, HBASE-10201-0.99.patch, 
 HBASE-10201.patch, HBASE-10201_1.patch, HBASE-10201_2.patch, 
 HBASE-10201_3.patch


 Currently the flush decision is made using the aggregate size of all column 
 families. When large and small column families co-exist, this causes many 
 small flushes of the smaller CF. We need to make per-CF flush decisions.





[jira] [Commented] (HBASE-12354) Update dependencies in time for 1.0 release

2014-10-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188095#comment-14188095
 ] 

Hadoop QA commented on HBASE-12354:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12677827/12354v2.txt
  against trunk revision .
  ATTACHMENT ID: 12677827

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11502//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11502//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11502//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11502//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11502//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11502//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11502//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11502//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11502//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11502//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11502//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11502//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11502//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11502//console

This message is automatically generated.

 Update dependencies in time for 1.0 release
 ---

 Key: HBASE-12354
 URL: https://issues.apache.org/jira/browse/HBASE-12354
 Project: HBase
  Issue Type: Sub-task
  Components: dependencies
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 0.99.2

 Attachments: 12354.txt, 12354v2.txt


 Going through and updating egregiously old dependencies for 1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12355) Update maven plugins

2014-10-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188096#comment-14188096
 ] 

Hadoop QA commented on HBASE-12355:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12677828/12355v6.txt
  against trunk revision .
  ATTACHMENT ID: 12677828

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11503//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11503//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11503//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11503//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11503//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11503//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11503//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11503//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11503//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11503//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11503//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11503//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11503//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11503//console

This message is automatically generated.

 Update maven plugins
 

 Key: HBASE-12355
 URL: https://issues.apache.org/jira/browse/HBASE-12355
 Project: HBase
  Issue Type: Sub-task
  Components: build
Reporter: stack
Assignee: stack
 Fix For: 0.99.2

 Attachments: 12355.txt, 12355v2.txt, 12355v3.txt, 12355v5.txt, 
 12355v6.txt, 12355v6.txt


 Update maven plugins. Some are way old.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12355) Update maven plugins

2014-10-29 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188097#comment-14188097
 ] 

Elliott Clark commented on HBASE-12355:
---

+1

 Update maven plugins
 

 Key: HBASE-12355
 URL: https://issues.apache.org/jira/browse/HBASE-12355
 Project: HBase
  Issue Type: Sub-task
  Components: build
Reporter: stack
Assignee: stack
 Fix For: 0.99.2

 Attachments: 12355.txt, 12355v2.txt, 12355v3.txt, 12355v5.txt, 
 12355v6.txt, 12355v6.txt


 Update maven plugins. Some are way old.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12312) Another couple of createTable race conditions

2014-10-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188099#comment-14188099
 ] 

Hudson commented on HBASE-12312:


SUCCESS: Integrated in HBase-TRUNK #5713 (See 
[https://builds.apache.org/job/HBase-TRUNK/5713/])
HBASE-12312 Another couple of createTable race conditions (stack: rev 
95282f2ea53bdb55af7fe5a0a749c1fcde824b6c)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestScanEarlyTermination.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationBase.java


 Another couple of createTable race conditions
 -

 Key: HBASE-12312
 URL: https://issues.apache.org/jira/browse/HBASE-12312
 Project: HBase
  Issue Type: Bug
Reporter: Dima Spivak
Assignee: Dima Spivak
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12312_master_v1.patch, 
 HBASE-12312_master_v2.patch, HBASE-12312_master_v3 (1).patch, 
 HBASE-12312_master_v3.patch, HBASE-12312_master_v3.patch, 
 HBASE-12312_master_v3.patch, HBASE-12312_master_v4.patch


 Found a couple more failing tests in TestAccessController and 
 TestScanEarlyTermination caused by my favorite race condition. :) Will post a 
 patch in a second.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12313) Redo the hfile index length optimization so cell-based rather than serialized KV key

2014-10-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188111#comment-14188111
 ] 

Hadoop QA commented on HBASE-12313:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12677833/12313v10.txt
  against trunk revision .
  ATTACHMENT ID: 12677833

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 36 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+FSReaderImpl(FSDataInputStream istream, long fileSize, HFileContext 
fileContext) throws IOException {
+HFileBlock.FSReaderImpl fsBlockReaderV2 = new 
HFileBlock.FSReaderImpl(fsdis, fileSize, hfs, path,
+  HFileBlock.FSReaderImpl hbr = new HFileBlock.FSReaderImpl(new 
FSDataInputStreamWrapper(is),

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hdfs.server.namenode.TestMetaSave.testMetasaveAfterDelete(TestMetaSave.java:126)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11504//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11504//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11504//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11504//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11504//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11504//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11504//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11504//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11504//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11504//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11504//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11504//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11504//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11504//console

This message is automatically generated.

 Redo the hfile index length optimization so cell-based rather than serialized 
 KV key
 

 Key: HBASE-12313
 URL: https://issues.apache.org/jira/browse/HBASE-12313
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: stack
Assignee: stack
 Attachments: 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 12313v10.txt, 12313v5.txt, 12313v6.txt, 12313v8.txt


 Trying to remove the API that returns the 'key' of a KV serialized into a byte 
 array is thorny.
 I tried to move over 

[jira] [Updated] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-29 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-12375:
--
Status: Patch Available  (was: Open)

 LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
 --

 Key: HBASE-12375
 URL: https://issues.apache.org/jira/browse/HBASE-12375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Attachments: HBASE-12375.patch


 We do not restrict users from creating a table with a column family name 
 starting with '_'.
 When a user creates such a table, LoadIncrementalHFiles will silently skip 
 loading that family's data into the table.
 {code}
 // Skip _logs, etc
 if (familyDir.getName().startsWith("_")) continue;
 {code}
 I think we should remove that check, as I do not see any _logs directory 
 being created by the bulkload tool in the output directory.
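The guard quoted above can be illustrated with a small, self-contained sketch (class and method names here are hypothetical, not HBase code): any column family whose name happens to start with '_' is dropped along with internal directories like _logs.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SkipCheckSketch {
    // Mirrors the quoted guard: directory names beginning with an
    // underscore are treated as internal (_logs, _tmp, ...) and skipped.
    static List<String> familiesToLoad(List<String> familyDirs) {
        List<String> result = new ArrayList<>();
        for (String name : familyDirs) {
            if (name.startsWith("_")) continue; // the check under discussion
            result.add(name);
        }
        return result;
    }

    public static void main(String[] args) {
        // "_cf" is a legal column-family name, but the guard drops it
        // together with the genuinely internal "_logs".
        System.out.println(familiesToLoad(Arrays.asList("cf1", "_cf", "_logs")));
        // prints [cf1]
    }
}
```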



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-29 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-12375:
--
Attachment: HBASE-12375.patch

Patch for the master branch.
Can someone please review?

 LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
 --

 Key: HBASE-12375
 URL: https://issues.apache.org/jira/browse/HBASE-12375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Attachments: HBASE-12375.patch


 We do not restrict user from creating a table having column family starting 
 with '_'.
 So when user creates a table in such a way then LoadIncrementalHFiles will 
 skip those family data to load into the table.
 {code}
 // Skip _logs, etc
 if (familyDir.getName().startsWith("_")) continue;
 {code}
 I think we should remove that check as I do not see any _logs directory being 
 created by the bulkload tool in the output directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12068) [Branch-1] Avoid need to always do KeyValueUtil#ensureKeyValue for Filter transformCell

2014-10-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188134#comment-14188134
 ] 

Hudson commented on HBASE-12068:


SUCCESS: Integrated in HBase-TRUNK #5714 (See 
[https://builds.apache.org/job/HBase-TRUNK/5714/])
Add note to upgrade section on HBASE-12068; i.e. things to do if you have 
custom filters (stack: rev 3a9cf5b2cdc3c3d24a085b3a4d4c289dcce09766)
* src/main/docbkx/upgrading.xml


 [Branch-1] Avoid need to always do KeyValueUtil#ensureKeyValue for Filter 
 transformCell
 ---

 Key: HBASE-12068
 URL: https://issues.apache.org/jira/browse/HBASE-12068
 Project: HBase
  Issue Type: Sub-task
  Components: Filters
Affects Versions: 0.99.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.1

 Attachments: HBASE-12068.patch


 During a read with Filters added to Scan/Get, the core code calls 
 transformCell(Cell) on the Filter. Most of the filters do not implement the 
 transform API, so the method from FilterBase will get executed:
 {code}
   @Override
   public Cell transformCell(Cell v) throws IOException {
 // Old filters based off of this class will override KeyValue 
 transform(KeyValue).
 // Thus to maintain compatibility we need to call the old version.
 return transform(KeyValueUtil.ensureKeyValue(v));
   }
 {code}
 Here it always does KeyValueUtil.ensureKeyValue.  When a non-KV cell comes 
 in, we need to recreate a KV and do a deep copy of key and value!
 We have to stick with this model in branch-1 for BC.
 So, as a workaround to avoid the possible KV conversion, we can implement the 
 transformCell(Cell) method in all of our individual Filter classes to just 
 return the incoming cell (so that the method from FilterBase won't get 
 executed).
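The workaround described above can be sketched with stand-in classes (Cell, FilterBase, and the filter below are simplified stand-ins, not the real HBase types): the base class converts and deep-copies, while the overriding filter returns the incoming cell untouched.

```java
public class TransformCellSketch {
    interface Cell {}
    static class KeyValue implements Cell {}   // stand-in for the copied KV

    static class FilterBase {
        // Old behavior: force a KeyValue, deep-copying any non-KV cell.
        Cell transformCell(Cell v) {
            return ensureKeyValue(v);
        }
        static KeyValue ensureKeyValue(Cell v) {
            return (v instanceof KeyValue) ? (KeyValue) v : new KeyValue();
        }
    }

    // The workaround: short-circuit in the concrete filter so the base
    // conversion (and its copy) is never reached.
    static class IdentityFilter extends FilterBase {
        @Override
        Cell transformCell(Cell v) {
            return v;
        }
    }

    public static void main(String[] args) {
        Cell nonKv = new Cell() {};
        // Base class hands back a copy; the override hands back the same instance.
        System.out.println(new FilterBase().transformCell(nonKv) == nonKv);     // false
        System.out.println(new IdentityFilter().transformCell(nonKv) == nonKv); // true
    }
}
```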



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12297) Support DBB usage in Bloom and HFileIndex area

2014-10-29 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12297:
---
   Resolution: Fixed
Fix Version/s: 0.99.2
   2.0.0
 Hadoop Flags: Incompatible change
   Status: Resolved  (was: Patch Available)

Pushed to 0.99 and trunk.
Thanks for the reviews Stack and Ted.

 Support DBB usage in Bloom and HFileIndex area
 --

 Key: HBASE-12297
 URL: https://issues.apache.org/jira/browse/HBASE-12297
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12297.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12282) Ensure Cells and its implementations work with Buffers also

2014-10-29 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188157#comment-14188157
 ] 

Anoop Sam John commented on HBASE-12282:


I think Stack's suggestion is to add hasArray() to Cell.
We have a new BBBackedCell which extends Cell.
In the BB-backed impl, let hasArray() return false.
In comparators, based on hasArray(), use getxxxArray() or getxxxBuffer().
Correct, Stack?

 Ensure Cells and its implementations work with Buffers also
 ---

 Key: HBASE-12282
 URL: https://issues.apache.org/jira/browse/HBASE-12282
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12224_2.patch


 This issue can be used to brainstorm and then do the necessary changes for 
 the offheap work.  All impls of Cell deal with byte[], but when we change the 
 HFileBlocks/Readers to work purely with Buffers, the byte[] usage would 
 mean that the data is always copied onto the heap.  Cell may need some 
 interface change to implement this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12358) Create ByteBuffer backed Cell

2014-10-29 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188168#comment-14188168
 ] 

Anoop Sam John commented on HBASE-12358:


bq.CP would be exposed with both type of cells. Can we have annotations in the 
API that are exposed to cells and filters that tells which one to use and which 
one to use? That may be confusing but if we can have a cleaner way of showing 
which APIs are in the read and which one in write then may be it may make sense 
to extend Cell only for the new BB based cell.
If we continue to pass Cell in the read path, then everywhere we need buffers 
we will end up casting. That is ugly. Can we change all places in the read 
path to the new interface BBBackedCell (or some better name)?
Like what we pass to Filters and CPs, and what StoreScanner, InternalScanner, 
RegionScanner return, etc. It can land in 2.0 only, I believe.
One option, as Stack suggested, is to have a hasArray() in Cell and based on 
that decide which API to call in places like comparators (where we deal with 
Cells only). Here the impls should throw an exception out of the 
getxxxArray() APIs if hasArray() is false (like the BB impls).
Or else do not make BBBackedCell extend Cell at all, so we will see only 
getxxxBuffer() in the read path. One advantage is that there is no difference 
in the publicly exposed Cell. hasArray() might not make much sense at the 
client side, because there we deal with Cells backed by arrays only; only in 
the read path does it make sense.
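A minimal sketch of the hasArray()-based dispatch being discussed (interface and class names are illustrative assumptions, not the eventual HBase API): comparators consult hasArray() to choose the byte[] accessor or the ByteBuffer accessor, so a buffer-backed cell is never copied onto the heap just to read it.

```java
import java.nio.ByteBuffer;

public class HasArraySketch {
    // Illustrative interface only; the real HBase API may differ.
    interface Cell {
        boolean hasArray();
        byte[] getRowArray();      // valid only when hasArray() is true
        ByteBuffer getRowBuffer(); // valid only when hasArray() is false
    }

    static class ArrayCell implements Cell {
        private final byte[] row;
        ArrayCell(byte[] row) { this.row = row; }
        public boolean hasArray() { return true; }
        public byte[] getRowArray() { return row; }
        public ByteBuffer getRowBuffer() {
            // As discussed above: throw when the wrong accessor is used.
            throw new UnsupportedOperationException("array-backed cell");
        }
    }

    static class BufferCell implements Cell {
        private final ByteBuffer row; // could be a direct (offheap) buffer
        BufferCell(ByteBuffer row) { this.row = row; }
        public boolean hasArray() { return false; }
        public byte[] getRowArray() {
            throw new UnsupportedOperationException("buffer-backed cell");
        }
        public ByteBuffer getRowBuffer() { return row.duplicate(); }
    }

    // Comparator-style dispatch: read a byte via whichever accessor is
    // valid, without forcing a copy of the buffer-backed cell.
    static byte firstRowByte(Cell c) {
        return c.hasArray() ? c.getRowArray()[0] : c.getRowBuffer().get(0);
    }

    public static void main(String[] args) {
        Cell a = new ArrayCell(new byte[] { 'r' });
        Cell b = new BufferCell(ByteBuffer.wrap(new byte[] { 'r' }));
        System.out.println(firstRowByte(a) == firstRowByte(b)); // true
    }
}
```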


 Create ByteBuffer backed Cell
 -

 Key: HBASE-12358
 URL: https://issues.apache.org/jira/browse/HBASE-12358
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12358.patch


 As part of HBASE-12224 and HBASE-12282 we wanted a Cell that is backed by a 
 BB.  Changing the core Cell impl would not be needed as it is used on the 
 server side only.  So we will create a BB-backed Cell and use it in the 
 server-side read path. This JIRA just creates an interface that extends Cell 
 and adds the needed API.
 getTimeStamp() and getTypeByte() can still refer to the original Cell API 
 only.  The getXXXOffset() and getXXXLength() methods can also refer to the 
 original Cell only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-29 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188167#comment-14188167
 ] 

Matteo Bertozzi commented on HBASE-12375:
-

I think that by removing the check you'll fail in the case of splits (but I 
haven't checked).
The problem is that LoadIncrementalHFiles will create a _tmp directory.
In theory it is enough to add to this patch a rename of _tmp to something 
like .tmp.

Did you try to run this patch with a set of files that requires splitting?

 LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
 --

 Key: HBASE-12375
 URL: https://issues.apache.org/jira/browse/HBASE-12375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Attachments: HBASE-12375.patch


 We do not restrict user from creating a table having column family starting 
 with '_'.
 So when user creates a table in such a way then LoadIncrementalHFiles will 
 skip those family data to load into the table.
 {code}
 // Skip _logs, etc
 if (familyDir.getName().startsWith("_")) continue;
 {code}
 I think we should remove that check as I do not see any _logs directory being 
 created by the bulkload tool in the output directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12374) Change DBEs to work with new BB based cell

2014-10-29 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188169#comment-14188169
 ] 

Anoop Sam John commented on HBASE-12374:


The entire read path's APIs should take/return this new interface rather than 
Cell, all the way up to the RPC response.

 Change DBEs to work with new BB based cell
 --

 Key: HBASE-12374
 URL: https://issues.apache.org/jira/browse/HBASE-12374
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan

 Once we are changing the read path to use BB based cell then the DBEs should 
 also return BB based cells.  Currently they are byte[] array backed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12373) Provide a command to list visibility labels

2014-10-29 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188173#comment-14188173
 ] 

Anoop Sam John commented on HBASE-12373:


It should be allowed only for a user who has the system label auth.
A client-side API and a shell command (?)
Are you up for a patch, Jerry? Thanks.

 Provide a command to list visibility labels
 ---

 Key: HBASE-12373
 URL: https://issues.apache.org/jira/browse/HBASE-12373
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.98.7, 0.99.1
Reporter: Jerry He
Priority: Minor

 A command to list visibility labels that are in place would be handy.
 This is also in line with many of the other hbase list commands.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12282) Ensure Cells and its implementations work with Buffers also

2014-10-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188180#comment-14188180
 ] 

ramkrishna.s.vasudevan commented on HBASE-12282:


bq.In comparators, based on hasArray() use getxxxArray() or getxxxBuffer()
Yes, true. This I agree with.
So how should the cell be created? Based on the buffer from the HFileBlock, 
do we decide whether it is a direct BB or an on-heap BB and create the KV 
accordingly? Is that KV either a normal KV as we use now (a cell created from 
buffer.array()) or a new KV which holds the buffer in it?


 Ensure Cells and its implementations work with Buffers also
 ---

 Key: HBASE-12282
 URL: https://issues.apache.org/jira/browse/HBASE-12282
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12224_2.patch


 This issue can be used to brainstorm and then do the necessary changes for 
 the offheap work.  All impl of cells deal with byte[] but when we change the 
 Hfileblocks/Readers to work purely with Buffers then the byte[] usage would 
 mean that always the data is copied to the onheap.  Cell may need some 
 interface change to implement this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12282) Ensure Cells and its implementations work with Buffers also

2014-10-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188181#comment-14188181
 ] 

ramkrishna.s.vasudevan commented on HBASE-12282:


Another thing: if that is the case, then all the new APIs we add in 
ByteBufferUtils could handle only the DBB cases (including the Unsafe cases 
for DBB), not the normal-BB case, because there we can still work with the 
byte[] inside the BB.

 Ensure Cells and its implementations work with Buffers also
 ---

 Key: HBASE-12282
 URL: https://issues.apache.org/jira/browse/HBASE-12282
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12224_2.patch


 This issue can be used to brainstorm and then do the necessary changes for 
 the offheap work.  All impl of cells deal with byte[] but when we change the 
 Hfileblocks/Readers to work purely with Buffers then the byte[] usage would 
 mean that always the data is copied to the onheap.  Cell may need some 
 interface change to implement this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12282) Ensure Cells and its implementations work with Buffers also

2014-10-29 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188184#comment-14188184
 ] 

Anoop Sam John commented on HBASE-12282:


bq.So how should the cell be created?  So based on the buffer from the 
HFileBlock we wil decide if it is a direct BB or on heap BB and create KV based 
on that ? That KV is either a normal KV that we use now (create cell based on 
buffer.array) or it would be a new KV which will have buffer in it?
IMHO, just don't worry about DBB/HBB here. Always create a new KV type which 
implements the new BBBackedCell. We have optimizations in comparators etc. 
that check based on hasArray() and act accordingly.

 Ensure Cells and its implementations work with Buffers also
 ---

 Key: HBASE-12282
 URL: https://issues.apache.org/jira/browse/HBASE-12282
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12224_2.patch


 This issue can be used to brainstorm and then do the necessary changes for 
 the offheap work.  All impl of cells deal with byte[] but when we change the 
 Hfileblocks/Readers to work purely with Buffers then the byte[] usage would 
 mean that always the data is copied to the onheap.  Cell may need some 
 interface change to implement this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12282) Ensure Cells and its implementations work with Buffers also

2014-10-29 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188186#comment-14188186
 ] 

Anoop Sam John commented on HBASE-12282:


Even the KeyOnlyKeyValue too: let it be a BB-backed one. It will be better if 
we have this BufferBackedCell only in the read path. The comparators then 
need not worry that one type is buffer backed and the other array backed; let 
both be buffer backed.
We can even have new comparators to avoid checks in every method.

 Ensure Cells and its implementations work with Buffers also
 ---

 Key: HBASE-12282
 URL: https://issues.apache.org/jira/browse/HBASE-12282
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12224_2.patch


 This issue can be used to brainstorm and then do the necessary changes for 
 the offheap work.  All impl of cells deal with byte[] but when we change the 
 Hfileblocks/Readers to work purely with Buffers then the byte[] usage would 
 mean that always the data is copied to the onheap.  Cell may need some 
 interface change to implement this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12219) Cache more efficiently getAll() and get() in FSTableDescriptors

2014-10-29 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-12219:

Attachment: HBASE-12219-v1.patch

 Cache more efficiently getAll() and get() in FSTableDescriptors
 ---

 Key: HBASE-12219
 URL: https://issues.apache.org/jira/browse/HBASE-12219
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.24, 0.99.1, 0.98.6.1
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
  Labels: scalability
 Attachments: HBASE-12219-v1.patch, HBASE-12219.v0.txt, list.png


 Currently table descriptors and tables are cached once they are accessed for 
 the first time. Subsequent calls to the master only require a trip to HDFS 
 to look up the modified time in order to reload the table descriptors if 
 modified. However, in clusters with a large number of tables or concurrent 
 clients this can be too aggressive on HDFS and the master, causing 
 contention for other requests. A simple solution is a TTL-based cache for 
 FSTableDescriptors#getAll() and 
 FSTableDescriptors#TableDescriptorAndModtime() that allows the master to 
 serve those calls faster, without contention and without a trip to HDFS for 
 every call to listtables() or getTableDescriptor()
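A minimal sketch of such a TTL-based cache, under the assumption that entries are served from memory until they are older than the TTL (all names here, e.g. loadFromFs, are illustrative, not the FSTableDescriptors API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TtlDescriptorCache {
    static final class Entry {
        final String descriptor;
        final long loadedAtMillis;
        Entry(String descriptor, long loadedAtMillis) {
            this.descriptor = descriptor;
            this.loadedAtMillis = loadedAtMillis;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;
    int fsLoads = 0; // visible for the demo: counts simulated HDFS trips

    TtlDescriptorCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Serve from memory while the entry is younger than the TTL; only go
    // back to the (simulated) filesystem once it has expired.
    String get(String table, long nowMillis) {
        Entry e = cache.get(table);
        if (e != null && nowMillis - e.loadedAtMillis < ttlMillis) {
            return e.descriptor;
        }
        String fresh = loadFromFs(table);
        cache.put(table, new Entry(fresh, nowMillis));
        return fresh;
    }

    private String loadFromFs(String table) { // stand-in for the HDFS read
        fsLoads++;
        return "descriptor-of-" + table;
    }

    public static void main(String[] args) {
        TtlDescriptorCache c = new TtlDescriptorCache(1000);
        c.get("t1", 0);     // miss: one simulated HDFS trip
        c.get("t1", 500);   // within TTL: served from the cache
        c.get("t1", 2000);  // expired: second simulated HDFS trip
        System.out.println(c.fsLoads); // prints 2
    }
}
```

The clock is passed in explicitly only to keep the sketch deterministic; a real implementation would use the system clock and would still fall back to the modtime check after expiry rather than unconditionally reloading.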



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12282) Ensure Cells and its implementations work with Buffers also

2014-10-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188193#comment-14188193
 ] 

ramkrishna.s.vasudevan commented on HBASE-12282:


bq. Let it be a BB backed one.
Yes. That is why, for ease of use, the attached patch creates a buffer even for 
the fake keys that we create, including KeyOnlyKV. In the read path both 
should be BB-based only.

 Ensure Cells and its implementations work with Buffers also
 ---

 Key: HBASE-12282
 URL: https://issues.apache.org/jira/browse/HBASE-12282
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12224_2.patch


 This issue can be used to brainstorm and then make the necessary changes for 
 the offheap work.  All Cell implementations deal with byte[], but if we change the 
 HFileBlocks/Readers to work purely with Buffers, the byte[] usage would 
 mean the data is always copied onheap.  Cell may need some 
 interface changes to support this.





[jira] [Assigned] (HBASE-12219) Cache more efficiently getAll() and get() in FSTableDescriptors

2014-10-29 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi reassigned HBASE-12219:
---

Assignee: Matteo Bertozzi  (was: Esteban Gutierrez)

 Cache more efficiently getAll() and get() in FSTableDescriptors
 ---

 Key: HBASE-12219
 URL: https://issues.apache.org/jira/browse/HBASE-12219
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.24, 0.99.1, 0.98.6.1
Reporter: Esteban Gutierrez
Assignee: Matteo Bertozzi
  Labels: scalability
 Attachments: HBASE-12219-v1.patch, HBASE-12219.v0.txt, list.png


 Currently, table descriptors and tables are cached once they are accessed for 
 the first time. Subsequent calls to the master only require a trip to HDFS to 
 look up the modification time in order to reload the table descriptors if they 
 changed. However, in clusters with a large number of tables or concurrent 
 clients, this can be too aggressive on HDFS and the master, causing 
 contention that slows the processing of other requests. A simple solution is 
 a TTL-based cache for FSTableDescriptors#getAll() and 
 FSTableDescriptors#TableDescriptorAndModtime() that lets the master answer 
 those calls faster, without contention and without a trip to HDFS for every 
 listtables() or getTableDescriptor() call.





[jira] [Updated] (HBASE-12219) Cache more efficiently getAll() and get() in FSTableDescriptors

2014-10-29 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-12219:

Status: Patch Available  (was: In Progress)

 Cache more efficiently getAll() and get() in FSTableDescriptors
 ---

 Key: HBASE-12219
 URL: https://issues.apache.org/jira/browse/HBASE-12219
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.6.1, 0.99.1, 0.94.24
Reporter: Esteban Gutierrez
Assignee: Matteo Bertozzi
  Labels: scalability
 Attachments: HBASE-12219-v1.patch, HBASE-12219.v0.txt, list.png


 Currently, table descriptors and tables are cached once they are accessed for 
 the first time. Subsequent calls to the master only require a trip to HDFS to 
 look up the modification time in order to reload the table descriptors if they 
 changed. However, in clusters with a large number of tables or concurrent 
 clients, this can be too aggressive on HDFS and the master, causing 
 contention that slows the processing of other requests. A simple solution is 
 a TTL-based cache for FSTableDescriptors#getAll() and 
 FSTableDescriptors#TableDescriptorAndModtime() that lets the master answer 
 those calls faster, without contention and without a trip to HDFS for every 
 listtables() or getTableDescriptor() call.





[jira] [Assigned] (HBASE-12219) Cache more efficiently getAll() and get() in FSTableDescriptors

2014-10-29 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi reassigned HBASE-12219:
---

Assignee: Esteban Gutierrez  (was: Matteo Bertozzi)

 Cache more efficiently getAll() and get() in FSTableDescriptors
 ---

 Key: HBASE-12219
 URL: https://issues.apache.org/jira/browse/HBASE-12219
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.24, 0.99.1, 0.98.6.1
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
  Labels: scalability
 Attachments: HBASE-12219-v1.patch, HBASE-12219.v0.txt, list.png


 Currently, table descriptors and tables are cached once they are accessed for 
 the first time. Subsequent calls to the master only require a trip to HDFS to 
 look up the modification time in order to reload the table descriptors if they 
 changed. However, in clusters with a large number of tables or concurrent 
 clients, this can be too aggressive on HDFS and the master, causing 
 contention that slows the processing of other requests. A simple solution is 
 a TTL-based cache for FSTableDescriptors#getAll() and 
 FSTableDescriptors#TableDescriptorAndModtime() that lets the master answer 
 those calls faster, without contention and without a trip to HDFS for every 
 listtables() or getTableDescriptor() call.





[jira] [Commented] (HBASE-11419) After increasing TTL value of a hbase table having pre-split regions and decreasing TTL value, table becomes inaccessible.

2014-10-29 Thread Prabhu Joseph (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188197#comment-14188197
 ] 

Prabhu Joseph commented on HBASE-11419:
---

HFILE: viewing the metadata of the hfile throws an error because 
dataBlockIndexReader is empty:

 hbase hfile -m -f 
/hbase/AccountHistoryMA1/5640608f0ab19ee100ca71974acd5677/d/e8be48b383e1428698736f71f85b0049

Block index size as per heapsize: 336
Exception in thread "main" java.lang.NullPointerException
at org.apache.hadoop.hbase.KeyValue.keyToString(KeyValue.java:716)
at 
org.apache.hadoop.hbase.io.hfile.AbstractHFileReader.toStringFirstKey(AbstractHFileReader.java:138)
at 
org.apache.hadoop.hbase.io.hfile.AbstractHFileReader.toString(AbstractHFileReader.java:149)
at 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.printMeta(HFilePrettyPrinter.java:318)
at 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.processFile(HFilePrettyPrinter.java:234)
at 
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:189)
at org.apache.hadoop.hbase.io.hfile.HFile.main(HFile.java:750)


 After increasing TTL value of a hbase table having pre-split regions and 
 decreasing TTL value, table becomes inaccessible.
 --

 Key: HBASE-11419
 URL: https://issues.apache.org/jira/browse/HBASE-11419
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.6
 Environment: Linux x86_64 
Reporter: Prabhu Joseph
Priority: Blocker
 Attachments: HBaseExporter.java, account.csv, hbase-site.xml

   Original Estimate: 96h
  Remaining Estimate: 96h

 After increasing and then decreasing the TTL value of an HBase table, the 
 table becomes inaccessible; scanning the table does not work.
 A scan in the hbase shell throws:
 java.lang.IllegalStateException: Block index not loaded
 at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV1.blockContainingKey(HFileReaderV1.java:181)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV1$AbstractScannerV1.seekTo(HFileReaderV1.java:426)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:226)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:131)
 at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2015)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.init(HRegion.java:3706)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1761)
 at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1753)
 at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1730)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2409)
 at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
 at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)





[jira] [Commented] (HBASE-12358) Create ByteBuffer backed Cell

2014-10-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188198#comment-14188198
 ] 

ramkrishna.s.vasudevan commented on HBASE-12358:


Maybe hasArray() would be the best option, so that the exposed APIs are not 
changed. Users writing CPs and filters should then use hasArray() to 
determine which accessor to use: getXXXArray() or getXXXBuffer().
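The hasArray() pattern described here mirrors java.nio.ByteBuffer, where hasArray() tells the caller whether an accessible backing byte[] exists. A minimal illustration with plain NIO buffers (not the HBase Cell API):

```java
import java.nio.ByteBuffer;

public class HasArrayDemo {
    // Returns which access path a consumer would take for this buffer:
    // the backing array when one is accessible, positional gets otherwise.
    static String accessPath(ByteBuffer buf) {
        return buf.hasArray() ? "array" : "buffer";
    }

    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.wrap(new byte[]{1, 2, 3});  // onheap: backed by byte[]
        ByteBuffer direct = ByteBuffer.allocateDirect(3);        // offheap: no accessible byte[]
        System.out.println(accessPath(heap));    // array
        System.out.println(accessPath(direct));  // buffer
    }
}
```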

 Create ByteBuffer backed Cell
 -

 Key: HBASE-12358
 URL: https://issues.apache.org/jira/browse/HBASE-12358
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12358.patch


 As part of HBASE-12224 and HBASE-12282 we wanted a Cell that is backed by a BB. 
 Changing the core Cell impl would not be needed, as it is used on the server 
 side only.  So we will create a BB-backed Cell and use it in the server-side 
 read path. This JIRA just creates an interface that extends Cell and adds the 
 needed API.
 getTimestamp() and getTypeByte() can still refer to the original Cell API 
 only, and getXXXOffset() and getXXXLength() can also refer to the original 
 Cell only.





[jira] [Commented] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188200#comment-14188200
 ] 

Hadoop QA commented on HBASE-12375:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12677847/HBASE-12375.patch
  against trunk revision .
  ATTACHMENT ID: 12677847

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11505//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11505//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11505//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11505//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11505//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11505//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11505//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11505//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11505//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11505//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11505//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11505//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11505//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11505//console

This message is automatically generated.

 LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
 --

 Key: HBASE-12375
 URL: https://issues.apache.org/jira/browse/HBASE-12375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Attachments: HBASE-12375.patch


 We do not restrict users from creating a table with a column family starting 
 with '_'.
 When a user creates such a table, LoadIncrementalHFiles will skip loading 
 that family's data into the table:
 {code}
 // Skip _logs, etc
 if (familyDir.getName().startsWith("_")) continue;
 {code}
 I think we should remove that check, as I do not see any _logs directory being 
 created by the bulkload tool in the output directory.
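The effect of the quoted check can be shown in isolation. This is a minimal sketch, with made-up directory names and a hypothetical helper in place of the real LoadIncrementalHFiles logic:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SkipUnderscoreDemo {
    // Mirrors the quoted check: any directory whose name starts with '_'
    // is skipped, so a legitimate family like "_cf" is silently dropped.
    static List<String> familiesToLoad(List<String> familyDirs) {
        List<String> out = new ArrayList<>();
        for (String name : familyDirs) {
            if (name.startsWith("_")) continue;  // Skip _logs, etc
            out.add(name);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(familiesToLoad(Arrays.asList("d", "_cf", "_logs")));
        // prints [d] -- the "_cf" family's data would never be bulk-loaded
    }
}
```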





[jira] [Updated] (HBASE-11683) Metrics for MOB

2014-10-29 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-11683:
-
Attachment: HBASE-11683-V7.diff

 Metrics for MOB
 ---

 Key: HBASE-11683
 URL: https://issues.apache.org/jira/browse/HBASE-11683
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 2.0.0
Reporter: Jonathan Hsieh
Assignee: Jingcheng Du
 Attachments: HBASE-11683-V2.diff, HBASE-11683-V3.diff, 
 HBASE-11683-V4.diff, HBASE-11683-V5.diff, HBASE-11683-V6.diff, 
 HBASE-11683-V7.diff, HBASE-11683.diff


 We need to make sure to capture metrics about mobs.
 Some basic ones include:
 # of mob writes
 # of mob reads
 # avg size of mob (?)
 # mob files
 # of mob compactions / sweeps





[jira] [Commented] (HBASE-11683) Metrics for MOB

2014-10-29 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188210#comment-14188210
 ] 

Jingcheng Du commented on HBASE-11683:
--

Uploaded the latest patch, V7.

 Metrics for MOB
 ---

 Key: HBASE-11683
 URL: https://issues.apache.org/jira/browse/HBASE-11683
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 2.0.0
Reporter: Jonathan Hsieh
Assignee: Jingcheng Du
 Attachments: HBASE-11683-V2.diff, HBASE-11683-V3.diff, 
 HBASE-11683-V4.diff, HBASE-11683-V5.diff, HBASE-11683-V6.diff, 
 HBASE-11683-V7.diff, HBASE-11683.diff


 We need to make sure to capture metrics about mobs.
 Some basic ones include:
 # of mob writes
 # of mob reads
 # avg size of mob (?)
 # mob files
 # of mob compactions / sweeps





[jira] [Updated] (HBASE-11819) Unit test for CoprocessorHConnection

2014-10-29 Thread Talat UYARER (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Talat UYARER updated HBASE-11819:
-
Attachment: HBASE-11819.patch

Hi [~apurtell],

I created a test following your instructions. Could you review my patch?

 Unit test for CoprocessorHConnection 
 -

 Key: HBASE-11819
 URL: https://issues.apache.org/jira/browse/HBASE-11819
 Project: HBase
  Issue Type: Test
Reporter: Andrew Purtell
Assignee: Talat UYARER
Priority: Minor
  Labels: newbie++
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-11819.patch


 Add a unit test to hbase-server that exercises CoprocessorHConnection . 





[jira] [Commented] (HBASE-12297) Support DBB usage in Bloom and HFileIndex area

2014-10-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188224#comment-14188224
 ] 

Hudson commented on HBASE-12297:


FAILURE: Integrated in HBase-1.0 #379 (See 
[https://builds.apache.org/job/HBase-1.0/379/])
HBASE-12297 Support DBB usage in Bloom and HFileIndex area. (anoop.s.john: rev 
e1d1ba564bf808278c38bcbbdc527b233b4cbfca)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/ByteBloomFilter.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultDecodingContext.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompoundBloomFilter.java


 Support DBB usage in Bloom and HFileIndex area
 --

 Key: HBASE-12297
 URL: https://issues.apache.org/jira/browse/HBASE-12297
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12297.patch








[jira] [Commented] (HBASE-12297) Support DBB usage in Bloom and HFileIndex area

2014-10-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188252#comment-14188252
 ] 

Hudson commented on HBASE-12297:


SUCCESS: Integrated in HBase-TRUNK #5715 (See 
[https://builds.apache.org/job/HBase-TRUNK/5715/])
HBASE-12297 Support DBB usage in Bloom and HFileIndex area. (anoop.s.john: rev 
cbb334035d87542ed06693bb9c8534f64360672b)
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultDecodingContext.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompoundBloomFilter.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/ByteBloomFilter.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java


 Support DBB usage in Bloom and HFileIndex area
 --

 Key: HBASE-12297
 URL: https://issues.apache.org/jira/browse/HBASE-12297
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12297.patch








[jira] [Commented] (HBASE-12219) Cache more efficiently getAll() and get() in FSTableDescriptors

2014-10-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188259#comment-14188259
 ] 

Hadoop QA commented on HBASE-12219:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12677859/HBASE-12219-v1.patch
  against trunk revision .
  ATTACHMENT ID: 12677859

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3792 checkstyle errors (more than the trunk's current 3790 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  tds.put(this.metaTableDescritor.getNameAsString(), new 
TableDescriptor(metaTableDescritor, TableState.State.ENABLED));
+  public static TableDescriptor getTableDescriptorFromFs(FileSystem fs, Path 
tableDir, boolean rewritePb)
+FSTableDescriptors htds = new 
FSTableDescriptorsTest(UTIL.getConfiguration(), fs, rootdir, false, false);
+FSTableDescriptors nonchtds = new 
FSTableDescriptorsTest(UTIL.getConfiguration(), fs, rootdir, false, false);

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.regionserver.TestRegionServerNoMaster

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11506//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11506//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11506//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11506//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11506//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11506//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11506//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11506//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11506//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11506//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11506//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11506//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11506//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11506//console

This message is automatically generated.

 Cache more efficiently getAll() and get() in FSTableDescriptors
 ---

 Key: HBASE-12219
 URL: https://issues.apache.org/jira/browse/HBASE-12219
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.24, 0.99.1, 0.98.6.1
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
  Labels: scalability
 Attachments: HBASE-12219-v1.patch, HBASE-12219.v0.txt, list.png


 Currently, table descriptors and tables are cached once they are accessed for 
 the first time. Subsequent calls to the master only require a trip to HDFS to 
 look up the modification time in order to reload the table descriptors if they 
 changed. However, in clusters with a large number of tables or concurrent 
 clients 

[jira] [Commented] (HBASE-12282) Ensure Cells and its implementations work with Buffers also

2014-10-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188279#comment-14188279
 ] 

ramkrishna.s.vasudevan commented on HBASE-12282:


Forgot to add one more point here. When we want to compare a KV in the memstore 
against a fake key backed by a BB, or against a key from a store file (which is 
backed by a BB), we need to compare across both types of KV. I had raised a 
subtask for changing the memstore to BB; otherwise we have to keep both kinds 
of comparison, as done in the patch attached here.

 Ensure Cells and its implementations work with Buffers also
 ---

 Key: HBASE-12282
 URL: https://issues.apache.org/jira/browse/HBASE-12282
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12224_2.patch


 This issue can be used to brainstorm and then make the necessary changes for 
 the offheap work.  All Cell implementations deal with byte[], but if we change the 
 HFileBlocks/Readers to work purely with Buffers, the byte[] usage would 
 mean the data is always copied onheap.  Cell may need some 
 interface changes to support this.





[jira] [Commented] (HBASE-12282) Ensure Cells and its implementations work with Buffers also

2014-10-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188288#comment-14188288
 ] 

ramkrishna.s.vasudevan commented on HBASE-12282:


{code}
+if (left.getQualifierArray() != null && right.getQualifierArray() != null) {
+  return Bytes.compareTo(left.getQualifierArray(), left.getQualifierOffset(),
+      left.getQualifierLength(), right.getQualifierArray(), right.getQualifierOffset(),
+      right.getQualifierLength());
+} else if ((left.getQualifierArray() != null && left.getQualifierBuffer() == null)
+    && right.getQualifierBuffer() != null) {
+  return ByteBufferUtils.compareTo(left.getQualifierArray(), left.getQualifierOffset(),
+      left.getQualifierLength(), right.getQualifierBuffer(), right.getQualifierOffset(),
+      right.getQualifierLength());
+} else if (left.getQualifierBuffer() != null
+    && (right.getQualifierBuffer() == null && right.getQualifierArray() != null)) {
+  return ByteBufferUtils.compareTo(left.getQualifierBuffer(), left.getQualifierOffset(),
+      left.getQualifierLength(), right.getQualifierArray(), right.getQualifierOffset(),
+      right.getQualifierLength());
+} else {
+  return ByteBufferUtils.compareTo(left.getQualifierBuffer(), left.getQualifierOffset(),
+      left.getQualifierLength(), right.getQualifierBuffer(), right.getQualifierOffset(),
+      right.getQualifierLength());
+}
{code}
So, as done here, we may have to check hasArray() on both the left and right 
Cells and choose the comparison accordingly.
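Collapsing the four-way null-check branch above onto hasArray(), as suggested, could look roughly like the following. This is a hypothetical sketch over plain ByteBuffers, not the Cell comparator itself:

```java
import java.nio.ByteBuffer;

public class QualifierCompare {
    // Hypothetical sketch: pick the comparison path from hasArray() on each
    // side instead of null-checking array and buffer getters separately.
    static int compare(ByteBuffer left, ByteBuffer right) {
        if (left.hasArray() && right.hasArray()) {
            // Both onheap: compare the backing arrays directly
            // (assumes buffers created via wrap() with zero offset).
            byte[] l = left.array(), r = right.array();
            int n = Math.min(l.length, r.length);
            for (int i = 0; i < n; i++) {
                int d = (l[i] & 0xff) - (r[i] & 0xff);
                if (d != 0) return d;
            }
            return l.length - r.length;
        }
        // At least one side is offheap: fall back to positional buffer access.
        // duplicate() avoids disturbing the callers' positions.
        return left.duplicate().compareTo(right.duplicate());
    }

    public static void main(String[] args) {
        ByteBuffer a = ByteBuffer.wrap(new byte[]{1, 2});
        ByteBuffer b = ByteBuffer.allocateDirect(2);
        b.put(new byte[]{1, 3});
        b.flip();
        System.out.println(compare(a, a.duplicate()));  // 0 (equal)
        System.out.println(compare(a, b) < 0);          // true
    }
}
```

Note that ByteBuffer.compareTo compares bytes as signed values, whereas the array path above compares unsigned; a real comparator would pick one convention, as HBase's byte[] comparisons are unsigned.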

 Ensure Cells and its implementations work with Buffers also
 ---

 Key: HBASE-12282
 URL: https://issues.apache.org/jira/browse/HBASE-12282
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12224_2.patch


 This issue can be used to brainstorm and then make the necessary changes for 
 the offheap work.  All Cell implementations deal with byte[], but if we change the 
 HFileBlocks/Readers to work purely with Buffers, the byte[] usage would 
 mean the data is always copied onheap.  Cell may need some 
 interface changes to support this.





[jira] [Commented] (HBASE-12354) Update dependencies in time for 1.0 release

2014-10-29 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188316#comment-14188316
 ] 

Nicolas Liochon commented on HBASE-12354:
-

+1, there is a +1 from Enis above as well.

 Update dependencies in time for 1.0 release
 ---

 Key: HBASE-12354
 URL: https://issues.apache.org/jira/browse/HBASE-12354
 Project: HBase
  Issue Type: Sub-task
  Components: dependencies
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 0.99.2

 Attachments: 12354.txt, 12354v2.txt


 Going through and updating egregiously old dependencies for 1.0.





[jira] [Commented] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-29 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188371#comment-14188371
 ] 

Ashish Singhi commented on HBASE-12375:
---

Thanks, Matteo, for looking into the patch.

bq. I think that by removing the check you'll fail in case of splits (but I 
haven't checked)
No, it did not fail. I manually tested the scenario below:
1. Create a table initially with 3 splits
2. Run bulkload
3. Split the table into one more region
4. Put some data into the table for that region
5. Run completebulkload

bq. the problem is that the LoadIncrementalHFiles will create a _tmp 
directory.
Yes, it does create the _tmp directory, but inside the CF directory, so it 
does not cause any problem.

We can see that from the logs generated after running the above-mentioned 
scenario:
{noformat}
2014-10-29 20:03:40,172 INFO  [LoadIncrementalHFiles-0] 
mapreduce.LoadIncrementalHFiles: Trying to load 
hfile=hdfs://10.18.40.106:9000/s4/_d/_tmp/af37ac06db0f4a8ebe9ccd848d5864b7.top 
first=90 last=90
2014-10-29 20:03:40,172 INFO  [LoadIncrementalHFiles-3] 
mapreduce.LoadIncrementalHFiles: Trying to load 
hfile=hdfs://10.18.40.106:9000/s4/_d/_tmp/af37ac06db0f4a8ebe9ccd848d5864b7.bottom
 first=5 last=67
{noformat}

bq. In theory is enough adding to this patch the rename of _tmp to something 
like .tmp
Do you still want me to do this?

bq. did you tried to run this patch with a set of files that requires splitting?
Yes, as mentioned above.

 LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
 --

 Key: HBASE-12375
 URL: https://issues.apache.org/jira/browse/HBASE-12375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Attachments: HBASE-12375.patch


 We do not restrict users from creating a table with a column family name 
 starting with '_'.
 So when a user creates such a table, LoadIncrementalHFiles will skip loading 
 that family's data into the table.
 {code}
 // Skip _logs, etc
 if (familyDir.getName().startsWith("_")) continue;
 {code}
 I think we should remove that check as I do not see any _logs directory being 
 created by the bulkload tool in the output directory.
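The effect of that check can be sketched in plain Java (an illustrative list of directory names stands in for the real FileSystem/Path walk; class and method names here are hypothetical): any family directory whose name begins with '_' is dropped, which is why a column family named, say, '_d' never gets loaded.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FamilyDirFilter {
    // Mirrors the check under discussion: any directory whose name starts
    // with '_' is skipped, so a column family named "_d" is silently
    // ignored by the bulk load.
    static List<String> loadableFamilies(List<String> familyDirs) {
        List<String> result = new ArrayList<>();
        for (String dir : familyDirs) {
            if (dir.startsWith("_")) continue; // Skip _logs, etc
            result.add(dir);
        }
        return result;
    }

    public static void main(String[] args) {
        // "_d" is a legitimate column family here, yet it is dropped
        System.out.println(loadableFamilies(Arrays.asList("cf1", "_d", "_logs")));
        // prints [cf1]
    }
}
```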





[jira] [Updated] (HBASE-10780) HFilePrettyPrinter#processFile should return immediately if file does not exists.

2014-10-29 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-10780:
--
Status: Open  (was: Patch Available)

 HFilePrettyPrinter#processFile should return immediately if file does not 
 exists.
 -

 Key: HBASE-10780
 URL: https://issues.apache.org/jira/browse/HBASE-10780
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.11
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Attachments: HBASE-10780-v2.patch, HBASE-10780-v2.patch, 
 HBASE-10780.patch


 HFilePrettyPrinter#processFile should return immediately if the file does not 
 exist, same as HLogPrettyPrinter#run:
 {code}
 if (!fs.exists(file)) {
   System.err.println("ERROR, file doesnt exist: " + file);
 }{code}
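A hedged sketch of the suggested fix shape (plain java.io.File stands in for the Hadoop FileSystem/Path pair, and the -2 return code is an assumption for illustration, not the actual patch's value): report the error and bail out right away.

```java
import java.io.File;

public class ExistsCheck {
    // Sketch of the fix shape: report the error and return a non-zero code
    // immediately instead of failing later with a confusing stack trace.
    static int processFile(File file) {
        if (!file.exists()) {
            System.err.println("ERROR, file doesnt exist: " + file);
            return -2; // illustrative error code; bail out like HLogPrettyPrinter#run
        }
        // ... pretty-print the file contents here ...
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(processFile(new File("/no/such/hfile")));
        // prints -2 (and logs the error to stderr)
    }
}
```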





[jira] [Updated] (HBASE-10780) HFilePrettyPrinter#processFile should return immediately if file does not exists.

2014-10-29 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-10780:
--
Attachment: HBASE-10780-v3.patch

Reattaching the same patch to see if there is any actual test failure due to this patch.

 HFilePrettyPrinter#processFile should return immediately if file does not 
 exists.
 -

 Key: HBASE-10780
 URL: https://issues.apache.org/jira/browse/HBASE-10780
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.11, 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Attachments: HBASE-10780-v2.patch, HBASE-10780-v2.patch, 
 HBASE-10780-v3.patch, HBASE-10780.patch


 HFilePrettyPrinter#processFile should return immediately if the file does not 
 exist, same as HLogPrettyPrinter#run:
 {code}
 if (!fs.exists(file)) {
   System.err.println("ERROR, file doesnt exist: " + file);
 }{code}





[jira] [Updated] (HBASE-10780) HFilePrettyPrinter#processFile should return immediately if file does not exists.

2014-10-29 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-10780:
--
Affects Version/s: 0.98.5
   Status: Patch Available  (was: Open)

 HFilePrettyPrinter#processFile should return immediately if file does not 
 exists.
 -

 Key: HBASE-10780
 URL: https://issues.apache.org/jira/browse/HBASE-10780
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.98.5, 0.94.11
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Attachments: HBASE-10780-v2.patch, HBASE-10780-v2.patch, 
 HBASE-10780-v3.patch, HBASE-10780.patch


 HFilePrettyPrinter#processFile should return immediately if the file does not 
 exist, same as HLogPrettyPrinter#run:
 {code}
 if (!fs.exists(file)) {
   System.err.println("ERROR, file doesnt exist: " + file);
 }{code}





[jira] [Commented] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-29 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188401#comment-14188401
 ] 

Matteo Bertozzi commented on HBASE-12375:
-

cool, thanks for verifying that. 
+1 on the patch

If you want to do that, you can also change the _tmp naming and get rid of all 
the '_' checks; otherwise the patch is already good enough for me.
{code}
protected List<LoadQueueItem> splitStoreFile(final LoadQueueItem item,
...
// We use a '_' prefix which is ignored when walking directory trees above.
final Path tmpDir = new Path(item.hfilePath.getParent(), "_tmp");
{code}

 LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
 --

 Key: HBASE-12375
 URL: https://issues.apache.org/jira/browse/HBASE-12375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Attachments: HBASE-12375.patch


 We do not restrict users from creating a table with a column family name 
 starting with '_'.
 So when a user creates such a table, LoadIncrementalHFiles will skip loading 
 that family's data into the table.
 {code}
 // Skip _logs, etc
 if (familyDir.getName().startsWith("_")) continue;
 {code}
 I think we should remove that check as I do not see any _logs directory being 
 created by the bulkload tool in the output directory.





[jira] [Commented] (HBASE-12279) Generated thrift files were generated with the wrong parameters

2014-10-29 Thread Niels Basjes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188422#comment-14188422
 ] 

Niels Basjes commented on HBASE-12279:
--

The patch I created for HBASE-12272 includes an explicit check that the 
configured version is actually used.
So as long as the pom.xml says <thrift.version>0.9.0</thrift.version>, trying 
it with 0.9.1 will fail, which is exactly the check we need here.
As far as I can tell the HBASE-12272 patch should also work quite nicely on the 
older HBase versions where thrift 0.8.0 is still needed.

 Generated thrift files were generated with the wrong parameters
 ---

 Key: HBASE-12279
 URL: https://issues.apache.org/jira/browse/HBASE-12279
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0, 0.98.0, 0.99.0
Reporter: Niels Basjes
 Fix For: 2.0.0, 0.98.8, 0.94.25, 0.99.2

 Attachments: HBASE-12279-2014-10-16-v1.patch


 It turns out that the java code generated from the thrift files has been 
 generated with the wrong settings.
 Instead of the documented 
 ([thrift|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift/package-summary.html],
  
 [thrift2|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift2/package-summary.html])
  
 {code}
 thrift -strict --gen java:hashcode 
 {code}
 the current files seem to be generated instead with
 {code}
 thrift -strict --gen java
 {code}





[jira] [Commented] (HBASE-10780) HFilePrettyPrinter#processFile should return immediately if file does not exists.

2014-10-29 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188459#comment-14188459
 ] 

Ted Yu commented on HBASE-10780:


+1

 HFilePrettyPrinter#processFile should return immediately if file does not 
 exists.
 -

 Key: HBASE-10780
 URL: https://issues.apache.org/jira/browse/HBASE-10780
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.11, 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Attachments: HBASE-10780-v2.patch, HBASE-10780-v2.patch, 
 HBASE-10780-v3.patch, HBASE-10780.patch


 HFilePrettyPrinter#processFile should return immediately if the file does not 
 exist, same as HLogPrettyPrinter#run:
 {code}
 if (!fs.exists(file)) {
   System.err.println("ERROR, file doesnt exist: " + file);
 }{code}





[jira] [Commented] (HBASE-12282) Ensure Cells and its implementations work with Buffers also

2014-10-29 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188501#comment-14188501
 ] 

Anoop Sam John commented on HBASE-12282:


+1 for hasArray()
The impl of a BB-backed Cell will throw UnsupportedOperationException when 
calling getXXXArray(), right?

 Ensure Cells and its implementations work with Buffers also
 ---

 Key: HBASE-12282
 URL: https://issues.apache.org/jira/browse/HBASE-12282
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12224_2.patch


 This issue can be used to brainstorm and then make the necessary changes for 
 the offheap work.  All impls of Cell deal with byte[], but when we change the 
 HFileBlocks/Readers to work purely with Buffers, the byte[] usage would mean 
 that the data is always copied onheap.  Cell may need some interface change to 
 implement this.





[jira] [Commented] (HBASE-10780) HFilePrettyPrinter#processFile should return immediately if file does not exists.

2014-10-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188508#comment-14188508
 ] 

Hadoop QA commented on HBASE-10780:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12677891/HBASE-10780-v3.patch
  against trunk revision .
  ATTACHMENT ID: 12677891

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11507//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11507//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11507//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11507//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11507//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11507//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11507//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11507//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11507//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11507//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11507//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11507//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11507//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11507//console

This message is automatically generated.

 HFilePrettyPrinter#processFile should return immediately if file does not 
 exists.
 -

 Key: HBASE-10780
 URL: https://issues.apache.org/jira/browse/HBASE-10780
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.94.11, 0.98.5
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor
 Attachments: HBASE-10780-v2.patch, HBASE-10780-v2.patch, 
 HBASE-10780-v3.patch, HBASE-10780.patch


 HFilePrettyPrinter#processFile should return immediately if the file does not 
 exist, same as HLogPrettyPrinter#run:
 {code}
 if (!fs.exists(file)) {
   System.err.println("ERROR, file doesnt exist: " + file);
 }{code}





[jira] [Commented] (HBASE-12324) Improve compaction speed and process for immutable short lived datasets

2014-10-29 Thread Sheetal Dolas (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188518#comment-14188518
 ] 

Sheetal Dolas commented on HBASE-12324:
---

So to consolidate all inputs and comments, here is what I see:

* All compactions (including striped compactions) already have logic to delete 
TTL-expired files when 'hbase.store.delete.expired.storefile' is set to 'true' 
(Reference: HBASE-5199 and HBASE-10141)
* Archiving old files is based on a comparison between the file timestamp and 
the server timestamp. HBASE-5199 can be further improved to store and check the 
latest TTL of any cell in the file trailer and use that for the comparison 
instead of the file timestamp. This could be an improvement independent of this 
thread.
* The proposed OnlyDeleteExpiredFilesCompactionPolicy has its own use case 
where the user does not want to compact at all and just wants to delete old 
data.
* Cases where some compaction is needed (to avoid too many HFiles) can be 
addressed by the striped compaction policy (it already has smarter logic for 
deciding which files to compact, and it already deletes TTL-expired files 
before compaction).
* Declaring a table/CF immutable - to make smarter decisions: this probably 
needs more exploration.
* The new question that arises is whether there should be multiple compaction 
policies (only delete expired files, striped, immutable, etc.) or whether it 
should all be consolidated under striped compaction with configuration 
parameters to enable/disable certain behavior.

 Improve compaction speed and process for immutable short lived datasets
 ---

 Key: HBASE-12324
 URL: https://issues.apache.org/jira/browse/HBASE-12324
 Project: HBase
  Issue Type: New Feature
  Components: Compaction
Affects Versions: 0.98.0, 0.96.0
Reporter: Sheetal Dolas
 Attachments: OnlyDeleteExpiredFilesCompactionPolicy.java


 We have seen multiple cases where HBase is used to store immutable data and 
 the data lives for a short period of time (a few days).
 On very high volume systems, major compactions become very costly and slow 
 down ingestion rates.
 In all such use cases (immutable data, high write rate, moderate read 
 rates and shorter TTL), avoiding any compactions and just deleting old data 
 brings a lot of performance benefits.
 We should have a compaction policy that can only delete/archive files older 
 than the TTL and not compact any files.
 Also attaching a patch that does so.
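The policy's core decision can be sketched in plain Java (the list of per-file max timestamps stands in for StoreFile metadata; names are illustrative, not the actual HBase compaction-policy API): a file is archivable only when its newest cell is already past the TTL, and nothing is ever rewritten.

```java
import java.util.ArrayList;
import java.util.List;

public class ExpiredFileSelector {
    // Sketch of the proposed delete-only policy: select a store file for
    // archival only when its newest cell timestamp is already past the TTL.
    static List<Long> selectExpired(List<Long> fileMaxTimestamps, long ttlMs, long now) {
        List<Long> expired = new ArrayList<>();
        for (long maxTs : fileMaxTimestamps) {
            if (now - maxTs > ttlMs) {
                expired.add(maxTs); // every cell in this file is past TTL
            }
        }
        return expired;
    }

    public static void main(String[] args) {
        // With a 50s TTL at t=100s, only the file last written at t=10s is dead
        System.out.println(selectExpired(List.of(10_000L, 95_000L), 50_000L, 100_000L));
        // prints [10000]
    }
}
```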





[jira] [Updated] (HBASE-11861) Native MOB Compaction mechanisms.

2014-10-29 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-11861:
---
Attachment: 141030-mob-compaction.pdf

Attached is a pictorial design of the proposed core mob compaction mechanism.

 Native MOB Compaction mechanisms.
 -

 Key: HBASE-11861
 URL: https://issues.apache.org/jira/browse/HBASE-11861
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 2.0.0
Reporter: Jonathan Hsieh
 Attachments: 141030-mob-compaction.pdf


 Currently, the first cut of mob will have external processes to age off old 
 mob data (the ttl cleaner), and to compact away deleted or overwritten data 
 (the sweep tool).
 From an operational point of view, having two external tools, especially one 
 that relies on MapReduce, is undesirable.  In this issue we'll tackle 
 integrating these into hbase without requiring external processes.





[jira] [Commented] (HBASE-12346) Scan's default auths behavior under Visibility labels

2014-10-29 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188540#comment-14188540
 ] 

Jerry He commented on HBASE-12346:
--

[~anoop.hbase] explained well. Thanks, guys.

 Scan's default auths behavior under Visibility labels
 -

 Key: HBASE-12346
 URL: https://issues.apache.org/jira/browse/HBASE-12346
 Project: HBase
  Issue Type: Bug
  Components: API, security
Affects Versions: 0.98.7, 0.99.1
Reporter: Jerry He
 Fix For: 0.98.8, 0.99.2

 Attachments: HBASE-12346-master-v2.patch, HBASE-12346-master.patch


 In Visibility Labels security, a set of labels (auths) is administered and 
 associated with a user.
 A user can normally only see cells during a scan whose labels are part of the 
 user's label set (auths).
 Scan uses setAuthorizations to indicate it wants to use those auths to access 
 the cells.
 Similarly in the shell:
 {code}
 scan 'table1', AUTHORIZATIONS => ['private']
 {code}
 But it is a surprise to find that setAuthorizations seems to be 'mandatory' 
 in the default visibility label security setting.  Every scan needs to call 
 setAuthorizations before it can get any cells, even when the cells are under 
 labels the requesting user holds.
 The following steps will illustrate the issue:
 Run as superuser.
 {code}
 1. create a visibility label called 'private'
 2. create 'table1'
 3. put into 'table1' data and label the data as 'private'
 4. set_auths 'user1', 'private'
 5. grant 'user1', 'RW', 'table1'
 {code}
 Run as 'user1':
 {code}
 1. scan 'table1'
 This show no cells.
 2. scan 'table1', AUTHORIZATIONS => ['private']
 This will show all the data.
 {code}
 I am not sure if this is expected by design or a bug.
 But a more reasonable, more client-application backward compatible, and less 
 surprising default behavior should probably look like this:
 A scan's default auths, if its Authorizations attribute is not set 
 explicitly, should be all the auths the requesting user is administered and 
 allowed on the server.
 If scan.setAuthorizations is used, then the server further filters the auths 
 during the scan: use the input auths minus whatever is not in the user's label 
 set on the server.





[jira] [Commented] (HBASE-12373) Provide a command to list visibility labels

2014-10-29 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188548#comment-14188548
 ] 

Jerry He commented on HBASE-12373:
--

bq. It should be allowed only for a user who is having system label auth
Yes.

I will have a patch as soon as I can. Thanks!

 Provide a command to list visibility labels
 ---

 Key: HBASE-12373
 URL: https://issues.apache.org/jira/browse/HBASE-12373
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.98.7, 0.99.1
Reporter: Jerry He
Priority: Minor

 A command to list visibility labels that are in place would be handy.
 This is also in line with many of the other hbase list commands.





[jira] [Commented] (HBASE-12313) Redo the hfile index length optimization so cell-based rather than serialized KV key

2014-10-29 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188657#comment-14188657
 ] 

Anoop Sam John commented on HBASE-12313:


bq. Put back old behavior.
+1 for latest patch.
Hope you can fix the lines longer than 100 characters on commit.
Thanks Stack

 Redo the hfile index length optimization so cell-based rather than serialized 
 KV key
 

 Key: HBASE-12313
 URL: https://issues.apache.org/jira/browse/HBASE-12313
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: stack
Assignee: stack
 Attachments: 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 12313v10.txt, 12313v5.txt, 12313v6.txt, 12313v8.txt


 Trying to remove the API that returns the 'key' of a KV serialized into a byte 
 array is thorny.
 I tried to move the first and last key serializations and the hfile 
 index entries over to be Cell-based, but the patch was turning massive.  Here 
 is a smaller patch that just redoes the optimization that tries to find 'short' 
 midpoints between the last key of the last block and the first key of the next 
 block so it is Cell-based rather than byte-array based (presuming keys are 
 serialized in a certain way).  Adds unit tests which we didn't have before.
 Also removes CellKey.  Not needed... at least not yet.  It's just a utility 
 for toString.





[jira] [Commented] (HBASE-12285) Builds are failing, possibly because of SUREFIRE-1091

2014-10-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188697#comment-14188697
 ] 

stack commented on HBASE-12285:
---

I set branch-1 and master back to DEBUG level logging (master was on INFO, 
branch-1 was on WARN-only).  branch-1 has been blue overnight, with a legit 
failure on occasion.  The thought is that HBASE-12353, which got rid of a bunch 
of spew, fixes the branch-1 issue.  Let's see.

 Builds are failing, possibly because of SUREFIRE-1091
 -

 Key: HBASE-12285
 URL: https://issues.apache.org/jira/browse/HBASE-12285
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Blocker
 Attachments: HBASE-12285_branch-1_v1.patch, 
 HBASE-12285_branch-1_v1.patch


 Our branch-1 builds on builds.apache.org have been failing in recent days 
 after we switched over to an official version of Surefire a few days back 
 (HBASE-4955). The version we're using, 2.17, is hit by a bug 
 ([SUREFIRE-1091|https://jira.codehaus.org/browse/SUREFIRE-1091]) that results 
 in an IOException, which looks like what we're seeing on Jenkins.





[jira] [Resolved] (HBASE-12293) Tests are logging too much

2014-10-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-12293.
---
Resolution: Duplicate

Resolving as a dup of the parent issue.  That's where the action is.

 Tests are logging too much
 --

 Key: HBASE-12293
 URL: https://issues.apache.org/jira/browse/HBASE-12293
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Minor

 In trying to solve HBASE-12285, it was pointed out that tests are writing too 
 much to output again. At best, this is a sloppy practice and, at worst, it 
 leaves us open to builds breaking when our test tools can't handle the flood. 
 If [~nkeywal] would be willing to give me a little bit of mentoring on how he 
 dealt with this problem a few years back, I'd be happy to add it to my plate.





[jira] [Updated] (HBASE-12354) Update dependencies in time for 1.0 release

2014-10-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12354:
--
  Resolution: Fixed
Release Note: Updated dependencies. Of note, went from hadoop 2.2 to 2.5.1.
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks [~nkeywal] and [~enis] (missed this one).

Pushed to branch-1+

 Update dependencies in time for 1.0 release
 ---

 Key: HBASE-12354
 URL: https://issues.apache.org/jira/browse/HBASE-12354
 Project: HBase
  Issue Type: Sub-task
  Components: dependencies
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 0.99.2

 Attachments: 12354.txt, 12354v2.txt


 Going through and updating egregiously old dependencies for 1.0.





[jira] [Updated] (HBASE-12313) Redo the hfile index length optimization so cell-based rather than serialized KV key

2014-10-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12313:
--
   Resolution: Fixed
Fix Version/s: 0.99.2
   2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Pushed to branch-1+.

Thanks for the reviews, mighty [~anoopsamjohn] and [~ram_krish].  I fixed the 
long lines on commit.

 Redo the hfile index length optimization so cell-based rather than serialized 
 KV key
 

 Key: HBASE-12313
 URL: https://issues.apache.org/jira/browse/HBASE-12313
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 0.99.2

 Attachments: 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 12313v10.txt, 12313v5.txt, 12313v6.txt, 12313v8.txt


 Trying to remove the API that returns the 'key' of a KV serialized into a byte 
 array is thorny.
 I tried to move the first and last key serializations and the hfile 
 index entries over to be Cell-based, but the patch was turning massive.  Here 
 is a smaller patch that just redoes the optimization that tries to find 'short' 
 midpoints between the last key of the last block and the first key of the next 
 block so it is Cell-based rather than byte-array based (presuming keys are 
 serialized in a certain way).  Adds unit tests which we didn't have before.
 Also removes CellKey.  Not needed... at least not yet.  It's just a utility 
 for toString.





[jira] [Resolved] (HBASE-12372) [WINDOWS] Enable log4j configuration in hbase.cmd

2014-10-29 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-12372.
---
Resolution: Fixed

Pushed to all 0.98+. 

 [WINDOWS] Enable log4j configuration in hbase.cmd 
 --

 Key: HBASE-12372
 URL: https://issues.apache.org/jira/browse/HBASE-12372
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Trivial
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: hbase-12372.patch








[jira] [Commented] (HBASE-12285) Builds are failing, possibly because of SUREFIRE-1091

2014-10-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188825#comment-14188825
 ] 

Hudson commented on HBASE-12285:


SUCCESS: Integrated in HBase-TRUNK #5716 (See 
[https://builds.apache.org/job/HBase-TRUNK/5716/])
HBASE-12285 Builds are failing, possibly because of SUREFIRE-1091 ; ADDENDUM 
SETTING LOG LEVEL TO DEBUG AGAIN (stack: rev 
b240b00f4f751944869511756a3b739040769acb)
* hbase-server/src/test/resources/log4j.properties


 Builds are failing, possibly because of SUREFIRE-1091
 -

 Key: HBASE-12285
 URL: https://issues.apache.org/jira/browse/HBASE-12285
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Blocker
 Attachments: HBASE-12285_branch-1_v1.patch, 
 HBASE-12285_branch-1_v1.patch


 Our branch-1 builds on builds.apache.org have been failing in recent days 
 after we switched over to an official version of Surefire a few days back 
 (HBASE-4955). The version we're using, 2.17, is hit by a bug 
 ([SUREFIRE-1091|https://jira.codehaus.org/browse/SUREFIRE-1091]) that results 
 in an IOException, which looks like what we're seeing on Jenkins.





[jira] [Created] (HBASE-12376) HBaseAdmin leaks ZK connections if failure starting watchers (ConnectionLossException)

2014-10-29 Thread stack (JIRA)
stack created HBASE-12376:
-

 Summary: HBaseAdmin leaks ZK connections if failure starting 
watchers (ConnectionLossException)
 Key: HBASE-12376
 URL: https://issues.apache.org/jira/browse/HBASE-12376
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.94.24, 0.98.7
Reporter: stack
Assignee: stack
Priority: Critical


This is a 0.98 issue that some users have been running into, mostly when running 
Canary: for whatever reason, setup of the zk connection fails, usually with a 
ConnectionLossException.  The end result is an ugly leak of zk connections.  
Created ZKWatcher instances are just left hanging around.
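The shape of a fix can be sketched as close-on-failure (a hypothetical Watcher interface below stands in for the actual ZKWatcher API, and connect() is an illustrative name): if wiring up the watcher fails partway, close it before rethrowing instead of leaking the underlying connection.

```java
public class WatcherCleanup {
    // Hypothetical stand-in for ZKWatcher: something that opens a
    // connection on start() and releases it on close().
    interface Watcher {
        void start() throws Exception;
        void close();
    }

    // If start() fails (e.g. with a ConnectionLossException), close the
    // watcher before propagating, so no connection is left hanging around.
    static void connect(Watcher watcher) throws Exception {
        try {
            watcher.start();
        } catch (Exception e) {
            watcher.close(); // release the connection on the failure path
            throw e;
        }
    }
}
```

On success the caller owns the watcher and closes it later; only the failure path closes eagerly.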





[jira] [Updated] (HBASE-12376) HBaseAdmin leaks ZK connections if failure starting watchers (ConnectionLossException)

2014-10-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12376:
--
Attachment: 0001-12376-HBaseAdmin-leaks-ZK-connections-if-failure-sta.patch

Patch for 0.98.

We don't have this issue in master/branch-1.

Opened up CatalogTracker some so I could inject failure for a unit test.

Reviews please.  Locally 0.98 TestAdmin passes.

 HBaseAdmin leaks ZK connections if failure starting watchers 
 (ConnectionLossException)
 --

 Key: HBASE-12376
 URL: https://issues.apache.org/jira/browse/HBASE-12376
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.98.7, 0.94.24
Reporter: stack
Assignee: stack
Priority: Critical
 Attachments: 
 0001-12376-HBaseAdmin-leaks-ZK-connections-if-failure-sta.patch


 This is a 0.98 issue that some users have been running into, mostly when 
 running Canary: for whatever reason, setup of the zk connection fails, usually 
 with a ConnectionLossException.  The end result is an ugly leak of zk 
 connections.  Created ZKWatcher instances are just left hanging around.





[jira] [Commented] (HBASE-12376) HBaseAdmin leaks ZK connections if failure starting watchers (ConnectionLossException)

2014-10-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188870#comment-14188870
 ] 

stack commented on HBASE-12376:
---

There is no CatalogTracker in branch-1+.

 HBaseAdmin leaks ZK connections if failure starting watchers 
 (ConnectionLossException)
 --

 Key: HBASE-12376
 URL: https://issues.apache.org/jira/browse/HBASE-12376
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.98.7, 0.94.24
Reporter: stack
Assignee: stack
Priority: Critical
 Attachments: 
 0001-12376-HBaseAdmin-leaks-ZK-connections-if-failure-sta.patch


 This is a 0.98 issue that some users have been running into, mostly while running 
 Canary: for whatever reason, setup of the zk connection fails, usually with a 
 ConnectionLossException.  The end result is an ugly leak of zk connections.  
 Created ZKWatcher instances are just left hanging around.





[jira] [Commented] (HBASE-11819) Unit test for CoprocessorHConnection

2014-10-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188894#comment-14188894
 ] 

stack commented on HBASE-11819:
---

You create an admin. In future, close it inside a try/finally rather than just 
when you are done.

+Admin admin = util.getHBaseAdmin();

+admin.createTable(htd, new byte[][] { rowSeperator1, rowSeperator2 });
+util.waitUntilAllRegionsAssigned(testTable);
+admin.close();


Instead do...

Admin admin = util.getHBaseAdmin();
try {
  admin.createTable(htd, new byte[][] { rowSeperator1, rowSeperator2 });
  util.waitUntilAllRegionsAssigned(testTable);
} finally {
  admin.close();
}

Just FYI.

Same with HTable.

See how the way you are creating the HTable is now deprecated?  Avoid doing it 
that way if you can.

Otherwise patch looks good to me.

Good for you [~apurtell]?
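The cleanup pattern recommended above can be sketched with a stand-in resource in plain Java (no HBase dependency; the `Resource` class and its methods are illustrative, not HBase API):

```java
public class CloseInFinally {
    // Stand-in for an Admin or HTable: records whether close() ran.
    static class Resource implements AutoCloseable {
        boolean closed = false;

        void doWork(boolean fail) {
            if (fail) throw new RuntimeException("work failed");
        }

        @Override
        public void close() { closed = true; }
    }

    // close() runs even when doWork() throws, so nothing leaks.
    static boolean useWithFinally(boolean fail) {
        Resource r = new Resource();
        try {
            r.doWork(fail);
        } catch (RuntimeException e) {
            // swallowed for the demo; real code would rethrow or handle
        } finally {
            r.close();
        }
        return r.closed;
    }

    public static void main(String[] args) {
        System.out.println(useWithFinally(true));   // closed despite the failure
        System.out.println(useWithFinally(false));  // closed on the happy path
    }
}
```

On Java 7 and later the same guarantee can come from try-with-resources, since `Admin`-style resources are typically `Closeable`.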

 Unit test for CoprocessorHConnection 
 -

 Key: HBASE-11819
 URL: https://issues.apache.org/jira/browse/HBASE-11819
 Project: HBase
  Issue Type: Test
Reporter: Andrew Purtell
Assignee: Talat UYARER
Priority: Minor
  Labels: newbie++
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-11819.patch


 Add a unit test to hbase-server that exercises CoprocessorHConnection. 





[jira] [Commented] (HBASE-12285) Builds are failing, possibly because of SUREFIRE-1091

2014-10-29 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188893#comment-14188893
 ] 

Dima Spivak commented on HBASE-12285:
-

A build of HBase-1.0 [just passed|https://builds.apache.org/job/HBase-1.0/380] 
(log level on branch-1 is back at DEBUG and the log-spew reduction from 
HBASE-12293 was present), so I think this is probably safe to resolve as fixed. 
What do you think, [~stack]? Since this is no longer blocking builds, we may 
want to hold off updating Surefire until 2.18 is officially released.

 Builds are failing, possibly because of SUREFIRE-1091
 -

 Key: HBASE-12285
 URL: https://issues.apache.org/jira/browse/HBASE-12285
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Blocker
 Attachments: HBASE-12285_branch-1_v1.patch, 
 HBASE-12285_branch-1_v1.patch


 Our branch-1 builds on builds.apache.org have been failing in recent days 
 after we switched over to an official version of Surefire a few days back 
 (HBASE-4955). The version we're using, 2.17, is hit by a bug 
 ([SUREFIRE-1091|https://jira.codehaus.org/browse/SUREFIRE-1091]) that results 
 in an IOException, which looks like what we're seeing on Jenkins.





[jira] [Commented] (HBASE-12354) Update dependencies in time for 1.0 release

2014-10-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188898#comment-14188898
 ] 

Hudson commented on HBASE-12354:


SUCCESS: Integrated in HBase-1.0 #380 (See 
[https://builds.apache.org/job/HBase-1.0/380/])
HBASE-12354 Update dependencies in time for 1.0 release (stack: rev 
7aed6de9c84072df7cf4ce3cff43de76b65c48a1)
* pom.xml
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSplitLogManager.java


 Update dependencies in time for 1.0 release
 ---

 Key: HBASE-12354
 URL: https://issues.apache.org/jira/browse/HBASE-12354
 Project: HBase
  Issue Type: Sub-task
  Components: dependencies
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 0.99.2

 Attachments: 12354.txt, 12354v2.txt


 Going through and updating egregiously old dependencies for 1.0.





[jira] [Commented] (HBASE-12285) Builds are failing, possibly because of SUREFIRE-1091

2014-10-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188897#comment-14188897
 ] 

Hudson commented on HBASE-12285:


SUCCESS: Integrated in HBase-1.0 #380 (See 
[https://builds.apache.org/job/HBase-1.0/380/])
HBASE-12285 Builds are failing, possibly because of SUREFIRE-1091 ; ADDENDUM 
SETTING LOG LEVEL TO DEBUG AGAIN (stack: rev 
752c5460999ed474d45a7032b3ed986e7aa6749d)
* hbase-server/src/test/resources/log4j.properties


 Builds are failing, possibly because of SUREFIRE-1091
 -

 Key: HBASE-12285
 URL: https://issues.apache.org/jira/browse/HBASE-12285
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Blocker
 Attachments: HBASE-12285_branch-1_v1.patch, 
 HBASE-12285_branch-1_v1.patch


 Our branch-1 builds on builds.apache.org have been failing in recent days 
 after we switched over to an official version of Surefire a few days back 
 (HBASE-4955). The version we're using, 2.17, is hit by a bug 
 ([SUREFIRE-1091|https://jira.codehaus.org/browse/SUREFIRE-1091]) that results 
 in an IOException, which looks like what we're seeing on Jenkins.





[jira] [Commented] (HBASE-12313) Redo the hfile index length optimization so cell-based rather than serialized KV key

2014-10-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188896#comment-14188896
 ] 

Hudson commented on HBASE-12313:


SUCCESS: Integrated in HBase-1.0 #380 (See 
[https://builds.apache.org/job/HBase-1.0/380/])
HBASE-12313 Redo the hfile index length optimization so cell-based rather than 
serialized KV key (stack: rev 6c39d36b32837bb2e114bc1919427c825d18042a)
* 
hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/PrefixTreeArrayScanner.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
* hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellUtil.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/Cell.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileWriter.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/client/ClientSideRegionScanner.java
* 
hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/PrefixTreeCell.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ReplicationProtbufUtil.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/CellKey.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestDriver.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java
* hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellComparator.java
* hbase-client/src/test/java/org/apache/hadoop/hbase/ipc/TestIPCUtil.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV3.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparator.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileEncryption.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java


 Redo the hfile index length optimization so cell-based rather than serialized 
 KV key
 

 Key: HBASE-12313
 URL: https://issues.apache.org/jira/browse/HBASE-12313
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 0.99.2

 Attachments: 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 12313v10.txt, 12313v5.txt, 12313v6.txt, 12313v8.txt


 Trying to remove API that returns the 'key' of a KV serialized into a byte 
 array is thorny.
 I tried to move over the first and last key serializations and the hfile 
 index entries to be cell but patch was turning massive.  Here is a smaller 
 patch that just redoes the optimization that tries to find 'short' midpoints 
 between last key of last block and first key of next block so it is 
 Cell-based rather than byte array based (presuming Keys serialized in a 
 certain way).  Adds unit tests which we didn't have before.
 Also remove CellKey.  Not needed... at least not yet.  It's just a utility for 
 toString.





[jira] [Commented] (HBASE-10462) Recategorize some of the client facing Public / Private interfaces

2014-10-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188903#comment-14188903
 ] 

stack commented on HBASE-10462:
---

Any movement on this one, [~enis]?  What needs to be done to finish?  Thanks.

 Recategorize some of the client facing Public / Private interfaces
 --

 Key: HBASE-10462
 URL: https://issues.apache.org/jira/browse/HBASE-10462
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Blocker
 Fix For: 2.0.0, 0.99.2

 Attachments: hbase-10462_wip1.patch


 We should go over the list of InterfaceAudience.Public interfaces one more to 
 remove those that are NOT indeed public interfaces. 
 From current trunk, we should change these from public to private: 
 {code}
 ReversedScannerCallable
 ReversedClientScanner
 ClientScanner  (note that ResultScanner is public interface, while 
 ClientScanner should not be) 
 ClientSmallScanner
 TableSnapshotScanner - We need a way of constructing this since it cannot be 
 constructed from HConnection / HTable. Maybe a basic factory. 
 {code}
 These are not marked: 
 {code}
 Registry, 
 ZooKeeperRegistry
 RpcRetryingCallerFactory
 ZooKeeperKeepAliveConnection
 AsyncProcess
 DelegatingRetryingCallable
 HConnectionKey
 MasterKeepAliveConnection
 MultiServerCallable
 {code}
 We can think about making these public interface: 
 {code}
 ScanMetrics
 {code}
 Add javadoc to: 
 {code}
 Query
 {code}
 We can add a test to find out all classes in client package to check for 
 interface mark. 
 We can extend this to brainstorm on the preferred API options. We probably 
 want the clients to use HTableInterface, instead of HTable everywhere. 
 HConnectionManager comes with bazillion methods which are not intended for 
 public use, etc. 
 Raising this as blocker to 1.0





[jira] [Commented] (HBASE-9206) namespace permissions

2014-10-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188907#comment-14188907
 ] 

stack commented on HBASE-9206:
--

[~toffer] Should we move out of 1.0? Would be good to have it in.

 namespace permissions
 -

 Key: HBASE-9206
 URL: https://issues.apache.org/jira/browse/HBASE-9206
 Project: HBase
  Issue Type: Sub-task
Reporter: Francis Liu
 Fix For: 0.99.2


 Now that we have namespaces let's address how we can give admins more 
 flexibility.
 Let's list out the privileges we'd like. Then we can map it to existing 
 privileges and see if we need more. 
 So far we have:
 1. Modify namespace descriptor (ie quota, other values)
 2. create namespace
 3. delete namespace
 4. list tables in namespace
 5. create/drop tables in a namespace
 6. All namespace's tables create
 7. All namespace's tables write
 8. All namespace's tables execute
 9. All namespace's tables delete
 10. All namespace's tables admin
 1-3 are currently restricted to global admins only, which seems acceptable to me.





[jira] [Updated] (HBASE-12355) Update maven plugins

2014-10-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12355:
--
Attachment: 12355v7.txt

What I am committing.  It does NOT include updating surefire to 2.18-SNAPSHOT 
(nor does it add in the apache SNAPSHOTs repo).  Let's do that in a different 
issue, one more directly related to the work going on over in HBASE-12285 
(Builds are failing, possibly because of SUREFIRE-1091).  Besides, the way 
things currently look, we may not need to go to SNAPSHOT at all (see 
HBASE-12285).

 Update maven plugins
 

 Key: HBASE-12355
 URL: https://issues.apache.org/jira/browse/HBASE-12355
 Project: HBase
  Issue Type: Sub-task
  Components: build
Reporter: stack
Assignee: stack
 Fix For: 0.99.2

 Attachments: 12355.txt, 12355v2.txt, 12355v3.txt, 12355v5.txt, 
 12355v6.txt, 12355v6.txt, 12355v7.txt


 Update maven plugins. Some are way old.





[jira] [Commented] (HBASE-12376) HBaseAdmin leaks ZK connections if failure starting watchers (ConnectionLossException)

2014-10-29 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188918#comment-14188918
 ] 

Sean Busbey commented on HBASE-12376:
-

{code}
+} finally {
+  // If we did not succeed but created a catalogtracker, clean it up. CT 
has a ZK instance
+  // in it and we'll leak if we don't do the 'stop'.
+  if (!succeeded && ct != null) {
+ct.stop();
+ct = null;
+  }
{code}

Wrap the ct.stop() call in a try/catch(RuntimeException) that logs a warning. 
Otherwise, if we end up in the finally block because of an exception 
(RuntimeException, ZooKeeperConnectionException, or IOException) and ct.stop() 
itself throws, we'll clobber the original exception.
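The masking problem described here can be demonstrated with plain Java (the `Tracker` class below is a stand-in for CatalogTracker, not HBase API):

```java
public class FinallyMasking {
    // Stand-in for CatalogTracker: stop() may itself throw.
    static class Tracker {
        final boolean stopThrows;
        Tracker(boolean stopThrows) { this.stopThrows = stopThrows; }
        void stop() {
            if (stopThrows) throw new RuntimeException("stop failed");
        }
    }

    // Naive cleanup: if stop() throws inside finally, it silently
    // replaces the original exception from the try body.
    static String naive(Tracker t) {
        try {
            try {
                throw new RuntimeException("original failure");
            } finally {
                t.stop();
            }
        } catch (RuntimeException e) {
            return e.getMessage();
        }
    }

    // Reviewed pattern: wrap stop() in try/catch and log a warning,
    // so the original exception still propagates.
    static String guarded(Tracker t) {
        try {
            try {
                throw new RuntimeException("original failure");
            } finally {
                try {
                    t.stop();
                } catch (RuntimeException e) {
                    System.err.println("WARN: cleanup failed: " + e.getMessage());
                }
            }
        } catch (RuntimeException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(naive(new Tracker(true)));   // stop failed
        System.out.println(guarded(new Tracker(true))); // original failure
    }
}
```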

{code}
+  /**
+   * For testing so can intercept the catalog tracker start.
+   * @param c
+   * @return Instance of CatalogTracker or exceptions if we fail
+   * @throws IOException
+   * @throws InterruptedException
+   */
{code}

Leave out the boilerplate javadoc @param and @throws tags.

{code}
+  @VisibleForTesting
+  protected CatalogTracker startCatalogTracker(final CatalogTracker ct)
+  throws IOException, InterruptedException {
{code}

nit: this can be package-private and still be used in the test it's currently 
used in.

{code}
+  CatalogTracker ct = doctoredAdmin.getCatalogTracker();
+  assertFalse(ct.isStopped());
+  ct.stop();
+  assertTrue(ct.isStopped());
{code}

use {{doctoredAdmin.cleanupCatalogTracker(ct)}} instead of {{ct.stop()}} since 
that's what the javadoc for getCatalogTracker says to do.

 HBaseAdmin leaks ZK connections if failure starting watchers 
 (ConnectionLossException)
 --

 Key: HBASE-12376
 URL: https://issues.apache.org/jira/browse/HBASE-12376
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.98.7, 0.94.24
Reporter: stack
Assignee: stack
Priority: Critical
 Attachments: 
 0001-12376-HBaseAdmin-leaks-ZK-connections-if-failure-sta.patch


 This is a 0.98 issue that some users have been running into, mostly while running 
 Canary: for whatever reason, setup of the zk connection fails, usually with a 
 ConnectionLossException.  The end result is an ugly leak of zk connections.  
 Created ZKWatcher instances are just left hanging around.





[jira] [Updated] (HBASE-12355) Update maven plugins

2014-10-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12355:
--
   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to branch-1+.

Tried 0.98, but there was a lot of diff, so I let it be.  Let the new plugins 
catch issues in master and branch-1, and we can then backport anything they find.

 Update maven plugins
 

 Key: HBASE-12355
 URL: https://issues.apache.org/jira/browse/HBASE-12355
 Project: HBase
  Issue Type: Sub-task
  Components: build
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 0.99.2

 Attachments: 12355.txt, 12355v2.txt, 12355v3.txt, 12355v5.txt, 
 12355v6.txt, 12355v6.txt, 12355v7.txt


 Update maven plugins. Some are way old.





[jira] [Updated] (HBASE-12121) maven release plugin does not allow for customized goals

2014-10-29 Thread Enoch Hsu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enoch Hsu updated HBASE-12121:
--
Fix Version/s: 1.0.0

 maven release plugin does not allow for customized goals
 

 Key: HBASE-12121
 URL: https://issues.apache.org/jira/browse/HBASE-12121
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.98.6
Reporter: Enoch Hsu
Assignee: Enoch Hsu
Priority: Minor
 Fix For: 1.0.0

 Attachments: HBASE-12121.patch


 Inside the pom, under the maven-release-plugin, there is a configuration block 
 that defines what the release plugin uses, like so:
 {code}
 <configuration>
   <!--You need this profile. It'll sign your artifacts.
       I'm not sure if this config. actually works though.
       I've been specifying -Papache-release on the command-line
   -->
   <releaseProfiles>apache-release</releaseProfiles>
   <!--This stops our running tests for each stage of maven release.
       But it builds the test jar. From SUREFIRE-172.
   -->
   <arguments>-Dmaven.test.skip.exec</arguments>
   <pomFileName>pom.xml</pomFileName>
 </configuration>
 {code}
 There is no property for goals, so if the user passes in a goal from the 
 command line it will not get executed and the default behavior will be used 
 instead.
 I propose adding the following:
 {code}
 <goals>${goals}</goals>
 {code}
 This will allow custom release goal options.
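As a sketch, the plugin section with the proposed property would look roughly like this (the surrounding `<plugin>` element is an assumption about where the snippet lands in the HBase pom):

```xml
<plugin>
  <artifactId>maven-release-plugin</artifactId>
  <configuration>
    <releaseProfiles>apache-release</releaseProfiles>
    <arguments>-Dmaven.test.skip.exec</arguments>
    <pomFileName>pom.xml</pomFileName>
    <!-- Proposed: resolve goals from a command-line property -->
    <goals>${goals}</goals>
  </configuration>
</plugin>
```

With that in place, something like `mvn release:perform -Dgoals=deploy` would run the supplied goal instead of the plugin defaults (assuming a default value for the `goals` property is also defined so ordinary builds still work).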





[jira] [Updated] (HBASE-12376) HBaseAdmin leaks ZK connections if failure starting watchers (ConnectionLossException)

2014-10-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12376:
--
Attachment: 
0001-12376-HBaseAdmin-leaks-ZK-connections-if-failure-sta.version2.patch

Thanks for the nice review, [~busbey].  This version addresses all your comments.

Regards

bq. Wrap the ct.stop() call in a try/catch(RuntimeException) that logs a 
warning. 

So, you suggest RuntimeException in case of an unexpected Exception (OOME or 
something?).  The stop does a pretty good job of catching everything else.  Is 
this a style you think we should institute throughout?

Thanks.

 HBaseAdmin leaks ZK connections if failure starting watchers 
 (ConnectionLossException)
 --

 Key: HBASE-12376
 URL: https://issues.apache.org/jira/browse/HBASE-12376
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.98.7, 0.94.24
Reporter: stack
Assignee: stack
Priority: Critical
 Attachments: 
 0001-12376-HBaseAdmin-leaks-ZK-connections-if-failure-sta.patch, 
 0001-12376-HBaseAdmin-leaks-ZK-connections-if-failure-sta.version2.patch


 This is a 0.98 issue that some users have been running into, mostly while running 
 Canary: for whatever reason, setup of the zk connection fails, usually with a 
 ConnectionLossException.  The end result is an ugly leak of zk connections.  
 Created ZKWatcher instances are just left hanging around.





[jira] [Commented] (HBASE-12313) Redo the hfile index length optimization so cell-based rather than serialized KV key

2014-10-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188952#comment-14188952
 ] 

Hudson commented on HBASE-12313:


SUCCESS: Integrated in HBase-TRUNK #5717 (See 
[https://builds.apache.org/job/HBase-TRUNK/5717/])
HBASE-12313 Redo the hfile index length optimization so cell-based rather than 
serialized KV key (stack: rev 889333a6fd854cf27b552cf25ff711f2c50f8c08)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/client/ClientSideRegionScanner.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
* 
hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/PrefixTreeCell.java
* 
hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/decode/PrefixTreeArrayScanner.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellComparator.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileWriter.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileEncryption.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestDriver.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/CellKey.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* hbase-client/src/test/java/org/apache/hadoop/hbase/ipc/TestIPCUtil.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/Cell.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparator.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV3.java
* hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellUtil.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ReplicationProtbufUtil.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java


 Redo the hfile index length optimization so cell-based rather than serialized 
 KV key
 

 Key: HBASE-12313
 URL: https://issues.apache.org/jira/browse/HBASE-12313
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 0.99.2

 Attachments: 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 12313v10.txt, 12313v5.txt, 12313v6.txt, 12313v8.txt


 Trying to remove API that returns the 'key' of a KV serialized into a byte 
 array is thorny.
 I tried to move over the first and last key serializations and the hfile 
 index entries to be cell but patch was turning massive.  Here is a smaller 
 patch that just redoes the optimization that tries to find 'short' midpoints 
 between last key of last block and first key of next block so it is 
 Cell-based rather than byte array based (presuming Keys serialized in a 
 certain way).  Adds unit tests which we didn't have before.
 Also remove CellKey.  Not needed... at least not yet.  It's just a utility for 
 toString.




[jira] [Commented] (HBASE-12354) Update dependencies in time for 1.0 release

2014-10-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188954#comment-14188954
 ] 

Hudson commented on HBASE-12354:


SUCCESS: Integrated in HBase-TRUNK #5717 (See 
[https://builds.apache.org/job/HBase-TRUNK/5717/])
HBASE-12354 Update dependencies in time for 1.0 release (stack: rev 
7cfafe401e18522cbd92a8b06fb2ea380f71ced9)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSplitLogManager.java
* pom.xml


 Update dependencies in time for 1.0 release
 ---

 Key: HBASE-12354
 URL: https://issues.apache.org/jira/browse/HBASE-12354
 Project: HBase
  Issue Type: Sub-task
  Components: dependencies
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 0.99.2

 Attachments: 12354.txt, 12354v2.txt


 Going through and updating egregiously old dependencies for 1.0.





[jira] [Commented] (HBASE-12372) [WINDOWS] Enable log4j configuration in hbase.cmd

2014-10-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188953#comment-14188953
 ] 

Hudson commented on HBASE-12372:


SUCCESS: Integrated in HBase-TRUNK #5717 (See 
[https://builds.apache.org/job/HBase-TRUNK/5717/])
HBASE-12372 [WINDOWS] Enable log4j configuration in hbase.cmd (enis: rev 
0fa43bd574f897f764749efcf0d42890aa44ff45)
* bin/hbase.cmd


 [WINDOWS] Enable log4j configuration in hbase.cmd 
 --

 Key: HBASE-12372
 URL: https://issues.apache.org/jira/browse/HBASE-12372
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Trivial
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: hbase-12372.patch








[jira] [Created] (HBASE-12377) HBaseAdmin#deleteTable fails when META region is moved around same time frame

2014-10-29 Thread Stephen Yuan Jiang (JIRA)
Stephen Yuan Jiang created HBASE-12377:
--

 Summary: HBaseAdmin#deleteTable fails when META region is moved 
around same time frame
 Key: HBASE-12377
 URL: https://issues.apache.org/jira/browse/HBASE-12377
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.4
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 2.0.0, 0.98.8, 0.99.2


This is the same issue that HBASE-10809 tried to address.  The fix in 
HBASE-10809 refetches the latest meta location in the retry loop.  However, 
there are 2 problems: (1) inside the retry loop, there is another try-catch 
block that would throw the exception before the retry can kick in; (2) it looks 
like HBaseAdmin::getFirstMetaServerForTable() always tries to get meta data 
from the meta cache, which means that if the meta cache is stale and out of 
date, retries would not solve the problem by fetching the right data.

Here is the call stack of the issue:

{noformat}
2014-10-27 
10:11:58,495|beaver.machine|INFO|18218|140065036261120|MainThread|org.apache.hadoop.hbase.NotServingRegionException:
 org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not 
online on ip-172-31-0-48.ec2.internal,60020,1414403435009
2014-10-27 10:11:58,496|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2774)
2014-10-27 10:11:58,496|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4257)
2014-10-27 10:11:58,497|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3156)
2014-10-27 10:11:58,497|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
2014-10-27 10:11:58,498|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
2014-10-27 10:11:58,498|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
2014-10-27 10:11:58,499|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
2014-10-27 10:11:58,499|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
2014-10-27 10:11:58,499|beaver.machine|INFO|18218|140065036261120|MainThread|at 
java.lang.Thread.run(Thread.java:745)
2014-10-27 10:11:58,500|beaver.machine|INFO|18218|140065036261120|MainThread|
2014-10-27 10:11:58,500|beaver.machine|INFO|18218|140065036261120|MainThread|at 
sun.reflect.GeneratedConstructorAccessor12.newInstance(Unknown Source)
2014-10-27 10:11:58,500|beaver.machine|INFO|18218|140065036261120|MainThread|at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
2014-10-27 10:11:58,501|beaver.machine|INFO|18218|140065036261120|MainThread|at 
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
2014-10-27 10:11:58,501|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
2014-10-27 10:11:58,502|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
2014-10-27 10:11:58,502|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:306)
2014-10-27 10:11:58,502|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.client.HBaseAdmin.deleteTable(HBaseAdmin.java:699)
2014-10-27 10:11:58,503|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.client.HBaseAdmin.deleteTable(HBaseAdmin.java:654)
2014-10-27 10:11:58,503|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.IntegrationTestManyRegions.tearDown(IntegrationTestManyRegions.java:99)
{noformat}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12377) HBaseAdmin#deleteTable fails when META region is moved around the same timeframe

2014-10-29 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-12377:
---
Summary: HBaseAdmin#deleteTable fails when META region is moved around the 
same timeframe  (was: HBaseAdmin#deleteTable fails when META region is moved 
around same time frame)

 HBaseAdmin#deleteTable fails when META region is moved around the same 
 timeframe
 

 Key: HBASE-12377
 URL: https://issues.apache.org/jira/browse/HBASE-12377
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.4
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 2.0.0, 0.98.8, 0.99.2


 This is the same issue that HBASE-10809 tried to address.  The fix of 
 HBASE-10809 refetches the latest meta location in the retry loop.  However, 
 there are 2 problems: (1) inside the retry loop there is another try-catch 
 block that throws the exception before the retry can kick in; (2) it looks 
 like HBaseAdmin::getFirstMetaServerForTable() always reads the meta location 
 from the meta cache, which means that if the cache is stale, retrying would 
 never fetch the right data and so would not solve the problem.
 Here is the call stack of the issue:
 {noformat}
 2014-10-27 
 10:11:58,495|beaver.machine|INFO|18218|140065036261120|MainThread|org.apache.hadoop.hbase.NotServingRegionException:
  org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is 
 not online on ip-172-31-0-48.ec2.internal,60020,1414403435009
 2014-10-27 
 10:11:58,496|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2774)
 2014-10-27 
 10:11:58,496|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4257)
 2014-10-27 
 10:11:58,497|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3156)
 2014-10-27 
 10:11:58,497|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
 2014-10-27 
 10:11:58,498|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
 2014-10-27 
 10:11:58,498|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
 2014-10-27 
 10:11:58,499|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
 2014-10-27 
 10:11:58,499|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
 2014-10-27 
 10:11:58,499|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 java.lang.Thread.run(Thread.java:745)
 2014-10-27 10:11:58,500|beaver.machine|INFO|18218|140065036261120|MainThread|
 2014-10-27 
 10:11:58,500|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 sun.reflect.GeneratedConstructorAccessor12.newInstance(Unknown Source)
 2014-10-27 
 10:11:58,500|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 2014-10-27 
 10:11:58,501|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 2014-10-27 
 10:11:58,501|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
 2014-10-27 
 10:11:58,502|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
 2014-10-27 
 10:11:58,502|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:306)
 2014-10-27 
 10:11:58,502|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 org.apache.hadoop.hbase.client.HBaseAdmin.deleteTable(HBaseAdmin.java:699)
 2014-10-27 
 10:11:58,503|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 org.apache.hadoop.hbase.client.HBaseAdmin.deleteTable(HBaseAdmin.java:654)
 2014-10-27 
 10:11:58,503|beaver.machine|INFO|18218|140065036261120|MainThread|at 
 org.apache.hadoop.hbase.IntegrationTestManyRegions.tearDown(IntegrationTestManyRegions.java:99)
 {noformat}





[jira] [Updated] (HBASE-12377) HBaseAdmin#deleteTable fails when META region is moved around the same timeframe

2014-10-29 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-12377:
---
Description: 
This is the same issue that HBASE-10809 tried to address.  The fix of 
HBASE-10809 refetches the latest meta location in the retry loop.  However, 
there are 2 problems: (1) inside the retry loop there is another try-catch 
block that throws the exception before the retry can kick in; (2) it looks 
like HBaseAdmin::getFirstMetaServerForTable() always reads the meta location 
from the meta cache, which means that if the cache is stale, retrying would 
never fetch the right data and so would not solve the problem.

Here is the call stack of the issue:

{noformat}
2014-10-27 
10:11:58,495|beaver.machine|INFO|18218|140065036261120|MainThread|org.apache.hadoop.hbase.NotServingRegionException:
 org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not 
online on ip-172-31-0-48.ec2.internal,60020,1414403435009
2014-10-27 10:11:58,496|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2774)
2014-10-27 10:11:58,496|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4257)
2014-10-27 10:11:58,497|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3156)
2014-10-27 10:11:58,497|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
2014-10-27 10:11:58,498|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
2014-10-27 10:11:58,498|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
2014-10-27 10:11:58,499|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
2014-10-27 10:11:58,499|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
2014-10-27 10:11:58,499|beaver.machine|INFO|18218|140065036261120|MainThread|at 
java.lang.Thread.run(Thread.java:745)
2014-10-27 10:11:58,500|beaver.machine|INFO|18218|140065036261120|MainThread|
2014-10-27 10:11:58,500|beaver.machine|INFO|18218|140065036261120|MainThread|at 
sun.reflect.GeneratedConstructorAccessor12.newInstance(Unknown Source)
2014-10-27 10:11:58,500|beaver.machine|INFO|18218|140065036261120|MainThread|at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
2014-10-27 10:11:58,501|beaver.machine|INFO|18218|140065036261120|MainThread|at 
java.lang.reflect.Constructor.newInstance(Constructor.java:526)
2014-10-27 10:11:58,501|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
2014-10-27 10:11:58,502|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
2014-10-27 10:11:58,502|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:306)
2014-10-27 10:11:58,502|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.client.HBaseAdmin.deleteTable(HBaseAdmin.java:699)
2014-10-27 10:11:58,503|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.client.HBaseAdmin.deleteTable(HBaseAdmin.java:654)
2014-10-27 10:11:58,503|beaver.machine|INFO|18218|140065036261120|MainThread|at 
org.apache.hadoop.hbase.IntegrationTestManyRegions.tearDown(IntegrationTestManyRegions.java:99)
{noformat}

The META region was online on RS1 when the delete table operation started; it 
was moved to RS2 during the operation, and the problem appeared.


  was:
This is the same issue that HBASE-10809 tried to address.  The fix of 
HBASE-10809 refetches the latest meta location in the retry loop.  However, 
there are 2 problems: (1) inside the retry loop there is another try-catch 
block that throws the exception before the retry can kick in; (2) it looks 
like HBaseAdmin::getFirstMetaServerForTable() always reads the meta location 
from the meta cache, which means that if the cache is stale, retrying would 
never fetch the right data and so would not solve the problem.

Here is the call stack of the issue:

{noformat}
2014-10-27 
10:11:58,495|beaver.machine|INFO|18218|140065036261120|MainThread|org.apache.hadoop.hbase.NotServingRegionException:
 org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not 
online on 

[jira] [Commented] (HBASE-9712) TestSplitLogManager still fails on occasion

2014-10-29 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188966#comment-14188966
 ] 

Dima Spivak commented on HBASE-9712:


This is hitting a lot of builds on 
[HBase-0.98|https://builds.apache.org/job/HBase-0.98/lastSuccessfulBuild/testReport/org.apache.hadoop.hbase.master/TestSplitLogManager/history/]
 but not 
[HBase-1.0|https://builds.apache.org/job/HBase-1.0/lastSuccessfulBuild/testReport/org.apache.hadoop.hbase.master/TestSplitLogManager/history/]
 or 
[HBase-TRUNK|https://builds.apache.org/job/HBase-TRUNK/lastSuccessfulBuild/testReport/org.apache.hadoop.hbase.master/TestSplitLogManager/history/].
 Mind if I pick this one up, [~stack]?

 TestSplitLogManager still fails on occasion
 ---

 Key: HBASE-9712
 URL: https://issues.apache.org/jira/browse/HBASE-9712
 Project: HBase
  Issue Type: Bug
Reporter: stack
 Attachments: 
 org.apache.hadoop.hbase.master.TestSplitLogManager-output (1).txt


 Opening this issue to keep account of failures.  It failed for me locally 
 just now.
 Failed tests:   
 testTaskResigned(org.apache.hadoop.hbase.master.TestSplitLogManager): 
 version1=2, version=2
 {code}
 durruti:hbase stack$ more 
 hbase-server/target/surefire-reports/org.apache.hadoop.hbase.master.TestSplitLogManager.txt
 ---
 Test set: org.apache.hadoop.hbase.master.TestSplitLogManager
 ---
 Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 86.697 sec 
  FAILURE!
 testTaskResigned(org.apache.hadoop.hbase.master.TestSplitLogManager)  Time 
 elapsed: 0.004 sec   FAILURE!
 java.lang.AssertionError: version1=2, version=2
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.assertTrue(Assert.java:41)
 at 
 org.apache.hadoop.hbase.master.TestSplitLogManager.testTaskResigned(TestSplitLogManager.java:387)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
 at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.junit.runners.Suite.runChild(Suite.java:127)
 at org.junit.runners.Suite.runChild(Suite.java:26)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
 at java.lang.Thread.run(Thread.java:680)
 {code}
 Let me attach the log





[jira] [Commented] (HBASE-12376) HBaseAdmin leaks ZK connections if failure starting watchers (ConnectionLossException)

2014-10-29 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188986#comment-14188986
 ] 

Sean Busbey commented on HBASE-12376:
-

{quote}
So, you say RuntimeException in case an unexpected Exception (OOME or 
something?). The stop does pretty good job at catching all other crap. This a 
style you think we should institute throughout?
{quote}

Yeah, something like that. It's a general code-hygiene thing to guard against 
exceptions in finally blocks because they hide root causes. I'd be in favor of 
fixing it where we can.

{code}
+try {
+  ct.stop();
+} catch (RuntimeException re) {
+  LOG.error("In cleanup", re);
+}
{code}

Since this is client side code, something with a stronger statement of action 
maybe something like:

{code}
  LOG.error("Failed to clean up HBase's internal catalog tracker after a failed " +
      "initialization. We may have leaked network connections to ZooKeeper; they " +
      "won't be cleaned up until the JVM exits. If you see a large number of stale " +
      "connections to ZooKeeper this is likely the cause. The following exception " +
      "details will be needed for assistance from the HBase community.", re);
{code}

Actually now that I'm reading it, that's a bit long. :) Maybe add a section to 
the ref guide's troubleshooting chapter on ZooKeeper (currently 15.11) and 
then link to it from the log message?

Otherwise, +1 LGTM (I'm presuming you'll squash before pushing)
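
The finally-block hazard described above can be shown with a minimal, hypothetical Java sketch (class and message names are illustrative): an unguarded exception thrown during cleanup in a finally block replaces the in-flight root-cause exception, while wrapping the cleanup in its own try-catch, as the patch does for ct.stop(), preserves it.

```java
public class FinallyMasking {

    // A cleanup step that itself fails.
    static void cleanup() {
        throw new RuntimeException("cleanup failed");
    }

    // Unguarded: the exception from finally supersedes the original one,
    // so the caller never sees the root cause.
    static String unguarded() {
        try {
            try {
                throw new IllegalStateException("root cause");
            } finally {
                cleanup();
            }
        } catch (RuntimeException e) {
            return e.getMessage();
        }
    }

    // Guarded: catching (and logging) the cleanup failure inside finally
    // lets the original exception propagate.
    static String guarded() {
        try {
            try {
                throw new IllegalStateException("root cause");
            } finally {
                try {
                    cleanup();
                } catch (RuntimeException re) {
                    // log and swallow so the root cause survives
                }
            }
        } catch (RuntimeException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(unguarded()); // cleanup failed
        System.out.println(guarded());   // root cause
    }
}
```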

 HBaseAdmin leaks ZK connections if failure starting watchers 
 (ConnectionLossException)
 --

 Key: HBASE-12376
 URL: https://issues.apache.org/jira/browse/HBASE-12376
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.98.7, 0.94.24
Reporter: stack
Assignee: stack
Priority: Critical
 Attachments: 
 0001-12376-HBaseAdmin-leaks-ZK-connections-if-failure-sta.patch, 
 0001-12376-HBaseAdmin-leaks-ZK-connections-if-failure-sta.version2.patch


 This is a 0.98 issue that some users have been running into, mostly when 
 running Canary: for whatever reason, setup of the zk connection fails, 
 usually with a ConnectionLossException.  The end result is an ugly leak of zk 
 connections; the ZKWatcher instances that were created are just left hanging 
 around.





[jira] [Commented] (HBASE-12372) [WINDOWS] Enable log4j configuration in hbase.cmd

2014-10-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188990#comment-14188990
 ] 

Hudson commented on HBASE-12372:


FAILURE: Integrated in HBase-1.0 #381 (See 
[https://builds.apache.org/job/HBase-1.0/381/])
HBASE-12372 [WINDOWS] Enable log4j configuration in hbase.cmd (enis: rev 
e79572fa1ceb30d337e6c3e883a8264dc1e66cfb)
* bin/hbase.cmd


 [WINDOWS] Enable log4j configuration in hbase.cmd 
 --

 Key: HBASE-12372
 URL: https://issues.apache.org/jira/browse/HBASE-12372
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Trivial
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: hbase-12372.patch








[jira] [Commented] (HBASE-12372) [WINDOWS] Enable log4j configuration in hbase.cmd

2014-10-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14188993#comment-14188993
 ] 

Hudson commented on HBASE-12372:


FAILURE: Integrated in HBase-0.98 #636 (See 
[https://builds.apache.org/job/HBase-0.98/636/])
HBASE-12372 [WINDOWS] Enable log4j configuration in hbase.cmd (enis: rev 
b6d6db90daf0eb5e1d293d51264ad70f629d8ef9)
* bin/hbase.cmd


 [WINDOWS] Enable log4j configuration in hbase.cmd 
 --

 Key: HBASE-12372
 URL: https://issues.apache.org/jira/browse/HBASE-12372
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Trivial
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: hbase-12372.patch








[jira] [Commented] (HBASE-12072) We are doing 35 x 35 retries for master operations

2014-10-29 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14189001#comment-14189001
 ] 

Enis Soztutar commented on HBASE-12072:
---

Can I get a review for this? 
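
For reviewers, the compounding this issue describes can be sketched as a hypothetical worst case (illustrative code, not HBase's): when an outer retrying caller wraps an operation that itself retries, the worst-case attempt count multiplies.

```java
public class NestedRetries {

    // Worst case when every attempt fails: each of the outer caller's
    // retries triggers a full round of inner retries.
    static int totalAttempts(int outerRetries, int innerRetries) {
        int total = 0;
        for (int i = 0; i < outerRetries; i++) {      // e.g. HBaseAdmin.executeCallable() retries
            for (int j = 0; j < innerRetries; j++) {  // e.g. StubMaker.makeStub() retries
                total++;                              // one attempt against the master
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // With the default of 35 retries at both layers:
        System.out.println(totalAttempts(35, 35)); // 1225
    }
}
```

With each inner round taking up to roughly 10 minutes of backoff sleeps, this is the multi-hour hang described below.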

 We are doing 35 x 35 retries for master operations
 --

 Key: HBASE-12072
 URL: https://issues.apache.org/jira/browse/HBASE-12072
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 0.99.2

 Attachments: 12072-v1.txt, 12072-v2.txt, hbase-12072_v1.patch


 For master requests, there are two retry mechanisms in effect. The first one 
 is from HBaseAdmin.executeCallable() 
 {code}
   private <V> V executeCallable(MasterCallable<V> callable) throws IOException {
     RpcRetryingCaller<V> caller = rpcCallerFactory.<V> newCaller();
     try {
       return caller.callWithRetries(callable);
     } finally {
       callable.close();
     }
   }
 {code}
 And inside, the other one is from StubMaker.makeStub():
 {code}
 /**
  * Create a stub against the master.  Retry if necessary.
  * @return A stub to do <code>intf</code> against the master
  * @throws MasterNotRunningException
  */
   @edu.umd.cs.findbugs.annotations.SuppressWarnings
       (value="SWL_SLEEP_WITH_LOCK_HELD")
   Object makeStub() throws MasterNotRunningException {
 {code}
 The tests will just hang for 10 min * 35 ~= 6 hours.
 {code}
 2014-09-23 16:19:05,151 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 1 of 35 
 failed; retrying after sleep of 100, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:05,253 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 2 of 35 
 failed; retrying after sleep of 200, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:05,456 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 3 of 35 
 failed; retrying after sleep of 300, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:05,759 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 4 of 35 
 failed; retrying after sleep of 500, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:06,262 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 5 of 35 
 failed; retrying after sleep of 1008, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:07,273 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 6 of 35 
 failed; retrying after sleep of 2011, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:09,286 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 7 of 35 
 failed; retrying after sleep of 4012, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:13,303 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 8 of 35 
 failed; retrying after sleep of 10033, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:23,343 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 9 of 35 
 failed; retrying after sleep of 10089, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:33,439 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 10 of 
 35 failed; retrying after sleep of 10027, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:43,473 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 11 of 
 35 failed; retrying after sleep of 10004, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:53,485 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 12 of 
 35 failed; retrying after sleep of 20160, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:20:13,656 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 13 of 
 35 failed; retrying after sleep of 20006, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:20:33,675 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: 

[jira] [Updated] (HBASE-12072) We are doing 35 x 35 retries for master operations

2014-10-29 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-12072:
--
Fix Version/s: 0.99.2
   2.0.0

 We are doing 35 x 35 retries for master operations
 --

 Key: HBASE-12072
 URL: https://issues.apache.org/jira/browse/HBASE-12072
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 0.99.2

 Attachments: 12072-v1.txt, 12072-v2.txt, hbase-12072_v1.patch


 For master requests, there are two retry mechanisms in effect. The first one 
 is from HBaseAdmin.executeCallable() 
 {code}
   private <V> V executeCallable(MasterCallable<V> callable) throws IOException {
     RpcRetryingCaller<V> caller = rpcCallerFactory.<V> newCaller();
     try {
       return caller.callWithRetries(callable);
     } finally {
       callable.close();
     }
   }
 {code}
 And inside, the other one is from StubMaker.makeStub():
 {code}
 /**
  * Create a stub against the master.  Retry if necessary.
  * @return A stub to do <code>intf</code> against the master
  * @throws MasterNotRunningException
  */
   @edu.umd.cs.findbugs.annotations.SuppressWarnings
       (value="SWL_SLEEP_WITH_LOCK_HELD")
   Object makeStub() throws MasterNotRunningException {
 {code}
 The tests will just hang for 10 min * 35 ~= 6 hours.
 {code}
 2014-09-23 16:19:05,151 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 1 of 35 
 failed; retrying after sleep of 100, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:05,253 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 2 of 35 
 failed; retrying after sleep of 200, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:05,456 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 3 of 35 
 failed; retrying after sleep of 300, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:05,759 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 4 of 35 
 failed; retrying after sleep of 500, exception=java.io.IOException: Can't get 
 master address from ZooKeeper; znode data == null
 2014-09-23 16:19:06,262 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 5 of 35 
 failed; retrying after sleep of 1008, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:07,273 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 6 of 35 
 failed; retrying after sleep of 2011, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:09,286 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 7 of 35 
 failed; retrying after sleep of 4012, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:13,303 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 8 of 35 
 failed; retrying after sleep of 10033, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:23,343 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 9 of 35 
 failed; retrying after sleep of 10089, exception=java.io.IOException: Can't 
 get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:33,439 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 10 of 
 35 failed; retrying after sleep of 10027, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:43,473 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 11 of 
 35 failed; retrying after sleep of 10004, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:19:53,485 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 12 of 
 35 failed; retrying after sleep of 20160, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:20:13,656 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 13 of 
 35 failed; retrying after sleep of 20006, exception=java.io.IOException: 
 Can't get master address from ZooKeeper; znode data == null
 2014-09-23 16:20:33,675 INFO  [main] 
 client.ConnectionManager$HConnectionImplementation: getMaster attempt 14 of 
 35 failed; 

[jira] [Commented] (HBASE-12377) HBaseAdmin#deleteTable fails when META region is moved around the same timeframe

2014-10-29 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14189017#comment-14189017
 ] 

Enis Soztutar commented on HBASE-12377:
---

I think HBASE-12072 is related since it unifies some paths so that we are using 
the regular rpc retrying mechanism instead of custom-built ones inside 
HBaseAdmin.

For this issue, the problem is that HBaseAdmin.deleteTable() does not use the 
regular scan rpc code (which handles retrying, the meta cache, etc. correctly) 
but instead reinvents that logic in a broken way.

Another issue is that all of this logic lives on the client side when it 
should be on the master side, but that is a different and much more involved 
issue.

Can we do the patch so that it uses MetaReader or MetaScanner to obtain the 
list of regions for the table in the retry loop? 

 HBaseAdmin#deleteTable fails when META region is moved around the same 
 timeframe
 

 Key: HBASE-12377
 URL: https://issues.apache.org/jira/browse/HBASE-12377
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.4
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
 Fix For: 2.0.0, 0.98.8, 0.99.2


 This is the same issue that HBASE-10809 tried to address.  The fix of 
 HBASE-10809 refetches the latest meta location in the retry loop.  However, 
 there are 2 problems: (1) inside the retry loop there is another try-catch 
 block that throws the exception before the retry can kick in; (2) it looks 
 like HBaseAdmin::getFirstMetaServerForTable() always reads the meta location 
 from the meta cache, which means that if the cache is stale, retrying would 
 never fetch the right data and so would not solve the problem.
 Here is the call stack of the issue:
 {noformat}
 2014-10-27 10:11:58,495|beaver.machine|INFO|18218|140065036261120|MainThread|org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not online on ip-172-31-0-48.ec2.internal,60020,1414403435009
 2014-10-27 10:11:58,496|beaver.machine|INFO|18218|140065036261120|MainThread|at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2774)
 2014-10-27 10:11:58,496|beaver.machine|INFO|18218|140065036261120|MainThread|at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4257)
 2014-10-27 10:11:58,497|beaver.machine|INFO|18218|140065036261120|MainThread|at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3156)
 2014-10-27 10:11:58,497|beaver.machine|INFO|18218|140065036261120|MainThread|at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
 2014-10-27 10:11:58,498|beaver.machine|INFO|18218|140065036261120|MainThread|at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
 2014-10-27 10:11:58,498|beaver.machine|INFO|18218|140065036261120|MainThread|at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
 2014-10-27 10:11:58,499|beaver.machine|INFO|18218|140065036261120|MainThread|at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
 2014-10-27 10:11:58,499|beaver.machine|INFO|18218|140065036261120|MainThread|at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
 2014-10-27 10:11:58,499|beaver.machine|INFO|18218|140065036261120|MainThread|at java.lang.Thread.run(Thread.java:745)
 2014-10-27 10:11:58,500|beaver.machine|INFO|18218|140065036261120|MainThread|
 2014-10-27 10:11:58,500|beaver.machine|INFO|18218|140065036261120|MainThread|at sun.reflect.GeneratedConstructorAccessor12.newInstance(Unknown Source)
 2014-10-27 10:11:58,500|beaver.machine|INFO|18218|140065036261120|MainThread|at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 2014-10-27 10:11:58,501|beaver.machine|INFO|18218|140065036261120|MainThread|at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 2014-10-27 10:11:58,501|beaver.machine|INFO|18218|140065036261120|MainThread|at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
 2014-10-27 10:11:58,502|beaver.machine|INFO|18218|140065036261120|MainThread|at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
 2014-10-27 10:11:58,502|beaver.machine|INFO|18218|140065036261120|MainThread|at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:306)
 2014-10-27 10:11:58,502|beaver.machine|INFO|18218|140065036261120|MainThread|at org.apache.hadoop.hbase.client.HBaseAdmin.deleteTable(HBaseAdmin.java:699)
 2014-10-27
 

[jira] [Updated] (HBASE-12312) Another couple of createTable race conditions

2014-10-29 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12312:
---
Fix Version/s: 0.98.8

I applied this to 0.98 also

 Another couple of createTable race conditions
 -

 Key: HBASE-12312
 URL: https://issues.apache.org/jira/browse/HBASE-12312
 Project: HBase
  Issue Type: Bug
Reporter: Dima Spivak
Assignee: Dima Spivak
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12312_master_v1.patch, 
 HBASE-12312_master_v2.patch, HBASE-12312_master_v3 (1).patch, 
 HBASE-12312_master_v3.patch, HBASE-12312_master_v3.patch, 
 HBASE-12312_master_v3.patch, HBASE-12312_master_v4.patch


 Found a couple more failing tests in TestAccessController and 
 TestScanEarlyTermination caused by my favorite race condition. :) Will post a 
 patch in a second.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12312) Another couple of createTable race conditions

2014-10-29 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12312:
---
Attachment: HBASE-12312-0.98.patch

What I applied to 0.98. TestScanEarlyTermination passed for me 10 out of 10 
times locally. Will watch it on Jenkins and our test rigs.

 Another couple of createTable race conditions
 -

 Key: HBASE-12312
 URL: https://issues.apache.org/jira/browse/HBASE-12312
 Project: HBase
  Issue Type: Bug
Reporter: Dima Spivak
Assignee: Dima Spivak
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12312-0.98.patch, HBASE-12312_master_v1.patch, 
 HBASE-12312_master_v2.patch, HBASE-12312_master_v3 (1).patch, 
 HBASE-12312_master_v3.patch, HBASE-12312_master_v3.patch, 
 HBASE-12312_master_v3.patch, HBASE-12312_master_v4.patch


 Found a couple more failing tests in TestAccessController and 
 TestScanEarlyTermination caused by my favorite race condition. :) Will post a 
 patch in a second.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

