[jira] [Commented] (HBASE-13885) ZK watches leaks during snapshots

2015-06-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14583011#comment-14583011
 ] 

Lars Hofhansl commented on HBASE-13885:
---

I tried doing a no-op change to the abort/ znode, but then there's a bit 
of logic needed to ignore this instead of failing the procedure.
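
For illustration, a minimal sketch (not the actual HBase snapshot code; the class and 
method names are invented here): one way to fire, and thereby clear, the lingering 
watch reported below is to create and immediately delete the per-snapshot abort node, 
which is exactly why the procedure would then need logic to ignore such a notification.

{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ClearAbortWatchSketch {
  // Creating the watched abort node fires (and thereby removes) any lingering
  // watch left on it; deleting it right away keeps ZK clean, but the snapshot
  // procedure must then recognize and ignore this "no-op" abort notification.
  static void fireLingeringAbortWatch(ZooKeeper zk, String snapshotName) throws Exception {
    String abortNode = "/hbase/online-snapshot/abort/" + snapshotName;
    zk.create(abortNode, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    zk.delete(abortNode, -1);
  }
}
{code}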

> ZK watches leaks during snapshots
> -
>
> Key: HBASE-13885
> URL: https://issues.apache.org/jira/browse/HBASE-13885
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.98.12
>Reporter: Abhishek Singh Chouhan
>Priority: Critical
>
> When taking a snapshot of a table, a watcher on 
> /hbase/online-snapshot/abort/snapshot-name is created which is never cleared 
> when the snapshot is successful. If we use snapshots to take daily backups, we 
> accumulate a lot of watches.
> Steps to reproduce -
> 1) Take a snapshot of a table - snapshot 'table_1', 'abc'
> 2) Run the following on the ZK node, or alternatively observe the ZK watches metric
>  echo "wchc" | nc localhost 2181
> /hbase/online-snapshot/abort/abc can be found.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13885) ZK watches leaks during snapshots

2015-06-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582973#comment-14582973
 ] 

Lars Hofhansl commented on HBASE-13885:
---

So apparently before ZOOKEEPER-442 we must trigger a watch in order to remove 
it. Otherwise the watches will linger and accumulate.

[~jesse_yates], [~mbertozzi], any ideas?


> ZK watches leaks during snapshots
> -
>
> Key: HBASE-13885
> URL: https://issues.apache.org/jira/browse/HBASE-13885
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.98.12
>Reporter: Abhishek Singh Chouhan
>Priority: Critical
>
> When taking a snapshot of a table, a watcher on 
> /hbase/online-snapshot/abort/snapshot-name is created which is never cleared 
> when the snapshot is successful. If we use snapshots to take daily backups, we 
> accumulate a lot of watches.
> Steps to reproduce -
> 1) Take a snapshot of a table - snapshot 'table_1', 'abc'
> 2) Run the following on the ZK node, or alternatively observe the ZK watches metric
>  echo "wchc" | nc localhost 2181
> /hbase/online-snapshot/abort/abc can be found.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582945#comment-14582945
 ] 

Hudson commented on HBASE-13892:


FAILURE: Integrated in HBase-TRUNK #6565 (See 
[https://builds.apache.org/job/HBase-TRUNK/6565/])
HBASE-13892 NPE in ClientScanner on null results array (apurtell: rev 
8cef99e5062d889a748c8442595a0e0644e11458)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java


> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.
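
For illustration, a minimal sketch of the kind of null/empty guard the stack trace 
above points at (an assumed shape only; the actual fix is the attached 
HBASE-13892.patch and the ClientScanner commits referenced in this thread):

{code}
import org.apache.hadoop.hbase.client.Result;

class NullResultsGuardSketch {
  // When every row of an RPC response was filtered out, the results array can be
  // null or empty; returning an empty array lets the caller keep scanning instead
  // of dereferencing null in getResultsToAddToCache().
  static Result[] toAddToCache(Result[] resultsFromServer) {
    if (resultsFromServer == null || resultsFromServer.length == 0) {
      return new Result[0];
    }
    return resultsFromServer;
  }
}
{code}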



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582925#comment-14582925
 ] 

Hudson commented on HBASE-13892:


SUCCESS: Integrated in HBase-1.2 #147 (See 
[https://builds.apache.org/job/HBase-1.2/147/])
HBASE-13892 NPE in ClientScanner on null results array (apurtell: rev 
e78572a985fcf76b049b22bf7bbc7734c6601077)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java


> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13885) ZK watches leaks during snapshots

2015-06-11 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-13885:
--
Priority: Critical  (was: Major)

> ZK watches leaks during snapshots
> -
>
> Key: HBASE-13885
> URL: https://issues.apache.org/jira/browse/HBASE-13885
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.98.12
>Reporter: Abhishek Singh Chouhan
>Priority: Critical
>
> When taking a snapshot of a table, a watcher on 
> /hbase/online-snapshot/abort/snapshot-name is created which is never cleared 
> when the snapshot is successful. If we use snapshots to take daily backups, we 
> accumulate a lot of watches.
> Steps to reproduce -
> 1) Take a snapshot of a table - snapshot 'table_1', 'abc'
> 2) Run the following on the ZK node, or alternatively observe the ZK watches metric
>  echo "wchc" | nc localhost 2181
> /hbase/online-snapshot/abort/abc can be found.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13885) ZK watches leaks during snapshots

2015-06-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582916#comment-14582916
 ] 

Lars Hofhansl commented on HBASE-13885:
---

Making critical as it will render the cluster unusable over time.

> ZK watches leaks during snapshots
> -
>
> Key: HBASE-13885
> URL: https://issues.apache.org/jira/browse/HBASE-13885
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.98.12
>Reporter: Abhishek Singh Chouhan
>Priority: Critical
>
> When taking a snapshot of a table, a watcher on 
> /hbase/online-snapshot/abort/snapshot-name is created which is never cleared 
> when the snapshot is successful. If we use snapshots to take daily backups, we 
> accumulate a lot of watches.
> Steps to reproduce -
> 1) Take a snapshot of a table - snapshot 'table_1', 'abc'
> 2) Run the following on the ZK node, or alternatively observe the ZK watches metric
>  echo "wchc" | nc localhost 2181
> /hbase/online-snapshot/abort/abc can be found.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582911#comment-14582911
 ] 

Hudson commented on HBASE-13892:


SUCCESS: Integrated in HBase-1.1 #538 (See 
[https://builds.apache.org/job/HBase-1.1/538/])
HBASE-13892 NPE in ClientScanner on null results array (apurtell: rev 
05cef0bbdd87822b0862726c2d946588fe6e1698)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java


> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13605) RegionStates should not keep its list of dead servers

2015-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582910#comment-14582910
 ] 

Hadoop QA commented on HBASE-13605:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12739154/hbase-13605_v3-branch-1.1.patch
  against branch-1.1 branch at commit 9d3422ed16004da1b0f9a874a98bd140b46b7a6f.
  ATTACHMENT ID: 12739154

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3815 checkstyle errors (more than the master's current 3813 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14386//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14386//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14386//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14386//console

This message is automatically generated.

> RegionStates should not keep its list of dead servers
> -
>
> Key: HBASE-13605
> URL: https://issues.apache.org/jira/browse/HBASE-13605
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 2.0.0, 1.0.2, 1.1.1
>
> Attachments: hbase-13605_v1.patch, hbase-13605_v3-branch-1.1.patch
>
>
> As mentioned in 
> https://issues.apache.org/jira/browse/HBASE-9514?focusedCommentId=13769761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13769761
>  and HBASE-12844 we should have only 1 source of cluster membership. 
> The list of dead servers, and RegionStates doing its own liveness check 
> (ServerManager.isServerReachable()), has caused an assignment problem again in 
> a test cluster where the region states "think" that the server is dead and 
> that SSH will handle the region assignment. However, the RS is not dead at all, 
> living happily, and never gets a zk expiry or YouAreDeadException or anything. 
> This leaves the regions unassigned, in OFFLINE state. 
> master assigning the region:
> {code}
> 15-04-20 09:02:25,780 DEBUG [AM.ZK.Worker-pool3-t330] master.RegionStates: 
> Onlined 77dddcd50c22e56bfff133c0e1f9165b on 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268 {ENCODED => 
> 77dddcd50c
> {code}
> Master then disabled the table, and unassigned the region:
> {code}
> 2015-04-20 09:02:27,158 WARN  [ProcedureExecutorThread-1] 
> zookeeper.ZKTableStateManager: Moving table loadtest_d1 state from DISABLING 
> to DISABLING
>  Starting unassign of 
> loadtest_d1,,1429520544378.77dddcd50c22e56bfff133c0e1f9165b. (offlining), 
> current state: {77dddcd50c22e56bfff133c0e1f9165b state=OPEN, 
> ts=1429520545780,   
> server=os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268}
> bleProcedure$BulkDisabler-0] master.AssignmentManager: Sent CLOSE to 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268 for region 
> loadtest_d1,,1429520544378.77dddcd50c22e56bfff133c0e1f9165b.
> 2015-04-20 09:02:27,414 INFO  [AM.ZK.Worker-pool3-t316] master.RegionStates: 
> Offlined 77dddcd50c22e56bfff133c0e1f9165b from 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268
> {code}
> On table re-enable, AM does not assign the region: 
> {code}
> 2015-04-20 09:02:30,415 INFO  [ProcedureExecutorThread-3] 
> balancer.BaseLoadBalancer: Reassigned 25 regions. 25 retained the pre-restart 
> assignment.·
> 2015-04-20 09:02:30,415 INFO  [ProcedureExecutorThread-3] 
> 

[jira] [Commented] (HBASE-13877) Interrupt to flush from TableFlushProcedure causes dataloss in ITBLL

2015-06-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582909#comment-14582909
 ] 

stack commented on HBASE-13877:
---

On patch, in this section:

699 }
700 if (rows.addAndGet(1) < MISSING_ROWS_TO_LOG) {
701   context.getCounter(FOUND_GROUP_KEY, keyStr + "_in_"
702   + context.getInputSplit().toString()).increment(1);

When YARN dumps the result, this nice addition gets cut off. How are you 
looking at it, [~enis]?

Patch is great. +1 after you get hadoopqa to pass.

> Interrupt to flush from TableFlushProcedure causes dataloss in ITBLL
> 
>
> Key: HBASE-13877
> URL: https://issues.apache.org/jira/browse/HBASE-13877
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: hbase-13877_v1.patch, hbase-13877_v2-branch-1.1.patch, 
> hbase-13877_v3-branch-1.1.patch
>
>
> ITBLL with 1.25B rows failed for me (and for Stack, as reported in 
> https://issues.apache.org/jira/browse/HBASE-13811?focusedCommentId=14577834&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14577834)
>  
> HBASE-13811 and HBASE-13853 fixed an issue with WAL edit filtering. 
> The root cause this time seems to be different. It is due to the procedure-based 
> flush interrupting the flush request when the procedure is cancelled because of 
> an exception elsewhere. This leaves the memstore snapshot intact without 
> aborting the server. The next flush then flushes the previous memstore with 
> the current seqId (as opposed to the seqId from the memstore snapshot). This 
> creates an hfile with a larger seqId than its contents warrant. The previous 
> behavior in 0.98 and 1.0 (I believe) is that an interruption / exception after 
> flush prepare will cause an RS abort.
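
As a small illustration of the seqId invariant described above (all names here are 
hypothetical, not HBase internals):

{code}
class FlushSeqIdSketch {
  long snapshotMaxSeqId;   // highest seqId among the cells captured in the memstore snapshot
  long currentRegionSeqId; // keeps advancing while the interrupted snapshot waits to be flushed

  // The dataloss scenario amounts to stamping the flushed hfile with
  // currentRegionSeqId, which is larger than any cell the file actually contains;
  // the snapshot's own max seqId is the value the file must carry.
  long seqIdForFlushedFile() {
    return snapshotMaxSeqId;
  }
}
{code}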



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582902#comment-14582902
 ] 

Hudson commented on HBASE-13892:


FAILURE: Integrated in HBase-1.0 #959 (See 
[https://builds.apache.org/job/HBase-1.0/959/])
HBASE-13892 NPE in ClientScanner on null results array (apurtell: rev 
71baa89ec45393ff5459567b069be6085ea16b8e)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13755) Provide single super user check implementation

2015-06-11 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-13755:

   Resolution: Fixed
Fix Version/s: (was: 1.2.0)
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Fixed AC-related merge issues and pushed to master. Looks like branch-1 has 
issues with VC- and QOS-related classes. Now that you have additional 
privileges, I guess you can push things yourself :). Once done, please don't 
forget to update the fix version.

> Provide single super user check implementation
> --
>
> Key: HBASE-13755
> URL: https://issues.apache.org/jira/browse/HBASE-13755
> Project: HBase
>  Issue Type: Improvement
>Reporter: Anoop Sam John
>Assignee: Mikhail Antonov
> Fix For: 2.0.0
>
> Attachments: HBASE-13755-v1.patch, HBASE-13755-v2.patch, 
> HBASE-13755-v3.patch, HBASE-13755-v3.patch, HBASE-13755-v3.patch, 
> HBASE-13755-v3.patch, HBASE-13755-v3.patch, HBASE-13755-v4.patch, 
> HBASE-13755-v5.patch, HBASE-13755-v6.patch
>
>
> Followup for HBASE-13375.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13876) Improving performance of HeapMemoryManager

2015-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582893#comment-14582893
 ] 

Hadoop QA commented on HBASE-13876:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12739151/HBASE-13876-v6.patch
  against master branch at commit 9d3422ed16004da1b0f9a874a98bd140b46b7a6f.
  ATTACHMENT ID: 12739151

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14385//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14385//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14385//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14385//console

This message is automatically generated.

> Improving performance of HeapMemoryManager
> --
>
> Key: HBASE-13876
> URL: https://issues.apache.org/jira/browse/HBASE-13876
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase, regionserver
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 1.1.1
>Reporter: Abhilash
>Assignee: Abhilash
>Priority: Minor
> Attachments: HBASE-13876-v2.patch, HBASE-13876-v3.patch, 
> HBASE-13876-v4.patch, HBASE-13876-v5.patch, HBASE-13876-v5.patch, 
> HBASE-13876-v6.patch, HBASE-13876.patch
>
>
> I am trying to improve the performance of DefaultHeapMemoryTuner by 
> introducing some more checks. The current checks under which the 
> DefaultHeapMemoryTuner acts are met very rarely, so I am trying to weaken 
> these checks to improve its performance.
> Check the current memstore size and the current block cache size: if, say, we 
> are using less than 50% of the currently available block cache size, we say the 
> block cache is sufficient, and the same for the memstore. This check will be 
> very effective when the server is either load heavy or write heavy. The earlier 
> version just waited for the number of evictions / number of flushes to be zero, 
> which is very rare.
> Otherwise, based on the percent change in the number of cache misses and the 
> number of flushes, we increase / decrease the memory provided for caching / 
> memstore. After doing so, on the next call of HeapMemoryTuner we verify that 
> the last change has indeed decreased the number of evictions / flushes, 
> whichever it was expected to decrease. We also check that it does not make the 
> other (evictions / flushes) increase much. I am doing this analysis by 
> comparing the percent change (which is basically nothing but a normalized 
> derivative) of the number of evictions and the number of flushes during the 
> last two periods. The main motive for doing this was that if we have random 
> reads then we will have a lot of cache misses, but even after increasing the 
> block cache we won't be able to decrease the number of cache misses; we will 
> revert back, and eventually we will not waste memory on the block cache. This 
> will also help us ignore random short-term spikes in reads / writes. I have 
> also tried to take care not to tune memory if we do not have enough hints, as 
> unnecessary tuning may slow down the system.
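
To make the heuristic above concrete, a minimal self-contained sketch (the thresholds, 
names, and structure are illustrative assumptions, not the DefaultHeapMemoryTuner code):

{code}
public class HeapTunerSketch {
  /** Positive result shifts heap toward the block cache, negative toward the memstore. */
  static double suggestStep(double memstoreUsedFraction, double blockCacheUsedFraction,
      double evictionChangePct, double flushChangePct) {
    final double STEP = 0.02; // 2% of heap per tuning round
    if (memstoreUsedFraction < 0.5 && blockCacheUsedFraction >= 0.5) {
      return STEP;   // memstore under-used: grow the block cache
    }
    if (blockCacheUsedFraction < 0.5 && memstoreUsedFraction >= 0.5) {
      return -STEP;  // block cache under-used: grow the memstore
    }
    // Otherwise compare the normalized change (a rough derivative) of evictions vs.
    // flushes over the last two periods and favor whichever side is under more pressure.
    if (evictionChangePct > flushChangePct) {
      return STEP;
    } else if (flushChangePct > evictionChangePct) {
      return -STEP;
    }
    return 0.0;      // no clear signal: avoid unnecessary tuning
  }

  public static void main(String[] args) {
    // Memstore mostly idle, block cache well used: expect +0.02 (grow the block cache).
    System.out.println(suggestStep(0.3, 0.9, 0.10, 0.02));
  }
}
{code}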



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582892#comment-14582892
 ] 

Hudson commented on HBASE-13892:


FAILURE: Integrated in HBase-0.98 #1027 (See 
[https://builds.apache.org/job/HBase-0.98/1027/])
HBASE-13892 NPE in ClientScanner on null results array (apurtell: rev 
3f31327135b784eceeafb4d417a19d441cfbd712)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582879#comment-14582879
 ] 

Hadoop QA commented on HBASE-13892:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12739148/HBASE-13892.patch
  against master branch at commit 9d3422ed16004da1b0f9a874a98bd140b46b7a6f.
  ATTACHMENT ID: 12739148

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14384//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14384//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14384//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14384//console

This message is automatically generated.

> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13879) Add hbase.hstore.compactionThreshold to HConstants

2015-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582860#comment-14582860
 ] 

Hadoop QA commented on HBASE-13879:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12739145/HBASE-13879.1.patch
  against master branch at commit 9d3422ed16004da1b0f9a874a98bd140b46b7a6f.
  ATTACHMENT ID: 12739145

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 127 
new or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14382//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14382//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14382//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14382//console

This message is automatically generated.

> Add hbase.hstore.compactionThreshold to HConstants
> --
>
> Key: HBASE-13879
> URL: https://issues.apache.org/jira/browse/HBASE-13879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Gabor Liptak
>Priority: Minor
> Attachments: HBASE-13879.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13877) Interrupt to flush from TableFlushProcedure causes dataloss in ITBLL

2015-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582858#comment-14582858
 ] 

Hadoop QA commented on HBASE-13877:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12739146/hbase-13877_v3-branch-1.1.patch
  against branch-1.1 branch at commit 9d3422ed16004da1b0f9a874a98bd140b46b7a6f.
  ATTACHMENT ID: 12739146

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.regionserver.wal.TestSecureWALReplay
  org.apache.hadoop.hbase.regionserver.wal.TestWALReplay
  
org.apache.hadoop.hbase.regionserver.wal.TestWALReplayCompressed

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14383//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14383//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14383//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14383//console

This message is automatically generated.

> Interrupt to flush from TableFlushProcedure causes dataloss in ITBLL
> 
>
> Key: HBASE-13877
> URL: https://issues.apache.org/jira/browse/HBASE-13877
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: hbase-13877_v1.patch, hbase-13877_v2-branch-1.1.patch, 
> hbase-13877_v3-branch-1.1.patch
>
>
> ITBLL with 1.25B rows failed for me (and for Stack, as reported in 
> https://issues.apache.org/jira/browse/HBASE-13811?focusedCommentId=14577834&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14577834)
>  
> HBASE-13811 and HBASE-13853 fixed an issue with WAL edit filtering. 
> The root cause this time seems to be different. It is due to the procedure-based 
> flush interrupting the flush request when the procedure is cancelled because of 
> an exception elsewhere. This leaves the memstore snapshot intact without 
> aborting the server. The next flush then flushes the previous memstore with 
> the current seqId (as opposed to the seqId from the memstore snapshot). This 
> creates an hfile with a larger seqId than its contents warrant. The previous 
> behavior in 0.98 and 1.0 (I believe) is that an interruption / exception after 
> flush prepare will cause an RS abort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13892:
---
   Resolution: Fixed
Fix Version/s: 1.0.2
   0.98.14
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to all branches >= 0.98: the new test plus the fix to 1.1+, and only the 
new test to 0.98 and 1.0. The new test passes on all branches. 

I amended the tests to use try-with-resources (for 0.98, try-finally) to clean 
up resources as Ted suggested.

Happy to help with the branch porting mechanics [~elserj], thanks for finding 
and fixing this.

> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13605) RegionStates should not keep its list of dead servers

2015-06-11 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-13605:
--
Attachment: hbase-13605_v3-branch-1.1.patch

We have discovered an issue with the v1 patch which surfaces after applying the 
patch. It is due to this logic in the AM: 
{code}
  for (ServerName serverName: deadServers) {
if (!serverManager.isServerDead(serverName)) {
  serverManager.expireServer(serverName); // Let SSH do region re-assign
}
  }
{code}

Notice that we are expiring the server IF it is NOT dead. Seems weird, right? I 
assume this was added to avoid triggering SSH twice. 

The v1 patch changes {{serverManager.isServerDead}} so that if a new server is 
registered in the online servers, the old server IS considered dead: 

{code}
  public synchronized boolean isServerDead(ServerName serverName) {
if (serverName == null || deadservers.isDeadServer(serverName)
|| queuedDeadServers.contains(serverName)
|| requeuedDeadServers.containsKey(serverName)) {
  return true;
}

// we are not acquiring the lock
    ServerName onlineServer = findServerWithSameHostnamePortWithLock(serverName);
    if (onlineServer != null && serverName.getStartcode() < onlineServer.getStartcode()) {
  return true;
}
{code}

In one of our tests, that is exactly what happened. The RS is registered with a 
new identifier (thus onlineServers contains the new definition), and because of 
this SSH for the old guy was never called. 

The v3 patch fixes this condition by breaking isServerDead() into two parts: old 
callers rely on isServerInDeadList(), while the assignment caller relies on the 
new semantics.

We have been running tests, including ITBLL, ITMTTR, etc., with the v1 patch; it 
seems stable enough. The v3 patch should fix the remaining issues. 
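
For illustration only, a rough sketch of the split described above; isServerInDeadList() 
and the fields are taken from the comment and snippet above, but the bodies are a 
guessed shape, not the actual v3 patch:

{code}
  // Old callers: membership in the dead-server bookkeeping only.
  public synchronized boolean isServerInDeadList(ServerName serverName) {
    return serverName == null
        || deadservers.isDeadServer(serverName)
        || queuedDeadServers.contains(serverName)
        || requeuedDeadServers.containsKey(serverName);
  }

  // Assignment caller: additionally treat a server as dead when a newer start code
  // for the same host:port has registered, so SSH is triggered for the old instance.
  public synchronized boolean isServerDead(ServerName serverName) {
    if (isServerInDeadList(serverName)) {
      return true;
    }
    ServerName onlineServer = findServerWithSameHostnamePortWithLock(serverName);
    return onlineServer != null
        && serverName.getStartcode() < onlineServer.getStartcode();
  }
{code}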

 



> RegionStates should not keep its list of dead servers
> -
>
> Key: HBASE-13605
> URL: https://issues.apache.org/jira/browse/HBASE-13605
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 2.0.0, 1.0.2, 1.1.1
>
> Attachments: hbase-13605_v1.patch, hbase-13605_v3-branch-1.1.patch
>
>
> As mentioned in 
> https://issues.apache.org/jira/browse/HBASE-9514?focusedCommentId=13769761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13769761
>  and HBASE-12844 we should have only 1 source of cluster membership. 
> The list of dead servers, and RegionStates doing its own liveness check 
> (ServerManager.isServerReachable()), has caused an assignment problem again in 
> a test cluster where the region states "think" that the server is dead and 
> that SSH will handle the region assignment. However, the RS is not dead at all, 
> living happily, and never gets a zk expiry or YouAreDeadException or anything. 
> This leaves the regions unassigned, in OFFLINE state. 
> master assigning the region:
> {code}
> 15-04-20 09:02:25,780 DEBUG [AM.ZK.Worker-pool3-t330] master.RegionStates: 
> Onlined 77dddcd50c22e56bfff133c0e1f9165b on 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268 {ENCODED => 
> 77dddcd50c
> {code}
> Master then disabled the table, and unassigned the region:
> {code}
> 2015-04-20 09:02:27,158 WARN  [ProcedureExecutorThread-1] 
> zookeeper.ZKTableStateManager: Moving table loadtest_d1 state from DISABLING 
> to DISABLING
>  Starting unassign of 
> loadtest_d1,,1429520544378.77dddcd50c22e56bfff133c0e1f9165b. (offlining), 
> current state: {77dddcd50c22e56bfff133c0e1f9165b state=OPEN, 
> ts=1429520545780,   
> server=os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268}
> bleProcedure$BulkDisabler-0] master.AssignmentManager: Sent CLOSE to 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268 for region 
> loadtest_d1,,1429520544378.77dddcd50c22e56bfff133c0e1f9165b.
> 2015-04-20 09:02:27,414 INFO  [AM.ZK.Worker-pool3-t316] master.RegionStates: 
> Offlined 77dddcd50c22e56bfff133c0e1f9165b from 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268
> {code}
> On table re-enable, AM does not assign the region: 
> {code}
> 2015-04-20 09:02:30,415 INFO  [ProcedureExecutorThread-3] 
> balancer.BaseLoadBalancer: Reassigned 25 regions. 25 retained the pre-restart 
> assignment.·
> 2015-04-20 09:02:30,415 INFO  [ProcedureExecutorThread-3] 
> procedure.EnableTableProcedure: Bulk assigning 25 region(s) across 5 
> server(s), retainAssignment=true
> l,16000,1429515659726-GeneralBulkAssigner-4] master.RegionStates: Couldn't 
> reach online server 
> os-amb-r6-us-1429512014-hbase4-6.novalocal,16020,1429520535268
> l,16000,1429515659726-GeneralBulkAssigner-4] master.AssignmentManager: 
> Updating the state to OFFLINE to allow

[jira] [Commented] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582790#comment-14582790
 ] 

Ted Yu commented on HBASE-13892:


Nothing else.

> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13891) AM should handle RegionServerStoppedException during assignment

2015-06-11 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13891:
-
Attachment: 13891.patch

Something like this?

> AM should handle RegionServerStoppedException during assignment
> ---
>
> Key: HBASE-13891
> URL: https://issues.apache.org/jira/browse/HBASE-13891
> Project: HBase
>  Issue Type: Bug
>  Components: master, Region Assignment
>Affects Versions: 1.1.0.1
>Reporter: Nick Dimiduk
> Attachments: 13891.patch
>
>
> I noticed the following in the master logs
> {noformat}
> 2015-06-11 11:04:55,278 WARN  [AM.ZK.Worker-pool2-t337] 
> master.AssignmentManager: Failed assignment of 
> SYSTEM.SEQUENCE,\x8E\x00\x00\x00,1434010321127.d2be67cf43d6bd600c7f461701ca908f.
>  to ip-172-31-32-232.ec2.internal,16020,1434020633773, trying to assign 
> elsewhere instead; try=1 of 10
> org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: 
> org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 
> ip-172-31-32-232.ec2.internal,16020,1434020633773 not running, aborting
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:980)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1382)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22117)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
>   at sun.reflect.GeneratedConstructorAccessor26.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:322)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:752)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2136)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1590)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1568)
>   at 
> org.apache.hadoop.hbase.master.handler.ClosedRegionHandler.process(ClosedRegionHandler.java:106)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.handleRegion(AssignmentManager.java:1063)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager$6.run(AssignmentManager.java:1511)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager$3.run(AssignmentManager.java:1295)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException):
>  org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 
> ip-172-31-32-232.ec2.internal,16020,1434020633773 not running, aborting
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:980)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1382)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22117)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1206)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChan

[jira] [Commented] (HBASE-13876) Improving performance of HeapMemoryManager

2015-06-11 Thread Abhilash (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582778#comment-14582778
 ] 

Abhilash commented on HBASE-13876:
--

Using stats from a few past lookup periods (configurable) to decide the tuner step.
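
For illustration, a minimal sketch of what "stats from a few past lookup periods" 
could look like (the class and method names are invented, not the patch itself):

{code}
import java.util.ArrayDeque;
import java.util.Deque;

class RollingTunerStatsSketch {
  private final int lookupPeriods;  // configurable window size, as described above
  private final Deque<Long> recentFlushCounts = new ArrayDeque<>();

  RollingTunerStatsSketch(int lookupPeriods) {
    this.lookupPeriods = lookupPeriods;
  }

  /** Record this period's flush count and return the average over the window. */
  double recordAndAverage(long flushesThisPeriod) {
    recentFlushCounts.addLast(flushesThisPeriod);
    if (recentFlushCounts.size() > lookupPeriods) {
      recentFlushCounts.removeFirst();
    }
    return recentFlushCounts.stream().mapToLong(Long::longValue).average().orElse(0);
  }
}
{code}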

> Improving performance of HeapMemoryManager
> --
>
> Key: HBASE-13876
> URL: https://issues.apache.org/jira/browse/HBASE-13876
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase, regionserver
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 1.1.1
>Reporter: Abhilash
>Assignee: Abhilash
>Priority: Minor
> Attachments: HBASE-13876-v2.patch, HBASE-13876-v3.patch, 
> HBASE-13876-v4.patch, HBASE-13876-v5.patch, HBASE-13876-v5.patch, 
> HBASE-13876-v6.patch, HBASE-13876.patch
>
>
> I am trying to improve the performance of DefaultHeapMemoryTuner by 
> introducing some more checks. The current checks under which the 
> DefaultHeapMemoryTuner acts are met very rarely, so I am trying to weaken 
> these checks to improve its performance.
> Check the current memstore size and the current block cache size: if, say, we 
> are using less than 50% of the currently available block cache size, we say the 
> block cache is sufficient, and the same for the memstore. This check will be 
> very effective when the server is either load heavy or write heavy. The earlier 
> version just waited for the number of evictions / number of flushes to be zero, 
> which is very rare.
> Otherwise, based on the percent change in the number of cache misses and the 
> number of flushes, we increase / decrease the memory provided for caching / 
> memstore. After doing so, on the next call of HeapMemoryTuner we verify that 
> the last change has indeed decreased the number of evictions / flushes, 
> whichever it was expected to decrease. We also check that it does not make the 
> other (evictions / flushes) increase much. I am doing this analysis by 
> comparing the percent change (which is basically nothing but a normalized 
> derivative) of the number of evictions and the number of flushes during the 
> last two periods. The main motive for doing this was that if we have random 
> reads then we will have a lot of cache misses, but even after increasing the 
> block cache we won't be able to decrease the number of cache misses; we will 
> revert back, and eventually we will not waste memory on the block cache. This 
> will also help us ignore random short-term spikes in reads / writes. I have 
> also tried to take care not to tune memory if we do not have enough hints, as 
> unnecessary tuning may slow down the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13876) Improving performance of HeapMemoryManager

2015-06-11 Thread Abhilash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhilash updated HBASE-13876:
-
Attachment: HBASE-13876-v6.patch

> Improving performance of HeapMemoryManager
> --
>
> Key: HBASE-13876
> URL: https://issues.apache.org/jira/browse/HBASE-13876
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase, regionserver
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 1.1.1
>Reporter: Abhilash
>Assignee: Abhilash
>Priority: Minor
> Attachments: HBASE-13876-v2.patch, HBASE-13876-v3.patch, 
> HBASE-13876-v4.patch, HBASE-13876-v5.patch, HBASE-13876-v5.patch, 
> HBASE-13876-v6.patch, HBASE-13876.patch
>
>
> I am trying to improve the performance of DefaultHeapMemoryTuner by 
> introducing some more checks. The current checks under which the 
> DefaultHeapMemoryTuner acts are met very rarely, so I am trying to weaken 
> these checks to improve its performance.
> Check the current memstore size and the current block cache size: if, say, we 
> are using less than 50% of the currently available block cache size, we say the 
> block cache is sufficient, and the same for the memstore. This check will be 
> very effective when the server is either load heavy or write heavy. The earlier 
> version just waited for the number of evictions / number of flushes to be zero, 
> which is very rare.
> Otherwise, based on the percent change in the number of cache misses and the 
> number of flushes, we increase / decrease the memory provided for caching / 
> memstore. After doing so, on the next call of HeapMemoryTuner we verify that 
> the last change has indeed decreased the number of evictions / flushes, 
> whichever it was expected to decrease. We also check that it does not make the 
> other (evictions / flushes) increase much. I am doing this analysis by 
> comparing the percent change (which is basically nothing but a normalized 
> derivative) of the number of evictions and the number of flushes during the 
> last two periods. The main motive for doing this was that if we have random 
> reads then we will have a lot of cache misses, but even after increasing the 
> block cache we won't be able to decrease the number of cache misses; we will 
> revert back, and eventually we will not waste memory on the block cache. This 
> will also help us ignore random short-term spikes in reads / writes. I have 
> also tried to take care not to tune memory if we do not have enough hints, as 
> unnecessary tuning may slow down the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582772#comment-14582772
 ] 

Nick Dimiduk commented on HBASE-13892:
--

+1

> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.
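
For illustration, a guard of the following shape would avoid the NPE when the
server returns no results; this is a hypothetical sketch, not the attached
HBASE-13892.patch:

{code}
import org.apache.hadoop.hbase.client.Result;

// Hypothetical sketch of the missing guard, not the committed fix: when the
// RPC returns no Results (e.g. every row was filtered out), hand back an
// empty array instead of dereferencing null.
final class NullSafeCacheFill {
  static Result[] resultsToAddToCache(Result[] resultsFromServer) {
    if (resultsFromServer == null) {
      return new Result[0];
    }
    return resultsFromServer;
  }
}
{code}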



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582766#comment-14582766
 ] 

Josh Elser commented on HBASE-13892:


Thanks [~andrew.purt...@gmail.com]. Much appreciated.

> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582764#comment-14582764
 ] 

Andrew Purtell commented on HBASE-13892:


bq. s should be closed.

Will fix that on commit. 
I have them staged.
Anything else?

> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582759#comment-14582759
 ] 

Ted Yu commented on HBASE-13892:


lgtm
nit:
{code}
6429ResultScanner s = table.getScanner(scan);
{code}
s should be closed.
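
The usual way to address the nit is try-with-resources; a sketch, assuming the
test's {{table}} and {{scan}} objects:

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

// Sketch only; 'table' and 'scan' are assumed to be created by the test.
final class ScannerCleanupSketch {
  static void scanAndClose(Table table, Scan scan) throws IOException {
    try (ResultScanner s = table.getScanner(scan)) {
      for (Result r : s) {
        // consume results; the scanner is closed even if this loop throws
      }
    }
  }
}
{code}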

> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582758#comment-14582758
 ] 

Josh Elser commented on HBASE-13892:


I think this might have been introduced in HBASE-11544 (cc/ [~jonathan.lawlor]).

> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13892:
---
Fix Version/s: 2.0.0

+1

Yikes, will commit shortly. 

I double-checked 1.0 and 0.98; the code is different there and the new test 
passes when ported back. Will still take the new test for those branches.

> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582755#comment-14582755
 ] 

Enis Soztutar commented on HBASE-13892:
---

Nice catch! +1. 

> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582756#comment-14582756
 ] 

Enis Soztutar commented on HBASE-13892:
---

I'm assuming only 1.1+ is affected? 

> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-13435) Scan with PrefixFilter, Range filter, column filter, or all 3 returns OutOfOrderScannerNextException

2015-06-11 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-13435.

Resolution: Duplicate

We have several JIRAs that duplicate this. Search for 
'OutOfOrderScannerNextException'

> Scan with PrefixFilter, Range filter, column filter, or all 3 returns 
> OutOfOrderScannerNextException
> 
>
> Key: HBASE-13435
> URL: https://issues.apache.org/jira/browse/HBASE-13435
> Project: HBase
>  Issue Type: Bug
>Reporter: William Watson
>
> We've run this with an hbase shell prefix filter, tried column and range 
> filters and limits, and tried a pig script, which we knew would be less 
> performant but thought could serve the same simple purpose. We wanted to 
> select a specific user's data from about 14 days' worth of data. We also 
> tried selecting a few hours' worth of data as a workaround, to no avail. In 
> pig, we switched it to just give us all the data for the two-week time range.
> The errors look like RPC timeouts, but we don't feel this should be 
> happening; pig, hbase, or both should be able to handle these "queries", if 
> you will.
> The error we get in both the hbase shell and in pig boils down to "possible 
> RPC timeout?". It literally says "?" in the message. 
> We saw this stack overflow question, but it's not very helpful. I also saw a 
> few hbase tickets, none of which are super helpful, and none indicate that 
> this was fixed in hbase 0.99 or anything newer than what we have. 
> http://stackoverflow.com/questions/26437830/hbase-shell-outoforderscannernextexception-error-on-scanner-count-calls
> Here are the down and dirty deets: 
> Pig script: 
> {code} 
> hbase_records = LOAD 'hbase://impression_event_production_hbase' 
> USING org.apache.pig.backend.hadoop.hbase.HBaseStorage( 
> 'cf1:uid:chararray,cf1:ts:chararray,cf1:data_regime_id:chararray,cf1:ago:chararray,cf1:ao:chararray,cf1:aca:chararray,cf1:si:chararray,cf1:ci:chararray,cf1:kv0:chararray,cf1:g_id:chararray,cf1:h_id:chararray,cf1:cg:chararray,cf1:kv1:chararray,cf1:kv2:chararray,cf1:kv3:chararray,cf1:kv4:chararray,cf1:kv5:chararray,cf1:kv6:chararray,cf1:kv7:chararray,cf1:kv8:chararray,cf1:kv9:chararray',
> '-loadKey=false -minTimestamp=142729920 -maxTimestamp=1428551999000') 
> AS 
> (uid,ts,data_regime_id,ago,ao,aca,si,ci,kv0,g_id,h_id,cg,kv1,kv2,kv3,kv4,kv5,kv6,kv7,kv8,kv9);
>  
> store hbase_records into 'output_place'; 
> {code} 
> Error: 
> {code} 
> 2015-04-08 20:18:35,316 [main] INFO 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher
>  - Failed! 
> 2015-04-08 20:18:35,610 [main] ERROR org.apache.pig.tools.grunt.GruntParser - 
> ERROR 2997: Unable to recreate exception from backed error: Error: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout? 
> at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:403) 
> at 
> org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:232)
>  
> at 
> org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:138)
>  
> at 
> org.apache.pig.backend.hadoop.hbase.HBaseTableInputFormat$HBaseTableRecordReader.nextKeyValue(HBaseTableInputFormat.java:162)
>  
> at 
> org.apache.pig.backend.hadoop.hbase.HBaseStorage.getNext(HBaseStorage.java:645)
>  
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
>  
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:533)
>  
> at 
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
>  
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
>  
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) 
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) 
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) 
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:415) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
>  
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) 
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 4919882396333524452 number_of_rows: 100 close_scanner: false next_call_seq: 0 
> at 
> org.apache.hadoop.hbase.regionserver.HRegionSe

[jira] [Updated] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-13892:
---
Status: Patch Available  (was: Open)

> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13378) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels

2015-06-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582744#comment-14582744
 ] 

Lars Hofhansl commented on HBASE-13378:
---

Follow the Money... Or {{HRegion.getSmallestReadPoint()}} :)

> RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels
> 
>
> Key: HBASE-13378
> URL: https://issues.apache.org/jira/browse/HBASE-13378
> Project: HBase
>  Issue Type: New Feature
>Reporter: John Leach
>Assignee: John Leach
>Priority: Minor
> Attachments: HBASE-13378.patch, HBASE-13378.txt
>
>   Original Estimate: 2h
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This block of code below coupled with the close method could be changed so 
> that READ_UNCOMMITTED does not synchronize.  
> {CODE:JAVA}
>   // synchronize on scannerReadPoints so that nobody calculates
>   // getSmallestReadPoint, before scannerReadPoints is updated.
>   IsolationLevel isolationLevel = scan.getIsolationLevel();
>   synchronized(scannerReadPoints) {
> this.readPt = getReadpoint(isolationLevel);
> scannerReadPoints.put(this, this.readPt);
>   }
> {CODE}
> This hotspots for me under heavy get requests.
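
A conceptual sketch of the proposal (not the attached patch): a READ_UNCOMMITTED
scanner reads at Long.MAX_VALUE, so registering it cannot lower
getSmallestReadPoint() and the synchronized block could be skipped for it. All
names below are assumptions:

{code}
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.hbase.client.IsolationLevel;

// Conceptual sketch only, not the HBASE-13378 patch.
final class ScannerReadPointSketch {
  private final ConcurrentHashMap<Object, Long> scannerReadPoints = new ConcurrentHashMap<>();
  private final Object readPointLock = new Object();

  long register(Object scanner, IsolationLevel isolation, long mvccReadPoint) {
    if (isolation == IsolationLevel.READ_UNCOMMITTED) {
      return Long.MAX_VALUE;             // nothing to coordinate with flushes
    }
    synchronized (readPointLock) {       // keep the existing contract otherwise
      scannerReadPoints.put(scanner, mvccReadPoint);
      return mvccReadPoint;
    }
  }
}
{code}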



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-13892:
---
Description: 
Saw a failure during some testing with region_mover.rb

{code}
NativeException: java.lang.NullPointerException: null
__ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
  isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
  isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
 unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
{code}

To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
really simple to just produce the NPE within ClientScanner.

{code}
java.lang.NullPointerException: null
at 
org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
at 
org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
at 
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
{code}

Patch with fix and test incoming.

  was:
Saw a failure during some testing with region_mover.rb

{code}
NativeException: java.lang.NullPointerException: null
__ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
  isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
  isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
 unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
{code

To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
really simple to just produce the NPE within ClientScanner.

{code}
java.lang.NullPointerException: null
at 
org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
at 
org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
at 
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
{code}

Patch with fix and test incoming.


> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code}
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-13892:
---
Attachment: HBASE-13892.patch

> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 1.2.0, 1.1.1
>
> Attachments: HBASE-13892.patch
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13877) Interrupt to flush from TableFlushProcedure causes dataloss in ITBLL

2015-06-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582742#comment-14582742
 ] 

Duo Zhang commented on HBASE-13877:
---

+1 on patch v3.

> Interrupt to flush from TableFlushProcedure causes dataloss in ITBLL
> 
>
> Key: HBASE-13877
> URL: https://issues.apache.org/jira/browse/HBASE-13877
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: hbase-13877_v1.patch, hbase-13877_v2-branch-1.1.patch, 
> hbase-13877_v3-branch-1.1.patch
>
>
> ITBLL with 1.25B rows failed for me (and Stack as reported in 
> https://issues.apache.org/jira/browse/HBASE-13811?focusedCommentId=14577834&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14577834)
>  
> HBASE-13811 and HBASE-13853 fixed an issue with WAL edit filtering. 
> The root cause this time seems to be different. It is due to procedure based 
> flush interrupting the flush request in case the procedure is cancelled from 
> an exception elsewhere. This leaves the memstore snapshot intact without 
> aborting the server. The next flush then flushes the previous memstore with 
> the current seqId (as opposed to the seqId from the memstore snapshot). This 
> creates an hfile with a larger seqId than its contents warrant. Previous 
> behavior in 0.98 and 1.0 (I believe) is that after flush prepare, an 
> interruption / exception will cause an RS abort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-13892:
---
Description: 
Saw a failure during some testing with region_mover.rb

{code}
NativeException: java.lang.NullPointerException: null
__ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
  isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
  isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
 unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
{code

To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
really simple to just produce the NPE within ClientScanner.

{code}
java.lang.NullPointerException: null
at 
org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
at 
org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
at 
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
{code}

Patch with fix and test incoming.

  was:
Saw a failure during some testing with region_mover.rb

{code}
NativeException: java.lang.NullPointerException: null
__ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
  isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
  isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
 unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
{code}

To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
really simple to just produce the NPE within ClientScanner.

Patch with fix and test incoming.


> Scanner with all results filtered out results in NPE
> 
>
> Key: HBASE-13892
> URL: https://issues.apache.org/jira/browse/HBASE-13892
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 1.2.0, 1.1.1
>
>
> Saw a failure during some testing with region_mover.rb
> {code}
> NativeException: java.lang.NullPointerException: null
> __ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
>   isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
>  unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
> {code
> To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
> really simple to just produce the NPE within ClientScanner.
> {code}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.getResultsToAddToCache(ClientScanner.java:576)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:492)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
> {code}
> Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13560) Large compaction queue should steal from small compaction queue when idle

2015-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582734#comment-14582734
 ] 

Hudson commented on HBASE-13560:


FAILURE: Integrated in HBase-TRUNK #6564 (See 
[https://builds.apache.org/job/HBase-TRUNK/6564/])
HBASE-13560 large compaction thread pool will steal jobs from small compaction 
pool when idle (eclark: rev 9d3422ed16004da1b0f9a874a98bd140b46b7a6f)
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/StealJobQueue.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestStealJobQueue.java


> Large compaction queue should steal from small compaction queue when idle
> -
>
> Key: HBASE-13560
> URL: https://issues.apache.org/jira/browse/HBASE-13560
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Elliott Clark
>Assignee: Changgeng Li
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-13560.patch, queuestealwork-v1.patch, 
> queuestealwork-v4.patch, queuestealwork-v5.patch, queuestealwork-v6.patch, 
> queuestealwork-v7.patch
>
>
> If you tune compaction threads so that a server is never overcommitted when 
> both the large and small compaction threads are busy, then it should be 
> possible to have the large compaction threads steal work.
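
A conceptual illustration of the idea, not the committed StealJobQueue: the
large-compaction worker prefers its own queue and only takes from the
small-compaction queue when it has nothing else to do.

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Conceptual sketch only; see StealJobQueue.java for the real implementation.
final class WorkStealingSketch {
  private final BlockingQueue<Runnable> largeQueue = new LinkedBlockingQueue<>();
  private final BlockingQueue<Runnable> smallQueue = new LinkedBlockingQueue<>();

  // Called by a "large compaction" worker thread.
  Runnable takeForLargeWorker() throws InterruptedException {
    Runnable job = largeQueue.poll();                   // prefer our own work
    if (job != null) {
      return job;
    }
    job = smallQueue.poll(100, TimeUnit.MILLISECONDS);  // steal when idle
    return job != null ? job : largeQueue.take();       // otherwise block on our queue
  }
}
{code}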



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13877) Interrupt to flush from TableFlushProcedure causes dataloss in ITBLL

2015-06-11 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-13877:
--
Attachment: hbase-13877_v3-branch-1.1.patch

Here is a v3 patch which is a little bit more comprehensive. The semantics of 
what to do if flushCache() and close() fail are hard to pin down, since the 
region cannot do much by itself. 

The contract for flushCache() and close() when DroppedSnapshotException is 
thrown was already that the RS should abort. The patch now makes that explicit 
and adds a safeguard so that the region itself calls abort if an rss is passed. 
Since flush() can be called by multiple different callers (MemstoreFlusher, 
snapshot, etc.), we also have to guarantee that before the DSE is thrown we put 
the region in closing state, so that no other writes / flushes can happen. This 
is because we cannot call {{close(true)}} inside flushCache(), since we cannot 
promote our read lock to a write lock. The caller should receive the DSE, then 
abort itself and the RSS, which then calls close(true). But there is a window 
of time before the RSS calls close(true), so no other flushes should come in 
while the caller handles the exception. 

[~saint@gmail.com], [~Apache9] does the patch make sense? It also touches 
upon HBASE-10514. 
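
As a rough sketch of the contract described above (not the attached patch; the
interfaces are stand-ins for HRegion and RegionServerServices):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.DroppedSnapshotException;

// Sketch only: any caller of flushCache() treats DroppedSnapshotException as
// fatal and aborts the region server instead of leaving the snapshot behind.
final class FlushCallerSketch {
  interface Aborter { void abort(String why, Throwable cause); }
  interface FlushTarget { void flushCache() throws IOException; }

  static void flushOrAbort(FlushTarget region, Aborter regionServer) throws IOException {
    try {
      region.flushCache();
    } catch (DroppedSnapshotException dse) {
      // The memstore snapshot may be inconsistent; replaying the WAL on a
      // fresh server is the only safe recovery.
      regionServer.abort("Replay of WAL required, forcing server shutdown", dse);
      throw dse;
    }
  }
}
{code}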


 



> Interrupt to flush from TableFlushProcedure causes dataloss in ITBLL
> 
>
> Key: HBASE-13877
> URL: https://issues.apache.org/jira/browse/HBASE-13877
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: hbase-13877_v1.patch, hbase-13877_v2-branch-1.1.patch, 
> hbase-13877_v3-branch-1.1.patch
>
>
> ITBLL with 1.25B rows failed for me (and Stack as reported in 
> https://issues.apache.org/jira/browse/HBASE-13811?focusedCommentId=14577834&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14577834)
>  
> HBASE-13811 and HBASE-13853 fixed an issue with WAL edit filtering. 
> The root cause this time seems to be different. It is due to procedure based 
> flush interrupting the flush request in case the procedure is cancelled from 
> an exception elsewhere. This leaves the memstore snapshot intact without 
> aborting the server. The next flush then flushes the previous memstore with 
> the current seqId (as opposed to the seqId from the memstore snapshot). This 
> creates an hfile with a larger seqId than its contents warrant. Previous 
> behavior in 0.98 and 1.0 (I believe) is that after flush prepare, an 
> interruption / exception will cause an RS abort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13879) Add hbase.hstore.compactionThreshold to HConstants

2015-06-11 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HBASE-13879:
-
Release Note: Create hbase.hstore.compaction.min in HConstants
  Status: Patch Available  (was: Open)

> Add hbase.hstore.compactionThreshold to HConstants
> --
>
> Key: HBASE-13879
> URL: https://issues.apache.org/jira/browse/HBASE-13879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Gabor Liptak
>Priority: Minor
> Attachments: HBASE-13879.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13879) Add hbase.hstore.compactionThreshold to HConstants

2015-06-11 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HBASE-13879:
-
Attachment: HBASE-13879.1.patch

> Add hbase.hstore.compactionThreshold to HConstants
> --
>
> Key: HBASE-13879
> URL: https://issues.apache.org/jira/browse/HBASE-13879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Gabor Liptak
>Priority: Minor
> Attachments: HBASE-13879.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13892) Scanner with all results filtered out results in NPE

2015-06-11 Thread Josh Elser (JIRA)
Josh Elser created HBASE-13892:
--

 Summary: Scanner with all results filtered out results in NPE
 Key: HBASE-13892
 URL: https://issues.apache.org/jira/browse/HBASE-13892
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.1.0
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Critical
 Fix For: 1.2.0, 1.1.1


Saw a failure during some testing with region_mover.rb

{code}
NativeException: java.lang.NullPointerException: null
__ensure__ at /usr/hdp/current/hbase-master/bin/region_mover.rb:110
  isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:109
  isSuccessfulScan at /usr/hdp/current/hbase-master/bin/region_mover.rb:104
 unloadRegions at /usr/hdp/current/hbase-master/bin/region_mover.rb:328
{code}

To try to get a real stacktrace, I wrote a simple test. Turns out, it was 
really simple to just produce the NPE within ClientScanner.

Patch with fix and test incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13560) Large compaction queue should steal from small compaction queue when idle

2015-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582708#comment-14582708
 ] 

Hudson commented on HBASE-13560:


FAILURE: Integrated in HBase-1.2 #146 (See 
[https://builds.apache.org/job/HBase-1.2/146/])
HBASE-13560 large compaction thread pool will steal jobs from small compaction 
pool when idle (eclark: rev abf1aa603cbab69ea5a2cb6628a699899cf2e4ef)
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestStealJobQueue.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/StealJobQueue.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java


> Large compaction queue should steal from small compaction queue when idle
> -
>
> Key: HBASE-13560
> URL: https://issues.apache.org/jira/browse/HBASE-13560
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Elliott Clark
>Assignee: Changgeng Li
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-13560.patch, queuestealwork-v1.patch, 
> queuestealwork-v4.patch, queuestealwork-v5.patch, queuestealwork-v6.patch, 
> queuestealwork-v7.patch
>
>
> If you tune compaction threads so that a server is never overcommitted when 
> both the large and small compaction threads are busy, then it should be 
> possible to have the large compaction threads steal work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13891) AM should handle RegionServerStoppedException during assignment

2015-06-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582706#comment-14582706
 ] 

Enis Soztutar commented on HBASE-13891:
---

bq. Probably the RegionServerStoppedException should be detected and the 
destination of the plan be added to the dead server list.
Catching this makes sense, but it is not clear how the handling should work. We 
should not have more than one source of cluster membership (see HBASE-13605). 
If we, for example, catch this and run SSH, it means that we are using both zk 
and RPC failures as ways to detect cluster membership. 

If we can find a way to change the target for the region assignment, that may 
prevent this type of assignment loop. 
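
A conceptual sketch of changing the target instead of retrying the same plan;
names and structure here are assumptions, not a patch:

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.regionserver.RegionServerStoppedException;

// Conceptual sketch only: drop a stopped server from the candidate list
// before picking the next assignment target.
final class AssignRetrySketch {
  interface Opener { void sendRegionOpen(ServerName sn) throws Exception; }

  static void assignWithExclusion(Opener opener, List<ServerName> candidates, int maxTries)
      throws Exception {
    List<ServerName> remaining = new ArrayList<>(candidates);
    for (int i = 0; i < maxTries && !remaining.isEmpty(); i++) {
      ServerName target = remaining.get(0);
      try {
        opener.sendRegionOpen(target);
        return;                               // assigned
      } catch (RegionServerStoppedException e) {
        remaining.remove(target);             // never retry a stopped server
      }
    }
    throw new Exception("Could not assign the region to any candidate server");
  }
}
{code}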

> AM should handle RegionServerStoppedException during assignment
> ---
>
> Key: HBASE-13891
> URL: https://issues.apache.org/jira/browse/HBASE-13891
> Project: HBase
>  Issue Type: Bug
>  Components: master, Region Assignment
>Affects Versions: 1.1.0.1
>Reporter: Nick Dimiduk
>
> I noticed the following in the master logs
> {noformat}
> 2015-06-11 11:04:55,278 WARN  [AM.ZK.Worker-pool2-t337] 
> master.AssignmentManager: Failed assignment of 
> SYSTEM.SEQUENCE,\x8E\x00\x00\x00,1434010321127.d2be67cf43d6bd600c7f461701ca908f.
>  to ip-172-31-32-232.ec2.internal,16020,1434020633773, trying to assign 
> elsewhere instead; try=1 of 10
> org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: 
> org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 
> ip-172-31-32-232.ec2.internal,16020,1434020633773 not running, aborting
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:980)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1382)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22117)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
>   at sun.reflect.GeneratedConstructorAccessor26.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:322)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:752)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2136)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1590)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1568)
>   at 
> org.apache.hadoop.hbase.master.handler.ClosedRegionHandler.process(ClosedRegionHandler.java:106)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.handleRegion(AssignmentManager.java:1063)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager$6.run(AssignmentManager.java:1511)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager$3.run(AssignmentManager.java:1295)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException):
>  org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 
> ip-172-31-32-232.ec2.internal,16020,1434020633773 not running, aborting
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:980)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1382)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22117)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>   at org.apache.hadoop.hbase.ipc.CallRunn

[jira] [Commented] (HBASE-13881) Bugs in HTable incrementColumnValue implementation

2015-06-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582695#comment-14582695
 ] 

Ted Yu commented on HBASE-13881:


bq. writeToWAL = true is equal to Durability.SYNC_WAL?

I think so.

> Bugs in HTable incrementColumnValue implementation
> --
>
> Key: HBASE-13881
> URL: https://issues.apache.org/jira/browse/HBASE-13881
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.6.1, 1.0.1
>Reporter: Jerry Lam
>
> The exact method I'm talking about is:
> {code}
> @Deprecated
>   @Override
>   public long incrementColumnValue(final byte [] row, final byte [] family,
>   final byte [] qualifier, final long amount, final boolean writeToWAL)
>   throws IOException {
> return incrementColumnValue(row, family, qualifier, amount,
>   writeToWAL? Durability.SKIP_WAL: Durability.USE_DEFAULT);
>   }
> {code}
> Setting writeToWAL to true, Durability will be set to SKIP_WAL which does not 
> make much sense unless the meaning of SKIP_WAL is negated.
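
For illustration, the mapping with the branches swapped, using the
non-deprecated Table API (a sketch; whether true should become SYNC_WAL or
USE_DEFAULT is the open question above):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Table;

// Sketch of the corrected mapping: writeToWAL == false is the case that
// should skip the WAL.
final class IncrementDurabilitySketch {
  static long incrementColumnValue(Table table, byte[] row, byte[] family,
      byte[] qualifier, long amount, boolean writeToWAL) throws IOException {
    return table.incrementColumnValue(row, family, qualifier, amount,
        writeToWAL ? Durability.SYNC_WAL : Durability.SKIP_WAL);
  }
}
{code}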



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13891) AM should handle RegionServerStoppedException during assignment

2015-06-11 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-13891:


 Summary: AM should handle RegionServerStoppedException during 
assignment
 Key: HBASE-13891
 URL: https://issues.apache.org/jira/browse/HBASE-13891
 Project: HBase
  Issue Type: Bug
  Components: master, Region Assignment
Affects Versions: 1.1.0.1
Reporter: Nick Dimiduk


I noticed the following in the master logs

{noformat}
2015-06-11 11:04:55,278 WARN  [AM.ZK.Worker-pool2-t337] 
master.AssignmentManager: Failed assignment of 
SYSTEM.SEQUENCE,\x8E\x00\x00\x00,1434010321127.d2be67cf43d6bd600c7f461701ca908f.
 to ip-172-31-32-232.ec2.internal,16020,1434020633773, trying to assign 
elsewhere instead; try=1 of 10
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: 
org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 
ip-172-31-32-232.ec2.internal,16020,1434020633773 not running, aborting
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:980)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1382)
at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22117)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)

at sun.reflect.GeneratedConstructorAccessor26.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:322)
at 
org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:752)
at 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2136)
at 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1590)
at 
org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1568)
at 
org.apache.hadoop.hbase.master.handler.ClosedRegionHandler.process(ClosedRegionHandler.java:106)
at 
org.apache.hadoop.hbase.master.AssignmentManager.handleRegion(AssignmentManager.java:1063)
at 
org.apache.hadoop.hbase.master.AssignmentManager$6.run(AssignmentManager.java:1511)
at 
org.apache.hadoop.hbase.master.AssignmentManager$3.run(AssignmentManager.java:1295)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.regionserver.RegionServerStoppedException):
 org.apache.hadoop.hbase.regionserver.RegionServerStoppedException: Server 
ip-172-31-32-232.ec2.internal,16020,1434020633773 not running, aborting
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:980)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1382)
at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22117)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)

at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1206)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at 
org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.openRegion(AdminProtos.java:23003)
at 
org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:749)
... 12 more
...
2015-06-11 11:04:55,289 INFO  [AM

[jira] [Commented] (HBASE-13864) HColumnDescriptor should parse the output from master and from describe for ttl

2015-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582657#comment-14582657
 ] 

Hadoop QA commented on HBASE-13864:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12739122/HBASE-13864.patch
  against master branch at commit 47bd7de6d87f35b2c07ca83be209f73ae2b41c27.
  ATTACHMENT ID: 12739122

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1916 checkstyle errors (more than the master's current 1912 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14381//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14381//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14381//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14381//console

This message is automatically generated.

> HColumnDescriptor should parse the output from master and from describe for 
> ttl
> ---
>
> Key: HBASE-13864
> URL: https://issues.apache.org/jira/browse/HBASE-13864
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: Elliott Clark
>Assignee: Ashu Pachauri
> Attachments: HBASE-13864.patch
>
>
> The TTL printing on HColumnDescriptor adds a human readable time. When using 
> that string for the create command it throws an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13890) Get/Scan from MemStore only (Client API)

2015-06-11 Thread Vladimir Rodionov (JIRA)
Vladimir Rodionov created HBASE-13890:
-

 Summary: Get/Scan from MemStore only (Client API)
 Key: HBASE-13890
 URL: https://issues.apache.org/jira/browse/HBASE-13890
 Project: HBase
  Issue Type: New Feature
  Components: API, Client, Scanners
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov


This is a short-circuit read for get/scan when the most recent data (version) 
of a cell can, with very high probability, be found only in the MemStore. 

Good examples are atomic counters and appends. This feature will make it 
possible to bypass the store file scanners completely, improving performance 
and latency.
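
Purely as a thought experiment, the client-side switch could look something
like the following; no such attribute or API exists yet, everything here is
hypothetical:

{code}
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical sketch only; the attribute name is made up for illustration.
final class MemstoreOnlyGetSketch {
  static Get memstoreOnly(byte[] row) {
    Get get = new Get(row);
    // A real API would more likely add a dedicated setter on Get/Scan.
    get.setAttribute("_memstore_only_", Bytes.toBytes(true));
    return get;
  }
}
{code}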



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13889) hbase-shaded-client artifact is missing dependency (therefore, does not work)

2015-06-11 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582572#comment-14582572
 ] 

Nick Dimiduk commented on HBASE-13889:
--

I agree, that bit looks suspicious. I'm not sure what other classes are 
provided in {{javax}}; I'll have a look as well.

What other errors did you get? For a start, think you can write a unit or 
integration test that exposes the error? Maybe it'll have to go into 
{{hbase-it}}.
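
A minimal shape for such a test, assuming it runs with only the shaded client
artifact on the classpath and a reachable cluster (sketch only):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.junit.Assert;
import org.junit.Test;

// Sketch of a test exercising the reported reproduction against the shaded
// client jar; class-loading failures would surface when the connection is built.
public class TestShadedClientConnection {
  @Test
  public void canCreateConnection() throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      Assert.assertFalse(connection.isClosed());
    }
  }
}
{code}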

> hbase-shaded-client artifact is missing dependency (therefore, does not work)
> -
>
> Key: HBASE-13889
> URL: https://issues.apache.org/jira/browse/HBASE-13889
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0, 1.1.0.1
> Environment: N/A?
>Reporter: Dmitry Minkovsky
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: Screen Shot 2015-06-11 at 10.59.55 AM.png
>
>
> The {{hbase-shaded-client}} artifact was introduced in 
> [HBASE-13517|https://issues.apache.org/jira/browse/HBASE-13517]. Thank you 
> very much for this, as I am new to Java build tooling and was having a very 
> slow time resolving conflicts. However, the shaded client artifact 
> seems to be missing {{javax.xml.transform.TransformerException}}.  I examined 
> the JAR, which does not have this package/class.
> Steps to reproduce:
> Java: 
> {code}
> package com.mycompany.app;
> 
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> 
> public class App {
>     public static void main(String[] args) throws java.io.IOException {
>         Configuration config = HBaseConfiguration.create();
>         Connection connection = ConnectionFactory.createConnection(config);
>     }
> }
> {code}
> POM:
> {code}
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
> 
>   <modelVersion>4.0.0</modelVersion>
> 
>   <groupId>com.mycompany.app</groupId>
>   <artifactId>my-app</artifactId>

[jira] [Commented] (HBASE-13889) hbase-shaded-client artifact is missing dependency (therefore, does not work)

2015-06-11 Thread Dmitry Minkovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582562#comment-14582562
 ] 

Dmitry Minkovsky commented on HBASE-13889:
--

So can [these 
lines|https://github.com/apache/hbase/blob/47bd7de6d87f35b2c07ca83be209f73ae2b41c27/hbase-shaded/pom.xml#L98-L101]
 simply be removed, or somehow be qualified more narrowly than just {{javax}}? I
built locally without them and didn't get the same error, though I did get many
other errors. Not sure whether those are related.



> hbase-shaded-client artifact is missing dependency (therefore, does not work)
> -
>
> Key: HBASE-13889
> URL: https://issues.apache.org/jira/browse/HBASE-13889
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0, 1.1.0.1
> Environment: N/A?
>Reporter: Dmitry Minkovsky
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: Screen Shot 2015-06-11 at 10.59.55 AM.png
>
>
> The {{hbase-shaded-client}} artifact was introduced in 
> [HBASE-13517|https://issues.apache.org/jira/browse/HBASE-13517]. Thank you 
> very much for this, as I am new to Java building and was having a very 
> slow-moving time resolving conflicts. However, the shaded client artifact 
> seems to be missing {{javax.xml.transform.TransformerException}}.  I examined 
> the JAR, which does not have this package/class.
> Steps to reproduce:
> Java: 
> {code}
> package com.mycompany.app;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
>
> public class App {
>
>     public static void main( String[] args ) throws java.io.IOException {
>
>         Configuration config = HBaseConfiguration.create();
>         Connection connection = ConnectionFactory.createConnection(config);
>     }
> }
> {code}
> POM:
> {code}
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
>
>   <modelVersion>4.0.0</modelVersion>
>
>   <groupId>com.mycompany.app</groupId>
>   <artifactId>my-app</artifactId>

[jira] [Updated] (HBASE-13560) Large compaction queue should steal from small compaction queue when idle

2015-06-11 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13560:
--
   Resolution: Fixed
Fix Version/s: 1.2.0
   2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for the patch.

> Large compaction queue should steal from small compaction queue when idle
> -
>
> Key: HBASE-13560
> URL: https://issues.apache.org/jira/browse/HBASE-13560
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Elliott Clark
>Assignee: Changgeng Li
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-13560.patch, queuestealwork-v1.patch, 
> queuestealwork-v4.patch, queuestealwork-v5.patch, queuestealwork-v6.patch, 
> queuestealwork-v7.patch
>
>
> If you tune compaction threads so that a server is never over-committed when
> large and small compaction threads are busy, then it should be possible to
> have the large compactions steal work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13378) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels

2015-06-11 Thread John Leach (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582547#comment-14582547
 ] 

John Leach commented on HBASE-13378:


Can you point me to the line of code that uses the scannerReadPoints for 
determining whether to flush or compact the data?

> RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels
> 
>
> Key: HBASE-13378
> URL: https://issues.apache.org/jira/browse/HBASE-13378
> Project: HBase
>  Issue Type: New Feature
>Reporter: John Leach
>Assignee: John Leach
>Priority: Minor
> Attachments: HBASE-13378.patch, HBASE-13378.txt
>
>   Original Estimate: 2h
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The block of code below, coupled with the close method, could be changed so
> that READ_UNCOMMITTED does not synchronize.
> {code:java}
>   // synchronize on scannerReadPoints so that nobody calculates
>   // getSmallestReadPoint before scannerReadPoints is updated.
>   IsolationLevel isolationLevel = scan.getIsolationLevel();
>   synchronized (scannerReadPoints) {
>     this.readPt = getReadpoint(isolationLevel);
>     scannerReadPoints.put(this, this.readPt);
>   }
> {code}
> This is a hotspot for me under heavy get requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13378) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels

2015-06-11 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582545#comment-14582545
 ] 

Lars Hofhansl commented on HBASE-13378:
---

Fair enough. Also... now that I think about it again, maybe this isn't so
benign after all.

Is that even what you want, [~jleach]? It means that while you use this scanner,
HBase can at any point flush or compact away data that the scanner should
see based on MVCC.
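
For reference, the change being debated amounts to something like the following
sketch (not the attached patch): READ_UNCOMMITTED scanners would take a read
point without registering in {{scannerReadPoints}}, which is exactly why a
flush or compaction would no longer be held back on their behalf.

{code}
// Sketch of the idea under discussion, not the attached patch.
IsolationLevel isolationLevel = scan.getIsolationLevel();
if (isolationLevel == IsolationLevel.READ_UNCOMMITTED) {
  // Not registered: getSmallestReadPoint() never sees this scanner, so a
  // concurrent flush/compaction is free to drop cell versions the scanner
  // would otherwise have been guaranteed to see under MVCC.
  this.readPt = getReadpoint(isolationLevel);
} else {
  // Synchronize so nobody calculates getSmallestReadPoint before
  // scannerReadPoints is updated.
  synchronized (scannerReadPoints) {
    this.readPt = getReadpoint(isolationLevel);
    scannerReadPoints.put(this, this.readPt);
  }
}
{code}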


> RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels
> 
>
> Key: HBASE-13378
> URL: https://issues.apache.org/jira/browse/HBASE-13378
> Project: HBase
>  Issue Type: New Feature
>Reporter: John Leach
>Assignee: John Leach
>Priority: Minor
> Attachments: HBASE-13378.patch, HBASE-13378.txt
>
>   Original Estimate: 2h
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The block of code below, coupled with the close method, could be changed so
> that READ_UNCOMMITTED does not synchronize.
> {code:java}
>   // synchronize on scannerReadPoints so that nobody calculates
>   // getSmallestReadPoint before scannerReadPoints is updated.
>   IsolationLevel isolationLevel = scan.getIsolationLevel();
>   synchronized (scannerReadPoints) {
>     this.readPt = getReadpoint(isolationLevel);
>     scannerReadPoints.put(this, this.readPt);
>   }
> {code}
> This is a hotspot for me under heavy get requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582512#comment-14582512
 ] 

Hadoop QA commented on HBASE-13833:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12739101/HBASE-13833.02.branch-1.0.patch
  against branch-1.0 branch at commit 47bd7de6d87f35b2c07ca83be209f73ae2b41c27.
  ATTACHMENT ID: 12739101

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14380//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14380//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14380//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14380//console

This message is automatically generated.

> LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged 
> connections when using SecureBulkLoad
> ---
>
> Key: HBASE-13833
> URL: https://issues.apache.org/jira/browse/HBASE-13833
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.0.1
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 1.1.1
>
> Attachments: HBASE-13833.00.branch-1.1.patch, 
> HBASE-13833.01.branch-1.1.patch, HBASE-13833.02.branch-1.0.patch, 
> HBASE-13833.02.branch-1.1.patch, HBASE-13833.02.branch-1.patch
>
>
> Seems HBASE-13328 wasn't quite sufficient.
> {noformat}
> 015-06-02 05:49:23,578|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory 
> hdfs://dal-pqc1:8020/tmp/192f21dd-cc89-4354-8ba1-78d1f228e7c7/LARGE_TABLE/_SUCCESS
> 2015-06-02 05:49:23,720|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 INFO hfile.CacheConfig: CacheConfig:disabled
> 2015-06-02 05:49:23,859|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 INFO mapreduce.LoadIncrementalHFiles: Trying to load 
> hfile=hdfs://dal-pqc1:8020/tmp/192f21dd-cc89-4354-8ba1-78d1f228e7c7/LARGE_TABLE/0/00870fd0a7544373b32b6f1e976bf47f
>  first=\x80\x00\x00\x00 last=\x80LK?
> 2015-06-02 05:50:32,028|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:50:32 INFO client.RpcRetryingCaller: Call exception, tries=10, retries=35, 
> started=68154 ms ago, cancelled=false, msg=row '' on table 'LARGE_TABLE' at 
> region=LARGE_TABLE,,1433222865285.e01e02483f30a060d3f7abb1846ea029., 
> hostname=dal-pqc5,16020,1433222547221, seqNum=2
> 2015-06-02 05:50:52,128|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:50:52 INFO client.RpcRetryingCaller: Call exception, tries=11, retries=35, 
> started=88255 ms ago, cancelled=false, msg=row '' on table 'LARGE_TABLE' at 
> region=LARGE_TABLE,,1433222865285.e01e02483f30a060d3f7abb1846ea029., 
> hostname=dal-pqc5,16020,1433222547221, seqNum=2
> ...
> ...
> 2015-06-02 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|15/06/02 
> 05:01:56 ERROR mapreduce.CsvBulkLoadTool: Import job on table=LARGE_TABLE 
> failed due to exception.
> 2015-06-02 
> 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|java.io.IOException: 
> BulkLoad encountered an unrecoverable problem
> 2015-06-02 05:01:56,121|beaver.machine|INFO|78

[jira] [Commented] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-06-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582495#comment-14582495
 ] 

Enis Soztutar commented on HBASE-13833:
---

Should we also close this? 
{code}
+conn.getRegionLocator(t.getName()).getStartEndKeys();
{code}
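
That is, something along these lines (a sketch only), so the locator is closed
as soon as the keys have been fetched:

{code}
// Sketch: fetch the start/end keys and close the RegionLocator right away.
Pair<byte[][], byte[][]> startEndKeys;
try (RegionLocator locator = conn.getRegionLocator(t.getName())) {
  startEndKeys = locator.getStartEndKeys();
}
{code}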



> LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged 
> connections when using SecureBulkLoad
> ---
>
> Key: HBASE-13833
> URL: https://issues.apache.org/jira/browse/HBASE-13833
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.0.1
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 1.1.1
>
> Attachments: HBASE-13833.00.branch-1.1.patch, 
> HBASE-13833.01.branch-1.1.patch, HBASE-13833.02.branch-1.0.patch, 
> HBASE-13833.02.branch-1.1.patch, HBASE-13833.02.branch-1.patch
>
>
> Seems HBASE-13328 wasn't quite sufficient.
> {noformat}
> 015-06-02 05:49:23,578|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory 
> hdfs://dal-pqc1:8020/tmp/192f21dd-cc89-4354-8ba1-78d1f228e7c7/LARGE_TABLE/_SUCCESS
> 2015-06-02 05:49:23,720|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 INFO hfile.CacheConfig: CacheConfig:disabled
> 2015-06-02 05:49:23,859|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 INFO mapreduce.LoadIncrementalHFiles: Trying to load 
> hfile=hdfs://dal-pqc1:8020/tmp/192f21dd-cc89-4354-8ba1-78d1f228e7c7/LARGE_TABLE/0/00870fd0a7544373b32b6f1e976bf47f
>  first=\x80\x00\x00\x00 last=\x80LK?
> 2015-06-02 05:50:32,028|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:50:32 INFO client.RpcRetryingCaller: Call exception, tries=10, retries=35, 
> started=68154 ms ago, cancelled=false, msg=row '' on table 'LARGE_TABLE' at 
> region=LARGE_TABLE,,1433222865285.e01e02483f30a060d3f7abb1846ea029., 
> hostname=dal-pqc5,16020,1433222547221, seqNum=2
> 2015-06-02 05:50:52,128|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:50:52 INFO client.RpcRetryingCaller: Call exception, tries=11, retries=35, 
> started=88255 ms ago, cancelled=false, msg=row '' on table 'LARGE_TABLE' at 
> region=LARGE_TABLE,,1433222865285.e01e02483f30a060d3f7abb1846ea029., 
> hostname=dal-pqc5,16020,1433222547221, seqNum=2
> ...
> ...
> 2015-06-02 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|15/06/02 
> 05:01:56 ERROR mapreduce.CsvBulkLoadTool: Import job on table=LARGE_TABLE 
> failed due to exception.
> 2015-06-02 
> 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|java.io.IOException: 
> BulkLoad encountered an unrecoverable problem
> 2015-06-02 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.bulkLoadPhase(LoadIncrementalHFiles.java:474)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:405)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:300)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool$TableLoader.call(CsvBulkLoadTool.java:517)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool$TableLoader.call(CsvBulkLoadTool.java:466)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:172)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 2015-06-02 05:01:56,124|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 2015-06-02 05:01:56,124|beaver.machine|INFO|7800|2276|MainThread|at 
> java.lang.Thread.run(Thread.java:745)
> ...
> ...
> ...
> 2015-06-02 05:58:34,993|beaver.machine|INFO|2828|7140|MainThread|Caused by: 
> org.apache.hadoop.hbase.client.NeedUnmanagedConnectionException: The 
> connection has to be unmanaged.
> 2015-06-02 05:58:34,993|beaver.machine|INFO|2828|7140|MainThread|at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTable(ConnectionManager.java:724)
> 2015-06-02 05:58:34,994|beaver.machine|INFO|2828|7140|MainThread|at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTable(ConnectionManager.java:708)
> 2015-06-02 05:58:

[jira] [Updated] (HBASE-13864) HColumnDescriptor should parse the output from master and from describe for ttl

2015-06-11 Thread Ashu Pachauri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashu Pachauri updated HBASE-13864:
--
Attachment: (was: HBASE-13684.patch)

> HColumnDescriptor should parse the output from master and from describe for 
> ttl
> ---
>
> Key: HBASE-13864
> URL: https://issues.apache.org/jira/browse/HBASE-13864
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: Elliott Clark
>Assignee: Ashu Pachauri
> Attachments: HBASE-13864.patch
>
>
> The TTL printing on HColumnDescriptor adds a human readable time. When using 
> that string for the create command it throws an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13864) HColumnDescriptor should parse the output from master and from describe for ttl

2015-06-11 Thread Ashu Pachauri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashu Pachauri updated HBASE-13864:
--
Attachment: HBASE-13864.patch

> HColumnDescriptor should parse the output from master and from describe for 
> ttl
> ---
>
> Key: HBASE-13864
> URL: https://issues.apache.org/jira/browse/HBASE-13864
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: Elliott Clark
>Assignee: Ashu Pachauri
> Attachments: HBASE-13864.patch
>
>
> The TTL printing on HColumnDescriptor adds a human readable time. When using 
> that string for the create command it throws an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13864) HColumnDescriptor should parse the output from master and from describe for ttl

2015-06-11 Thread Ashu Pachauri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashu Pachauri updated HBASE-13864:
--
Status: Patch Available  (was: Open)

> HColumnDescriptor should parse the output from master and from describe for 
> ttl
> ---
>
> Key: HBASE-13864
> URL: https://issues.apache.org/jira/browse/HBASE-13864
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: Elliott Clark
>Assignee: Ashu Pachauri
> Attachments: HBASE-13864.patch
>
>
> The TTL printing on HColumnDescriptor adds a human readable time. When using 
> that string for the create command it throws an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582484#comment-14582484
 ] 

Hadoop QA commented on HBASE-13833:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12739097/HBASE-13833.02.branch-1.patch
  against branch-1 branch at commit 47bd7de6d87f35b2c07ca83be209f73ae2b41c27.
  ATTACHMENT ID: 12739097

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3801 checkstyle errors (more than the master's current 3800 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestRegionRebalancing

 {color:red}-1 core zombie tests{color}.  There are 4 zombie test(s):   
at 
org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster.testSplitWithRegionReplicas(TestSplitTransactionOnCluster.java:996)
at 
org.apache.camel.component.jetty.jettyproducer.HttpJettyProducerRecipientListCustomThreadPoolTest.testRecipientList(HttpJettyProducerRecipientListCustomThreadPoolTest.java:40)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14379//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14379//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14379//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14379//console

This message is automatically generated.

> LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged 
> connections when using SecureBulkLoad
> ---
>
> Key: HBASE-13833
> URL: https://issues.apache.org/jira/browse/HBASE-13833
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.0.1
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 1.1.1
>
> Attachments: HBASE-13833.00.branch-1.1.patch, 
> HBASE-13833.01.branch-1.1.patch, HBASE-13833.02.branch-1.0.patch, 
> HBASE-13833.02.branch-1.1.patch, HBASE-13833.02.branch-1.patch
>
>
> Seems HBASE-13328 wasn't quite sufficient.
> {noformat}
> 015-06-02 05:49:23,578|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory 
> hdfs://dal-pqc1:8020/tmp/192f21dd-cc89-4354-8ba1-78d1f228e7c7/LARGE_TABLE/_SUCCESS
> 2015-06-02 05:49:23,720|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 INFO hfile.CacheConfig: CacheConfig:disabled
> 2015-06-02 05:49:23,859|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 INFO mapreduce.LoadIncrementalHFiles: Trying to load 
> hfile=hdfs://dal-pqc1:8020/tmp/192f21dd-cc89-4354-8ba1-78d1f228e7c7/LARGE_TABLE/0/00870fd0a7544373b32b6f1e976bf47f
>  first=\x80\x00\x00\x00 last=\x80LK?
> 2015-06-02 05:50:32,028|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:50:32 INFO client.RpcRetryingCaller: Call exception, tries=10, retries=35, 
> started=68154 ms ago, cancelled=false, msg=row '' on table 'LARGE_TABLE' at 
> region=LARGE_TABLE,,1433222865285.e01e02483f30a060d3f7abb1846ea029., 
> hostname=dal-pqc5,16020,1433222547221, seqNum=2
> 2015-06-02 05:50:52,128|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:50:52 INFO client.RpcRetryingCaller: Call exception, tries=11, retries=35, 
> started=88255 ms ago, cancelled=false, msg=row '' on table

[jira] [Commented] (HBASE-13470) High level Integration test for master DDL operations

2015-06-11 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582478#comment-14582478
 ] 

Stephen Yuan Jiang commented on HBASE-13470:


The patch can be reviewed in https://reviews.apache.org/r/33603/ 

> High level Integration test for master DDL operations
> -
>
> Key: HBASE-13470
> URL: https://issues.apache.org/jira/browse/HBASE-13470
> Project: HBase
>  Issue Type: Sub-task
>  Components: master
>Reporter: Enis Soztutar
>Assignee: Sophia Feng
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: HBASE-13470-v0.patch, HBASE-13470-v1.patch, 
> HBASE-13470-v2.patch, HBASE-13470-v3.patch, HBASE-13470-v4.patch
>
>
> Our [~fengs] has an integration test which executes DDL operations with a new 
> monkey to kill the active master as a high level test for the proc v2 
> changes. 
> The test does random DDL operations from 20 client threads. The DDL 
> statements are create / delete / modify / enable / disable table and CF 
> operations. It runs HBCK to verify the end state. 
> The test can be run on a single master, or multi master setup. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-06-11 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13833:
-
Attachment: HBASE-13833.02.branch-1.0.patch

Here's a patch for branch-1.0. The original does not apply cleanly, so maybe 
[~enis] wants to take a look? Test\{Secure,\}LoadIncrementalHFiles passes 
locally for me.

> LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged 
> connections when using SecureBulkLoad
> ---
>
> Key: HBASE-13833
> URL: https://issues.apache.org/jira/browse/HBASE-13833
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.0.1
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 1.1.1
>
> Attachments: HBASE-13833.00.branch-1.1.patch, 
> HBASE-13833.01.branch-1.1.patch, HBASE-13833.02.branch-1.0.patch, 
> HBASE-13833.02.branch-1.1.patch, HBASE-13833.02.branch-1.patch
>
>
> Seems HBASE-13328 wasn't quite sufficient.
> {noformat}
> 015-06-02 05:49:23,578|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory 
> hdfs://dal-pqc1:8020/tmp/192f21dd-cc89-4354-8ba1-78d1f228e7c7/LARGE_TABLE/_SUCCESS
> 2015-06-02 05:49:23,720|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 INFO hfile.CacheConfig: CacheConfig:disabled
> 2015-06-02 05:49:23,859|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 INFO mapreduce.LoadIncrementalHFiles: Trying to load 
> hfile=hdfs://dal-pqc1:8020/tmp/192f21dd-cc89-4354-8ba1-78d1f228e7c7/LARGE_TABLE/0/00870fd0a7544373b32b6f1e976bf47f
>  first=\x80\x00\x00\x00 last=\x80LK?
> 2015-06-02 05:50:32,028|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:50:32 INFO client.RpcRetryingCaller: Call exception, tries=10, retries=35, 
> started=68154 ms ago, cancelled=false, msg=row '' on table 'LARGE_TABLE' at 
> region=LARGE_TABLE,,1433222865285.e01e02483f30a060d3f7abb1846ea029., 
> hostname=dal-pqc5,16020,1433222547221, seqNum=2
> 2015-06-02 05:50:52,128|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:50:52 INFO client.RpcRetryingCaller: Call exception, tries=11, retries=35, 
> started=88255 ms ago, cancelled=false, msg=row '' on table 'LARGE_TABLE' at 
> region=LARGE_TABLE,,1433222865285.e01e02483f30a060d3f7abb1846ea029., 
> hostname=dal-pqc5,16020,1433222547221, seqNum=2
> ...
> ...
> 2015-06-02 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|15/06/02 
> 05:01:56 ERROR mapreduce.CsvBulkLoadTool: Import job on table=LARGE_TABLE 
> failed due to exception.
> 2015-06-02 
> 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|java.io.IOException: 
> BulkLoad encountered an unrecoverable problem
> 2015-06-02 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.bulkLoadPhase(LoadIncrementalHFiles.java:474)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:405)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:300)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool$TableLoader.call(CsvBulkLoadTool.java:517)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool$TableLoader.call(CsvBulkLoadTool.java:466)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:172)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 2015-06-02 05:01:56,124|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 2015-06-02 05:01:56,124|beaver.machine|INFO|7800|2276|MainThread|at 
> java.lang.Thread.run(Thread.java:745)
> ...
> ...
> ...
> 2015-06-02 05:58:34,993|beaver.machine|INFO|2828|7140|MainThread|Caused by: 
> org.apache.hadoop.hbase.client.NeedUnmanagedConnectionException: The 
> connection has to be unmanaged.
> 2015-06-02 05:58:34,993|beaver.machine|INFO|2828|7140|MainThread|at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTable(ConnectionManager.java:724)
> 2015-06-02 05:58:34,994|beaver.machine|INFO|2828|7140|MainThread|at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTable(

[jira] [Updated] (HBASE-13833) LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad

2015-06-11 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13833:
-
Attachment: HBASE-13833.02.branch-1.patch

> LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged 
> connections when using SecureBulkLoad
> ---
>
> Key: HBASE-13833
> URL: https://issues.apache.org/jira/browse/HBASE-13833
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.0.1
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 1.1.1
>
> Attachments: HBASE-13833.00.branch-1.1.patch, 
> HBASE-13833.01.branch-1.1.patch, HBASE-13833.02.branch-1.1.patch, 
> HBASE-13833.02.branch-1.patch
>
>
> Seems HBASE-13328 wasn't quite sufficient.
> {noformat}
> 015-06-02 05:49:23,578|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory 
> hdfs://dal-pqc1:8020/tmp/192f21dd-cc89-4354-8ba1-78d1f228e7c7/LARGE_TABLE/_SUCCESS
> 2015-06-02 05:49:23,720|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 INFO hfile.CacheConfig: CacheConfig:disabled
> 2015-06-02 05:49:23,859|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:49:23 INFO mapreduce.LoadIncrementalHFiles: Trying to load 
> hfile=hdfs://dal-pqc1:8020/tmp/192f21dd-cc89-4354-8ba1-78d1f228e7c7/LARGE_TABLE/0/00870fd0a7544373b32b6f1e976bf47f
>  first=\x80\x00\x00\x00 last=\x80LK?
> 2015-06-02 05:50:32,028|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:50:32 INFO client.RpcRetryingCaller: Call exception, tries=10, retries=35, 
> started=68154 ms ago, cancelled=false, msg=row '' on table 'LARGE_TABLE' at 
> region=LARGE_TABLE,,1433222865285.e01e02483f30a060d3f7abb1846ea029., 
> hostname=dal-pqc5,16020,1433222547221, seqNum=2
> 2015-06-02 05:50:52,128|beaver.machine|INFO|2828|7140|MainThread|15/06/02 
> 05:50:52 INFO client.RpcRetryingCaller: Call exception, tries=11, retries=35, 
> started=88255 ms ago, cancelled=false, msg=row '' on table 'LARGE_TABLE' at 
> region=LARGE_TABLE,,1433222865285.e01e02483f30a060d3f7abb1846ea029., 
> hostname=dal-pqc5,16020,1433222547221, seqNum=2
> ...
> ...
> 2015-06-02 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|15/06/02 
> 05:01:56 ERROR mapreduce.CsvBulkLoadTool: Import job on table=LARGE_TABLE 
> failed due to exception.
> 2015-06-02 
> 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|java.io.IOException: 
> BulkLoad encountered an unrecoverable problem
> 2015-06-02 05:01:56,121|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.bulkLoadPhase(LoadIncrementalHFiles.java:474)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:405)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:300)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool$TableLoader.call(CsvBulkLoadTool.java:517)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool$TableLoader.call(CsvBulkLoadTool.java:466)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:172)
> 2015-06-02 05:01:56,122|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 2015-06-02 05:01:56,124|beaver.machine|INFO|7800|2276|MainThread|at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 2015-06-02 05:01:56,124|beaver.machine|INFO|7800|2276|MainThread|at 
> java.lang.Thread.run(Thread.java:745)
> ...
> ...
> ...
> 2015-06-02 05:58:34,993|beaver.machine|INFO|2828|7140|MainThread|Caused by: 
> org.apache.hadoop.hbase.client.NeedUnmanagedConnectionException: The 
> connection has to be unmanaged.
> 2015-06-02 05:58:34,993|beaver.machine|INFO|2828|7140|MainThread|at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTable(ConnectionManager.java:724)
> 2015-06-02 05:58:34,994|beaver.machine|INFO|2828|7140|MainThread|at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTable(ConnectionManager.java:708)
> 2015-06-02 05:58:34,994|beaver.machine|INFO|2828|7140|MainThread|at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getTable(ConnectionManager.ja

[jira] [Commented] (HBASE-13889) hbase-shaded-client artifact is missing dependency (therefore, does not work)

2015-06-11 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582305#comment-14582305
 ] 

Elliott Clark commented on HBASE-13889:
---

Blah, never mind. Yeah, it includes some but not all of the javax stuff.

> hbase-shaded-client artifact is missing dependency (therefore, does not work)
> -
>
> Key: HBASE-13889
> URL: https://issues.apache.org/jira/browse/HBASE-13889
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0, 1.1.0.1
> Environment: N/A?
>Reporter: Dmitry Minkovsky
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: Screen Shot 2015-06-11 at 10.59.55 AM.png
>
>
> The {{hbase-shaded-client}} artifact was introduced in 
> [HBASE-13517|https://issues.apache.org/jira/browse/HBASE-13517]. Thank you 
> very much for this, as I am new to Java building and was having a very 
> slow-moving time resolving conflicts. However, the shaded client artifact 
> seems to be missing {{javax.xml.transform.TransformerException}}.  I examined 
> the JAR, which does not have this package/class.
> Steps to reproduce:
> Java: 
> {code}
> package com.mycompany.app;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
>
> public class App {
>
>     public static void main( String[] args ) throws java.io.IOException {
>
>         Configuration config = HBaseConfiguration.create();
>         Connection connection = ConnectionFactory.createConnection(config);
>     }
> }
> {code}
> POM:
> {code}
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
>
>   <modelVersion>4.0.0</modelVersion>
>
>   <groupId>com.mycompany.app</groupId>
>   <artifactId>my-app</artifactId>
>   <version>1.0-SNAPSHOT</version>
>   <packaging>jar</packaging>

[jira] [Commented] (HBASE-13877) Interrupt to flush from TableFlushProcedure causes dataloss in ITBLL

2015-06-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582298#comment-14582298
 ] 

Enis Soztutar commented on HBASE-13877:
---

bq. Enis Soztutar any comment on Duo Zhang remark?
Let me see how I can simplify the contract. 
bq. Are dealing w/ above, I'm +1 on commit. I did not find incidence of the 
original issue in my run after looking in all logs. In my case, I am seeing 
double-assignment over a master restart.
Agreed, let's commit this and continue on yet another issue. 


> Interrupt to flush from TableFlushProcedure causes dataloss in ITBLL
> 
>
> Key: HBASE-13877
> URL: https://issues.apache.org/jira/browse/HBASE-13877
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: hbase-13877_v1.patch, hbase-13877_v2-branch-1.1.patch
>
>
> ITBLL with 1.25B rows failed for me (and Stack as reported in 
> https://issues.apache.org/jira/browse/HBASE-13811?focusedCommentId=14577834&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14577834)
>  
> HBASE-13811 and HBASE-13853 fixed an issue with WAL edit filtering. 
> The root cause this time seems to be different. It is due to the procedure-based
> flush interrupting the flush request when the procedure is cancelled by an
> exception elsewhere. This leaves the memstore snapshot intact without
> aborting the server. The next flush then flushes the previous memstore with
> the current seqId (as opposed to the seqId from the memstore snapshot). This
> creates an hfile with a larger seqId than its contents warrant. The previous
> behavior in 0.98 and 1.0 (I believe) is that an interruption/exception after
> flush prepare will cause an RS abort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13889) hbase-shaded-client artifact is missing dependency (therefore, does not work)

2015-06-11 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13889:
--
Attachment: Screen Shot 2015-06-11 at 10.59.55 AM.png

> hbase-shaded-client artifact is missing dependency (therefore, does not work)
> -
>
> Key: HBASE-13889
> URL: https://issues.apache.org/jira/browse/HBASE-13889
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0, 1.1.0.1
> Environment: N/A?
>Reporter: Dmitry Minkovsky
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
> Attachments: Screen Shot 2015-06-11 at 10.59.55 AM.png
>
>
> The {{hbase-shaded-client}} artifact was introduced in 
> [HBASE-13517|https://issues.apache.org/jira/browse/HBASE-13517]. Thank you 
> very much for this, as I am new to Java building and was having a very 
> slow-moving time resolving conflicts. However, the shaded client artifact 
> seems to be missing {{javax.xml.transform.TransformerException}}.  I examined 
> the JAR, which does not have this package/class.
> Steps to reproduce:
> Java: 
> {code}
> package com.mycompany.app;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
>
> public class App {
>
>     public static void main( String[] args ) throws java.io.IOException {
>
>         Configuration config = HBaseConfiguration.create();
>         Connection connection = ConnectionFactory.createConnection(config);
>     }
> }
> {code}
> POM:
> {code}
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
>
>   <modelVersion>4.0.0</modelVersion>
>
>   <groupId>com.mycompany.app</groupId>
>   <artifactId>my-app</artifactId>
>   <version>1.0-SNAPSHOT</version>
>   <packaging>jar</packaging>

[jira] [Commented] (HBASE-13889) hbase-shaded-client artifact is missing dependency (therefore, does not work)

2015-06-11 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582286#comment-14582286
 ] 

Elliott Clark commented on HBASE-13889:
---

Hmm, it looks like the shaded jar doesn't include any shaded classes.

> hbase-shaded-client artifact is missing dependency (therefore, does not work)
> -
>
> Key: HBASE-13889
> URL: https://issues.apache.org/jira/browse/HBASE-13889
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0, 1.1.0.1
> Environment: N/A?
>Reporter: Dmitry Minkovsky
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
>
> The {{hbase-shaded-client}} artifact was introduced in 
> [HBASE-13517|https://issues.apache.org/jira/browse/HBASE-13517]. Thank you 
> very much for this, as I am new to Java building and was having a very 
> slow-moving time resolving conflicts. However, the shaded client artifact 
> seems to be missing {{javax.xml.transform.TransformerException}}.  I examined 
> the JAR, which does not have this package/class.
> Steps to reproduce:
> Java: 
> {code}
> package com.mycompany.app;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
>
> public class App {
>
>     public static void main( String[] args ) throws java.io.IOException {
>
>         Configuration config = HBaseConfiguration.create();
>         Connection connection = ConnectionFactory.createConnection(config);
>     }
> }
> {code}
> POM:
> {code}
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
>
>   <modelVersion>4.0.0</modelVersion>
>
>   <groupId>com.mycompany.app</groupId>
>   <artifactId>my-app</artifactId>
>   <version>1.0-SNAPSHOT</version>
>   <packaging>jar</packaging>

[jira] [Commented] (HBASE-13889) hbase-shaded-client artifact is missing dependency (therefore, does not work)

2015-06-11 Thread Dmitry Minkovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582272#comment-14582272
 ] 

Dmitry Minkovsky commented on HBASE-13889:
--

Oh yes obviously, good call. Excellent opportunity for me to try to fix this 
myself while maybe [~eclark] or someone else who knows how to fix this quickly 
can step in. 

> hbase-shaded-client artifact is missing dependency (therefore, does not work)
> -
>
> Key: HBASE-13889
> URL: https://issues.apache.org/jira/browse/HBASE-13889
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0, 1.1.0.1
> Environment: N/A?
>Reporter: Dmitry Minkovsky
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
>
> The {{hbase-shaded-client}} artifact was introduced in 
> [HBASE-13517|https://issues.apache.org/jira/browse/HBASE-13517]. Thank you 
> very much for this, as I am new to Java building and was having a very 
> slow-moving time resolving conflicts. However, the shaded client artifact 
> seems to be missing {{javax.xml.transform.TransformerException}}.  I examined 
> the JAR, which does not have this package/class.
> Steps to reproduce:
> Java: 
> {code}
> package com.mycompany.app;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
>
> public class App {
>
>     public static void main( String[] args ) throws java.io.IOException {
>
>         Configuration config = HBaseConfiguration.create();
>         Connection connection = ConnectionFactory.createConnection(config);
>     }
> }
> {code}
> POM:
> {code}
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
>
>   <modelVersion>4.0.0</modelVersion>
>
>   <groupId>com.mycompany.app</groupId>
>   <artifactId>my-app</artifactId>
>   <version>1.0-SNAPSHOT</version>

[jira] [Commented] (HBASE-13889) hbase-shaded-client artifact is missing dependency (therefore, does not work)

2015-06-11 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582271#comment-14582271
 ] 

Nick Dimiduk commented on HBASE-13889:
--

This is a JDK class, so it should be excluded from the shade.

http://docs.oracle.com/javase/7/docs/api/javax/xml/transform/TransformerException.html

> hbase-shaded-client artifact is missing dependency (therefore, does not work)
> -
>
> Key: HBASE-13889
> URL: https://issues.apache.org/jira/browse/HBASE-13889
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0, 1.1.0.1
> Environment: N/A?
>Reporter: Dmitry Minkovsky
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
>
> The {{hbase-shaded-client}} artifact was introduced in 
> [HBASE-13517|https://issues.apache.org/jira/browse/HBASE-13517]. Thank you 
> very much for this, as I am new to Java building and was having a very 
> slow-moving time resolving conflicts. However, the shaded client artifact 
> seems to be missing {{javax.xml.transform.TransformerException}}.  I examined 
> the JAR, which does not have this package/class.
> Steps to reproduce:
> Java: 
> {code}
> package com.mycompany.app;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
>
> public class App {
>
>     public static void main( String[] args ) throws java.io.IOException {
>
>         Configuration config = HBaseConfiguration.create();
>         Connection connection = ConnectionFactory.createConnection(config);
>     }
> }
> {code}
> POM:
> {code}
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
>
>   <modelVersion>4.0.0</modelVersion>
>
>   <groupId>com.mycompany.app</groupId>
>   <artifactId>my-app</artifactId>
>   <version>1.0-SNAPSHOT</version>
>   <packaging>jar</packaging>

[jira] [Updated] (HBASE-13889) hbase-shaded-client artifact is missing dependency (therefore, does not work)

2015-06-11 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13889:
-
Fix Version/s: 1.1.1
   1.2.0
   2.0.0

> hbase-shaded-client artifact is missing dependency (therefore, does not work)
> -
>
> Key: HBASE-13889
> URL: https://issues.apache.org/jira/browse/HBASE-13889
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.0, 1.1.0.1
> Environment: N/A?
>Reporter: Dmitry Minkovsky
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.1.1
>
>
> The {{hbase-shaded-client}} artifact was introduced in 
> [HBASE-13517|https://issues.apache.org/jira/browse/HBASE-13517]. Thank you 
> very much for this, as I am new to Java building and was having a very 
> slow-moving time resolving conflicts. However, the shaded client artifact 
> seems to be missing {{javax.xml.transform.TransformerException}}.  I examined 
> the JAR, which does not have this package/class.
> Steps to reproduce:
> Java: 
> {code}
> package com.mycompany.app;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
>
> public class App {
>
>     public static void main( String[] args ) throws java.io.IOException {
>
>         Configuration config = HBaseConfiguration.create();
>         Connection connection = ConnectionFactory.createConnection(config);
>     }
> }
> {code}
> POM:
> {code}
> <project xmlns="http://maven.apache.org/POM/4.0.0"
>          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
>
>   <modelVersion>4.0.0</modelVersion>
>
>   <groupId>com.mycompany.app</groupId>
>   <artifactId>my-app</artifactId>
>   <version>1.0-SNAPSHOT</version>
>   <packaging>jar</packaging>

[jira] [Created] (HBASE-13889) hbase-shaded-client artifact is missing dependency (therefore, does not work)

2015-06-11 Thread Dmitry Minkovsky (JIRA)
Dmitry Minkovsky created HBASE-13889:


 Summary: hbase-shaded-client artifact is missing dependency 
(therefore, does not work)
 Key: HBASE-13889
 URL: https://issues.apache.org/jira/browse/HBASE-13889
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.1.0, 1.1.0.1
 Environment: N/A?
Reporter: Dmitry Minkovsky
Priority: Blocker


The {{hbase-shaded-client}} artifact was introduced in 
[HBASE-13517|https://issues.apache.org/jira/browse/HBASE-13517]. Thank you very 
much for this, as I am new to Java building and was having a very slow-moving 
time resolving conflicts. However, the shaded client artifact seems to be 
missing {{javax.xml.transform.TransformerException}}.  I examined the JAR, 
which does not have this package/class.

Steps to reproduce:

Java: 

{code}
package com.mycompany.app;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class App {

    public static void main( String[] args ) throws java.io.IOException {

        Configuration config = HBaseConfiguration.create();
        Connection connection = ConnectionFactory.createConnection(config);
    }
}
{code}

POM:

{code}
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>my-app</name>
  <url>http://maven.apache.org</url>

[jira] [Commented] (HBASE-13887) Document 0.98 release build differences

2015-06-11 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1458#comment-1458
 ] 

Nick Dimiduk commented on HBASE-13887:
--

bq. This JDK suffers from JDK-6521495

Ouch.

> Document 0.98 release build differences
> ---
>
> Key: HBASE-13887
> URL: https://issues.apache.org/jira/browse/HBASE-13887
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, site
>Reporter: Andrew Purtell
> Fix For: 2.0.0
>
>
> The build instructions in the online manual do not describe the extra steps 
> we need to take to build 0.98. Add a section on this.
> A quick enumeration of the differences:
> 1. Source assemblies will be missing the hbase-hadoop1-compat module. This 
> should be fixed in the POM somehow. What I do now is untar the src tarball, 
> cp -a the module over, then tar up the result. (It's a hack in a release 
> script.)
> 2. We must munge POMs for building hadoop1 and hadoop2 variants and then 
> execute two builds, pointing Maven at each munged POM. The
> generate-hadoopX-poms.sh script requires bash:
> {noformat}
> $ bash dev-support/generate-hadoopX-poms.sh $version $version-hadoop1
> $ bash dev-support/generate-hadoopX-poms.sh $version $version-hadoop2
> {noformat}
> Build Hadoop 1
> {noformat}
>   $ mvn -f pom.xml.hadoop1 clean install -DskipTests -Prelease && \
>   mvn -f pom.xml.hadoop1 install -DskipTests site assembly:single \
> -Prelease && \
>   mvn -f pom.xml.hadoop1 deploy -DskipTests -Papache-release
>   $ cp hbase-assembly/target/hbase*-bin.tar.gz $release_dir
> {noformat}
> Build Hadoop 2
> {noformat}
>   $ mvn -f pom.xml.hadoop2 clean install -DskipTests -Prelease && \
>   mvn -f pom.xml.hadoop2 install -DskipTests site assembly:single \
> -Prelease && \
>   mvn -f pom.xml.hadoop2 deploy -DskipTests -Papache-release
>   $ cp hbase-assembly/target/hbase*-bin.tar.gz $release_dir
> {noformat}
> 3. Current HEAD of 0.98 branch enforces a requirement that the release be 
> built with a JDK no more recent than the compile language level. For 0.98, 
> that is 1.6, therefore the ancient 6u45 JDK. This JDK suffers from 
> [JDK-6521495|http://bugs.java.com/bugdatabase/view_bug.do?bug_id=6521495] so 
> the following workaround is required in order to deploy artifacts to Apache's 
> Nexus:
> 3.a. Download https://www.bouncycastle.org/download/bcprov-jdk15on-152.jar 
> and https://www.bouncycastle.org/download/bcprov-ext-jdk15on-152.jar into 
> $JAVA_HOME/lib/ext.
> 3.b. Edit $JAVA_HOME/lib/security/java.security and add the BouncyCastle 
> provider as the first provider: 
> {noformat}
> security.provider.1=org.bouncycastle.jce.provider.BouncyCastleProvider
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13560) Large compaction queue should steal from small compaction queue when idle

2015-06-11 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582193#comment-14582193
 ] 

Elliott Clark commented on HBASE-13560:
---

bq. Should there be a config which disables this feature?
Please no more configs.

> Large compaction queue should steal from small compaction queue when idle
> -
>
> Key: HBASE-13560
> URL: https://issues.apache.org/jira/browse/HBASE-13560
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Elliott Clark
>Assignee: Changgeng Li
> Attachments: HBASE-13560.patch, queuestealwork-v1.patch, 
> queuestealwork-v4.patch, queuestealwork-v5.patch, queuestealwork-v6.patch, 
> queuestealwork-v7.patch
>
>
> If you tune compaction threads so that a server is never over-committed when
> large and small compaction threads are busy, then it should be possible to
> have the large compactions steal work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13881) Bugs in HTable incrementColumnValue implementation

2015-06-11 Thread Jerry Lam (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14582038#comment-14582038
 ] 

Jerry Lam commented on HBASE-13881:
---

No, I don't, but I can create one.

I believe the semantics of writeToWAL = true are equivalent to Durability.SYNC_WAL? 
Please confirm.

Thank you.

> Bugs in HTable incrementColumnValue implementation
> --
>
> Key: HBASE-13881
> URL: https://issues.apache.org/jira/browse/HBASE-13881
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.6.1, 1.0.1
>Reporter: Jerry Lam
>
> The exact method I'm talking about is:
> {code}
> @Deprecated
>   @Override
>   public long incrementColumnValue(final byte [] row, final byte [] family,
>   final byte [] qualifier, final long amount, final boolean writeToWAL)
>   throws IOException {
> return incrementColumnValue(row, family, qualifier, amount,
>   writeToWAL? Durability.SKIP_WAL: Durability.USE_DEFAULT);
>   }
> {code}
> When writeToWAL is set to true, Durability is set to SKIP_WAL, which does not 
> make much sense unless the meaning of SKIP_WAL is negated.
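
For illustration, a minimal sketch of what the corrected mapping could look like, 
assuming writeToWAL = true is meant to map to synchronous WAL behaviour (whether 
SYNC_WAL or USE_DEFAULT is the right target is exactly the open question above; 
this is not a committed fix):
{code}
// Hypothetical corrected mapping: writeToWAL=true keeps the WAL,
// only writeToWAL=false skips it.
@Deprecated
@Override
public long incrementColumnValue(final byte[] row, final byte[] family,
    final byte[] qualifier, final long amount, final boolean writeToWAL)
    throws IOException {
  return incrementColumnValue(row, family, qualifier, amount,
      writeToWAL ? Durability.SYNC_WAL : Durability.SKIP_WAL);
}
{code}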



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13888) refill bug from HBASE-13686

2015-06-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581954#comment-14581954
 ] 

Ted Yu commented on HBASE-13888:


Any chance of a unit test that shows the problem ?

Thanks

> refill bug from HBASE-13686
> ---
>
> Key: HBASE-13888
> URL: https://issues.apache.org/jira/browse/HBASE-13888
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-13888-v1.patch
>
>
> As I reported in HBASE-13686, the RateLimiter failed to limit properly; [~ashish 
> singhi] fixed that problem by supporting two kinds of RateLimiter: 
> AverageIntervalRateLimiter and FixedIntervalRateLimiter. But while using the 
> code, I found a new bug in refill() of AverageIntervalRateLimiter.
> {code}
> long delta = (limit * (now - nextRefillTime)) / 
> super.getTimeUnitInMillis();
> if (delta > 0) {
>   this.nextRefillTime = now;
>   return Math.min(limit, available + delta);
> }   
> {code}
> When delta > 0, refill() may return available + delta. Then, in canExecute(), 
> refillAmount is added to avail again, so the new avail may become 2 * 
> avail + delta.
> {code}
> long refillAmount = refill(limit, avail);
> if (refillAmount == 0 && avail < amount) {
>   return false;
> }   
> // check for positive overflow
> if (avail <= Long.MAX_VALUE - refillAmount) {
>   avail = Math.max(0, Math.min(avail + refillAmount, limit));
> } else {
>   avail = Math.max(0, limit);
> } 
> {code}
> I will add more unit tests for RateLimiter in the next few days.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13888) refill bug from HBASE-13686

2015-06-11 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-13888:
---
Attachment: HBASE-13888-v1.patch

> refill bug from HBASE-13686
> ---
>
> Key: HBASE-13888
> URL: https://issues.apache.org/jira/browse/HBASE-13888
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-13888-v1.patch
>
>
> As I reported in HBASE-13686, the RateLimiter failed to limit properly; [~ashish 
> singhi] fixed that problem by supporting two kinds of RateLimiter: 
> AverageIntervalRateLimiter and FixedIntervalRateLimiter. But while using the 
> code, I found a new bug in refill() of AverageIntervalRateLimiter.
> {code}
> long delta = (limit * (now - nextRefillTime)) / 
> super.getTimeUnitInMillis();
> if (delta > 0) {
>   this.nextRefillTime = now;
>   return Math.min(limit, available + delta);
> }   
> {code}
> When delta > 0, refill() may return available + delta. Then, in canExecute(), 
> refillAmount is added to avail again, so the new avail may become 2 * 
> avail + delta.
> {code}
> long refillAmount = refill(limit, avail);
> if (refillAmount == 0 && avail < amount) {
>   return false;
> }   
> // check for positive overflow
> if (avail <= Long.MAX_VALUE - refillAmount) {
>   avail = Math.max(0, Math.min(avail + refillAmount, limit));
> } else {
>   avail = Math.max(0, limit);
> } 
> {code}
> I will add more unit tests for RateLimiter in the next few days.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13888) refill bug from HBASE-13686

2015-06-11 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-13888:
--

 Summary: refill bug from HBASE-13686
 Key: HBASE-13888
 URL: https://issues.apache.org/jira/browse/HBASE-13888
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Guanghao Zhang
Assignee: Guanghao Zhang


As I reported in HBASE-13686, the RateLimiter failed to limit properly; [~ashish singhi] 
fixed that problem by supporting two kinds of RateLimiter: 
AverageIntervalRateLimiter and FixedIntervalRateLimiter. But while using the 
code, I found a new bug in refill() of AverageIntervalRateLimiter.
{code}
long delta = (limit * (now - nextRefillTime)) / super.getTimeUnitInMillis();
if (delta > 0) {
  this.nextRefillTime = now;
  return Math.min(limit, available + delta);
}   
{code}
When delta > 0, refill() may return available + delta. Then, in canExecute(), 
refillAmount is added to avail again, so the new avail may become 2 * 
avail + delta.
{code}
long refillAmount = refill(limit, avail);
if (refillAmount == 0 && avail < amount) {
  return false;
}   
// check for positive overflow
if (avail <= Long.MAX_VALUE - refillAmount) {
  avail = Math.max(0, Math.min(avail + refillAmount, limit));
} else {
  avail = Math.max(0, limit);
} 
{code}
I will add more unit tests for RateLimiter in the next few days.
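
For illustration, a minimal sketch of the behaviour the description argues for, 
assuming refill() is supposed to return only the newly accumulated amount. It 
reuses the field and method names from the snippet above but is not the attached 
HBASE-13888-v1.patch:
{code}
// Hypothetical refill(): return only what was added since the last refill so
// that canExecute() can add the result to avail without double counting.
long refill(long limit, long available) {
  final long now = System.currentTimeMillis(); // placeholder clock source
  long delta = (limit * (now - nextRefillTime)) / super.getTimeUnitInMillis();
  if (delta <= 0) {
    return 0;
  }
  this.nextRefillTime = now;
  // Cap the refill so that available + refillAmount never exceeds the limit.
  return Math.min(delta, Math.max(0, limit - available));
}
{code}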




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-12713) IncreasingToUpperBoundRegionSplitPolicy invalid when region count greater than 100

2015-06-11 Thread Liu Junhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Junhong resolved HBASE-12713.
-
Resolution: Won't Fix

> IncreasingToUpperBoundRegionSplitPolicy invalid when region count greater 
> than 100
> --
>
> Key: HBASE-12713
> URL: https://issues.apache.org/jira/browse/HBASE-12713
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.98.6
>Reporter: Liu Junhong
>   Original Estimate: 4m
>  Remaining Estimate: 4m
>
> I find that the region split policy in 
> IncreasingToUpperBoundRegionSplitPolicy ends up using the value of 
> "maxfilesize" once the region count is greater than 100. But sometimes 100 
> regions is not too many for a cluster that has 50 or more regionservers.
> So I think this policy should consider the density of the regions (regions per 
> regionserver) rather than the total count of the regions.
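
For context, a rough sketch of the kind of size check this policy performs. The 
constants and the exact growth function vary across versions, so treat it as 
illustrative rather than the 0.98 source:
{code}
// Illustrative only: the effective split size grows with the number of regions
// of the table on the region server and is capped at hbase.hregion.max.filesize,
// so beyond some region count the cap dominates and region density no longer
// influences the decision.
long sizeToCheck(long maxFileSize, long memstoreFlushSize, int regionCount) {
  if (regionCount == 0) {
    return maxFileSize;
  }
  long growing = 2L * memstoreFlushSize
      * (long) regionCount * regionCount * regionCount;
  return Math.min(maxFileSize, growing);
}
{code}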



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13828) Add group permissions testing coverage to AC.

2015-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581782#comment-14581782
 ] 

Hudson commented on HBASE-13828:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #978 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/978/])
HBASE-13828 Add group permissions testing coverage to AC (apurtell: rev 
95cc075a8acbe6313852c37836b8ed145cfdbb33)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestNamespaceCommands.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController2.java


> Add group permissions testing coverage to AC.
> -
>
> Key: HBASE-13828
> URL: https://issues.apache.org/jira/browse/HBASE-13828
> Project: HBase
>  Issue Type: Improvement
>Reporter: Srikanth Srungarapu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1
>
> Attachments: HBASE-13828-0.98-v2.patch, 
> HBASE-13828-branch-1-v2.patch, HBASE-13828-branch-1.0-v2.patch, 
> HBASE-13828-branch-1.1-v2.patch, HBASE-13828-v1.patch, HBASE-13828-v2.patch, 
> HBASE-13828-v3.patch, HBASE-13828.patch
>
>
> We suffered a regression HBASE-13826 recently due to lack of testing coverage 
> for group permissions for AC. With the recent perf boost provided by 
> HBASE-13658, it wouldn't be a bad idea to add checks for group level users to 
> applicable unit tests in TestAccessController.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-13855) Race in multi threaded PartitionedMobCompactor causes NPE

2015-06-11 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John resolved HBASE-13855.

  Resolution: Fixed
Hadoop Flags: Reviewed

Thanks Jingcheng.

> Race in multi threaded PartitionedMobCompactor causes NPE
> -
>
> Key: HBASE-13855
> URL: https://issues.apache.org/jira/browse/HBASE-13855
> Project: HBase
>  Issue Type: Sub-task
>  Components: mob
>Affects Versions: hbase-11339
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
>Priority: Critical
> Fix For: hbase-11339
>
> Attachments: HBASE-13855.diff
>
>
> In PartitionedMobCompactor, mob files are split into partitions, and the 
> compactions of the partitions run in parallel.
> The partitions share the same set of del files. There might be race 
> conditions when opening readers of the del store files in each partition, which 
> can cause an NPE.
> In this patch, we pre-create the reader for each del store file to avoid 
> this issue.
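
For illustration, a minimal sketch of the idea behind the fix, assuming the shared 
del-file readers are opened once on the submitting thread before the per-partition 
tasks start. The types and names are placeholders, not the actual HBASE-13855.diff:
{code}
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative pattern: pre-create every shared reader single-threaded, then
// let the parallel partition compactions only look them up.
class SharedDelFileReaders<R> {
  private final Map<String, R> readers = new HashMap<>();

  void preCreate(Collection<String> delFilePaths, Function<String, R> open) {
    for (String path : delFilePaths) {
      readers.put(path, open.apply(path)); // done before any task is submitted
    }
  }

  R get(String path) {
    return readers.get(path); // partition tasks only read, never open
  }
}
{code}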



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13855) Race in multi threaded PartitionedMobCompactor causes NPE

2015-06-11 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13855:
---
Summary: Race in multi threaded PartitionedMobCompactor causes NPE  (was: 
NPE in PartitionedMobCompactor)

> Race in multi threaded PartitionedMobCompactor causes NPE
> -
>
> Key: HBASE-13855
> URL: https://issues.apache.org/jira/browse/HBASE-13855
> Project: HBase
>  Issue Type: Sub-task
>  Components: mob
>Affects Versions: hbase-11339
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
>Priority: Critical
> Fix For: hbase-11339
>
> Attachments: HBASE-13855.diff
>
>
> In PartitionedMobCompactor, mob files are split into partitions, and the 
> compactions of the partitions run in parallel.
> The partitions share the same set of del files. There might be race 
> conditions when opening readers of the del store files in each partition, which 
> can cause an NPE.
> In this patch, we pre-create the reader for each del store file to avoid 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13876) Improving performance of HeapMemoryManager

2015-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581710#comment-14581710
 ] 

Hadoop QA commented on HBASE-13876:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12738995/HBASE-13876-v5.patch
  against master branch at commit 349cbe102a130b50852201e84dc7ac3bea4fc1f5.
  ATTACHMENT ID: 12738995

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestRegionRebalancing

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14378//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14378//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14378//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14378//console

This message is automatically generated.

> Improving performance of HeapMemoryManager
> --
>
> Key: HBASE-13876
> URL: https://issues.apache.org/jira/browse/HBASE-13876
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase, regionserver
>Affects Versions: 2.0.0, 1.0.1, 1.1.0, 1.1.1
>Reporter: Abhilash
>Assignee: Abhilash
>Priority: Minor
> Attachments: HBASE-13876-v2.patch, HBASE-13876-v3.patch, 
> HBASE-13876-v4.patch, HBASE-13876-v5.patch, HBASE-13876-v5.patch, 
> HBASE-13876.patch
>
>
> I am trying to improve the performance of DefaultHeapMemoryTuner by 
> introducing some more checks. The conditions under which the current 
> DefaultHeapMemoryTuner acts are very rare, so I am trying to weaken these 
> checks to improve its effectiveness.
> First, check the current memstore size and the current block cache size. For 
> example, if we are using less than 50% of the currently available block cache 
> size, we say the block cache is sufficient, and the same for the memstore. This 
> check is very effective when the server is either load heavy or write heavy. 
> The earlier version just waited for the number of evictions / number of flushes 
> to be zero, which is very rare.
> Otherwise, based on the percent change in the number of cache misses and the 
> number of flushes, we increase / decrease the memory provided for caching / 
> memstore. After doing so, on the next call of HeapMemoryTuner we verify that 
> the last change has indeed decreased the number of evictions / flushes, 
> whichever it was expected to decrease. We also check that it does not make the 
> other (evictions / flushes) increase much. I do this analysis by comparing the 
> percent change (which is basically a normalized derivative) of the number of 
> evictions and the number of flushes during the last two periods. The main 
> motive for this was that if we have random reads, we will see a lot of cache 
> misses; but even after increasing the block cache we won't be able to decrease 
> the number of cache misses, so we will revert the change and eventually we will 
> not waste memory on the block cache. This also helps us ignore random 
> short-term spikes in reads / writes. I have also tried to take care not to tune 
> memory if we do not have enough hints, as unnecessary tuning may slow down the 
> system.
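
For illustration, a rough sketch of the heuristic described above. The names, the 
enum, and the 50% threshold are placeholders taken from the description; this is 
not the attached patch:
{code}
// Illustrative decision step for one heap-memory tuner round.
public class HeapTuningSketch {

  enum StepDirection { INCREASE_BLOCK_CACHE, INCREASE_MEMSTORE, NEUTRAL }

  StepDirection decide(double memstoreUsedFraction, double blockCacheUsedFraction,
      double evictionChangePct, double flushChangePct) {
    // A consumer using less than half of what it already has is "sufficient"
    // and can donate memory to the other side.
    boolean blockCacheSufficient = blockCacheUsedFraction < 0.5;
    boolean memstoreSufficient = memstoreUsedFraction < 0.5;
    if (blockCacheSufficient && memstoreSufficient) {
      return StepDirection.NEUTRAL;                  // neither side needs more
    }
    if (blockCacheSufficient) {
      return StepDirection.INCREASE_MEMSTORE;
    }
    if (memstoreSufficient) {
      return StepDirection.INCREASE_BLOCK_CACHE;
    }
    // Both sides show pressure: compare the percent change (a normalized
    // derivative) of evictions vs. flushes over the last two periods and give
    // memory to the faster-growing side; a bad step is reverted next round.
    if (evictionChangePct > flushChangePct) {
      return StepDirection.INCREASE_BLOCK_CACHE;
    }
    if (flushChangePct > evictionChangePct) {
      return StepDirection.INCREASE_MEMSTORE;
    }
    return StepDirection.NEUTRAL;
  }
}
{code}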



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13828) Add group permissions testing coverage to AC.

2015-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581703#comment-14581703
 ] 

Hudson commented on HBASE-13828:


SUCCESS: Integrated in HBase-0.98 #1025 (See 
[https://builds.apache.org/job/HBase-0.98/1025/])
HBASE-13828 Add group permissions testing coverage to AC (apurtell: rev 
95cc075a8acbe6313852c37836b8ed145cfdbb33)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestNamespaceCommands.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController2.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java


> Add group permissions testing coverage to AC.
> -
>
> Key: HBASE-13828
> URL: https://issues.apache.org/jira/browse/HBASE-13828
> Project: HBase
>  Issue Type: Improvement
>Reporter: Srikanth Srungarapu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1
>
> Attachments: HBASE-13828-0.98-v2.patch, 
> HBASE-13828-branch-1-v2.patch, HBASE-13828-branch-1.0-v2.patch, 
> HBASE-13828-branch-1.1-v2.patch, HBASE-13828-v1.patch, HBASE-13828-v2.patch, 
> HBASE-13828-v3.patch, HBASE-13828.patch
>
>
> We suffered a regression HBASE-13826 recently due to lack of testing coverage 
> for group permissions for AC. With the recent perf boost provided by 
> HBASE-13658, it wouldn't be a bad idea to add checks for group level users to 
> applicable unit tests in TestAccessController.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13855) NPE in PartitionedMobCompactor

2015-06-11 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581623#comment-14581623
 ] 

Anoop Sam John commented on HBASE-13855:


This is a critical issue and I am going to commit it now. [~jmhsieh], please check 
whether the test caused the NPE for you before this patch.

> NPE in PartitionedMobCompactor
> --
>
> Key: HBASE-13855
> URL: https://issues.apache.org/jira/browse/HBASE-13855
> Project: HBase
>  Issue Type: Sub-task
>  Components: mob
>Affects Versions: hbase-11339
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
>Priority: Critical
> Fix For: hbase-11339
>
> Attachments: HBASE-13855.diff
>
>
> In PartitionedMobCompactor, mob files are split into partitions, and the 
> compactions of the partitions run in parallel.
> The partitions share the same set of del files. There might be race 
> conditions when opening readers of the del store files in each partition, which 
> can cause an NPE.
> In this patch, we pre-create the reader for each del store file to avoid 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13855) NPE in PartitionedMobCompactor

2015-06-11 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13855:
---
Priority: Critical  (was: Major)

> NPE in PartitionedMobCompactor
> --
>
> Key: HBASE-13855
> URL: https://issues.apache.org/jira/browse/HBASE-13855
> Project: HBase
>  Issue Type: Sub-task
>  Components: mob
>Affects Versions: hbase-11339
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
>Priority: Critical
> Fix For: hbase-11339
>
> Attachments: HBASE-13855.diff
>
>
> In PartitionedMobCompactor, mob files are split into partitions, and the 
> compactions of the partitions run in parallel.
> The partitions share the same set of del files. There might be race 
> conditions when opening readers of the del store files in each partition, which 
> can cause an NPE.
> In this patch, we pre-create the reader for each del store file to avoid 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13828) Add group permissions testing coverage to AC.

2015-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581591#comment-14581591
 ] 

Hudson commented on HBASE-13828:


FAILURE: Integrated in HBase-TRUNK #6562 (See 
[https://builds.apache.org/job/HBase-TRUNK/6562/])
HBASE-13828 Add group permissions testing coverage to AC (apurtell: rev 
349cbe102a130b50852201e84dc7ac3bea4fc1f5)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController2.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestNamespaceCommands.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java


> Add group permissions testing coverage to AC.
> -
>
> Key: HBASE-13828
> URL: https://issues.apache.org/jira/browse/HBASE-13828
> Project: HBase
>  Issue Type: Improvement
>Reporter: Srikanth Srungarapu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1
>
> Attachments: HBASE-13828-0.98-v2.patch, 
> HBASE-13828-branch-1-v2.patch, HBASE-13828-branch-1.0-v2.patch, 
> HBASE-13828-branch-1.1-v2.patch, HBASE-13828-v1.patch, HBASE-13828-v2.patch, 
> HBASE-13828-v3.patch, HBASE-13828.patch
>
>
> We suffered a regression HBASE-13826 recently due to lack of testing coverage 
> for group permissions for AC. With the recent perf boost provided by 
> HBASE-13658, it wouldn't be a bad idea to add checks for group level users to 
> applicable unit tests in TestAccessController.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13855) NPE in PartitionedMobCompactor

2015-06-11 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581585#comment-14581585
 ] 

Jingcheng Du commented on HBASE-13855:
--

Hi Jon [~jmhsieh], do you want to take a look?

> NPE in PartitionedMobCompactor
> --
>
> Key: HBASE-13855
> URL: https://issues.apache.org/jira/browse/HBASE-13855
> Project: HBase
>  Issue Type: Sub-task
>  Components: mob
>Affects Versions: hbase-11339
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Fix For: hbase-11339
>
> Attachments: HBASE-13855.diff
>
>
> In PartitionedMobCompactor, mob files are split into partitions, the 
> compactions of partitions run in parallel.
> The partitions share  the same set of del files. There might be race 
> conditions when open readers of del store files in each partition which can 
> cause NPE.
> In this patch, we will pre-create the reader for each del store file to avoid 
> this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13828) Add group permissions testing coverage to AC.

2015-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581572#comment-14581572
 ] 

Hudson commented on HBASE-13828:


SUCCESS: Integrated in HBase-1.1 #537 (See 
[https://builds.apache.org/job/HBase-1.1/537/])
HBASE-13828 Add group permissions testing coverage to AC (apurtell: rev 
7125dd4f97cb691462a9b79c2a818534c5c2de17)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestNamespaceCommands.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController2.java


> Add group permissions testing coverage to AC.
> -
>
> Key: HBASE-13828
> URL: https://issues.apache.org/jira/browse/HBASE-13828
> Project: HBase
>  Issue Type: Improvement
>Reporter: Srikanth Srungarapu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1
>
> Attachments: HBASE-13828-0.98-v2.patch, 
> HBASE-13828-branch-1-v2.patch, HBASE-13828-branch-1.0-v2.patch, 
> HBASE-13828-branch-1.1-v2.patch, HBASE-13828-v1.patch, HBASE-13828-v2.patch, 
> HBASE-13828-v3.patch, HBASE-13828.patch
>
>
> We suffered a regression HBASE-13826 recently due to lack of testing coverage 
> for group permissions for AC. With the recent perf boost provided by 
> HBASE-13658, it wouldn't be a bad idea to add checks for group level users to 
> applicable unit tests in TestAccessController.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13828) Add group permissions testing coverage to AC.

2015-06-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581569#comment-14581569
 ] 

Hudson commented on HBASE-13828:


FAILURE: Integrated in HBase-1.0 #958 (See 
[https://builds.apache.org/job/HBase-1.0/958/])
HBASE-13828 Add group permissions testing coverage to AC (apurtell: rev 
904ec1e4c3e7556b6ce290180c62f22469a8a608)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController2.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestNamespaceCommands.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java


> Add group permissions testing coverage to AC.
> -
>
> Key: HBASE-13828
> URL: https://issues.apache.org/jira/browse/HBASE-13828
> Project: HBase
>  Issue Type: Improvement
>Reporter: Srikanth Srungarapu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1
>
> Attachments: HBASE-13828-0.98-v2.patch, 
> HBASE-13828-branch-1-v2.patch, HBASE-13828-branch-1.0-v2.patch, 
> HBASE-13828-branch-1.1-v2.patch, HBASE-13828-v1.patch, HBASE-13828-v2.patch, 
> HBASE-13828-v3.patch, HBASE-13828.patch
>
>
> We suffered a regression HBASE-13826 recently due to lack of testing coverage 
> for group permissions for AC. With the recent perf boost provided by 
> HBASE-13658, it wouldn't be a bad idea to add checks for group level users to 
> applicable unit tests in TestAccessController.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)