[jira] [Updated] (HBASE-16672) Add option for bulk load to always copy hfile(s) instead of renaming

2016-09-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16672:
---
   Resolution: Fixed
Fix Version/s: 1.4.0
   2.0.0
   Status: Resolved  (was: Patch Available)

> Add option for bulk load to always copy hfile(s) instead of renaming
> 
>
> Key: HBASE-16672
> URL: https://issues.apache.org/jira/browse/HBASE-16672
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16672.v1.txt, 16672.v10.txt, 16672.v11.txt, 
> 16672.v2.txt, 16672.v3.txt, 16672.v4.txt, 16672.v5.txt, 16672.v6.txt, 
> 16672.v7.txt, 16672.v8.txt, 16672.v9.txt
>
>
> This is related to HBASE-14417. To support incrementally restoring to 
> multiple destinations, this issue adds an option that always copies 
> hfile(s) during bulk load instead of renaming them.
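The rename-vs-copy trade-off behind this option can be sketched with plain java.nio.file. This is a hypothetical stand-in, not HBase's actual FileSystem-based staging code; the class and method names are illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch: with the "always copy" option enabled, the source hfile survives
// the bulk load and can be loaded again into another destination (the
// incremental-restore use case). The default rename is near-instant on the
// same filesystem but consumes the source file.
public class BulkLoadFileOp {
    public static Path stageHFile(Path src, Path dstDir, boolean alwaysCopy)
            throws IOException {
        Path dst = dstDir.resolve(src.getFileName());
        if (alwaysCopy) {
            // Copy: slower (full data transfer) but the source remains usable.
            return Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
        }
        // Rename/move: cheap metadata operation, but the source is gone.
        return Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("bulkload");
        Path src = Files.write(tmp.resolve("hfile1"), new byte[]{1, 2, 3});
        Path dstDir = Files.createDirectory(tmp.resolve("region"));
        stageHFile(src, dstDir, true);          // copy: source remains
        System.out.println(Files.exists(src));  // true
        stageHFile(src, dstDir, false);         // move: source consumed
        System.out.println(Files.exists(src));  // false
    }
}
```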



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16731) Add Scan#setLoadColumnFamiliesOnDemand method to Get.

2016-09-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537637#comment-15537637
 ] 

Hadoop QA commented on HBASE-16731:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 8s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 54s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 40s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
36s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 148m 30s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestHRegion |
| Timed out junit tests | org.apache.hadoop.hbase.client.TestReplicasClient |
|   | org.apache.hadoop.hbase.client.TestFromClientSide |
|   | org.apache.hadoop.hbase.client.TestFromClientSide3 |
|   | org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient |
|   | org.apache.hadoop.hbase.client.TestTableSnapshotScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831188/HBASE-16731.v2.patch |
| JIRA Issue | HBASE-16731 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  cc  hbaseprotoc  |
| uname | Linux e2137f83ee40 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HBASE-16678) MapReduce jobs do not update counters from ScanMetrics

2016-09-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537615#comment-15537615
 ] 

Hudson commented on HBASE-16678:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #1703 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1703/])
HBASE-16678 MapReduce jobs do not update counters from ScanMetrics (enis: rev 
c3c82f3558b80b23c8f997d5bacfa78de384208a)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java


> MapReduce jobs do not update counters from ScanMetrics
> --
>
> Key: HBASE-16678
> URL: https://issues.apache.org/jira/browse/HBASE-16678
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.4, 1.1.8
>
> Attachments: hbase-16678_v1.patch
>
>
> I was inspecting a perf issue where we needed the scanner metrics as 
> counters for an MR job. It turns out that the HBase scan counters no longer 
> work in 1.0+; I think they were broken by HBASE-13030. 
> These are the counters:
> {code}
>   HBase Counters
>   BYTES_IN_REMOTE_RESULTS=0
>   BYTES_IN_RESULTS=280
>   MILLIS_BETWEEN_NEXTS=11
>   NOT_SERVING_REGION_EXCEPTION=0
>   NUM_SCANNER_RESTARTS=0
>   NUM_SCAN_RESULTS_STALE=0
>   REGIONS_SCANNED=1
>   REMOTE_RPC_CALLS=0
>   REMOTE_RPC_RETRIES=0
>   RPC_CALLS=3
>   RPC_RETRIES=0
> {code}
>  
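The effect of the fix, folding scan-side metric values into the job's counter group, can be sketched roughly as follows. This is a self-contained stand-in: the Counter class and updateCounters method are illustrative, not HBase's actual TableRecordReaderImpl code, and the Counter type approximates org.apache.hadoop.mapreduce.Counter:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: scan metrics collected client-side are periodically folded into
// the MR job's "HBase Counters" group so they appear in the job UI. The
// counter names mirror the ones listed in the issue description above.
public class ScanMetricsToCounters {
    // Stand-in for a Hadoop MapReduce counter: a named, incrementable long.
    static final class Counter {
        final AtomicLong value = new AtomicLong();
        void increment(long delta) { value.addAndGet(delta); }
    }

    public static Map<String, Counter> updateCounters(
            Map<String, Long> scanMetrics, Map<String, Counter> group) {
        // Fold each scan metric delta into the matching job counter.
        for (Map.Entry<String, Long> e : scanMetrics.entrySet()) {
            group.computeIfAbsent(e.getKey(), k -> new Counter())
                 .increment(e.getValue());
        }
        return group;
    }

    public static void main(String[] args) {
        Map<String, Long> metrics = new LinkedHashMap<>();
        metrics.put("BYTES_IN_RESULTS", 280L);
        metrics.put("RPC_CALLS", 3L);
        Map<String, Counter> counters = updateCounters(metrics, new HashMap<>());
        System.out.println(counters.get("RPC_CALLS").value.get());  // 3
    }
}
```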





[jira] [Commented] (HBASE-16678) MapReduce jobs do not update counters from ScanMetrics

2016-09-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537598#comment-15537598
 ] 

Hudson commented on HBASE-16678:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK7 #26 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/26/])
HBASE-16678 MapReduce jobs do not update counters from ScanMetrics (enis: rev 
9f364084a2a69800b4a4658cb80e3315b881ad8e)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java







[jira] [Commented] (HBASE-16678) MapReduce jobs do not update counters from ScanMetrics

2016-09-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537564#comment-15537564
 ] 

Hudson commented on HBASE-16678:


FAILURE: Integrated in Jenkins build HBase-1.1-JDK7 #1789 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1789/])
HBASE-16678 MapReduce jobs do not update counters from ScanMetrics (enis: rev 
88bf5b3b1d2ffd11a30c06905c1e51e9d6b2a65b)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java







[jira] [Commented] (HBASE-16678) MapReduce jobs do not update counters from ScanMetrics

2016-09-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537552#comment-15537552
 ] 

Hudson commented on HBASE-16678:


FAILURE: Integrated in Jenkins build HBase-1.1-JDK8 #1873 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1873/])
HBASE-16678 MapReduce jobs do not update counters from ScanMetrics (enis: rev 
88bf5b3b1d2ffd11a30c06905c1e51e9d6b2a65b)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java







[jira] [Commented] (HBASE-16741) Amend the generate protobufs out-of-band build step to include shade, pulling in protobuf source and a hook for patching protobuf

2016-09-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537529#comment-15537529
 ] 

Hudson commented on HBASE-16741:


FAILURE: Integrated in Jenkins build HBASE-16264 #9 (See 
[https://builds.apache.org/job/HBASE-16264/9/])
HBASE-16741 Amend the generate protobufs out-of-band build step to (stack: rev 
32be831ce56beab404d463cd7ada54a98f9e99f8)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/SnapshotProtos.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/HBaseProtos.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSink.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/LoadBalancerProtos.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/ErrorHandlingProtos.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/ClusterIdProtos.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/ComparatorProtos.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionInfo.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPriorityRpc.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RegionServerStatusProtos.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/WALProtos.java
* (edit) hbase-protocol-shaded/pom.xml
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/Reference.java
* (edit) 
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientNoCluster.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/AdminProtos.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/ZooKeeperProtos.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/HFileProtos.java
* (delete) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/util/ByteStringer.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/RPCProtos.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProcedureProtos.java
* (edit) pom.xml
* (delete) hbase-protocol-shaded/src/main/protobuf/RSGroup.proto
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/FixedFileTrailer.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/ipc/protobuf/generated/TestProcedureProtos.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureUtil.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/FSProtos.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/codec/MessageCodec.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/EncryptionUtil.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ReplicationProtbufUtil.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV1.java
* (edit) 

[jira] [Created] (HBASE-16744) Procedure V2 - Lock procedures to allow clients to acquire locks on tables/namespaces/regions

2016-09-30 Thread Appy (JIRA)
Appy created HBASE-16744:


 Summary: Procedure V2 - Lock procedures to allow clients to 
acquire locks on tables/namespaces/regions
 Key: HBASE-16744
 URL: https://issues.apache.org/jira/browse/HBASE-16744
 Project: HBase
  Issue Type: New Feature
Reporter: Appy
Assignee: Appy


This will help us get rid of ZK-based locks.
It will also be useful for external tools like hbck, a future backup manager, etc.





[jira] [Created] (HBASE-16743) TestSimpleRpcScheduler#testCoDelScheduling is broke

2016-09-30 Thread stack (JIRA)
stack created HBASE-16743:
-

 Summary: TestSimpleRpcScheduler#testCoDelScheduling is broke
 Key: HBASE-16743
 URL: https://issues.apache.org/jira/browse/HBASE-16743
 Project: HBase
  Issue Type: Bug
  Components: rpc
Reporter: stack


The testCoDelScheduling test is broken. Here are some notes on it. I have 
disabled it in the HBASE-15638 shading patch.

{code}
I don't get this test. When I time this test, the minDelay is > 2 * codel delay 
from the get-go, so we are always overloaded. The test below would seem to 
complete the queuing of all the CallRunners inside the codel check interval; I 
don't think we are skipping codel checking. Second, I think this test has been 
broken since HBASE-16089 (Add on FastPath for CoDel) went in. The thread name we 
were looking for was the name BEFORE we updated, i.e. 
"RpcServer.CodelBQ.default.handler", but the same patch changed the name of the 
codel fastpath thread to: new 
FastPathBalancedQueueRpcExecutor("CodelFPBQ.default", handlerCount, 
numCallQueues...

Codel is hard to test. This test is going to be flaky given it is all 
timer-based. Disabling for now until we chat.
{code}

FYI [~mantonov]





[jira] [Commented] (HBASE-16644) Errors when reading legit HFile' Trailer on branch 1.3

2016-09-30 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537429#comment-15537429
 ] 

Gary Helmling commented on HBASE-16644:
---

The patch looks good to me, though I'm not that familiar with this code. It 
seems right that whether the checksum is included in the header size should be 
based on the file format itself, rather than on whether we want to consult the 
checksum at the given moment.

+1 from me, assuming the 3 test failures are unrelated.
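The principle behind the fix can be sketched as follows. The constants and names here are illustrative, assuming roughly HBase's 24-byte block header plus 9 bytes of checksum fields; this is not the actual HFileBlock/FixedFileTrailer code:

```java
// Sketch: the block header size must be derived from whether the *file*
// was written with HBase checksums (a property of the on-disk format
// recorded at write time), not from whether the current reader happens to
// want checksum verification on this particular read.
public class HeaderSizePolicy {
    static final int BASE_HEADER_SIZE = 24;     // magic + sizes + prev offset
    static final int CHECKSUM_FIELDS_SIZE = 9;  // checksum type + params

    // Correct: keyed off the file-format flag, so the payload offset is
    // stable no matter what the reader intends to do with checksums.
    static int headerSize(boolean fileUsesHBaseChecksum) {
        return fileUsesHBaseChecksum
                ? BASE_HEADER_SIZE + CHECKSUM_FIELDS_SIZE
                : BASE_HEADER_SIZE;
    }

    public static void main(String[] args) {
        // Whether a given read verifies checksums must not move where the
        // block payload starts; only the file format decides that.
        System.out.println(headerSize(true));   // 33
        System.out.println(headerSize(false));  // 24
    }
}
```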

> Errors when reading legit HFile' Trailer on branch 1.3
> --
>
> Key: HBASE-16644
> URL: https://issues.apache.org/jira/browse/HBASE-16644
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Critical
> Fix For: 1.3.0
>
> Attachments: HBASE-16644.branch-1.3.patch, 
> HBASE-16644.branch-1.3.patch
>
>
> There seems to be a regression in branch 1.3 where we can't read the HFile 
> trailer (getting "CorruptHFileException: Problem reading HFile Trailer") on 
> some HFiles that could be successfully read on 1.2.
> I've seen this error manifest in two ways so far.
> {code}Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: 
> Problem reading HFile Trailer from file  
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1164)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x04\x00\x00\x00\x00\x00
>   at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:155)
>   at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:344)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1735)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.(HFileReaderV2.java:156)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:485)
> {code}
> and second
> {code}
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
> reading HFile Trailer from file 
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.(StoreFile.java:1164)
>   at 
> org.apache.hadoop.hbase.io.HalfStoreFileReader.(HalfStoreFileReader.java:104)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:256)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.io.IOException: Premature EOF from inputStream (read returned 
> -1, was trying to read 10083 necessary bytes and 24 extra bytes, successfully 
> read 1072
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:737)
>   at 
> 

[jira] [Created] (HBASE-16742) Add chapter for devs on how we do protobufs going forward

2016-09-30 Thread stack (JIRA)
stack created HBASE-16742:
-

 Summary: Add chapter for devs on how we do protobufs going forward
 Key: HBASE-16742
 URL: https://issues.apache.org/jira/browse/HBASE-16742
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: stack
Assignee: stack


Add chapter on shaded vs non-shaded, CPEPs vs internal usage, checked-in and 
patched protobuf (3.1.0?) vs the protobuf that other components include 
(pb2.5.0).





[jira] [Commented] (HBASE-16644) Errors when reading legit HFile' Trailer on branch 1.3

2016-09-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537332#comment-15537332
 ] 

Hadoop QA commented on HBASE-16644:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
56s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} branch-1.3 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} branch-1.3 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} branch-1.3 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} branch-1.3 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 38s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 104m 26s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 161m 26s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.mapred.TestMultiTableSnapshotInputFormat |
|   | hadoop.hbase.regionserver.TestFailedAppendAndSync |
|   | hadoop.hbase.mapreduce.TestMultiTableSnapshotInputFormat |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:date2016-09-30 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831158/HBASE-16644.branch-1.3.patch
 |
| JIRA Issue | HBASE-16644 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  

[jira] [Commented] (HBASE-16678) MapReduce jobs do not update counters from ScanMetrics

2016-09-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537301#comment-15537301
 ] 

Hudson commented on HBASE-16678:


FAILURE: Integrated in Jenkins build HBase-1.2-JDK7 #35 (See 
[https://builds.apache.org/job/HBase-1.2-JDK7/35/])
HBASE-16678 MapReduce jobs do not update counters from ScanMetrics (enis: rev 
2e381ee2d29b69fa3b47e02176a53c029f7ddcb0)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java







[jira] [Commented] (HBASE-16678) MapReduce jobs do not update counters from ScanMetrics

2016-09-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537271#comment-15537271
 ] 

Hudson commented on HBASE-16678:


FAILURE: Integrated in Jenkins build HBase-1.4 #439 (See 
[https://builds.apache.org/job/HBase-1.4/439/])
HBASE-16678 MapReduce jobs do not update counters from ScanMetrics (enis: rev 
911f9b9eb7dc59abe3c01aa75ede88ccdede7a08)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java


> MapReduce jobs do not update counters from ScanMetrics
> --
>
> Key: HBASE-16678
> URL: https://issues.apache.org/jira/browse/HBASE-16678
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.4, 1.1.8
>
> Attachments: hbase-16678_v1.patch
>
>
> Was inspecting a perf issue, where we needed the scanner metrics as counters 
> for a MR job. Turns out that the HBase scan counters are no longer working in 
> 1.0+. I think it got broken via HBASE-13030. 
> These are the counters:
> {code}
>   HBase Counters
>   BYTES_IN_REMOTE_RESULTS=0
>   BYTES_IN_RESULTS=280
>   MILLIS_BETWEEN_NEXTS=11
>   NOT_SERVING_REGION_EXCEPTION=0
>   NUM_SCANNER_RESTARTS=0
>   NUM_SCAN_RESULTS_STALE=0
>   REGIONS_SCANNED=1
>   REMOTE_RPC_CALLS=0
>   REMOTE_RPC_RETRIES=0
>   RPC_CALLS=3
>   RPC_RETRIES=0
> {code}
>  





[jira] [Updated] (HBASE-16741) Amend the generate protobufs out-of-band build step to include shade, pulling in protobuf source and a hook for patching protobuf

2016-09-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16741:
--
Attachment: 16741.patch

Here is the patch. It means we are checking in protobuf source files. They carry 
their 3-clause BSD license, which should be OK according to 
https://www.apache.org/legal/resolved.html

> Amend the generate protobufs out-of-band build step to include shade, pulling 
> in protobuf source and a hook for patching protobuf
> -
>
> Key: HBASE-16741
> URL: https://issues.apache.org/jira/browse/HBASE-16741
> Project: HBase
>  Issue Type: Sub-task
>  Components: Protobufs
>Reporter: stack
>Assignee: stack
> Attachments: 16741.patch
>
>
> As part of the protobuf shading work, I need to amend the build step that 
> builds protobuf sources. For the module used by hbase internally -- the one 
> that has our protos and that does the protobuf shading -- I need to enhance 
> the generate protobuf sources step to also do:
>  * Shading/relocating so we avoid clashing with protos used by CPEPs out in 
> the hbase-protocol module.
>  * Pulling down the protobuf lib and including its sources to make IDEs happy 
> else they'll moan about missing (shaded) protobuf.
>  * A hook that allows us to patch protobuf lib, at least temporarily until 
> our needed changes make it upstream.





[jira] [Updated] (HBASE-16731) Add Scan#setLoadColumnFamiliesOnDemand method to Get.

2016-09-30 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-16731:
--
Status: Patch Available  (was: Open)

> Add Scan#setLoadColumnFamiliesOnDemand method to Get.
> -
>
> Key: HBASE-16731
> URL: https://issues.apache.org/jira/browse/HBASE-16731
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Attachments: HBASE-16731.v0.patch, HBASE-16731.v1.patch, 
> HBASE-16731.v2.patch
>
>
> RSRpcServices#get() converts the Get to a Scan without calling 
> scan#setLoadColumnFamiliesOnDemand. This causes two disadvantages:
> 1) The results retrieved from Get and Scan will differ if we use an 
> empty filter: Scan doesn't return any data, but Get does 
> (see [HBASE-16729 |https://issues.apache.org/jira/browse/HBASE-16729]).
> 2) CF data cannot be read lazily for a Get operation.
> Any comments? Thanks.
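The conversion being discussed can be pictured with toy stand-in classes; the real Get, Scan, and the Scan(Get) copy constructor live in org.apache.hadoop.hbase.client, and the sketch below only illustrates the point, not the patch itself:

```java
// Toy stand-ins for the real org.apache.hadoop.hbase.client.Get/Scan classes,
// only to illustrate the issue: when RSRpcServices#get() converts a Get into a
// Scan, the loadColumnFamiliesOnDemand flag must be carried over, otherwise a
// Get behaves differently from the equivalent single-row Scan.
public class GetToScan {

  static class Get {
    Boolean loadColumnFamiliesOnDemand; // null = unset, use the server default
  }

  static class Scan {
    Boolean loadColumnFamiliesOnDemand;

    // Mirrors what a Scan(Get) copy constructor should do: copy the flag too.
    Scan(Get get) {
      this.loadColumnFamiliesOnDemand = get.loadColumnFamiliesOnDemand;
    }
  }

  public static void main(String[] args) {
    Get get = new Get();
    get.loadColumnFamiliesOnDemand = Boolean.TRUE;
    Scan scan = new Scan(get);
    System.out.println(scan.loadColumnFamiliesOnDemand); // prints true
  }
}
```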





[jira] [Updated] (HBASE-16731) Add Scan#setLoadColumnFamiliesOnDemand method to Get.

2016-09-30 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-16731:
--
Attachment: HBASE-16731.v2.patch

fix the NPE

> Add Scan#setLoadColumnFamiliesOnDemand method to Get.
> -
>
> Key: HBASE-16731
> URL: https://issues.apache.org/jira/browse/HBASE-16731
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Attachments: HBASE-16731.v0.patch, HBASE-16731.v1.patch, 
> HBASE-16731.v2.patch
>
>
> RSRpcServices#get() converts the Get to a Scan without calling 
> scan#setLoadColumnFamiliesOnDemand. This causes two disadvantages:
> 1) The results retrieved from Get and Scan will differ if we use an 
> empty filter: Scan doesn't return any data, but Get does 
> (see [HBASE-16729 |https://issues.apache.org/jira/browse/HBASE-16729]).
> 2) CF data cannot be read lazily for a Get operation.
> Any comments? Thanks.





[jira] [Updated] (HBASE-16731) Add Scan#setLoadColumnFamiliesOnDemand method to Get.

2016-09-30 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-16731:
--
Status: Open  (was: Patch Available)

> Add Scan#setLoadColumnFamiliesOnDemand method to Get.
> -
>
> Key: HBASE-16731
> URL: https://issues.apache.org/jira/browse/HBASE-16731
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Attachments: HBASE-16731.v0.patch, HBASE-16731.v1.patch
>
>
> RSRpcServices#get() converts the Get to a Scan without calling 
> scan#setLoadColumnFamiliesOnDemand. This causes two disadvantages:
> 1) The results retrieved from Get and Scan will differ if we use an 
> empty filter: Scan doesn't return any data, but Get does 
> (see [HBASE-16729 |https://issues.apache.org/jira/browse/HBASE-16729]).
> 2) CF data cannot be read lazily for a Get operation.
> Any comments? Thanks.





[jira] [Commented] (HBASE-16678) MapReduce jobs do not update counters from ScanMetrics

2016-09-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537188#comment-15537188
 ] 

Hudson commented on HBASE-16678:


FAILURE: Integrated in Jenkins build HBase-1.2-JDK8 #32 (See 
[https://builds.apache.org/job/HBase-1.2-JDK8/32/])
HBASE-16678 MapReduce jobs do not update counters from ScanMetrics (enis: rev 
2e381ee2d29b69fa3b47e02176a53c029f7ddcb0)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java


> MapReduce jobs do not update counters from ScanMetrics
> --
>
> Key: HBASE-16678
> URL: https://issues.apache.org/jira/browse/HBASE-16678
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.4, 1.1.8
>
> Attachments: hbase-16678_v1.patch
>
>
> Was inspecting a perf issue, where we needed the scanner metrics as counters 
> for a MR job. Turns out that the HBase scan counters are no longer working in 
> 1.0+. I think it got broken via HBASE-13030. 
> These are the counters:
> {code}
>   HBase Counters
>   BYTES_IN_REMOTE_RESULTS=0
>   BYTES_IN_RESULTS=280
>   MILLIS_BETWEEN_NEXTS=11
>   NOT_SERVING_REGION_EXCEPTION=0
>   NUM_SCANNER_RESTARTS=0
>   NUM_SCAN_RESULTS_STALE=0
>   REGIONS_SCANNED=1
>   REMOTE_RPC_CALLS=0
>   REMOTE_RPC_RETRIES=0
>   RPC_CALLS=3
>   RPC_RETRIES=0
> {code}
>  





[jira] [Commented] (HBASE-15638) Shade protobuf

2016-09-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537173#comment-15537173
 ] 

stack commented on HBASE-15638:
---

The HBASE-16741 subtask is about enhancing the build step to add shading and to 
pull in the protobuf library sources.

> Shade protobuf
> --
>
> Key: HBASE-15638
> URL: https://issues.apache.org/jira/browse/HBASE-15638
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 15638v2.patch, HBASE-15638.master.001.patch, 
> HBASE-15638.master.002.patch, HBASE-15638.master.003 (1).patch, 
> HBASE-15638.master.003 (1).patch, HBASE-15638.master.003 (1).patch, 
> HBASE-15638.master.003.patch, HBASE-15638.master.003.patch, 
> HBASE-15638.master.004.patch, HBASE-15638.master.005.patch, 
> HBASE-15638.master.006.patch, HBASE-15638.master.007.patch, 
> HBASE-15638.master.007.patch, HBASE-15638.master.008.patch, 
> HBASE-15638.master.009.patch, as.far.as.server.patch
>
>
> We need to change our protobuf. Currently it is pb2.5.0. As is, protobufs 
> expect all buffers to be on-heap byte arrays. It does not have facility for 
> dealing in ByteBuffers and off-heap ByteBuffers in particular. This fact 
> frustrates the off-heaping-of-the-write-path project as 
> marshalling/unmarshalling of protobufs involves a copy on-heap first.
> So, we need to patch our protobuf so it supports off-heap ByteBuffers. To 
> ensure we pick up the patched protobuf always, we need to relocate/shade our 
> protobuf and adjust all protobuf references accordingly.
> Given as we have protobufs in our public facing API, Coprocessor Endpoints -- 
> which use protobuf Service to describe new API -- a blind relocation/shading 
> of com.google.protobuf.* will break our API for CoProcessor EndPoints (CPEP) 
> in particular. For example, in the Table Interface, to invoke a method on a 
> registered CPEP, we have:
> {code}
> <T extends com.google.protobuf.Service, R> Map<byte[], R> coprocessorService(
>     Class<T> service, byte[] startKey, byte[] endKey,
>     org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T, R> callable)
>     throws com.google.protobuf.ServiceException, Throwable
> {code}
> This issue is how we intend to shade protobuf for hbase-2.0.0 while 
> preserving our API as is so CPEPs continue to work on the new hbase.





[jira] [Created] (HBASE-16741) Amend the generate protobufs out-of-band build step to include shade, pulling in protobuf source and a hook for patching protobuf

2016-09-30 Thread stack (JIRA)
stack created HBASE-16741:
-

 Summary: Amend the generate protobufs out-of-band build step to 
include shade, pulling in protobuf source and a hook for patching protobuf
 Key: HBASE-16741
 URL: https://issues.apache.org/jira/browse/HBASE-16741
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
Assignee: stack


As part of the protobuf shading work, I need to amend the build step that 
builds protobuf sources. For the module used by hbase internally -- the one 
that has our protos and that does the protobuf shading -- I need to enhance the 
generate protobuf sources step to also do:

 * Shading/relocating so we avoid clashing with protos used by CPEPs out in the 
hbase-protocol module.
 * Pulling down the protobuf lib and including its sources to make IDEs happy 
else they'll moan about missing (shaded) protobuf.
 * A hook that allows us to patch protobuf lib, at least temporarily until our 
needed changes make it upstream.
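The relocation step described above can be pictured as a plain prefix rewrite over class names. The sketch below is illustrative only: the shaded target package is an assumption, and in the real build the rewriting is done by the shade plugin over bytecode and generated sources, not by hand:

```java
// Illustration of what shading/relocation does to protobuf references: every
// "com.google.protobuf" name is rewritten under an HBase-private package so
// the patched lib cannot clash with a CPEP's stock protobuf.  The TO prefix
// below is an assumed example, not necessarily the real shaded package.
public class RelocateDemo {

  static final String FROM = "com.google.protobuf";
  static final String TO = "org.apache.hadoop.hbase.shaded.com.google.protobuf";

  static String relocate(String className) {
    // Relocate only names inside the source package, as a shade pattern does.
    return className.startsWith(FROM + ".")
        ? TO + className.substring(FROM.length())
        : className;
  }

  public static void main(String[] args) {
    System.out.println(relocate("com.google.protobuf.Message"));
    // -> org.apache.hadoop.hbase.shaded.com.google.protobuf.Message
    System.out.println(relocate("com.google.common.collect.Lists")); // untouched
  }
}
```

This is also why the public CPEP API must keep the unrelocated com.google.protobuf names: CPEP jars compiled against stock protobuf never see the shaded prefix.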







[jira] [Updated] (HBASE-16740) start-docker.sh fails to run by complaining bzip2 error

2016-09-30 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HBASE-16740:
--
Description: 
./bin/start-docker.sh fails to run correctly, printing
{noformat} 
Google Test not present.  Fetching gtest-1.5.0 from the web...
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100  1586  100  15860 0   7821  0 --:--:-- --:--:-- --:--:--  7851
bzip2: (stdin) is not a bzip2 file.
tar: Child returned status 2
tar: Error is not recoverable: exiting now
{noformat} 

It turns out protobuf autogen.sh tries to download gtest but fails with an 
incorrect URL:
{noformat}
if test ! -e gtest; then
  echo "Google Test not present.  Fetching gtest-1.5.0 from the web..."
  curl http://googletest.googlecode.com/files/gtest-1.5.0.tar.bz2 | tar jx
  mv gtest-1.5.0 gtest
fi
{noformat}

This needs to be fixed to have docker-based build infra work smoothly.

  was:
./bin/start-docker.sh fails to run correctly, printing
{noformat} 
Google Test not present.  Fetching gtest-1.5.0 from the web...
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100  1586  100  15860 0   7821  0 --:--:-- --:--:-- --:--:--  7851
bzip2: (stdin) is not a bzip2 file.
tar: Child returned status 2
tar: Error is not recoverable: exiting now
{noformat} 

It turns out protobuf autogen.sh tries to download gtest but fails with an 
incorrect URL:
{noformat}
if test ! -e gtest; then
  echo "Google Test not present.  Fetching gtest-1.5.0 from the web..."
  curl http://googletest.googlecode.com/files/gtest-1.5.0.tar.bz2 | tar jx
  mv gtest-1.5.0 gtest
fi
{noformat}


> start-docker.sh fails to run by complaining bzip2 error
> ---
>
> Key: HBASE-16740
> URL: https://issues.apache.org/jira/browse/HBASE-16740
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> ./bin/start-docker.sh fails to run correctly, printing
> {noformat} 
> Google Test not present.  Fetching gtest-1.5.0 from the web...
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
> 100  1586  100  15860 0   7821  0 --:--:-- --:--:-- --:--:--  7851
> bzip2: (stdin) is not a bzip2 file.
> tar: Child returned status 2
> tar: Error is not recoverable: exiting now
> {noformat} 
> It turns out protobuf autogen.sh tries to download gtest but fails with an 
> incorrect URL:
> {noformat}
> if test ! -e gtest; then
>   echo "Google Test not present.  Fetching gtest-1.5.0 from the web..."
>   curl http://googletest.googlecode.com/files/gtest-1.5.0.tar.bz2 | tar jx
>   mv gtest-1.5.0 gtest
> fi
> {noformat}
> This needs to be fixed to have docker-based build infra work smoothly.





[jira] [Updated] (HBASE-16740) start-docker.sh fails to run by complaining bzip2 error

2016-09-30 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HBASE-16740:
--
Description: 
./bin/start-docker.sh fails to run correctly, printing
{noformat} 
Google Test not present.  Fetching gtest-1.5.0 from the web...
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100  1586  100  15860 0   7821  0 --:--:-- --:--:-- --:--:--  7851
bzip2: (stdin) is not a bzip2 file.
tar: Child returned status 2
tar: Error is not recoverable: exiting now
{noformat} 

It turns out protobuf autogen.sh tries to download gtest but fails with an 
incorrect URL:
{noformat}
if test ! -e gtest; then
  echo "Google Test not present.  Fetching gtest-1.5.0 from the web..."
  curl http://googletest.googlecode.com/files/gtest-1.5.0.tar.bz2 | tar jx
  mv gtest-1.5.0 gtest
fi
{noformat}

> start-docker.sh fails to run by complaining bzip2 error
> ---
>
> Key: HBASE-16740
> URL: https://issues.apache.org/jira/browse/HBASE-16740
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> ./bin/start-docker.sh fails to run correctly, printing
> {noformat} 
> Google Test not present.  Fetching gtest-1.5.0 from the web...
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
> 100  1586  100  15860 0   7821  0 --:--:-- --:--:-- --:--:--  7851
> bzip2: (stdin) is not a bzip2 file.
> tar: Child returned status 2
> tar: Error is not recoverable: exiting now
> {noformat} 
> It turns out protobuf autogen.sh tries to download gtest but fails with an 
> incorrect URL:
> {noformat}
> if test ! -e gtest; then
>   echo "Google Test not present.  Fetching gtest-1.5.0 from the web..."
>   curl http://googletest.googlecode.com/files/gtest-1.5.0.tar.bz2 | tar jx
>   mv gtest-1.5.0 gtest
> fi
> {noformat}





[jira] [Commented] (HBASE-16308) Contain protobuf references

2016-09-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537150#comment-15537150
 ] 

stack commented on HBASE-16308:
---

Just to say that I have now backed off somewhat from this idea of moving 
everything protobuf-related back into one module.

Rather, protobufs should be packaged with the module that uses them, in the case 
of coprocessor endpoints or, say, for REST. This issue made sense at one time 
when we were trying to sort out our protobuf mess, but now that we have a clear 
separation between 'internal' protobuf use and external use (e.g. CPEP), we can 
get back to good module encapsulation. I'll write up the 'rules' in a new 
protobuf chapter in the book.

> Contain protobuf references
> ---
>
> Key: HBASE-16308
> URL: https://issues.apache.org/jira/browse/HBASE-16308
> Project: HBase
>  Issue Type: Sub-task
>  Components: Protobufs
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: HBASE-16308.master.001.patch, 
> HBASE-16308.master.002.patch, HBASE-16308.master.003.patch, 
> HBASE-16308.master.004.patch, HBASE-16308.master.005.patch, 
> HBASE-16308.master.006.patch, HBASE-16308.master.006.patch, 
> HBASE-16308.master.007.patch, HBASE-16308.master.008.patch, 
> HBASE-16308.master.009.patch, HBASE-16308.master.010.patch, 
> HBASE-16308.master.011.patch, HBASE-16308.master.012.patch, 
> HBASE-16308.master.013.patch, HBASE-16308.master.014.patch, 
> HBASE-16308.master.015.patch, HBASE-16308.master.015.patch
>
>
> Clean up our protobuf references so contained to just a few classes rather 
> than being spread about the codebase. Doing this work will make it easier 
> landing the parent issue and will make it more clear where the division 
> between shaded protobuf and unshaded protobuf lies (we need to continue with 
> unshaded protobuf for HDFS references by AsyncWAL and probably EndPoint 
> Coprocessors)





[jira] [Commented] (HBASE-16678) MapReduce jobs do not update counters from ScanMetrics

2016-09-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537145#comment-15537145
 ] 

Hudson commented on HBASE-16678:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK8 #27 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/27/])
HBASE-16678 MapReduce jobs do not update counters from ScanMetrics (enis: rev 
9f364084a2a69800b4a4658cb80e3315b881ad8e)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java


> MapReduce jobs do not update counters from ScanMetrics
> --
>
> Key: HBASE-16678
> URL: https://issues.apache.org/jira/browse/HBASE-16678
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.4, 1.1.8
>
> Attachments: hbase-16678_v1.patch
>
>
> Was inspecting a perf issue, where we needed the scanner metrics as counters 
> for a MR job. Turns out that the HBase scan counters are no longer working in 
> 1.0+. I think it got broken via HBASE-13030. 
> These are the counters:
> {code}
>   HBase Counters
>   BYTES_IN_REMOTE_RESULTS=0
>   BYTES_IN_RESULTS=280
>   MILLIS_BETWEEN_NEXTS=11
>   NOT_SERVING_REGION_EXCEPTION=0
>   NUM_SCANNER_RESTARTS=0
>   NUM_SCAN_RESULTS_STALE=0
>   REGIONS_SCANNED=1
>   REMOTE_RPC_CALLS=0
>   REMOTE_RPC_RETRIES=0
>   RPC_CALLS=3
>   RPC_RETRIES=0
> {code}
>  





[jira] [Updated] (HBASE-16740) start-docker.sh fails to run by complaining bzip2 error

2016-09-30 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HBASE-16740:
--
Issue Type: Sub-task  (was: Task)
Parent: HBASE-14850

> start-docker.sh fails to run by complaining bzip2 error
> ---
>
> Key: HBASE-16740
> URL: https://issues.apache.org/jira/browse/HBASE-16740
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>






[jira] [Updated] (HBASE-16740) start-docker.sh fails to run by complaining bzip2 error

2016-09-30 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HBASE-16740:
--
Issue Type: Task  (was: Bug)

> start-docker.sh fails to run by complaining bzip2 error
> ---
>
> Key: HBASE-16740
> URL: https://issues.apache.org/jira/browse/HBASE-16740
> Project: HBase
>  Issue Type: Task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>






[jira] [Created] (HBASE-16740) start-docker.sh fails to run by complaining bzip2 error

2016-09-30 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HBASE-16740:
-

 Summary: start-docker.sh fails to run by complaining bzip2 error
 Key: HBASE-16740
 URL: https://issues.apache.org/jira/browse/HBASE-16740
 Project: HBase
  Issue Type: Bug
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou








[jira] [Commented] (HBASE-16738) L1 cache caching shared memory HFile block when blocks promoted from L2 to L1

2016-09-30 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537119#comment-15537119
 ] 

Mikhail Antonov commented on HBASE-16738:
-

2.0 only?

> L1 cache caching shared memory HFile block when blocks promoted from L2 to L1
> -
>
> Key: HBASE-16738
> URL: https://issues.apache.org/jira/browse/HBASE-16738
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-16738.patch
>
>
> This is an issue when L1 and L2 cache used with combinedMode = false.
> See in getBlock
> {code}
> if (victimHandler != null && !repeat) {
>   Cacheable result =
>       victimHandler.getBlock(cacheKey, caching, repeat, updateCacheMetrics);
>   // Promote this to L1.
>   if (result != null && caching) {
>     cacheBlock(cacheKey, result, /* inMemory = */ false, /* cacheData = */ true);
>   }
>   return result;
> }
> {code}
> When a block is not in L1 but is in L2, we return the block read from L2 and 
> promote it to L1 by adding it to the LRU cache. But if the block's buffer is 
> shared memory (an off-heap bucket cache, for example), we cannot cache this 
> block directly: the memory area under the buffer can be cleaned up at any 
> time, so we may get block data corruption.
> In such a case, we need to do a deep copy of the block (including its buffer) 
> and then add that to the L1 cache.
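The deep copy described above can be sketched with plain java.nio buffers. This is not the actual patch, only the safeguard it describes, with illustrative names: copy the shared off-heap bytes into a heap buffer the L1 cache owns before caching:

```java
import java.nio.ByteBuffer;

// Sketch of the safeguard described above: before promoting an L2 block whose
// buffer is shared (off-heap) memory into the on-heap L1 cache, deep-copy the
// bytes into a heap buffer the cache owns.  Names are illustrative only.
public class PromoteWithCopy {

  static ByteBuffer deepCopy(ByteBuffer shared) {
    ByteBuffer dup = shared.duplicate();  // don't disturb the source's position
    ByteBuffer copy = ByteBuffer.allocate(dup.remaining()); // on-heap buffer
    copy.put(dup).flip();                 // copy bytes, rewind for readers
    return copy;
  }

  public static void main(String[] args) {
    ByteBuffer offHeap = ByteBuffer.allocateDirect(4).put(new byte[]{1, 2, 3, 4});
    offHeap.flip();
    ByteBuffer safe = deepCopy(offHeap);
    // The copy survives even if the bucket-cache slot is recycled later.
    System.out.println(safe.isDirect() + " " + safe.get(0)); // prints: false 1
  }
}
```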





[jira] [Updated] (HBASE-16644) Errors when reading legit HFile' Trailer on branch 1.3

2016-09-30 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-16644:

Status: Open  (was: Patch Available)

> Errors when reading legit HFile' Trailer on branch 1.3
> --
>
> Key: HBASE-16644
> URL: https://issues.apache.org/jira/browse/HBASE-16644
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Critical
> Fix For: 1.3.0
>
> Attachments: HBASE-16644.branch-1.3.patch, 
> HBASE-16644.branch-1.3.patch
>
>
> There seems to be a regression in branch 1.3 where we can't read the HFile 
> trailer (getting "CorruptHFileException: Problem reading HFile Trailer") on 
> some HFiles that could be successfully read on 1.2.
> I've seen this error manifesting in two ways so far.
> {code}Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: 
> Problem reading HFile Trailer from file  
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1164)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x04\x00\x00\x00\x00\x00
>   at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:155)
>   at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:344)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1735)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:156)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:485)
> {code}
> and second
> {code}
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
> reading HFile Trailer from file 
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1164)
>   at 
> org.apache.hadoop.hbase.io.HalfStoreFileReader.<init>(HalfStoreFileReader.java:104)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:256)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.io.IOException: Premature EOF from inputStream (read returned 
> -1, was trying to read 10083 necessary bytes and 24 extra bytes, successfully 
> read 1072
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:737)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1459)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1712)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>   at 
> 

[jira] [Updated] (HBASE-16644) Errors when reading legit HFile' Trailer on branch 1.3

2016-09-30 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-16644:

Status: Patch Available  (was: Open)

> Errors when reading legit HFile' Trailer on branch 1.3
> --
>
> Key: HBASE-16644
> URL: https://issues.apache.org/jira/browse/HBASE-16644
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Critical
> Fix For: 1.3.0
>
> Attachments: HBASE-16644.branch-1.3.patch, 
> HBASE-16644.branch-1.3.patch
>
>
> There seems to be a regression in branch 1.3 where we can't read the HFile 
> trailer (getting "CorruptHFileException: Problem reading HFile Trailer") on 
> some HFiles that could be successfully read on 1.2.
> I've seen this error manifesting in two ways so far.
> {code}Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: 
> Problem reading HFile Trailer from file  
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1164)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x04\x00\x00\x00\x00\x00
>   at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:155)
>   at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:344)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1735)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:156)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:485)
> {code}
> and second
> {code}
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
> reading HFile Trailer from file 
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1164)
>   at 
> org.apache.hadoop.hbase.io.HalfStoreFileReader.<init>(HalfStoreFileReader.java:104)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:256)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.io.IOException: Premature EOF from inputStream (read returned 
> -1, was trying to read 10083 necessary bytes and 24 extra bytes, successfully 
> read 1072)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:737)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1459)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1712)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>   at 
> 

[jira] [Commented] (HBASE-16731) Add Scan#setLoadColumnFamiliesOnDemand method to Get.

2016-09-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537045#comment-15537045
 ] 

Hadoop QA commented on HBASE-16731:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 8 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 5s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 55s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 13s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.filter.TestInvocationRecordFilter |
|   | hadoop.hbase.procedure.TestProcedureManager |
|   | hadoop.hbase.regionserver.querymatcher.TestUserScanQueryMatcher |
|   | hadoop.hbase.master.balancer.TestRegionLocationFinder |
|   | hadoop.hbase.regionserver.TestKeepDeletes |
|   | hadoop.hbase.regionserver.TestMinVersions |
|   | hadoop.hbase.regionserver.TestScanner |
|   | hadoop.hbase.regionserver.querymatcher.TestCompactionScanQueryMatcher |
|   | hadoop.hbase.mob.mapreduce.TestMobSweepMapper |
|   | hadoop.hbase.regionserver.TestResettingCounters |
|   | hadoop.hbase.regionserver.TestStoreFileRefresherChore |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831151/HBASE-16731.v1.patch |
| JIRA Issue | HBASE-16731 |
| Optional Tests |  asflicense  javac  

[jira] [Commented] (HBASE-16739) Timed out exception message should include encoded region name

2016-09-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536944#comment-15536944
 ] 

Hadoop QA commented on HBASE-16739:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 22s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 114m 29s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.client.TestReplicasClient |
|   | org.apache.hadoop.hbase.client.TestAdmin2 |
|   | org.apache.hadoop.hbase.client.TestHCM |
|   | 
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas |
|   | org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831135/16739.v1.txt |
| JIRA Issue | HBASE-16739 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux acbe15e43f9d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 3757da6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Updated] (HBASE-16644) Errors when reading legit HFile' Trailer on branch 1.3

2016-09-30 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-16644:

Attachment: HBASE-16644.branch-1.3.patch

Ok, the first draft version of the patch broke the HFile tool, bulkload, and 
other codepaths not going through the Store* interfaces, since those don't use 
the verifyChecksum flag as expected. So I dug in and found that, before the 
major refactoring of this codebase, we used the 
hfileContext.isHBaseChecksum() call to decide whether we should be using 
checksums or not (which simply checks the minor HFile version from the trailer).

With the updated patch, I'm able to handle both 2.0 and new files. The tests 
broken by the previous version of the patch pass for me now. Pinging [~stack]
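The decision described above can be sketched as a pure function of the minor
version read from the trailer. This is a minimal illustration, not the actual
HBase code: the constant name and threshold value are assumptions.

```java
// Hedged sketch of the idea behind hfileContext.isHBaseChecksum():
// whether HBase-level checksums apply is determined solely by the HFile
// minor version stored in the trailer, not by a caller-supplied flag.
public class ChecksumDecisionSketch {
    // Minor version at which HBase-level checksums appeared (assumed value).
    static final int MINOR_VERSION_WITH_CHECKSUM = 1;

    // Returns true when the file carries HBase-level checksums.
    static boolean usesHBaseChecksum(int minorVersion) {
        return minorVersion >= MINOR_VERSION_WITH_CHECKSUM;
    }

    public static void main(String[] args) {
        // Older files fall back to filesystem-level checksum verification.
        System.out.println(usesHBaseChecksum(0));
        System.out.println(usesHBaseChecksum(2));
    }
}
```

Making the decision a function of the trailer keeps every codepath (HFile
tool, bulkload, Store*) consistent without threading a flag through all of them.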

> Errors when reading legit HFile' Trailer on branch 1.3
> --
>
> Key: HBASE-16644
> URL: https://issues.apache.org/jira/browse/HBASE-16644
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Critical
> Fix For: 1.3.0
>
> Attachments: HBASE-16644.branch-1.3.patch, 
> HBASE-16644.branch-1.3.patch
>
>
> There seems to be a regression in branch 1.3 where we can't read the HFile 
> trailer (getting "CorruptHFileException: Problem reading HFile Trailer") on 
> some HFiles that could be successfully read on 1.2.
> I've seen this error manifesting in two ways so far.
> {code}Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: 
> Problem reading HFile Trailer from file  
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1164)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x04\x00\x00\x00\x00\x00
>   at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:155)
>   at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:344)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1735)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:156)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:485)
> {code}
> and second
> {code}
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 
> reading HFile Trailer from file 
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1164)
>   at 
> org.apache.hadoop.hbase.io.HalfStoreFileReader.<init>(HalfStoreFileReader.java:104)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:256)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>   at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>   ... 6 more
> Caused by: java.io.IOException: 

[jira] [Updated] (HBASE-16731) Add Scan#setLoadColumnFamiliesOnDemand method to Get.

2016-09-30 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-16731:
--
Status: Patch Available  (was: Open)

Fixed the error; all previously failing tests now pass locally.

> Add Scan#setLoadColumnFamiliesOnDemand method to Get.
> -
>
> Key: HBASE-16731
> URL: https://issues.apache.org/jira/browse/HBASE-16731
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Attachments: HBASE-16731.v0.patch, HBASE-16731.v1.patch
>
>
> RSRpcServices#get() converts the Get to a Scan without calling 
> scan#setLoadColumnFamiliesOnDemand. This causes two disadvantages:
> 1) The results retrieved from Get and Scan will differ if we use an 
> empty filter: Scan doesn't return any data, but Get does.
> See [HBASE-16729 |https://issues.apache.org/jira/browse/HBASE-16729]
> 2) CF data cannot be read lazily for the Get operation.
> Any comments? Thanks.
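The fix the description implies is that the Get-to-Scan conversion should carry
the load-on-demand setting across. A minimal sketch with stand-in classes (not
the real HBase Get/Scan API; all names here are illustrative):

```java
// Hedged sketch: propagate loadColumnFamiliesOnDemand when converting a
// Get into a Scan, so both operations evaluate filters the same way.
public class GetToScanSketch {
    static class Get {
        Boolean loadColumnFamiliesOnDemand; // null means "unset"
        Get setLoadColumnFamiliesOnDemand(boolean v) {
            loadColumnFamiliesOnDemand = v;
            return this;
        }
    }

    static class Scan {
        Boolean loadColumnFamiliesOnDemand;
        Scan(Get get) {
            // The reported bug is that this copy was missing in the
            // server-side conversion, so the Scan silently lost the setting.
            this.loadColumnFamiliesOnDemand = get.loadColumnFamiliesOnDemand;
        }
    }

    public static void main(String[] args) {
        Get get = new Get().setLoadColumnFamiliesOnDemand(true);
        Scan scan = new Scan(get);
        System.out.println(scan.loadColumnFamiliesOnDemand); // true
    }
}
```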



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16731) Add Scan#setLoadColumnFamiliesOnDemand method to Get.

2016-09-30 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-16731:
--
Status: Open  (was: Patch Available)

> Add Scan#setLoadColumnFamiliesOnDemand method to Get.
> -
>
> Key: HBASE-16731
> URL: https://issues.apache.org/jira/browse/HBASE-16731
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Attachments: HBASE-16731.v0.patch, HBASE-16731.v1.patch
>
>
> RSRpcServices#get() converts the Get to a Scan without calling 
> scan#setLoadColumnFamiliesOnDemand. This causes two disadvantages:
> 1) The results retrieved from Get and Scan will differ if we use an 
> empty filter: Scan doesn't return any data, but Get does.
> See [HBASE-16729 |https://issues.apache.org/jira/browse/HBASE-16729]
> 2) CF data cannot be read lazily for the Get operation.
> Any comments? Thanks.





[jira] [Updated] (HBASE-16731) Add Scan#setLoadColumnFamiliesOnDemand method to Get.

2016-09-30 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-16731:
--
Attachment: HBASE-16731.v1.patch

> Add Scan#setLoadColumnFamiliesOnDemand method to Get.
> -
>
> Key: HBASE-16731
> URL: https://issues.apache.org/jira/browse/HBASE-16731
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Attachments: HBASE-16731.v0.patch, HBASE-16731.v1.patch
>
>
> RSRpcServices#get() converts the Get to a Scan without calling 
> scan#setLoadColumnFamiliesOnDemand. This causes two disadvantages:
> 1) The results retrieved from Get and Scan will differ if we use an 
> empty filter: Scan doesn't return any data, but Get does.
> See [HBASE-16729 |https://issues.apache.org/jira/browse/HBASE-16729]
> 2) CF data cannot be read lazily for the Get operation.
> Any comments? Thanks.





[jira] [Updated] (HBASE-16678) MapReduce jobs do not update counters from ScanMetrics

2016-09-30 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-16678:
--
Fix Version/s: (was: 1.1.7)
   1.1.8

> MapReduce jobs do not update counters from ScanMetrics
> --
>
> Key: HBASE-16678
> URL: https://issues.apache.org/jira/browse/HBASE-16678
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.4, 1.1.8
>
> Attachments: hbase-16678_v1.patch
>
>
> Was inspecting a perf issue, where we needed the scanner metrics as counters 
> for a MR job. Turns out that the HBase scan counters are no longer working in 
> 1.0+. I think it got broken via HBASE-13030. 
> These are the counters:
> {code}
>   HBase Counters
>   BYTES_IN_REMOTE_RESULTS=0
>   BYTES_IN_RESULTS=280
>   MILLIS_BETWEEN_NEXTS=11
>   NOT_SERVING_REGION_EXCEPTION=0
>   NUM_SCANNER_RESTARTS=0
>   NUM_SCAN_RESULTS_STALE=0
>   REGIONS_SCANNED=1
>   REMOTE_RPC_CALLS=0
>   REMOTE_RPC_RETRIES=0
>   RPC_CALLS=3
>   RPC_RETRIES=0
> {code}
>  





[jira] [Commented] (HBASE-13126) Move HBaseTestingUtility and associated support classes into hbase-testing-utility module

2016-09-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536642#comment-15536642
 ] 

Sean Busbey commented on HBASE-13126:
-

Updated the description to better reflect the discussion of the approach that happened.

> Move HBaseTestingUtility and associated support classes into 
> hbase-testing-utility module
> -
>
> Key: HBASE-13126
> URL: https://issues.apache.org/jira/browse/HBASE-13126
> Project: HBase
>  Issue Type: Task
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Over in the review for HBASE-12972, [~enis] mentioned that one of the HBTU 
> methods wasn't intended for public consumption.
> Can we build a list of such methods across the API, appropriately annotate 
> them for 2.0, and deprecate them in earlier versions with a warning that 
> they're going to be restricted?





[jira] [Updated] (HBASE-13126) Move HBaseTestingUtility and associated support classes into hbase-testing-utility module

2016-09-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13126:

Summary: Move HBaseTestingUtility and associated support classes into 
hbase-testing-utility module  (was: Clean up API for unintended methods within 
non-private classes.)

> Move HBaseTestingUtility and associated support classes into 
> hbase-testing-utility module
> -
>
> Key: HBASE-13126
> URL: https://issues.apache.org/jira/browse/HBASE-13126
> Project: HBase
>  Issue Type: Task
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Over in the review for HBASE-12972, [~enis] mentioned that one of the HBTU 
> methods wasn't intended for public consumption.
> Can we build a list of such methods across the API, appropriately annotate 
> them for 2.0, and deprecate them in earlier versions with a warning that 
> they're going to be restricted?





[jira] [Resolved] (HBASE-16517) Make a 1.2.3 release

2016-09-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-16517.
---
   Resolution: Fixed
 Assignee: stack
Fix Version/s: 1.2.3

Resolving. Done.

> Make a 1.2.3 release
> 
>
> Key: HBASE-16517
> URL: https://issues.apache.org/jira/browse/HBASE-16517
> Project: HBase
>  Issue Type: Umbrella
>Reporter: stack
>Assignee: stack
> Fix For: 1.2.3
>
>
> Umbrella issue under which we will do all tasks related to making a 1.2.3 release





[jira] [Commented] (HBASE-16517) Make a 1.2.3 release

2016-09-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536633#comment-15536633
 ] 

Sean Busbey commented on HBASE-16517:
-

I think this is fine to close now?

> Make a 1.2.3 release
> 
>
> Key: HBASE-16517
> URL: https://issues.apache.org/jira/browse/HBASE-16517
> Project: HBase
>  Issue Type: Umbrella
>Reporter: stack
>
> Umbrella issue under which we will do all tasks related to making a 1.2.3 release





[jira] [Commented] (HBASE-12909) Junit listed at compile scope instead of test

2016-09-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536628#comment-15536628
 ] 

Sean Busbey commented on HBASE-12909:
-

Bump again. I will start a dev@hbase thread next week-ish, presuming no 
feedback here.

> Junit listed at compile scope instead of test
> -
>
> Key: HBASE-12909
> URL: https://issues.apache.org/jira/browse/HBASE-12909
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.0.0
>
> Attachments: HBASE-12909.1.patch.txt
>
>
> Right now our top level pom lists junit as a dependency for every module in 
> the compile scope, which makes it subject to our compatibility promises.
> It should instead be test scope.





[jira] [Updated] (HBASE-16739) Timed out exception message should include encoded region name

2016-09-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16739:
---
Status: Patch Available  (was: Open)

> Timed out exception message should include encoded region name
> --
>
> Key: HBASE-16739
> URL: https://issues.apache.org/jira/browse/HBASE-16739
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 16739.v1.txt
>
>
> Saw the following in region server log repeatedly:
> {code}
> 2016-09-26 10:13:33,219 WARN org.apache.hadoop.hbase.regionserver.HRegion: 
> Failed getting lock in batch put, row=1
> java.io.IOException: Timed out waiting for lock for row: 1
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5151)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3046)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2902)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2844)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:692)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:654)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2040)
> {code}
> Region name was not logged - making troubleshooting a bit difficult.





[jira] [Updated] (HBASE-16739) Timed out exception message should include encoded region name

2016-09-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16739:
---
Attachment: 16739.v1.txt

> Timed out exception message should include encoded region name
> --
>
> Key: HBASE-16739
> URL: https://issues.apache.org/jira/browse/HBASE-16739
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 16739.v1.txt
>
>
> Saw the following in region server log repeatedly:
> {code}
> 2016-09-26 10:13:33,219 WARN org.apache.hadoop.hbase.regionserver.HRegion: 
> Failed getting lock in batch put, row=1
> java.io.IOException: Timed out waiting for lock for row: 1
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5151)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3046)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2902)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2844)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:692)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:654)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2040)
> {code}
> Region name was not logged - making troubleshooting a bit difficult.





[jira] [Created] (HBASE-16739) Timed out exception message should include encoded region name

2016-09-30 Thread Ted Yu (JIRA)
Ted Yu created HBASE-16739:
--

 Summary: Timed out exception message should include encoded region 
name
 Key: HBASE-16739
 URL: https://issues.apache.org/jira/browse/HBASE-16739
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


Saw the following in region server log repeatedly:
{code}
2016-09-26 10:13:33,219 WARN org.apache.hadoop.hbase.regionserver.HRegion: 
Failed getting lock in batch put, row=1
java.io.IOException: Timed out waiting for lock for row: 1
  at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5151)
  at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3046)
  at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2902)
  at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2844)
  at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:692)
  at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:654)
  at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2040)
{code}
Region name was not logged - making troubleshooting a bit difficult.
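The requested improvement amounts to including the encoded region name in the
timeout message. A minimal sketch of building such a message; the method and
parameter names below are illustrative, not the actual HBase code:

```java
// Hedged sketch: compose the row-lock timeout message so it identifies the
// region, making it possible to tell which region is contended from the log.
public class RowLockTimeoutMessageSketch {
    static String timeoutMessage(String row, String encodedRegionName) {
        return "Timed out waiting for lock for row: " + row
             + " in region " + encodedRegionName;
    }

    public static void main(String[] args) {
        // "1588230740" is a made-up encoded region name for illustration.
        System.out.println(timeoutMessage("1", "1588230740"));
    }
}
```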





[jira] [Commented] (HBASE-16736) Add getter to ResizableBlockCache for max size

2016-09-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536577#comment-15536577
 ] 

Hadoop QA commented on HBASE-16736:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 40s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.4.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 20s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 3s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.5.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 48s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.5.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 8m 27s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.5.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 3s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 11m 42s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 13m 29s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 15m 13s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 30s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 111m 8s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 

[jira] [Commented] (HBASE-15984) Given failure to parse a given WAL that was closed cleanly, replay the WAL.

2016-09-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536576#comment-15536576
 ] 

Sean Busbey commented on HBASE-15984:
-

{quote}
Want me to look at a 1.1 port too?
{quote}

That would be wonderful, yes.

> Given failure to parse a given WAL that was closed cleanly, replay the WAL.
> ---
>
> Key: HBASE-15984
> URL: https://issues.apache.org/jira/browse/HBASE-15984
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.0.4, 1.4.0, 1.3.1, 1.1.7, 0.98.23, 1.2.4
>
> Attachments: HBASE-15984.1.patch, HBASE-15984.2.patch
>
>
> Subtask for a general workaround for "underlying reader failed / is in a bad 
> state", just for the case where a WAL 1) was closed cleanly and 2) we can tell 
> that our current offset ought not to be the end of parseable entries.





[jira] [Commented] (HBASE-16731) Add Scan#setLoadColumnFamiliesOnDemand method to Get.

2016-09-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536370#comment-15536370
 ] 

Hadoop QA commented on HBASE-16731:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 8 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 46s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 53s {color} 
| {color:red} hbase-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 3s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 35s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestGet |
|   | hadoop.hbase.client.TestClientNoCluster |
|   | hadoop.hbase.regionserver.TestScanner |
|   | hadoop.hbase.regionserver.TestMinVersions |
|   | hadoop.hbase.filter.TestInvocationRecordFilter |
|   | hadoop.hbase.regionserver.querymatcher.TestUserScanQueryMatcher |
|   | hadoop.hbase.mob.mapreduce.TestMobSweepMapper |
|   | hadoop.hbase.regionserver.TestKeepDeletes |
|   | hadoop.hbase.regionserver.querymatcher.TestCompactionScanQueryMatcher |
|   | hadoop.hbase.regionserver.TestStoreFileRefresherChore |
|   | hadoop.hbase.procedure.TestProcedureManager |
|   | hadoop.hbase.regionserver.TestResettingCounters |
|   | hadoop.hbase.master.balancer.TestRegionLocationFinder |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 

[jira] [Commented] (HBASE-16736) Add getter to ResizableBlockCache for max size

2016-09-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536360#comment-15536360
 ] 

Ted Yu commented on HBASE-16736:


bq. In the case of CombinedCache, why get from the L1 cache alone?

The setMaxSize() method changes lruCache only. Hence I made getMaxSize() 
symmetrical to the setter.

bq. Then does the L2 size also need to be considered?

This is related to the above aspect, where the max size doesn't consider the 
off-heap size. I think we should keep the current behavior.

I prefer patch v1.
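The symmetry argued here could be sketched roughly as follows. This is a hypothetical illustration with stand-in class names (SimpleCache, CombinedCacheSketch), not the actual HBase CombinedBlockCache code:

```java
// Hypothetical sketch of the setter/getter symmetry discussed above: since
// setMaxSize(long) resizes only the on-heap L1 (LRU) cache, getMaxSize()
// reports only that cache's capacity. Names are illustrative, not HBase API.
public class CombinedCacheSketch {
    static final class SimpleCache {
        private long maxSize;
        SimpleCache(long maxSize) { this.maxSize = maxSize; }
        long getMaxSize() { return maxSize; }
        void setMaxSize(long size) { this.maxSize = size; }
    }

    final SimpleCache lruCache;  // L1: on-heap, resizable
    final SimpleCache l2Cache;   // L2: off-heap or file-backed, fixed capacity

    CombinedCacheSketch(long l1Size, long l2Size) {
        this.lruCache = new SimpleCache(l1Size);
        this.l2Cache = new SimpleCache(l2Size);
    }

    // The setter touches L1 only...
    public void setMaxSize(long size) { lruCache.setMaxSize(size); }

    // ...so the getter is kept symmetrical and reports L1 only.
    public long getMaxSize() { return lruCache.getMaxSize(); }
}
```

Under this reading, the L2 capacity is intentionally left out of both methods, which is the "keep the current behavior" position above.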

> Add getter to ResizableBlockCache for max size
> --
>
> Key: HBASE-16736
> URL: https://issues.apache.org/jira/browse/HBASE-16736
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16736.v1.txt, 16736.v2.txt
>
>
> Currently ResizableBlockCache only has one method for setting max size.
> As more first-level cache types are added, we need the ability to retrieve the 
> max size.
> This issue is to add a getter to ResizableBlockCache for retrieving the max size.





[jira] [Commented] (HBASE-16738) L1 cache caching shared memory HFile block when blocks promoted from L2 to L1

2016-09-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536341#comment-15536341
 ] 

Anoop Sam John commented on HBASE-16738:


Ping [~saint@gmail.com] [~ram_krish]

> L1 cache caching shared memory HFile block when blocks promoted from L2 to L1
> -
>
> Key: HBASE-16738
> URL: https://issues.apache.org/jira/browse/HBASE-16738
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-16738.patch
>
>
> This is an issue when L1 and L2 caches are used with combinedMode = false.
> See in getBlock
> {code}
> if (victimHandler != null && !repeat) {
> Cacheable result = victimHandler.getBlock(cacheKey, caching, repeat, 
> updateCacheMetrics);
> // Promote this to L1.
> if (result != null && caching) {
>   cacheBlock(cacheKey, result, /* inMemory = */ false, /* cacheData = 
> */ true);
> }
> return result;
>   }
> {code}
> When a block is not in L1 but is present in L2, we return the block read from 
> L2 and promote it to L1 by adding it to the LRUCache. But if the block's 
> buffer is shared memory (an off-heap bucket cache, for example), we cannot 
> cache this block directly: the memory area under this block can get cleaned 
> up at any time, so we may get block data corruption.
> In such a case, we need to do a deep copy of the block (including its buffer) 
> and then add that to the L1 cache.
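The deep-copy step described in the issue could look roughly like the sketch below. These are illustrative stand-in types (Block, safeForL1), not the real HBase HFileBlock/cache classes:

```java
import java.nio.ByteBuffer;

// Illustrative sketch of the deep copy described above: a block whose buffer
// lives in shared off-heap memory is copied onto the heap before being handed
// to the L1 cache, so later cleanup of the L2 memory cannot corrupt it.
public class PromotionSketch {
    static final class Block {
        final ByteBuffer buf;
        final boolean sharedMemory; // true when backed by e.g. an off-heap bucket cache
        Block(ByteBuffer buf, boolean sharedMemory) {
            this.buf = buf;
            this.sharedMemory = sharedMemory;
        }
    }

    // Return a block that is safe to hand to L1: shared-memory blocks are
    // deep-copied, exclusive heap blocks are passed through unchanged.
    static Block safeForL1(Block result) {
        if (!result.sharedMemory) {
            return result; // exclusive heap buffer, cache it directly
        }
        ByteBuffer copy = ByteBuffer.allocate(result.buf.remaining());
        copy.put(result.buf.duplicate()); // duplicate() leaves the source position untouched
        copy.flip();
        return new Block(copy, false);
    }
}
```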





[jira] [Commented] (HBASE-16736) Add getter to ResizableBlockCache for max size

2016-09-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536339#comment-15536339
 ] 

Anoop Sam John commented on HBASE-16736:


Yes, but a few things to consider. First of all, this is not exactly the heap 
size: in the case of an L2 cache this can be an off-heap size or even a file size.
In the case of CombinedCache, why get from the L1 cache alone?
In the case of an L1 cache, sometimes there can also be a victim L2 cache (L2 
also present with combinedMode = false). Then does the L2 size also need to be 
considered?


> Add getter to ResizableBlockCache for max size
> --
>
> Key: HBASE-16736
> URL: https://issues.apache.org/jira/browse/HBASE-16736
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16736.v1.txt, 16736.v2.txt
>
>
> Currently ResizableBlockCache only has one method for setting max size.
> As more first-level cache types are added, we need the ability to retrieve the 
> max size.
> This issue is to add a getter to ResizableBlockCache for retrieving the max size.





[jira] [Updated] (HBASE-16736) Add getter to ResizableBlockCache for max size

2016-09-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16736:
---
Attachment: 16736.v2.txt

You mean something like this?

> Add getter to ResizableBlockCache for max size
> --
>
> Key: HBASE-16736
> URL: https://issues.apache.org/jira/browse/HBASE-16736
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16736.v1.txt, 16736.v2.txt
>
>
> Currently ResizableBlockCache only has one method for setting max size.
> As more first-level cache types are added, we need the ability to retrieve the 
> max size.
> This issue is to add a getter to ResizableBlockCache for retrieving the max size.





[jira] [Commented] (HBASE-16737) NPE during close of RegionScanner

2016-09-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536331#comment-15536331
 ] 

Anoop Sam John commented on HBASE-16737:


Not sure why this was added this way. I don't see any issue in iterating over 
all the scanners and closing each one. Want to attach a patch?

> NPE during close of RegionScanner
> -
>
> Key: HBASE-16737
> URL: https://issues.apache.org/jira/browse/HBASE-16737
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Mark Christiaens
>
> We encountered the following stack trace during high load:
> {noformat}
> Unexpected throwable object 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.CellComparator.compareRows(CellComparator.java:186)
>   at 
> org.apache.hadoop.hbase.CellComparator.compare(CellComparator.java:63)
>   at 
> org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:2021)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:202)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:178)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:168)
>   at 
> java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:719)
>   at java.util.PriorityQueue.siftDown(PriorityQueue.java:687)
>   at java.util.PriorityQueue.poll(PriorityQueue.java:595)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:218)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:5608)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.close(BaseScannerRegionObserver.java:279)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$1.close(BaseScannerRegionObserver.java:186)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2378)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> What I suspect is happening is that the {{RegionScannerImpl}} is being closed 
> while the scanner's lease has expired.  During this close, the underlying 
> {{KeyValueHeap}} is being polled.  The {{heap}} tries to read data from 
> {{KeyValueScanners}} that then return {{null}}, which causes the crash.





[jira] [Commented] (HBASE-15870) Specify columns in REST multi gets

2016-09-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536298#comment-15536298
 ] 

Sean Busbey commented on HBASE-15870:
-

We don't add new downstream-facing functionality in maintenance releases. If 
you'd like to see this feature in a release sooner, it's best to ask for an 
update on 1.3's progress on the dev@hbase mailing list.

> Specify columns in REST multi gets
> --
>
> Key: HBASE-15870
> URL: https://issues.apache.org/jira/browse/HBASE-15870
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Dean Gurvitz
>Assignee: Matt Warhaftig
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: hbase-15870-v1.patch
>
>
> The REST multi-gets feature currently does not allow specifying only certain 
> columns or column families. Adding support for these should be quite simple 
> and improve the usability of the multi-gets feature.





[jira] [Commented] (HBASE-15870) Specify columns in REST multi gets

2016-09-30 Thread Dean Gurvitz (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536279#comment-15536279
 ] 

Dean Gurvitz commented on HBASE-15870:
--

Maybe we can commit this to 1.2.4 rather than wait for 1.3? The RC will only be 
finalized on Monday.

> Specify columns in REST multi gets
> --
>
> Key: HBASE-15870
> URL: https://issues.apache.org/jira/browse/HBASE-15870
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Reporter: Dean Gurvitz
>Assignee: Matt Warhaftig
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21
>
> Attachments: hbase-15870-v1.patch
>
>
> The REST multi-gets feature currently does not allow specifying only certain 
> columns or column families. Adding support for these should be quite simple 
> and improve the usability of the multi-gets feature.





[jira] [Commented] (HBASE-16736) Add getter to ResizableBlockCache for max size

2016-09-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536273#comment-15536273
 ] 

Anoop Sam John commented on HBASE-16736:


I'm sorry, typo: I meant BlockCache. Does it make sense to add a getter for the 
max capacity of the cache in the BlockCache interface itself? The setter makes 
sense in the ResizableBlockCache interface, but the getter need not be 
restricted to a resizable cache. Wdyt?

> Add getter to ResizableBlockCache for max size
> --
>
> Key: HBASE-16736
> URL: https://issues.apache.org/jira/browse/HBASE-16736
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16736.v1.txt
>
>
> Currently ResizableBlockCache only has one method for setting max size.
> As more first-level cache types are added, we need the ability to retrieve the 
> max size.
> This issue is to add a getter to ResizableBlockCache for retrieving the max size.





[jira] [Commented] (HBASE-16738) L1 cache caching shared memory HFile block when blocks promoted from L2 to L1

2016-09-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536244#comment-15536244
 ] 

Hadoop QA commented on HBASE-16738:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
40s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 50s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 26s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
13s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 122m 37s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
|   | 
org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpointNoMaster
 |
|   | 
org.apache.hadoop.hbase.master.procedure.TestDispatchMergingRegionsProcedure |
|   | org.apache.hadoop.hbase.master.procedure.TestRestoreSnapshotProcedure |
|   | org.apache.hadoop.hbase.master.procedure.TestTruncateTableProcedure |
|   | org.apache.hadoop.hbase.master.procedure.TestMasterProcedureWalLease |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831105/HBASE-16738.patch |
| JIRA Issue | HBASE-16738 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux eb3bbd6ec0a8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 3757da6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3782/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  

[jira] [Commented] (HBASE-15984) Given failure to parse a given WAL that was closed cleanly, replay the WAL.

2016-09-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536238#comment-15536238
 ] 

Sean Busbey commented on HBASE-15984:
-

frustratingly, I don't think I'll have time to work out the backport from 
branch-1.2 to branch-1.1 until next week. [~apurtell], do you think you can 
work out the 0.98 version from branch-1.2?

> Given failure to parse a given WAL that was closed cleanly, replay the WAL.
> ---
>
> Key: HBASE-15984
> URL: https://issues.apache.org/jira/browse/HBASE-15984
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.0.4, 1.4.0, 1.3.1, 1.1.7, 0.98.23, 1.2.4
>
> Attachments: HBASE-15984.1.patch, HBASE-15984.2.patch
>
>
> Subtask for a general workaround for "underlying reader failed / is in a bad 
> state", just for the case where a WAL 1) was closed cleanly and 2) we can tell 
> that our current offset ought not to be the end of parseable entries.





[jira] [Commented] (HBASE-16737) NPE during close of RegionScanner

2016-09-30 Thread Mark Christiaens (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536161#comment-15536161
 ] 

Mark Christiaens commented on HBASE-16737:
--

[~anoop.hbase] The call through {{preScannerClose}} seems OK to me.

To me, this code looks surprising 
(org.apache.hadoop.hbase.regionserver.KeyValueHeap#close):
{noformat}
public void close() {
  if (this.current != null) {
    this.current.close();
  }
  if (this.heap != null) {
    KeyValueScanner scanner;
    while ((scanner = this.heap.poll()) != null) {
      scanner.close();
    }
  }
}
{noformat}

Looks like it wants to close all scanners that are in the {{heap}}.  Instead of 
iterating over them and closing them, the code performs a _poll_ to get them?
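The iterate-and-close alternative suggested here could be sketched as follows. This is an illustrative stand-in (interface and method names are hypothetical), not the HBase implementation:

```java
import java.util.PriorityQueue;

// Illustrative sketch of the alternative suggested above: iterating over the
// heap to close each scanner avoids the comparator calls that poll()'s
// sift-down performs, which is where the NPE in the stack trace originates.
public class HeapCloseSketch {
    interface KeyValueScanner {
        void close();
    }

    // Close every scanner without reordering the heap; PriorityQueue's
    // iteration order is unspecified, but close() does not care about order.
    static int closeAll(PriorityQueue<KeyValueScanner> heap) {
        int closed = 0;
        for (KeyValueScanner scanner : heap) {
            scanner.close();
            closed++;
        }
        heap.clear();
        return closed;
    }
}
```

Since no element comparisons happen during iteration, scanners whose current cell has become null can no longer trip the comparator.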

> NPE during close of RegionScanner
> -
>
> Key: HBASE-16737
> URL: https://issues.apache.org/jira/browse/HBASE-16737
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Mark Christiaens
>
> We encountered the following stack trace during high load:
> {noformat}
> Unexpected throwable object 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.CellComparator.compareRows(CellComparator.java:186)
>   at 
> org.apache.hadoop.hbase.CellComparator.compare(CellComparator.java:63)
>   at 
> org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:2021)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:202)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:178)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:168)
>   at 
> java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:719)
>   at java.util.PriorityQueue.siftDown(PriorityQueue.java:687)
>   at java.util.PriorityQueue.poll(PriorityQueue.java:595)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:218)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:5608)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.close(BaseScannerRegionObserver.java:279)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$1.close(BaseScannerRegionObserver.java:186)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2378)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> What I suspect is happening is that the {{RegionScannerImpl}} is being closed 
> while the scanner's lease has expired.  During this close, the underlying 
> {{KeyValueHeap}} is being polled.  The {{heap}} tries to read data from 
> {{KeyValueScanners}} that then return {{null}}, which causes the crash.





[jira] [Updated] (HBASE-16731) Add Scan#setLoadColumnFamiliesOnDemand method to Get.

2016-09-30 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-16731:
--
Assignee: ChiaPing Tsai
  Status: Patch Available  (was: Open)

> Add Scan#setLoadColumnFamiliesOnDemand method to Get.
> -
>
> Key: HBASE-16731
> URL: https://issues.apache.org/jira/browse/HBASE-16731
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Attachments: HBASE-16731.v0.patch
>
>
> RSRpcServices#get() converts the Get to a Scan without calling 
> scan#setLoadColumnFamiliesOnDemand. This causes two disadvantages:
> 1) The results retrieved from Get and Scan will differ if we use an 
> empty filter: Scan doesn't return any data, but Get does.
> See [HBASE-16729 |https://issues.apache.org/jira/browse/HBASE-16729].
> 2) CF data cannot be read lazily for a Get operation.
> Any comments? Thanks.
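The conversion gap described in the issue could be sketched as below. The Get/Scan classes here are minimal stand-ins, not the real org.apache.hadoop.hbase.client types:

```java
// Minimal sketch of the Get-to-Scan conversion gap described above, with
// hypothetical stand-in types: the Scan built from a Get should carry the
// loadColumnFamiliesOnDemand flag instead of silently dropping it.
public class GetToScanSketch {
    static final class Get {
        Boolean loadColumnFamiliesOnDemand; // null = not set by the caller
    }

    static final class Scan {
        Boolean loadColumnFamiliesOnDemand;
        Scan setLoadColumnFamiliesOnDemand(boolean value) {
            this.loadColumnFamiliesOnDemand = value;
            return this;
        }
    }

    // Convert a Get to a Scan, propagating the on-demand flag when it was set.
    static Scan toScan(Get get) {
        Scan scan = new Scan();
        if (get.loadColumnFamiliesOnDemand != null) {
            scan.setLoadColumnFamiliesOnDemand(get.loadColumnFamiliesOnDemand);
        }
        return scan;
    }
}
```

With the flag propagated, a Get with an empty filter would behave like the equivalent Scan, and column families could be loaded lazily for Gets as well.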





[jira] [Updated] (HBASE-16731) Add Scan#setLoadColumnFamiliesOnDemand method to Get.

2016-09-30 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-16731:
--
Attachment: HBASE-16731.v0.patch

> Add Scan#setLoadColumnFamiliesOnDemand method to Get.
> -
>
> Key: HBASE-16731
> URL: https://issues.apache.org/jira/browse/HBASE-16731
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Priority: Minor
> Attachments: HBASE-16731.v0.patch
>
>
> RSRpcServices#get() converts the Get to a Scan without calling 
> scan#setLoadColumnFamiliesOnDemand. This causes two disadvantages:
> 1) The results retrieved from Get and Scan will differ if an empty filter is 
> used: the Scan doesn't return any data, but the Get does.
> See [HBASE-16729 |https://issues.apache.org/jira/browse/HBASE-16729]
> 2) CF data cannot be read lazily for a Get operation.
> Any comments? Thanks.





[jira] [Commented] (HBASE-16736) Add getter to ResizableBlockCache for max size

2016-09-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536080#comment-15536080
 ] 

Ted Yu commented on HBASE-16736:


BucketCache already has:
{code}
  public long getMaxSize() {
return this.cacheCapacity;
  }
{code}

> Add getter to ResizableBlockCache for max size
> --
>
> Key: HBASE-16736
> URL: https://issues.apache.org/jira/browse/HBASE-16736
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16736.v1.txt
>
>
> Currently, ResizableBlockCache only has a method for setting the max size.
> As more first-level cache types are added, we also need the ability to 
> retrieve the max size.
> This issue adds a getter to ResizableBlockCache for retrieving the max size.





[jira] [Commented] (HBASE-16642) Use DelayQueue instead of TimeoutBlockingQueue

2016-09-30 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15536065#comment-15536065
 ] 

Matteo Bertozzi commented on HBASE-16642:
-

What is the rationale behind the change?
To me it looks like there is more object allocation, and the logic around 
removal and wake-up of the queue is less clean.

If you want to move forward with the change, at least wrap all of this in a 
ProcedureDelayedQueue class or something like that, so we don't have to deal 
with creating those DelayedContainer objects or know about the POISON stuff.
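As a rough sketch of the suggested wrapper (the class names ProcedureDelayedQueue and DelayedContainer follow the comment above, but the implementation details are assumptions, not the committed code): the queue hides both the container allocation and the poison convention behind a small API.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Hedged sketch: wraps java.util.concurrent.DelayQueue so callers never see
// the DelayedContainer or the poison-pill convention directly.
class ProcedureDelayedQueue<E> {
  private static final class DelayedContainer<T> implements Delayed {
    final T element;            // null marks the poison pill
    final long wakeTimeNanos;   // absolute time the element becomes ready

    DelayedContainer(T element, long delayMillis) {
      this.element = element;
      this.wakeTimeNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMillis);
    }

    @Override
    public long getDelay(TimeUnit unit) {
      return unit.convert(wakeTimeNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
      return Long.compare(getDelay(TimeUnit.NANOSECONDS), other.getDelay(TimeUnit.NANOSECONDS));
    }
  }

  private final DelayQueue<DelayedContainer<E>> queue = new DelayQueue<>();

  /** Schedules an element to become available after the given delay. */
  public void add(E element, long delayMillis) {
    queue.add(new DelayedContainer<>(element, delayMillis));
  }

  /** Wakes one blocked consumer with an immediately-ready poison entry. */
  public void poison() {
    queue.add(new DelayedContainer<E>(null, 0));
  }

  /** Blocks until a delay expires; null signals poison (or interruption). */
  public E take() {
    try {
      return queue.take().element;
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      return null; // in this sketch, interruption is treated like shutdown
    }
  }
}
```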

> Use DelayQueue instead of TimeoutBlockingQueue
> --
>
> Key: HBASE-16642
> URL: https://issues.apache.org/jira/browse/HBASE-16642
> Project: HBase
>  Issue Type: Improvement
>Reporter: Hiroshi Ikeda
>Priority: Minor
> Attachments: HBASE-16642.master.V1.patch
>
>
> Enqueue poisons in order to wake up and end the internal threads.





[jira] [Commented] (HBASE-16737) NPE during close of RegionScanner

2016-09-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535960#comment-15535960
 ] 

Anoop Sam John commented on HBASE-16737:


{code}
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:5608)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.close(BaseScannerRegionObserver.java:279)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$1.close(BaseScannerRegionObserver.java:186)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2378)
{code}
This seems to be the preScannerClose() CP hook flow, but I cannot see traces 
of that. Is the Observer trying to close a RegionScanner on its own? Was this 
scanner created by the CP?

> NPE during close of RegionScanner
> -
>
> Key: HBASE-16737
> URL: https://issues.apache.org/jira/browse/HBASE-16737
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Mark Christiaens
>
> We encountered the following stack trace during high load:
> {noformat}
> Unexpected throwable object 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.CellComparator.compareRows(CellComparator.java:186)
>   at 
> org.apache.hadoop.hbase.CellComparator.compare(CellComparator.java:63)
>   at 
> org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:2021)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:202)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:178)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:168)
>   at 
> java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:719)
>   at java.util.PriorityQueue.siftDown(PriorityQueue.java:687)
>   at java.util.PriorityQueue.poll(PriorityQueue.java:595)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:218)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:5608)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.close(BaseScannerRegionObserver.java:279)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$1.close(BaseScannerRegionObserver.java:186)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2378)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> What I suspect is happening is that the {{RegionScannerImpl}} is being closed 
> while the scanner's lease has expired.  During this close, the underlying 
> {{KeyValueHeap}} is being polled.  The {{heap}} tries to read data from 
> {{KeyValueScanners}} that then return {{null}}, which causes the crash.





[jira] [Updated] (HBASE-16738) L1 cache caching shared memory HFile block when blocks promoted from L2 to L1

2016-09-30 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16738:
---
Status: Patch Available  (was: Open)

> L1 cache caching shared memory HFile block when blocks promoted from L2 to L1
> -
>
> Key: HBASE-16738
> URL: https://issues.apache.org/jira/browse/HBASE-16738
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-16738.patch
>
>
> This is an issue when the L1 and L2 caches are used with combinedMode = false.
> See getBlock:
> {code}
> if (victimHandler != null && !repeat) {
> Cacheable result = victimHandler.getBlock(cacheKey, caching, repeat, 
> updateCacheMetrics);
> // Promote this to L1.
> if (result != null && caching) {
>   cacheBlock(cacheKey, result, /* inMemory = */ false, /* cacheData = 
> */ true);
> }
> return result;
>   }
> {code}
> When a block is not in L1 but is in L2, we return the block read from L2 and 
> promote it to L1 by adding it to the LRUCache.  But if the block's buffer is 
> backed by shared memory (e.g. an off-heap bucket cache), we cannot cache this 
> block directly: the buffer memory area under the block can get cleaned up at 
> any time, so we may get block data corruption.
> In such a case, we need to do a deep copy of the block (including its buffer) 
> and then add that copy to the L1 cache.





[jira] [Updated] (HBASE-16738) L1 cache caching shared memory HFile block when blocks promoted from L2 to L1

2016-09-30 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-16738:
---
Attachment: HBASE-16738.patch

> L1 cache caching shared memory HFile block when blocks promoted from L2 to L1
> -
>
> Key: HBASE-16738
> URL: https://issues.apache.org/jira/browse/HBASE-16738
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-16738.patch
>
>
> This is an issue when the L1 and L2 caches are used with combinedMode = false.
> See getBlock:
> {code}
> if (victimHandler != null && !repeat) {
> Cacheable result = victimHandler.getBlock(cacheKey, caching, repeat, 
> updateCacheMetrics);
> // Promote this to L1.
> if (result != null && caching) {
>   cacheBlock(cacheKey, result, /* inMemory = */ false, /* cacheData = 
> */ true);
> }
> return result;
>   }
> {code}
> When a block is not in L1 but is in L2, we return the block read from L2 and 
> promote it to L1 by adding it to the LRUCache.  But if the block's buffer is 
> backed by shared memory (e.g. an off-heap bucket cache), we cannot cache this 
> block directly: the buffer memory area under the block can get cleaned up at 
> any time, so we may get block data corruption.
> In such a case, we need to do a deep copy of the block (including its buffer) 
> and then add that copy to the L1 cache.





[jira] [Created] (HBASE-16738) L1 cache caching shared memory HFile block when blocks promoted from L2 to L1

2016-09-30 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-16738:
--

 Summary: L1 cache caching shared memory HFile block when blocks 
promoted from L2 to L1
 Key: HBASE-16738
 URL: https://issues.apache.org/jira/browse/HBASE-16738
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 2.0.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0


This is an issue when the L1 and L2 caches are used with combinedMode = false.
See getBlock:
{code}
if (victimHandler != null && !repeat) {
Cacheable result = victimHandler.getBlock(cacheKey, caching, repeat, 
updateCacheMetrics);

// Promote this to L1.
if (result != null && caching) {
  cacheBlock(cacheKey, result, /* inMemory = */ false, /* cacheData = 
*/ true);
}
return result;
  }
{code}
When a block is not in L1 but is in L2, we return the block read from L2 and 
promote it to L1 by adding it to the LRUCache.  But if the block's buffer is 
backed by shared memory (e.g. an off-heap bucket cache), we cannot cache this 
block directly: the buffer memory area under the block can get cleaned up at 
any time, so we may get block data corruption.
In such a case, we need to do a deep copy of the block (including its buffer) 
and then add that copy to the L1 cache.
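A hedged sketch of the deep-copy idea, using a plain ByteBuffer in place of the real Cacheable/HFileBlock types (which carry more state than a raw buffer):

```java
import java.nio.ByteBuffer;

// Illustrative only: before promoting a block read from a shared-memory L2
// cache into the on-heap L1 cache, copy its backing buffer so later reuse or
// eviction of the L2 memory cannot corrupt the cached copy.
final class BlockPromotion {
  static ByteBuffer deepCopyIfShared(ByteBuffer block, boolean sharedMemory) {
    if (!sharedMemory) {
      return block; // exclusively owned on-heap memory can be cached as-is
    }
    ByteBuffer copy = ByteBuffer.allocate(block.remaining());
    copy.put(block.duplicate()); // duplicate() leaves the source position untouched
    copy.flip();
    return copy; // now safe to hand to the L1 cache
  }
}
```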





[jira] [Commented] (HBASE-16372) References to previous cell in read path should be avoided

2016-09-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535857#comment-15535857
 ] 

Hadoop QA commented on HBASE-16372:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
1s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 23s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 11s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 119m 52s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
|   | 
org.apache.hadoop.hbase.master.procedure.TestDispatchMergingRegionsProcedure |
|   | org.apache.hadoop.hbase.master.procedure.TestRestoreSnapshotProcedure |
|   | org.apache.hadoop.hbase.master.procedure.TestTruncateTableProcedure |
|   | org.apache.hadoop.hbase.master.procedure.TestMasterProcedureWalLease |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831097/HBASE-16372_3.patch |
| JIRA Issue | HBASE-16372 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux b06edd54d5ea 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 3757da6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3781/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/3781/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3781/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Commented] (HBASE-16608) Introducing the ability to merge ImmutableSegments without copy-compaction or SQM usage

2016-09-30 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535688#comment-15535688
 ] 

ramkrishna.s.vasudevan commented on HBASE-16608:


bq.We would prefer this way. What do you think?
If you feel it could take time to resolve this problem, then fine. But I would 
still think it is better to resolve it and then commit. And I understand your 
pain in rebasing the patch against the updated trunk code. 
Anyway, if others are fine with committing, I am not going to block the commit.
Thank you.

> Introducing the ability to merge ImmutableSegments without copy-compaction or 
> SQM usage
> ---
>
> Key: HBASE-16608
> URL: https://issues.apache.org/jira/browse/HBASE-16608
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-16417-V02.patch, HBASE-16417-V04.patch, 
> HBASE-16417-V06.patch, HBASE-16417-V07.patch, HBASE-16417-V08.patch, 
> HBASE-16417-V10.patch, HBASE-16608-V01.patch, HBASE-16608-V03.patch, 
> HBASE-16608-V04.patch
>
>






[jira] [Commented] (HBASE-15871) Memstore flush doesn't finish because of backwardseek() in memstore scanner.

2016-09-30 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535690#comment-15535690
 ] 

ramkrishna.s.vasudevan commented on HBASE-15871:


Any chance of a review here?

> Memstore flush doesn't finish because of backwardseek() in memstore scanner.
> 
>
> Key: HBASE-15871
> URL: https://issues.apache.org/jira/browse/HBASE-15871
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 1.1.2
>Reporter: Jeongdae Kim
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-15871-branch-1.patch, 
> HBASE-15871.branch-1.1.001.patch, HBASE-15871.branch-1.1.002.patch, 
> HBASE-15871.branch-1.1.003.patch, HBASE-15871.patch, HBASE-15871_1.patch, 
> HBASE-15871_1.patch, HBASE-15871_2.patch, HBASE-15871_3.patch, 
> HBASE-15871_4.patch, HBASE-15871_6.patch, memstore_backwardSeek().PNG
>
>
> Sometimes in our production HBase cluster, it takes a long time (more than 
> 30 minutes) to finish a memstore flush.
> The reason is that a memstore flusher thread calls 
> StoreScanner.updateReaders() and then waits to acquire a lock that the store 
> scanner holds in StoreScanner.next(), while backwardSeek() in the memstore 
> scanner runs for a long time.
> I think this condition can occur in a reverse scan through the following 
> process:
> 1) Create a reversed store scanner by requesting a reverse scan.
> 2) Flush a memstore in the same HStore.
> 3) Put a lot of cells into the memstore until it is almost full.
> 4) Call the reverse scanner's next(), which re-creates all scanners in this 
> store (because they were all closed by 2)'s flush()) and calls backwardSeek() 
> with the store's lastTop on all new scanners.
> 5) At this point, the memstore is almost full because of 3), and all cells 
> in the memstore have a sequenceID greater than this scanner's readPoint 
> because of 2)'s flush(). This condition causes a search over all cells in 
> the memstore, and seekToPreviousRow() repeatedly searches cells that were 
> already searched if a row has only one column. (Described in more detail in 
> an attached file.)
> 6) Flush the memstore again in the same HStore; this flush waits until the 
> 4)-5) process finishes before it can update the store files in the HStore.
> I searched the HBase JIRA and found a similar issue (HBASE-14497), but its 
> fix can't solve this issue, because that fix just changed a recursive call 
> to a loop (and it is already applied to our HBase version).





[jira] [Updated] (HBASE-16372) References to previous cell in read path should be avoided

2016-09-30 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16372:
---
Attachment: HBASE-16372_3.patch

Patch for trunk. 
On the read side, after discussion, I have ensured that on every shipped() 
call we copy all the cells that are kept as state variables in the read path 
(in StoreScanner, SQM and CT).
Similarly, the write path now adds a CellSink of type Shipper to all the 
writers, so when the compactor calls shipped() on the KVScanner we also call 
shipped() on all the writers, thus copying all the cells in the write path.
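The shipped() fan-out described above can be sketched roughly as follows (illustrative interfaces only, not the real HBase Shipper/CellSink types): when the scanner's buffers are about to be released, every writer holding references into those buffers copies what it still needs.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of propagating shipped() from a scanner to its writers.
interface Shipper {
  void shipped();
}

final class CopyOnShipWriter implements Shipper {
  private byte[] pendingRef; // may point into a shared, reusable buffer
  private byte[] safeCopy;

  void append(byte[] cell) {
    this.pendingRef = cell;
  }

  @Override
  public void shipped() {
    if (pendingRef != null) {
      safeCopy = pendingRef.clone(); // deep copy before the buffer is recycled
      pendingRef = null;
    }
  }

  byte[] state() {
    return safeCopy;
  }
}

final class CompactionScanner implements Shipper {
  private final List<Shipper> writers = new ArrayList<>();

  void register(Shipper w) {
    writers.add(w);
  }

  @Override
  public void shipped() {
    for (Shipper w : writers) {
      w.shipped(); // fan out so every writer copies its retained cells
    }
  }
}
```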

> References to previous cell in read path should be avoided
> --
>
> Key: HBASE-16372
> URL: https://issues.apache.org/jira/browse/HBASE-16372
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-16372_1.patch, HBASE-16372_2.patch, 
> HBASE-16372_3.patch, HBASE-16372_testcase.patch, HBASE-16372_testcase_1.patch
>
>
> This came up as part of the review discussion in HBASE-15554. If references 
> to previous cells are kept in the read path, then with the ref-count-based 
> eviction mechanism in trunk, a block backing the previous cell may be 
> evicted while the read path still performs operations on that reclaimed 
> previous cell, leading to incorrect results.
> Areas to target:
> -> StoreScanner
> -> Bloom filters (particularly in the compaction path)
> Thanks to [~anoop.hbase] for pointing this out in the bloom filter path, but 
> we found it could occur in other areas also.





[jira] [Commented] (HBASE-16642) Use DelayQueue instead of TimeoutBlockingQueue

2016-09-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535585#comment-15535585
 ] 

Hadoop QA commented on HBASE-16642:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 7m 50s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
34s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 9s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 0s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
10s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 50s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831079/HBASE-16642.master.V1.patch
 |
| JIRA Issue | HBASE-16642 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux a6b9f9469438 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 3757da6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3780/testReport/ |
| modules | C: hbase-procedure U: hbase-procedure |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3780/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Use DelayQueue instead of TimeoutBlockingQueue
> --
>
> Key: HBASE-16642
> URL: https://issues.apache.org/jira/browse/HBASE-16642
> Project: HBase
>  Issue Type: Improvement
>Reporter: Hiroshi Ikeda
>Priority: Minor
> Attachments: 

[jira] [Commented] (HBASE-16737) NPE during close of RegionScanner

2016-09-30 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535497#comment-15535497
 ] 

ramkrishna.s.vasudevan commented on HBASE-16737:


I think you have some Phoenix coprocessor involved as well. That lease removal 
is to ensure that the scanner lease expiry does not occur while a scan is in 
progress.
The scanner does not return any cell on peeking, and that causes this NPE. But 
why the scanners are not returning a cell is something to look into. It could 
also be because of the region observer logic. 


> NPE during close of RegionScanner
> -
>
> Key: HBASE-16737
> URL: https://issues.apache.org/jira/browse/HBASE-16737
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Mark Christiaens
>
> We encountered the following stack trace during high load:
> {noformat}
> Unexpected throwable object 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.CellComparator.compareRows(CellComparator.java:186)
>   at 
> org.apache.hadoop.hbase.CellComparator.compare(CellComparator.java:63)
>   at 
> org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:2021)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:202)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:178)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:168)
>   at 
> java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:719)
>   at java.util.PriorityQueue.siftDown(PriorityQueue.java:687)
>   at java.util.PriorityQueue.poll(PriorityQueue.java:595)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:218)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:5608)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.close(BaseScannerRegionObserver.java:279)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$1.close(BaseScannerRegionObserver.java:186)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2378)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> What I suspect is happening is that the {{RegionScannerImpl}} is being closed 
> while the scanner's lease has expired.  During this close, the underlying 
> {{KeyValueHeap}} is being polled.  The {{heap}} tries to read data from 
> {{KeyValueScanners}} that then return {{null}}, which causes the crash.





[jira] [Commented] (HBASE-16737) NPE during close of RegionScanner

2016-09-30 Thread Mark Christiaens (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535479#comment-15535479
 ] 

Mark Christiaens commented on HBASE-16737:
--

I suspect that this ticket is related to HBASE-2077.  Looking at the code of 
{{org.apache.hadoop.hbase.regionserver.RSRpcServices#scan}}, you see that 
during normal operation of the scan, the {{lease}} is removed and added again 
once the scan is completed.
{noformat}
...
lease = regionServer.leases.removeLease(scannerName);
...
if (lease != null) regionServer.leases.addLease(lease);
...
{noformat}
However, the {{scanner.close()}} call occurs after reinstating the lease.

> NPE during close of RegionScanner
> -
>
> Key: HBASE-16737
> URL: https://issues.apache.org/jira/browse/HBASE-16737
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Mark Christiaens
>
> We encountered the following stack trace during high load:
> {noformat}
> Unexpected throwable object 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.CellComparator.compareRows(CellComparator.java:186)
>   at 
> org.apache.hadoop.hbase.CellComparator.compare(CellComparator.java:63)
>   at 
> org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:2021)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:202)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:178)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:168)
>   at 
> java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:719)
>   at java.util.PriorityQueue.siftDown(PriorityQueue.java:687)
>   at java.util.PriorityQueue.poll(PriorityQueue.java:595)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:218)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:5608)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.close(BaseScannerRegionObserver.java:279)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$1.close(BaseScannerRegionObserver.java:186)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2378)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> What I suspect is happening is that the {{RegionScannerImpl}} is being closed 
> while the scanner's lease is expired.  During this close, the underlying 
> {{KeyValueHeap}} is polled. The {{heap}} tries to read data from 
> {{KeyValueScanners}} that then return {{null}}, which causes the crash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16737) NPE during close of RegionScanner

2016-09-30 Thread Mark Christiaens (JIRA)
Mark Christiaens created HBASE-16737:


 Summary: NPE during close of RegionScanner
 Key: HBASE-16737
 URL: https://issues.apache.org/jira/browse/HBASE-16737
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Mark Christiaens


We encountered the following stack trace during high load:
{noformat}
Unexpected throwable object 
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.CellComparator.compareRows(CellComparator.java:186)
at 
org.apache.hadoop.hbase.CellComparator.compare(CellComparator.java:63)
at 
org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:2021)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:202)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:178)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:168)
at 
java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:719)
at java.util.PriorityQueue.siftDown(PriorityQueue.java:687)
at java.util.PriorityQueue.poll(PriorityQueue.java:595)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:218)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:5608)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.close(BaseScannerRegionObserver.java:279)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$1.close(BaseScannerRegionObserver.java:186)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2378)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
{noformat}

What I suspect is happening is that the {{RegionScannerImpl}} is being closed 
while the scanner's lease is expired.  During this close, the underlying 
{{KeyValueHeap}} is polled. The {{heap}} tries to read data from 
{{KeyValueScanners}} that then return {{null}}, which causes the crash.
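The failure in the stack trace above can be reproduced in miniature (illustrative types, not the HBase classes): a heap whose comparator dereferences each element's current value throws a NullPointerException as soon as one element starts answering null, exactly while the heap is being re-ordered during poll()/close().

```java
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.function.Supplier;

// Minimal model of the NPE above: the heap's comparator calls each
// scanner's peek(); if a scanner is invalidated underneath us and starts
// returning null, re-heapifying during add()/poll() throws NPE, just as
// KVScannerComparator does inside KeyValueHeap.close().
public class HeapCloseNpeSketch {
    public static boolean closeThrowsNpe() {
        Supplier<String> live = () -> "row-a";
        Supplier<String> dead = () -> null;  // scanner invalidated under us
        Comparator<Supplier<String>> cmp =
            Comparator.comparing((Supplier<String> s) -> s.get().length());
        PriorityQueue<Supplier<String>> heap = new PriorityQueue<>(cmp);
        heap.add(live);
        try {
            heap.add(dead);                   // comparator runs during sift-up
            while (heap.poll() != null) { }   // close() drains the heap
            return false;
        } catch (NullPointerException e) {
            return true;                      // s.get() returned null
        }
    }

    public static void main(String[] args) {
        System.out.println(closeThrowsNpe());  // prints true
    }
}
```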



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16642) Use DelayQueue instead of TimeoutBlockingQueue

2016-09-30 Thread Hiroshi Ikeda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroshi Ikeda updated HBASE-16642:
--
Attachment: HBASE-16642.master.V1.patch

> Use DelayQueue instead of TimeoutBlockingQueue
> --
>
> Key: HBASE-16642
> URL: https://issues.apache.org/jira/browse/HBASE-16642
> Project: HBase
>  Issue Type: Improvement
>Reporter: Hiroshi Ikeda
>Priority: Minor
> Attachments: HBASE-16642.master.V1.patch
>
>
> Enqueue poisons in order to wake up and end the internal threads.
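A minimal sketch of the poison-pill pattern the summary refers to (illustrative names, not the actual procedure-framework code): the worker blocks on DelayQueue.take(), and shutdown enqueues a poison element to wake the thread and end its loop.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Poison-pill shutdown with java.util.concurrent.DelayQueue: a worker
// blocks on take() until an element's delay expires; enqueueing a poison
// element wakes the thread and ends the loop. Illustrative sketch only.
public class DelayQueuePoison {
    static class Task implements Delayed {
        final long deadlineNanos;
        final boolean poison;
        Task(long delayMillis, boolean poison) {
            this.deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMillis);
            this.poison = poison;
        }
        @Override public long getDelay(TimeUnit unit) {
            return unit.convert(deadlineNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
        }
        @Override public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS), other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    // Returns how many real tasks were processed before the poison arrived.
    public static int drainUntilPoison(DelayQueue<Task> queue) {
        int processed = 0;
        try {
            while (true) {
                Task t = queue.take();          // blocks until a delay expires
                if (t.poison) return processed; // poison ends the worker loop
                processed++;
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return processed;
        }
    }

    public static void main(String[] args) {
        DelayQueue<Task> queue = new DelayQueue<>();
        queue.put(new Task(10, false));
        queue.put(new Task(20, false));
        queue.put(new Task(30, true));          // poison, delivered last
        System.out.println(drainUntilPoison(queue));  // prints 2
    }
}
```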



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16642) Use DelayQueue instead of TimeoutBlockingQueue

2016-09-30 Thread Hiroshi Ikeda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroshi Ikeda updated HBASE-16642:
--
Status: Patch Available  (was: Open)

> Use DelayQueue instead of TimeoutBlockingQueue
> --
>
> Key: HBASE-16642
> URL: https://issues.apache.org/jira/browse/HBASE-16642
> Project: HBase
>  Issue Type: Improvement
>Reporter: Hiroshi Ikeda
>Priority: Minor
> Attachments: HBASE-16642.master.V1.patch
>
>
> Enqueue poisons in order to wake up and end the internal threads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16736) Add getter to ResizableBlockCache for max size

2016-09-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535439#comment-15535439
 ] 

Anoop Sam John commented on HBASE-16736:


I mean, does it make sense to add it to BucketCache itself?

> Add getter to ResizableBlockCache for max size
> --
>
> Key: HBASE-16736
> URL: https://issues.apache.org/jira/browse/HBASE-16736
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16736.v1.txt
>
>
> Currently ResizableBlockCache only has a method for setting the max size.
> As more first-level cache types are added, we need the ability to retrieve 
> the max size.
> This issue is to add a getter to ResizableBlockCache for retrieving the max 
> size.
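A sketch of the proposed change (assumed shape only; the real interface lives in the hbase-server block cache code): pair the existing setter with a getter, shown with a trivial implementation.

```java
// Illustrative sketch of the proposed API change: a getter paired with
// the existing max-size setter, with a trivial backing implementation.
public class ResizableCacheSketch {
    interface ResizableBlockCache {
        void setMaxSize(long size);  // existing method
        long getMaxSize();           // proposed getter
    }

    static class SimpleCache implements ResizableBlockCache {
        private long maxSize;
        @Override public void setMaxSize(long size) { this.maxSize = size; }
        @Override public long getMaxSize() { return maxSize; }
    }

    public static void main(String[] args) {
        ResizableBlockCache cache = new SimpleCache();
        cache.setMaxSize(64L * 1024 * 1024);
        System.out.println(cache.getMaxSize());  // prints 67108864
    }
}
```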



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16721) Concurrency issue in WAL unflushed seqId tracking

2016-09-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535340#comment-15535340
 ] 

Hudson commented on HBASE-16721:


FAILURE: Integrated in Jenkins build HBase-1.1-JDK8 #1872 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1872/])
HBASE-16721 Concurrency issue in WAL unflushed seqId tracking (enis: rev 
06c3dec2da32dcb588f0eb31e5db87796668bd39)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestFSHLog.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WAL.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Concurrency issue in WAL unflushed seqId tracking
> -
>
> Key: HBASE-16721
> URL: https://issues.apache.org/jira/browse/HBASE-16721
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.0.0, 1.1.0, 1.2.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.4, 1.1.8
>
> Attachments: hbase-16721_v1.branch-1.patch, 
> hbase-16721_v2.branch-1.patch, hbase-16721_v2.master.patch
>
>
> I'm inspecting an interesting case where, in a production cluster, some 
> regionservers end up accumulating hundreds of WAL files, even with force 
> flushes going on due to max logs. This happened multiple times on the 
> cluster, but not on other clusters. The cluster has the periodic memstore 
> flusher disabled; however, this still does not explain why the force flush 
> of regions due to the max-logs limit is not working. I think the periodic 
> memstore flusher just masks the underlying problem, which is why we do not 
> see this in other clusters. 
> The problem starts like this: 
> {code}
> 2016-09-21 17:49:18,272 INFO  [regionserver//10.2.0.55:16020.logRoller] 
> wal.FSHLog: Too many wals: logs=33, maxlogs=32; forcing flush of 1 
> regions(s): d4cf39dc40ea79f5da4d0cf66d03cb1f
> 2016-09-21 17:49:18,273 WARN  [regionserver//10.2.0.55:16020.logRoller] 
> regionserver.LogRoller: Failed to schedule flush of 
> d4cf39dc40ea79f5da4d0cf66d03cb1f, region=null, requester=null
> {code}
> then, it continues until the RS is restarted: 
> {code}
> 2016-09-23 17:43:49,356 INFO  [regionserver//10.2.0.55:16020.logRoller] 
> wal.FSHLog: Too many wals: logs=721, maxlogs=32; forcing flush of 1 
> regions(s): d4cf39dc40ea79f5da4d0cf66d03cb1f
> 2016-09-23 17:43:49,357 WARN  [regionserver//10.2.0.55:16020.logRoller] 
> regionserver.LogRoller: Failed to schedule flush of 
> d4cf39dc40ea79f5da4d0cf66d03cb1f, region=null, requester=null
> {code}
> The problem is that region {{d4cf39dc40ea79f5da4d0cf66d03cb1f}} was already 
> split some time ago, and it was able to flush its data and split without any 
> problems. However, the FSHLog still thinks that there is unflushed data for 
> this region. 
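A toy model of the accounting described above (illustrative names, not the FSHLog internals): the WAL keeps the lowest unflushed seqId per region, and a WAL file can only be archived when no region still holds an unflushed seqId at or below that file's highest seqId. A stale entry for a region that has already flushed and split pins every subsequent WAL file.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the unflushed-seqId bookkeeping from the report above.
// If the flush notification is lost (the concurrency bug), the region's
// entry never goes away and no WAL file can ever be archived.
public class UnflushedSeqIdSketch {
    final Map<String, Long> lowestUnflushedSeqId = new HashMap<>();

    void append(String region, long seqId) {
        // remember the first (lowest) unflushed edit per region
        lowestUnflushedSeqId.putIfAbsent(region, seqId);
    }

    void completeFlush(String region) {
        // called on flush; if this call is lost, the entry is stale forever
        lowestUnflushedSeqId.remove(region);
    }

    boolean canArchiveUpTo(long walHighestSeqId) {
        // archivable only if every region has flushed past this WAL file
        return lowestUnflushedSeqId.values().stream().allMatch(s -> s > walHighestSeqId);
    }

    public static void main(String[] args) {
        UnflushedSeqIdSketch wal = new UnflushedSeqIdSketch();
        wal.append("d4cf39dc", 100);
        // flush notification lost -> completeFlush never runs
        System.out.println(wal.canArchiveUpTo(1_000_000));  // prints false
    }
}
```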



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16736) Add getter to ResizableBlockCache for max size

2016-09-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535337#comment-15535337
 ] 

Hadoop QA commented on HBASE-16736:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
1s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
36m 10s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 121m 44s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 200m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.master.procedure.TestMasterProcedureWalLease |
| Timed out junit tests | 
org.apache.hadoop.hbase.mob.compactions.TestMobCompactor |
|   | 
org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelReplicationWithExpAsString
 |
|   | org.apache.hadoop.hbase.replication.TestSerialReplication |
|   | org.apache.hadoop.hbase.replication.TestMasterReplication |
|   | org.apache.hadoop.hbase.TestPartialResultsFromClientSide |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.1 Server=1.12.1 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831060/16736.v1.txt |
| JIRA Issue | HBASE-16736 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 46476352cd53 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 3757da6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Resolved] (HBASE-16493) MapReduce counters not updated with ScanMetrics

2016-09-30 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi resolved HBASE-16493.
---
Resolution: Duplicate

Duplicate of HBASE-16678

> MapReduce counters not updated with ScanMetrics 
> 
>
> Key: HBASE-16493
> URL: https://issues.apache.org/jira/browse/HBASE-16493
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 1.0.0, 2.0.0, 1.1.0
>Reporter: Jacobo Coll
>
> ScanMetrics were introduced in [HBASE-4145]. These metrics were designed to 
> work even in a parallel environment such as MapReduce.
> The TableRecordReader creates a Scanner over a copy of the given "scan", 
> called "currentScan". The Scanner captures the ScanMetrics and writes them 
> into that copy, "currentScan". After reading the last value, the 
> TableRecordReader updates the job counters to aggregate the metrics.
> But since [HBASE-13030], the TableRecordReader reads the scanMetrics from 
> the original "scan" object instead of from "currentScan".
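The bug pattern the description outlines can be shown in miniature (illustrative classes, not the actual TableRecordReader): metrics accumulate on a defensive copy while the counters are read from the original object, so they always read zero.

```java
import java.util.HashMap;
import java.util.Map;

// Toy reproduction of the copy-vs-original bug described above: the
// scanner writes metrics into a copy ("currentScan"), but the counter
// update reads from the original ("scan"), so the metrics are lost.
public class ScanMetricsBugSketch {
    static class Scan {
        Map<String, Long> metrics = new HashMap<>();
        Scan copy() { return new Scan(); }  // copy starts with fresh metrics
    }

    public static long buggyCounterUpdate(Scan scan) {
        Scan currentScan = scan.copy();     // scanner is built on the copy
        currentScan.metrics.merge("ROWS_SCANNED", 42L, Long::sum);
        // BUG (as in the report): read from 'scan' instead of 'currentScan'
        return scan.metrics.getOrDefault("ROWS_SCANNED", 0L);
    }

    public static void main(String[] args) {
        System.out.println(buggyCounterUpdate(new Scan()));  // prints 0
    }
}
```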



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15638) Shade protobuf

2016-09-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15535165#comment-15535165
 ] 

stack commented on HBASE-15638:
---

Update:

 * The HBASE-16264 subtask is good to go; it shades protobuf but does so in a 
way that CPEPs keep working.
 * HBASE-16567, the upgrade to protobuf 3.1, seems to work. That patch depends 
on HBASE-16264 going in first.

Currently working on checking in the shaded protobuf and the shaded generated 
sources. Trying to elaborate our current generate-protos step so it does the 
generation, shading, and copy-local of the shaded protobuf files. The idea is 
that we check all of this in, and thereafter the build will use the generated 
sources and things like IDEs will just work. If we need to patch the 
protobufs, that will be done as part of this out-of-band -Pcompile-protobuf 
step.

Have my little jenkins build and branch going to help work through this 
stuff:

https://builds.apache.org/view/H-L/view/HBase/job/HBASE-16264/
https://git-wip-us.apache.org/repos/asf?p=hbase.git;a=log;h=refs/heads/HBASE-16264
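As one concrete illustration of the relocation under discussion (a hypothetical maven-shade-plugin snippet, not the actual HBase pom):

```xml
<!-- Hypothetical maven-shade-plugin relocation sketching the kind of
     rename discussed above; the real HBase build may differ. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>com.google.protobuf</pattern>
        <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.protobuf</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

With a relocation like this, internal code would use the patched, relocated copy, while the CPEP-facing API keeps referencing unshaded com.google.protobuf so existing coprocessor endpoints continue to compile.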





> Shade protobuf
> --
>
> Key: HBASE-15638
> URL: https://issues.apache.org/jira/browse/HBASE-15638
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: 15638v2.patch, HBASE-15638.master.001.patch, 
> HBASE-15638.master.002.patch, HBASE-15638.master.003 (1).patch, 
> HBASE-15638.master.003 (1).patch, HBASE-15638.master.003 (1).patch, 
> HBASE-15638.master.003.patch, HBASE-15638.master.003.patch, 
> HBASE-15638.master.004.patch, HBASE-15638.master.005.patch, 
> HBASE-15638.master.006.patch, HBASE-15638.master.007.patch, 
> HBASE-15638.master.007.patch, HBASE-15638.master.008.patch, 
> HBASE-15638.master.009.patch, as.far.as.server.patch
>
>
> We need to change our protobuf. Currently it is pb 2.5.0. As is, protobuf 
> expects all buffers to be on-heap byte arrays. It has no facility for 
> dealing in ByteBuffers, off-heap ByteBuffers in particular. This fact 
> frustrates the off-heaping-of-the-write-path project, as 
> marshalling/unmarshalling of protobufs involves an on-heap copy first.
> So, we need to patch our protobuf so it supports off-heap ByteBuffers. To 
> ensure we pick up the patched protobuf always, we need to relocate/shade our 
> protobuf and adjust all protobuf references accordingly.
> Given that we have protobufs in our public-facing API -- Coprocessor 
> Endpoints use a protobuf Service to describe new APIs -- a blind 
> relocation/shading of com.google.protobuf.* will break our API for 
> Coprocessor Endpoints (CPEPs) in particular. For example, in the Table 
> interface, to invoke a method on a registered CPEP, we have:
> {code}
> <T extends com.google.protobuf.Service, R> Map<byte[], R> coprocessorService(
>     Class<T> service, byte[] startKey, byte[] endKey,
>     org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T, R> callable)
>   throws com.google.protobuf.ServiceException, Throwable
> {code}
> This issue is about how we intend to shade protobuf for hbase-2.0.0 while 
> preserving our API as-is so CPEPs continue to work on the new hbase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)