[GitHub] [hbase] q977734161 commented on a change in pull request #768: HBASE-23224 Delete the TODO tag

2019-11-07 Thread GitBox
q977734161 commented on a change in pull request #768: HBASE-23224 Delete the 
TODO tag
URL: https://github.com/apache/hbase/pull/768#discussion_r343523328
 
 

 ##
 File path: 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/WALProcedureStore.java
 ##
 @@ -848,14 +850,24 @@ private void syncLoop() throws Throwable {
   StringUtils.humanSize(syncedPerSec)));
   }
 
-  // update webui circular buffers (TODO: get rid of allocations)
-  final SyncMetrics syncMetrics = new SyncMetrics();
+  // update webui circular buffers
+  SyncMetrics syncMetrics = null;
 +  if (syncMetricsQueue.isAtFullCapacity()) {
 +    if (syncMetricsQueueIndex == syncMetricsQueueSize) {
 +      syncMetricsQueueIndex = 0;
 +    }
 +    syncMetrics = syncMetricsQueue.get(syncMetricsQueueIndex);
 +    syncMetricsQueueIndex++;
 
 Review comment:
   Yes, `syncMetricsQueueIndex++` advances the index so that the next iteration 
reuses the next `SyncMetrics` object in `syncMetricsQueue`.
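
The reuse scheme under discussion can be sketched as a small ring that allocates until it reaches capacity and then recycles slots in order. Class and method names below are illustrative stand-ins, not the actual WALProcedureStore members:

```java
import java.util.ArrayList;
import java.util.List;

public class MetricsRing {
    static class SyncMetrics {
        long syncedEntries;
        long totalSyncedBytes;
    }

    private final List<SyncMetrics> buffer = new ArrayList<>();
    private final int capacity;
    private int index = 0;

    MetricsRing(int capacity) {
        this.capacity = capacity;
    }

    /** Returns a SyncMetrics slot to overwrite: allocates until full, then reuses in order. */
    SyncMetrics next() {
        if (buffer.size() < capacity) {   // still filling: allocate a new slot
            SyncMetrics m = new SyncMetrics();
            buffer.add(m);
            return m;
        }
        if (index == capacity) {          // wrap around to the oldest slot
            index = 0;
        }
        return buffer.get(index++);       // reuse the existing object, no allocation
    }

    int size() {
        return buffer.size();
    }
}
```

Once the ring is full, each call hands back a previously allocated object to be overwritten, which is what avoids the per-sync allocations the deleted TODO complained about.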


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-23269) Hbase crashed due to two versions of regionservers when rolling upgrading

2019-11-07 Thread Jianzhen Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianzhen Xu updated HBASE-23269:

Description: 
Currently, when HBase has the rs_group function enabled and needs to be 
upgraded to a higher version, the meta table may fail to be assigned, which 
eventually makes the whole cluster unavailable and drops availability to 0. 
This applies to all versions that introduce rs_group functionality, including 
versions below 1.4 that carry the rs_group patch: upgrading them to 1.4+ will 
also hit this.
 When this happens during an upgrade:
 * During a rolling upgrade of regionservers, it always occurs if the first RS 
upgraded is not in the same rs_group as the meta table.
 The phenomenon is as follows:

!image-2019-11-07-14-50-11-877.png!

!image-2019-11-07-14-51-38-858.png!

The reason is as follows: during the rolling upgrade of the first regionserver 
node (denoted RS1), RS1 started up and re-registered to ZK; the master noticed 
this through the watcher in RegionServerTracker and finally reached this 
method: HMaster.checkIfShouldMoveSystemRegionAsync().

The logic of this method is as follows:

 
{code:java}
public void checkIfShouldMoveSystemRegionAsync() {
  new Thread(new Runnable() {
@Override
public void run() {
  try {
synchronized (checkIfShouldMoveSystemRegionLock) {
  // RS register on ZK after reports startup on master
  List<HRegionInfo> regionsShouldMove = new ArrayList<>();
  for (ServerName server : getExcludedServersForSystemTable()) {
regionsShouldMove.addAll(getCarryingSystemTables(server));
  }
  if (!regionsShouldMove.isEmpty()) {
List<RegionPlan> plans = new ArrayList<>();
for (HRegionInfo regionInfo : regionsShouldMove) {
  RegionPlan plan = getRegionPlan(regionInfo, true);
  if (regionInfo.isMetaRegion()) {
// Must move meta region first.
balance(plan);
  } else {
plans.add(plan);
  }
}
for (RegionPlan plan : plans) {
  balance(plan);
}
  }
}
  } catch (Throwable t) {
LOG.error(t);
  }
}
  }).start();
}{code}
 
 # First, getExcludedServersForSystemTable() is executed: it finds the highest 
version among all regionservers and returns all RSs below that version, labeled 
LowVersionRSList.
 # If step 1 does not return an empty list, iterate over it. If an RS carries a 
region of a system table, add that region to the list of regions to move. If 
the first RS upgraded at this point is not in the rs_group where the system 
table is located, the meta region is therefore added to regionsShouldMove.
 # Get a RegionPlan for each region in regionsShouldMove, with the parameter 
forceNewPlan set to true:
 ## Get all regionservers whose version is below the highest version;
 ## Exclude those servers from the set of all online RSs; the result is that 
only already-upgraded RSs remain, marked as destServers;
 ## Since forceNewPlan is set to true, the destination server is obtained 
through balancer.randomAssignment(region, destServers). Since the rs_group 
function is enabled, the balancer here is RSGroupBasedLoadBalancer. The logic 
in this method is:
 ### Intersect the destServers obtained in 3.2 with all online regionservers in 
the rs_group of the current region. When the region belongs to a system table 
and destServers are not in the same rs_group, the result is empty. If the 
result is empty, the destination regionserver is hard-coded as 
BOGUS_SERVER_NAME (localhost,1).
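
The fallback in step 3.3 can be illustrated with a small, self-contained sketch. The names below (randomAssignment, servers as plain strings, BOGUS_SERVER_NAME as "localhost,1") are simplified stand-ins for the RSGroupBasedLoadBalancer internals, not the real API:

```java
import java.util.ArrayList;
import java.util.List;

public class GroupAwareAssign {
    // Stand-in for the hard-coded bogus destination described above.
    static final String BOGUS_SERVER_NAME = "localhost,1";

    /**
     * Picks a destination from destServers that is also online in the region's
     * RSGroup; returns the bogus server when the intersection is empty.
     */
    static String randomAssignment(List<String> destServers, List<String> groupOnlineServers) {
        List<String> candidates = new ArrayList<>(destServers);
        candidates.retainAll(groupOnlineServers);   // the intersection from step 3.3
        if (candidates.isEmpty()) {
            // No upgraded server inside the region's RSGroup: assignment cannot succeed.
            return BOGUS_SERVER_NAME;
        }
        return candidates.get(0);                   // the real code picks at random
    }
}
```

With destServers holding only the upgraded RS and the group holding only not-yet-upgraded ones, the intersection is empty and the bogus destination is returned, which is exactly the failure mode described.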

Therefore, when the master assigns a region of the system table to localhost,1, 
the assignment naturally fails. If this master logic goes unnoticed and the 
problem occurs, you can upgrade any node in the rs_group where the system table 
is located, and the cluster will automatically recover.

During an actual upgrade, you will rarely know about this problem without 
reading the master code. However, the official documentation does not state 
that when the rs_group function is in use, the rs_group where the system tables 
are located needs to be upgraded first, so it is easy to walk into this process 
and eventually crash. (Per the code comment, system tables are assigned to the 
highest-version RSs for compatibility purposes.)

Therefore, without changing the code logic, the official documentation could 
note that when upgrading a cluster with the rs_group function enabled, the 
rs_group hosting the system tables should be upgraded first.


[jira] [Commented] (HBASE-23268) Remove disable/enable operations from doc when altering schema

2019-11-07 Thread Wellington Chevreuil (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969097#comment-16969097
 ] 

Wellington Chevreuil commented on HBASE-23268:
--

Is it that the _alter_ command now performs _enabling/disabling_ behind the 
scenes? I haven't double-checked, but I thought I had run into a situation 
before where the command failed midway and left the given table disabled. If 
that's indeed the case, maybe it's worth a note mentioning it?

> Remove disable/enable operations from doc when altering schema
> --
>
> Key: HBASE-23268
> URL: https://issues.apache.org/jira/browse/HBASE-23268
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Attachments: HBASE-23268.master.001.patch
>
>
> Per HBASE-15989, we always allow users to alter a schema without disabling 
> the table. We should remove the steps before and after {{alter}} command from 
> the doc appropriately.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23270) Inter-cluster replication is unaware destination peer cluster's RSGroup to push the WALEdits

2019-11-07 Thread Pradeep (Jira)
Pradeep created HBASE-23270:
---

 Summary: Inter-cluster replication is unaware destination peer 
cluster's RSGroup to push the WALEdits
 Key: HBASE-23270
 URL: https://issues.apache.org/jira/browse/HBASE-23270
 Project: HBase
  Issue Type: Bug
Reporter: Pradeep


In a source RSGroup-enabled HBase cluster with replication enabled to a 
destination RSGroup-enabled cluster, the replicated stream (a list of 
WAL.Edits) goes to an arbitrary node in the destination cluster, without 
awareness of RSGroup, and is then routed to the node where the region is 
hosted. This extra hop, where the data is received and routed, can be any node 
in the cluster; no restriction exists to select a node within the same RSGroup.

Implications: an RSGroup owner in a multi-tenant HBase cluster can see 
performance and throughput deviations because of the unpredictability caused 
by replication.

Potential fixes:

a) Select a destination node with RSGroup awareness.

b) Group the WAL.Edit list by region, and then by the region servers to which 
those regions are assigned in the destination. Pass each WAL.Edit list 
directly to its region server, avoiding the extra intermediate hop in the 
destination cluster during replication.
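
Option (b) boils down to a bucketing step. The sketch below uses simplified stand-ins (plain strings for regions and servers) rather than the actual WAL.Entry or replication-endpoint APIs:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupByDestination {
    /**
     * Looks up the hosting server for each entry's region and buckets the
     * entries per destination server, so each bucket can be shipped directly
     * to its server instead of through an arbitrary intermediate node.
     */
    static Map<String, List<String>> bucketByServer(
            List<String> entryRegions, Map<String, String> regionToServer) {
        Map<String, List<String>> buckets = new HashMap<>();
        for (String region : entryRegions) {
            String server = regionToServer.get(region);
            buckets.computeIfAbsent(server, s -> new ArrayList<>()).add(region);
        }
        return buckets;
    }
}
```

The real fix would additionally need the source cluster to learn region locations in the destination (e.g. from its meta), which is the part that today's extra hop performs implicitly.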



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #798: HBASE-23257: Track clusterID in stand by masters

2019-11-07 Thread GitBox
Apache-HBase commented on issue #798: HBASE-23257: Track clusterID in stand by 
masters
URL: https://github.com/apache/hbase/pull/798#issuecomment-551016065
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 25s |  master passed  |
   | :green_heart: |  compile  |   0m 54s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 20s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 39s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 37s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m  4s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m  1s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   4m 57s |  the patch passed  |
   | :green_heart: |  compile  |   0m 54s |  the patch passed  |
   | :green_heart: |  javac  |   0m 54s |  the patch passed  |
   | :green_heart: |  checkstyle  |   1m 20s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 35s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  15m 35s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 34s |  the patch passed  |
   | :green_heart: |  findbugs  |   4m 15s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  | 160m 47s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 36s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 217m 29s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-798/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/798 |
   | JIRA Issue | HBASE-23257 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 4db753d191ca 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-798/out/precommit/personality/provided.sh
 |
   | git revision | master / f58bd4a7ac |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-798/4/testReport/
 |
   | Max. process+thread count | 4313 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-798/4/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23257) Track ClusterID in stand by masters

2019-11-07 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969131#comment-16969131
 ] 

HBase QA commented on HBASE-23257:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
39s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m  
4s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
35s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m 35s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}160m 
47s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}217m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-798/4/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hbase/pull/798 |
| JIRA Issue | HBASE-23257 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 4db753d191ca 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-798/out/precommit/personality/provided.sh
 |
| git revision | master / f58bd4a7ac |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-798/4/testReport/
 |
| Max. process+thread count | 4313 (vs. ulimit of 1) |
| modules | C: hbase-

[GitHub] [hbase] Apache9 commented on issue #797: HBASE-23236 test yetus 0.11.1

2019-11-07 Thread GitBox
Apache9 commented on issue #797: HBASE-23236 test yetus 0.11.1
URL: https://github.com/apache/hbase/pull/797#issuecomment-551019157
 
 
   em...
   
   'The patch generated 1 ASF License warnings'.
   
   The patch does not add new files...
   
   Let me take a look.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache9 commented on issue #797: HBASE-23236 test yetus 0.11.1

2019-11-07 Thread GitBox
Apache9 commented on issue #797: HBASE-23236 test yetus 0.11.1
URL: https://github.com/apache/hbase/pull/797#issuecomment-551022001
 
 
   
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-797/2/artifact/out/archiver/target/rat.txt
   
   '!? excludes'
   
   What's this file? We do not have this file in our repo. @busbey Is it 
generated during yetus processing?
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-23241) TestExecutorService sometimes fail

2019-11-07 Thread Lijin Bin (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijin Bin updated HBASE-23241:
--
Fix Version/s: 2.1.8

> TestExecutorService sometimes fail
> --
>
> Key: HBASE-23241
> URL: https://issues.apache.org/jira/browse/HBASE-23241
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.2.2
>Reporter: Lijin Bin
>Assignee: Lijin Bin
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> {code}
> [INFO] Running org.apache.hadoop.hbase.executor.TestExecutorService
> [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.347 
> s <<< FAILURE! - in org.apache.hadoop.hbase.executor.TestExecutorService
> [ERROR] 
> testSnapshotHandlers(org.apache.hadoop.hbase.executor.TestExecutorService)  
> Time elapsed: 0.086 s  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:<1>
> at 
> org.apache.hadoop.hbase.executor.TestExecutorService.testSnapshotHandlers(TestExecutorService.java:247)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23271) HFileReader get split point handle empty HFile better

2019-11-07 Thread qiang Liu (Jira)
qiang Liu created HBASE-23271:
-

 Summary: HFileReader get split point handle empty HFile better
 Key: HBASE-23271
 URL: https://issues.apache.org/jira/browse/HBASE-23271
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 1.1.7, 3.0.0
Reporter: qiang Liu
Assignee: qiang Liu


Currently, if we call org.apache.hadoop.hbase.io.hfile.HFileReaderImpl#midKey 
on an empty HFile, we get an exception like

 java.io.IOException: HFile empty

Since the function returns an Optional, I think it's better to return 
Optional.empty() instead of throwing an exception.

When a region with multiple column families grows big enough to be split, if 
some column families are empty, we will get warn logs like this:

java.io.IOException: HFile empty at 
org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.midkey(HFileBlockIndex.java:334)

Since the exception is caught, the split logic will go on and get the right 
result.
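
A minimal sketch of the proposed behavior, using a simplified block-key array in place of the real HFileBlockIndex structures (names and types here are illustrative, not the actual reader API):

```java
import java.util.Optional;

public class MidKeySketch {
    /**
     * Returns the middle key of the index, or Optional.empty() when the file
     * has no entries, instead of throwing "HFile empty".
     */
    static Optional<byte[]> midKey(byte[][] blockKeys) {
        if (blockKeys == null || blockKeys.length == 0) {
            return Optional.empty();   // was: throw new IOException("HFile empty")
        }
        return Optional.of(blockKeys[blockKeys.length / 2]);
    }
}
```

Callers computing a split point can then treat an empty Optional as "no split candidate from this store" rather than catching and logging an exception.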



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23248) hbase openjdk11 compile error

2019-11-07 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969184#comment-16969184
 ] 

jackylau commented on HBASE-23248:
--

hi [~busbey], that is the log from my run of “mvn -version && mvn -DskipTests 
package assembly:single > log”

[^log]

> hbase openjdk11 compile error 
> --
>
> Key: HBASE-23248
> URL: https://issues.apache.org/jira/browse/HBASE-23248
> Project: HBase
>  Issue Type: Bug
>  Components: build, java
>Reporter: jackylau
>Priority: Major
>
> i find this 
> [https://stackoverflow.com/questions/49398894/unable-to-compile-simple-java-10-java-11-project-with-maven/55047110#55047110],
>  but it still can not solve this problem
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (default) on project 
> hbase-protocol-shaded: Error creating shaded jar: null: 
> IllegalArgumentException -> [Help 1][ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (default) on project 
> hbase-protocol-shaded: Error creating shaded jar: null: 
> IllegalArgumentException -> [Help 
> 1]org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (default) on 
> project hbase-protocol-shaded: Error creating shaded jar: null at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
>  at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>  at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>  at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
>  at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
>  at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
>  at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
>  at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307) at 
> org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193) at 
> org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106) at 
> org.apache.maven.cli.MavenCli.execute(MavenCli.java:863) at 
> org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288) at 
> org.apache.maven.cli.MavenCli.main(MavenCli.java:199) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
>  at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229) 
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
>  at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)Caused
>  by: org.apache.maven.plugin.MojoExecutionException: Error creating shaded 
> jar: null at 
> org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:546) at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
>  at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
>  ... 20 moreCaused by: java.lang.IllegalArgumentException at 
> org.objectweb.asm.ClassReader.(Unknown Source) at 
> org.objectweb.asm.ClassReader.(Unknown Source) at 
> org.objectweb.asm.ClassReader.(Unknown Source) at 
> org.vafer.jdependency.Clazzpath.addClazzpathUnit(Clazzpath.java:201) at 
> org.vafer.jdependency.Clazzpath.addClazzpathUnit(Clazzpath.java:132) at 
> org.apache.maven.plugins.shade.filter.MinijarFilter.(MinijarFilter.java:95)
>  at 
> org.apache.maven.plugins.shade.mojo.ShadeMojo.getFilters(ShadeMojo.java:826) 
> at org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:434) 
> ... 22 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23248) hbase openjdk11 compile error

2019-11-07 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated HBASE-23248:
-
Attachment: log

> hbase openjdk11 compile error 
> --
>
> Key: HBASE-23248
> URL: https://issues.apache.org/jira/browse/HBASE-23248
> Project: HBase
>  Issue Type: Bug
>  Components: build, java
>Reporter: jackylau
>Priority: Major
> Attachments: log
>
>
> i find this 
> [https://stackoverflow.com/questions/49398894/unable-to-compile-simple-java-10-java-11-project-with-maven/55047110#55047110],
>  but it still can not solve this problem
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (default) on project 
> hbase-protocol-shaded: Error creating shaded jar: null: 
> IllegalArgumentException -> [Help 1][ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (default) on project 
> hbase-protocol-shaded: Error creating shaded jar: null: 
> IllegalArgumentException -> [Help 
> 1]org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (default) on 
> project hbase-protocol-shaded: Error creating shaded jar: null at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
>  at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>  at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>  at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
>  at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
>  at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
>  at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
>  at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307) at 
> org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193) at 
> org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106) at 
> org.apache.maven.cli.MavenCli.execute(MavenCli.java:863) at 
> org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288) at 
> org.apache.maven.cli.MavenCli.main(MavenCli.java:199) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
>  at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229) 
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
>  at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)Caused
>  by: org.apache.maven.plugin.MojoExecutionException: Error creating shaded 
> jar: null at 
> org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:546) at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
>  at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
>  ... 20 moreCaused by: java.lang.IllegalArgumentException at 
> org.objectweb.asm.ClassReader.(Unknown Source) at 
> org.objectweb.asm.ClassReader.(Unknown Source) at 
> org.objectweb.asm.ClassReader.(Unknown Source) at 
> org.vafer.jdependency.Clazzpath.addClazzpathUnit(Clazzpath.java:201) at 
> org.vafer.jdependency.Clazzpath.addClazzpathUnit(Clazzpath.java:132) at 
> org.apache.maven.plugins.shade.filter.MinijarFilter.(MinijarFilter.java:95)
>  at 
> org.apache.maven.plugins.shade.mojo.ShadeMojo.getFilters(ShadeMojo.java:826) 
> at org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:434) 
> ... 22 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23212) Provide config reload for Auto Region Reopen based on storeFile ref count

2019-11-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969198#comment-16969198
 ] 

Hudson commented on HBASE-23212:


Results for branch branch-1
[build #1130 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1130/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1130//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1130//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1130//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Provide config reload for Auto Region Reopen based on storeFile ref count
> -
>
> Key: HBASE-23212
> URL: https://issues.apache.org/jira/browse/HBASE-23212
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.3.0, 1.6.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 1.6.0
>
> Attachments: HBASE-23212.branch-1.000.patch, 
> HBASE-23212.branch-1.000.patch, HBASE-23212.branch-2.000.patch, 
> HBASE-23212.branch-2.000.patch
>
>
> We should provide flexibility to tune max storeFile Ref Count threshold that 
> is considered for auto region reopen as it represents leak on store file. 
> While running some perf tests, user can bring ref count very high if 
> required, but this config change should be dynamic and should not require 
> HMaster restart.





[jira] [Commented] (HBASE-23271) HFileReader get split point handle empty HFile better

2019-11-07 Thread qiang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969203#comment-16969203
 ] 

qiang Liu commented on HBASE-23271:
---

By the way, this also fixes a warn log message that claimed to be about size while it is 
actually about the split key.

> HFileReader get split point handle empty HFile better
> -
>
> Key: HBASE-23271
> URL: https://issues.apache.org/jira/browse/HBASE-23271
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0, 1.1.7
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: easyfix
>
> Currently, if we call org.apache.hadoop.hbase.io.hfile.HFileReaderImpl#midKey 
> on an empty HFile, we get an exception like
>  java.io.IOException: HFile empty
> Since the function returns an Optional, I think it's better to return 
> Optional.empty() instead of throwing an exception.
> When a region with multiple column families grows big enough to be split, 
> any empty column family produces a warn log like this:
> java.io.IOException: HFile empty at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.midkey(HFileBlockIndex.java:334)
> Since the exception is caught, the split logic goes on and gets the right result.
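The suggested change can be sketched as follows. This is a hedged illustration, not the actual patch: `midKey` here is a simplified stand-in for `HFileReaderImpl#midKey`, and the `byte[][]` of block keys stands in for the real block index.

```java
import java.io.IOException;
import java.util.Optional;

// Simplified stand-in for HFileReaderImpl#midKey: instead of throwing
// "HFile empty" for a file with no entries, return Optional.empty() so
// callers (e.g. split-point selection) can skip empty column families
// without catching an exception.
public class MidKeyExample {
    static Optional<byte[]> midKey(byte[][] blockKeys) throws IOException {
        if (blockKeys == null || blockKeys.length == 0) {
            return Optional.empty();  // empty HFile: no split point available
        }
        return Optional.of(blockKeys[blockKeys.length / 2]);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(midKey(new byte[0][]).isPresent());                 // false
        System.out.println(midKey(new byte[][] {{1}, {2}, {3}}).isPresent());  // true
    }
}
```

Callers can then treat an absent split point as "skip this store" rather than wrapping the call in a try/catch.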





[jira] [Updated] (HBASE-23271) HFileReader get split point handle empty HFile better

2019-11-07 Thread qiang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qiang Liu updated HBASE-23271:
--
Attachment: HBASE-23271.patch
Status: Patch Available  (was: Open)

> HFileReader get split point handle empty HFile better
> -
>
> Key: HBASE-23271
> URL: https://issues.apache.org/jira/browse/HBASE-23271
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 1.1.7, 3.0.0
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: easyfix
> Attachments: HBASE-23271.patch
>
>
> Currently, if we call org.apache.hadoop.hbase.io.hfile.HFileReaderImpl#midKey 
> on an empty HFile, we get an exception like
>  java.io.IOException: HFile empty
> Since the function returns an Optional, I think it's better to return 
> Optional.empty() instead of throwing an exception.
> When a region with multiple column families grows big enough to be split, 
> any empty column family produces a warn log like this:
> java.io.IOException: HFile empty at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.midkey(HFileBlockIndex.java:334)
> Since the exception is caught, the split logic goes on and gets the right result.





[jira] [Commented] (HBASE-22480) Get block from BlockCache once and return this block to BlockCache twice make ref count error.

2019-11-07 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969218#comment-16969218
 ] 

HBase QA commented on HBASE-22480:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
23s{color} | {color:green} branch-2.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} branch-2.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} branch-2.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
39s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.2 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
33s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} branch-2.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
50s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 50s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}262m 38s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}320m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.procedure.TestSCPWithReplicas |
|   | hadoop.hbase.client.TestSnapshotCloneIndependence |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1001/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22480 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12985158/HBASE-22480-branch-2.2-v2.patch
 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 45ae0d5c3c96 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2.2 / c7a84a96f6 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apac

[jira] [Commented] (HBASE-22480) Get block from BlockCache once and return this block to BlockCache twice make ref count error.

2019-11-07 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969252#comment-16969252
 ] 

HBase QA commented on HBASE-22480:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 7s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m 
47s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
45s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 5s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 53s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}299m 32s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}363m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
|   | hadoop.hbase.master.procedure.TestSCPWithReplicasWithoutZKCoordinated |
|   | 
hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint |
|   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1002/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22480 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12985159/HBASE-22480-master-v7.patch
 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux bf92d0684706 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
| Build tool | ma

[jira] [Commented] (HBASE-23085) Network and Data related Actions

2019-11-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969277#comment-16969277
 ] 

Hudson commented on HBASE-23085:


Results for branch master
[build #1529 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1529/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1529//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1529//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1529//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Network and Data related Actions
> 
>
> Key: HBASE-23085
> URL: https://issues.apache.org/jira/browse/HBASE-23085
> Project: HBase
>  Issue Type: Sub-task
>  Components: integration tests
>Reporter: Szabolcs Bukros
>Assignee: Szabolcs Bukros
>Priority: Minor
> Fix For: 3.0.0
>
>
> Add additional actions to:
>  * manipulate network packages with tc (reorder, loose,...)
>  * add CPU load
>  * fill the disk
>  * corrupt or delete regionserver data files
> Create new monkey factories for the new actions.





[jira] [Commented] (HBASE-22980) HRegionPartioner getPartition() method incorrectly partitions the regions of the table.

2019-11-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969278#comment-16969278
 ] 

Hudson commented on HBASE-22980:


Results for branch master
[build #1529 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1529/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1529//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1529//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1529//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> HRegionPartioner getPartition() method incorrectly partitions the regions of 
> the table.
> ---
>
> Key: HBASE-22980
> URL: https://issues.apache.org/jira/browse/HBASE-22980
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Shardul Singh
>Assignee: Shardul Singh
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> *Problem:*
> The partitioner class HRegionPartitioner used in HBase MapReduce jobs has a 
> method getPartition(). In getPartition(), there is a check for the case where 
> there are fewer reducers than regions. That check is incorrect: for a rowKey 
> in the last region (say the nth region), getPartition() should return (n-1), 
> but it does not, because the last region falls into the "fewer reducers than 
> regions" branch and gets a hash-derived value instead.
> So if a client uses this class as the partitioner in an HBase MapReduce job, 
> the regions are partitioned incorrectly because rowKeys in the last region 
> are not routed to the last partition.
> [https://github.com/apache/hbase/blob/fbd5b5e32753104f88600b0f4c803ab5659bce64/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.java#L92]
> Consider the following scenario: the table has 5 regions, so partitions = 5, 
> and the number of reducers is also 5.
> In this case the reducers < regions check should not fire.
> But for the last region (i = 4, the 5th region), getPartition() should return 
> 4 and instead returns 2, because the condition (i >= numPartitions-1) is true 
> even though reducers = regions. So the condition is incorrect.
>  
> *Solution:*
> Instead of
>   {code} if (i >= numPartitions-1) {code} 
> It should be
>{code} if (i >= numPartitions){  {code}
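The off-by-one above can be demonstrated in isolation. This is a hedged sketch, not HRegionPartitioner itself: `i` is the index of the region holding the row key, `numPartitions` the number of reducers, and `hashFallback` stands in for the real implementation's hashing path when there are genuinely fewer reducers than regions.

```java
// Sketch of the getPartition() off-by-one described in HBASE-22980.
public class PartitionExample {
    // Stand-in for the "fewer reducers than regions" path, which hashes
    // the region index modulo the partition count.
    static int hashFallback(int i, int numPartitions) {
        return Integer.toString(i).hashCode() % numPartitions;
    }

    static int getPartitionBuggy(int i, int numPartitions) {
        if (i >= numPartitions - 1) {   // off-by-one: also fires for the last region
            return hashFallback(i, numPartitions);
        }
        return i;
    }

    static int getPartitionFixed(int i, int numPartitions) {
        if (i >= numPartitions) {       // only fires when reducers < regions
            return hashFallback(i, numPartitions);
        }
        return i;
    }

    public static void main(String[] args) {
        // 5 regions, 5 reducers: the last region (i = 4) should map to partition 4.
        System.out.println(getPartitionBuggy(4, 5));  // 2, as reported in the issue
        System.out.println(getPartitionFixed(4, 5));  // 4
    }
}
```

With the fixed condition, the hashing branch is reached only when reducers really are fewer than regions, so every region keeps its own partition whenever reducers = regions.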





[jira] [Commented] (HBASE-23270) Inter-cluster replication is unaware destination peer cluster's RSGroup to push the WALEdits

2019-11-07 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969321#comment-16969321
 ] 

Anoop Sam John commented on HBASE-23270:


bq.Select a destination node having RSGroup awareness
This should be the simpler one, right? Please prepare a patch for the master branch 
and make a PR.

> Inter-cluster replication is unaware destination peer cluster's RSGroup to 
> push the WALEdits
> 
>
> Key: HBASE-23270
> URL: https://issues.apache.org/jira/browse/HBASE-23270
> Project: HBase
>  Issue Type: Bug
>Reporter: Pradeep
>Priority: Major
>
> In a source RSGroup-enabled HBase cluster replicating to a destination 
> RSGroup-enabled cluster, the replication stream of List goes to any node in 
> the destination cluster without awareness of RSGroups, and is then routed to 
> the node where the region is hosted. This extra hop, where the data is 
> received and routed, can land on any node in the cluster; no restriction 
> exists to select a node within the same RSGroup.
> Implications: an RSGroup owner in a multi-tenant HBase cluster can see 
> performance and throughput deviations because of this unpredictability caused 
> by replication.
> Potential fix: options:
> a) Select a destination node having RSGroup awareness
> b) Group the WAL.Edit list by region and then by the region servers to which 
> those regions are assigned in the destination. Pass each WAL.Edit list 
> directly to its region server to avoid the extra intermediate hop in the 
> destination cluster during replication.
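The grouping step in option (b) can be sketched generically. This is a hedged illustration only: Strings stand in for WAL entries and server names, and `locateServer` is a hypothetical lookup from an entry to the region server hosting its target region.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Group replication entries by the region server that hosts their target
// region, so each batch can be shipped directly without an extra hop.
public class GroupBySinkExample {
    static Map<String, List<String>> groupByServer(
            List<String> entries, Function<String, String> locateServer) {
        Map<String, List<String>> batches = new HashMap<>();
        for (String entry : entries) {
            // One batch per destination server, in arrival order.
            batches.computeIfAbsent(locateServer.apply(entry), k -> new ArrayList<>())
                   .add(entry);
        }
        return batches;
    }

    public static void main(String[] args) {
        // Pretend rows starting with 'a'..'m' live on rs1 and the rest on rs2.
        Function<String, String> locate = e -> e.charAt(0) <= 'm' ? "rs1" : "rs2";
        Map<String, List<String>> out =
            groupByServer(Arrays.asList("apple", "zebra", "mango"), locate);
        System.out.println(out.get("rs1"));  // [apple, mango]
        System.out.println(out.get("rs2"));  // [zebra]
    }
}
```

In the real fix the lookup would come from the destination cluster's region location cache, and stale locations would need a retry path, which this sketch omits.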





[jira] [Commented] (HBASE-23271) HFileReader get split point handle empty HFile better

2019-11-07 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969387#comment-16969387
 ] 

HBase QA commented on HBASE-23271:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
1s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
56s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m 
31s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
54s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 52s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}187m  
0s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}247m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1003/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-23271 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12985208/HBASE-23271.patch |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 92c520a68ffb 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / f58bd4a7ac |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1003/testReport/ |
| Max. process+thread count | 4876 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://

[GitHub] [hbase] busbey commented on issue #797: HBASE-23236 test yetus 0.11.1

2019-11-07 Thread GitBox
busbey commented on issue #797: HBASE-23236 test yetus 0.11.1
URL: https://github.com/apache/hbase/pull/797#issuecomment-551169695
 
 
   that looks like the temp file our personality uses for fetching the 
untrustworthy tests. not sure why we'd download it into the component directory 
instead of the patch working directory, but if we do it would explain why it's 
there. (it wouldn't explain why we're just seeing this now)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] busbey commented on issue #797: HBASE-23236 test yetus 0.11.1

2019-11-07 Thread GitBox
busbey commented on issue #797: HBASE-23236 test yetus 0.11.1
URL: https://github.com/apache/hbase/pull/797#issuecomment-551170732
 
 
   yeah, it's in 
[get_include_exclude_tests_arg](https://github.com/apache/hbase/blob/master/dev-support/hbase-personality.sh#L270).
 should probably be reading/writing `${PATCH_DIR}/excludes`




[GitHub] [hbase] busbey commented on issue #797: HBASE-23236 test yetus 0.11.1

2019-11-07 Thread GitBox
busbey commented on issue #797: HBASE-23236 test yetus 0.11.1
URL: https://github.com/apache/hbase/pull/797#issuecomment-551171795
 
 
   Also looks like we don't clean up the excludes file when something goes 
wrong with fetching it but wget still writes it out. From the QA 
run here:
   
   ```
   20:59:36  [Thu Nov  7 02:59:34 UTC 2019 INFO]: Personality: patch unit
   20:59:36  [Thu Nov  7 02:59:34 UTC 2019 INFO]: EXCLUDE_TESTS_URL=
   20:59:36  [Thu Nov  7 02:59:34 UTC 2019 INFO]: INCLUDE_TESTS_URL=
   20:59:36  --2019-11-07 02:59:34--  
https://builds.apache.org/job/HBase-Find-Flaky-Tests/job/HBASE-23236/lastSuccessfulBuild/artifact/excludes/
   20:59:36  Resolving builds.apache.org (builds.apache.org)... 
195.201.213.130, 2a01:4f8:c0:2cc9::2
   20:59:36  Connecting to builds.apache.org 
(builds.apache.org)|195.201.213.130|:443... connected.
   20:59:36  HTTP request sent, awaiting response... 404 
   20:59:36  2019-11-07 02:59:35 ERROR 404: (no description).
   20:59:36  
   20:59:36  Wget error 8 in fetching excludes file from url 
https://builds.apache.org/job/HBase-Find-Flaky-Tests/job/HBASE-23236/lastSuccessfulBuild/artifact/excludes/.
 Ignoring and proceeding.
   20:59:39  cd 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-797/src
   20:59:39  /usr/share/maven/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-797/yetus-m2/hbase-HBASE-23236-patch-1
 -DHBasePatchProcess -PrunAllTests clean test -fae > 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-797/out/patch-unit-root.txt
 2>&1
   ```
   
   we can see we're exactly on such a path.




[GitHub] [hbase] xcangCRM commented on a change in pull request #796: HBASE-23251 - Add Column Family and Table Names to HFileContext and u…

2019-11-07 Thread GitBox
xcangCRM commented on a change in pull request #796: HBASE-23251 - Add Column 
Family and Table Names to HFileContext and u…
URL: https://github.com/apache/hbase/pull/796#discussion_r343837277
 
 

 ##
 File path: 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
 ##
 @@ -35,10 +35,12 @@
 @InterfaceAudience.Private
 public class HFileContext implements HeapSize, Cloneable {
   public static final int FIXED_OVERHEAD = ClassSize.align(ClassSize.OBJECT +
-  // Algorithm, checksumType, encoding, Encryption.Context, hfileName 
reference
+  // Algorithm, checksumType, encoding, Encryption.Context, hfileName 
reference,
   5 * ClassSize.REFERENCE + 2 * Bytes.SIZEOF_INT +
   // usesHBaseChecksum, includesMvcc, includesTags and compressTags
-  4 * Bytes.SIZEOF_BOOLEAN + Bytes.SIZEOF_LONG);
+  4 * Bytes.SIZEOF_BOOLEAN + Bytes.SIZEOF_LONG +
+  //byte[] headers for column family and table name
+  2 * ClassSize.ARRAY + 2 * ClassSize.REFERENCE);
 
 Review comment:
   +1
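The FIXED_OVERHEAD sum under review can be re-derived with illustrative numbers. The constants below are assumptions for a typical 64-bit JVM (16-byte object header, 8-byte references, 24-byte array header, 8-byte alignment), not HBase's actual ClassSize values, so treat the result as a sketch of the accounting, not the real figure.

```java
// Illustrative re-derivation of HFileContext.FIXED_OVERHEAD with assumed
// per-JVM constants; mirrors the term-by-term comments in the patch.
public class OverheadExample {
    static final int OBJECT = 16, REFERENCE = 8, ARRAY = 24;
    static final int SIZEOF_INT = 4, SIZEOF_BOOLEAN = 1, SIZEOF_LONG = 8;

    // Round up to the nearest 8-byte boundary, like ClassSize.align.
    static long align(long n) { return (n + 7) & ~7L; }

    public static void main(String[] args) {
        long overhead = align(OBJECT
            // Algorithm, checksumType, encoding, Encryption.Context, hfileName
            + 5 * REFERENCE + 2 * SIZEOF_INT
            // usesHBaseChecksum, includesMvcc, includesTags, compressTags
            + 4 * SIZEOF_BOOLEAN + SIZEOF_LONG
            // byte[] headers plus references for column family and table name
            + 2 * ARRAY + 2 * REFERENCE);
        System.out.println(overhead);  // 144 with these assumed constants
    }
}
```

The point of the patch hunk is the last line of the sum: the two new `byte[]` fields cost both a reference in the object and an array header per buffer, which is why `2 * ClassSize.ARRAY + 2 * ClassSize.REFERENCE` is added.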




[GitHub] [hbase] busbey commented on a change in pull request #785: HBASE-23239 Reporting on status of backing MOB files from client-facing cells

2019-11-07 Thread GitBox
busbey commented on a change in pull request #785: HBASE-23239 Reporting on 
status of backing MOB files from client-facing cells
URL: https://github.com/apache/hbase/pull/785#discussion_r343838427
 
 

 ##
 File path: 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/MobRefReporter.java
 ##
 @@ -0,0 +1,485 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mob.mapreduce;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.UUID;
+import java.util.Base64;
+
+import com.google.protobuf.ServiceException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.io.HFileLink;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
+import org.apache.hadoop.hbase.mapreduce.TableMapper;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.hbase.mob.MobConstants;
+import org.apache.hadoop.hbase.mob.MobUtils;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.HFileArchiveUtil;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Reducer;
+import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.yetus.audience.InterfaceAudience;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * Scans a given table + CF for all mob reference cells to get the list of 
backing mob files.
+ * For each referenced file we attempt to verify that said file is on the 
FileSystem in a place
+ * that the MOB system will look when attempting to resolve the actual value.
+ *
+ * The job includes counters that can help provide a rough sketch of the mob 
data.
+ *
+ * 
+ * Map-Reduce Framework
+ * Map input records=1
+ * ...
+ * Reduce output records=99
+ * ...
+ * CELLS_PER_ROW_DIGITS
+ * 1=1
+ * MOB
+ * NUM_CELLS=52364
+ * PROBLEM
+ * IMPACTED_ROWS=338
+ * MOB_FILES=2
+ * PROBLEM_ROWS_PER_FILE_DIGITS
+ * 3=2
+ * SIZE_PER_CELL_DIGITS
+ * 5=627
+ * 6=51392
+ * 7=345
+ * SIZE_PER_ROW_DIGITS
+ * 6=6838
+ * 7=3162
+ * 
+ *
+ *   * Map-Reduce Framework:Map input records - the number of rows with mob 
references
+ *   * Map-Reduce Framework:Reduce output records - the number of unique 
hfiles referenced
+ *   * MOB:NUM_CELLS - the total number of mob reference cells
+ *   * PROBLEM:IMPACTED_ROWS - the number of rows that reference hfiles with 
an issue
+ *   * PROBLEM:MOB_FILES - the number of unique hfiles that have an issue
+ *   * CELLS_PER_ROW_DIGITS: - this counter group gives a histogram of the 
order of magnitude of the
+ * number of cells in a given row by grouping by the number of digits 
used in each count.
+ * This allows us to see more about the distribution of cells than 
what we can determine
+ * with just the cell count and the row count. In this particular 
example we can see that
+ * all of our rows have somewhere between 1 - 9 cells.
+ *   * PROBLEM_ROWS_PER_FILE_DIGITS: -
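The digit-count histogram described in the javadoc above (grouping counts by their number of decimal digits to show order of magnitude) can be sketched as follows. This is a hypothetical standalone illustration of the bucketing idea, not code from the patch:

```java
import java.util.Map;
import java.util.TreeMap;

public class DigitHistogram {
    // Group counts by their number of decimal digits, in the style of the
    // CELLS_PER_ROW_DIGITS counters: bucket 1 holds counts 1-9, bucket 2
    // holds 10-99, and so on.
    static Map<Integer, Long> digitHistogram(long[] counts) {
        Map<Integer, Long> histogram = new TreeMap<>();
        for (long c : counts) {
            int digits = Long.toString(c).length(); // order-of-magnitude bucket
            histogram.merge(digits, 1L, Long::sum);
        }
        return histogram;
    }

    public static void main(String[] args) {
        // rows with 3, 27, 400, and 5 cells fall in buckets 1, 2, 3, 1
        System.out.println(digitHistogram(new long[] {3, 27, 400, 5}));
    }
}
```

This keeps the counter cardinality tiny (at most one counter per digit count) while still revealing the shape of the distribution, which is why the job reports digits rather than raw per-row counts.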

[GitHub] [hbase] gjacoby126 commented on issue #796: HBASE-23251 - Add Column Family and Table Names to HFileContext and u…

2019-11-07 Thread GitBox
gjacoby126 commented on issue #796: HBASE-23251 - Add Column Family and Table 
Names to HFileContext and u…
URL: https://github.com/apache/hbase/pull/796#issuecomment-551238094
 
 
   I still think ClassSize's calculations are slightly wrong, but now they're 
self-consistent with the self-reported heapSize() after modifying the latter. 
   
   Also added the test that @xcangCRM requested. 




[GitHub] [hbase] cbaenziger commented on a change in pull request #785: HBASE-23239 Reporting on status of backing MOB files from client-facing cells

2019-11-07 Thread GitBox
cbaenziger commented on a change in pull request #785: HBASE-23239 Reporting on 
status of backing MOB files from client-facing cells
URL: https://github.com/apache/hbase/pull/785#discussion_r343853461
 
 

 ##
 File path: 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/MobRefReporter.java
 ##
 @@ -0,0 +1,485 @@

[GitHub] [hbase] cbaenziger commented on a change in pull request #785: HBASE-23239 Reporting on status of backing MOB files from client-facing cells

2019-11-07 Thread GitBox
cbaenziger commented on a change in pull request #785: HBASE-23239 Reporting on 
status of backing MOB files from client-facing cells
URL: https://github.com/apache/hbase/pull/785#discussion_r343857382
 
 

 ##
 File path: 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/MobRefReporter.java
 ##
 @@ -0,0 +1,501 @@

[GitHub] [hbase] cbaenziger commented on a change in pull request #785: HBASE-23239 Reporting on status of backing MOB files from client-facing cells

2019-11-07 Thread GitBox
cbaenziger commented on a change in pull request #785: HBASE-23239 Reporting on 
status of backing MOB files from client-facing cells
URL: https://github.com/apache/hbase/pull/785#discussion_r343741995
 
 

 ##
 File path: 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/MobRefReporter.java
 ##
 @@ -0,0 +1,501 @@

[GitHub] [hbase] busbey commented on a change in pull request #785: HBASE-23239 Reporting on status of backing MOB files from client-facing cells

2019-11-07 Thread GitBox
busbey commented on a change in pull request #785: HBASE-23239 Reporting on 
status of backing MOB files from client-facing cells
URL: https://github.com/apache/hbase/pull/785#discussion_r343860898
 
 

 ##
 File path: 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/MobRefReporter.java
 ##
 @@ -0,0 +1,485 @@

[GitHub] [hbase] busbey commented on a change in pull request #785: HBASE-23239 Reporting on status of backing MOB files from client-facing cells

2019-11-07 Thread GitBox
busbey commented on a change in pull request #785: HBASE-23239 Reporting on 
status of backing MOB files from client-facing cells
URL: https://github.com/apache/hbase/pull/785#discussion_r343861532
 
 

 ##
 File path: 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/MobRefReporter.java
 ##
 @@ -0,0 +1,501 @@

[GitHub] [hbase] busbey commented on a change in pull request #785: HBASE-23239 Reporting on status of backing MOB files from client-facing cells

2019-11-07 Thread GitBox
busbey commented on a change in pull request #785: HBASE-23239 Reporting on 
status of backing MOB files from client-facing cells
URL: https://github.com/apache/hbase/pull/785#discussion_r343862295
 
 

 ##
 File path: 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/MobRefReporter.java
 ##
 @@ -0,0 +1,501 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mob.mapreduce;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Base64;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.UUID;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.io.HFileLink;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.hbase.mapreduce.TableMapper;
+import org.apache.hadoop.hbase.mob.MobConstants;
+import org.apache.hadoop.hbase.mob.MobUtils;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.HFileArchiveUtil;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Reducer;
+import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.yetus.audience.InterfaceAudience;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * Scans a given table + CF for all mob reference cells to get the list of backing mob files.
+ * For each referenced file we attempt to verify that said file is on the FileSystem in a place
+ * that the MOB system will look when attempting to resolve the actual value.
+ *
+ * The job includes counters that can help provide a rough sketch of the mob data.
+ *
+ * 
+ * Map-Reduce Framework
+ * Map input records=1
+ * ...
+ * Reduce output records=99
+ * ...
+ * CELLS_PER_ROW_DIGITS
+ * 1=1
+ * MOB
+ * NUM_CELLS=52364
+ * PROBLEM
+ * IMPACTED_ROWS=338
+ * MOB_FILES=2
+ * PROBLEM_ROWS_PER_FILE_DIGITS
+ * 3=2
+ * SIZE_PER_CELL_DIGITS
+ * 5=627
+ * 6=51392
+ * 7=345
+ * SIZE_PER_ROW_DIGITS
+ * 6=6838
+ * 7=3162
+ * 
+ *
+ *   * Map-Reduce Framework:Map input records - the number of rows with mob references
+ *   * Map-Reduce Framework:Reduce output records - the number of unique hfiles referenced
+ *   * MOB:NUM_CELLS - the total number of mob reference cells
+ *   * PROBLEM:IMPACTED_ROWS - the number of rows that reference hfiles with an issue
+ *   * PROBLEM:MOB_FILES - the number of unique hfiles that have an issue
+ *   * CELLS_PER_ROW_DIGITS: - this counter group gives a histogram of the order of magnitude of the
+ * number of cells in a given row by grouping by the number of digits used in each count.
+ * This allows us to see more about the distribution of cells than what we can determine
+ * with just the cell count and the row count. In this particular example we can see that
+ * all of our rows have somewhere between 1 - 9 cells.
+ *   * PROBLEM_ROWS_PER_FILE_DIGITS: - this counter group gives a histogram of the or
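
The *_DIGITS counter groups in the javadoc above bucket each count by its number of decimal digits, i.e. by order of magnitude. The following is a minimal, hypothetical Java sketch of that bucketing rule; the class name, method name, and sample data are illustrative and are not taken from the patch:

```java
import java.util.Map;
import java.util.TreeMap;

/**
 * Sketch (not from the patch) of the rule behind the *_DIGITS counter
 * groups: each count is bucketed by its number of decimal digits,
 * producing an order-of-magnitude histogram.
 */
public class DigitHistogramSketch {
  /** Number of decimal digits in a positive count, e.g. 52364 -> 5. */
  static int digits(long count) {
    return String.valueOf(count).length();
  }

  public static void main(String[] args) {
    // Hypothetical per-row cell counts.
    long[] cellsPerRow = {3, 7, 52364, 627, 9};
    Map<Integer, Long> histogram = new TreeMap<>();
    for (long count : cellsPerRow) {
      // Increment the bucket keyed by digit count, like a DIGITS counter.
      histogram.merge(digits(count), 1L, Long::sum);
    }
    System.out.println(histogram); // e.g. {1=3, 3=1, 5=1}
  }
}
```

With this rule, a `1=1` line under CELLS_PER_ROW_DIGITS means one row whose cell count has a single digit (1 - 9 cells), matching the reading given in the javadoc.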

[GitHub] [hbase] cbaenziger commented on a change in pull request #785: HBASE-23239 Reporting on status of backing MOB files from client-facing cells

2019-11-07 Thread GitBox
cbaenziger commented on a change in pull request #785: HBASE-23239 Reporting on 
status of backing MOB files from client-facing cells
URL: https://github.com/apache/hbase/pull/785#discussion_r343867827
 
 

 ##
 File path: 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/MobRefReporter.java
 ##
 @@ -0,0 +1,501 @@

[GitHub] [hbase] cbaenziger commented on a change in pull request #785: HBASE-23239 Reporting on status of backing MOB files from client-facing cells

2019-11-07 Thread GitBox
cbaenziger commented on a change in pull request #785: HBASE-23239 Reporting on 
status of backing MOB files from client-facing cells
URL: https://github.com/apache/hbase/pull/785#discussion_r343868989
 
 

 ##
 File path: 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/MobRefReporter.java
 ##
 @@ -0,0 +1,485 @@

[GitHub] [hbase] cbaenziger commented on a change in pull request #785: HBASE-23239 Reporting on status of backing MOB files from client-facing cells

2019-11-07 Thread GitBox
cbaenziger commented on a change in pull request #785: HBASE-23239 Reporting on 
status of backing MOB files from client-facing cells
URL: https://github.com/apache/hbase/pull/785#discussion_r343868153
 
 

 ##
 File path: 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/MobRefReporter.java
 ##
 @@ -0,0 +1,501 @@

[GitHub] [hbase] busbey commented on a change in pull request #785: HBASE-23239 Reporting on status of backing MOB files from client-facing cells

2019-11-07 Thread GitBox
busbey commented on a change in pull request #785: HBASE-23239 Reporting on 
status of backing MOB files from client-facing cells
URL: https://github.com/apache/hbase/pull/785#discussion_r343869608
 
 

 ##
 File path: 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/MobRefReporter.java
 ##
 @@ -0,0 +1,501 @@

[GitHub] [hbase] busbey commented on a change in pull request #785: HBASE-23239 Reporting on status of backing MOB files from client-facing cells

2019-11-07 Thread GitBox
busbey commented on a change in pull request #785: HBASE-23239 Reporting on 
status of backing MOB files from client-facing cells
URL: https://github.com/apache/hbase/pull/785#discussion_r343869690
 
 

 ##
 File path: 
hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/MobRefReporter.java
 ##
 @@ -0,0 +1,501 @@

[GitHub] [hbase] busbey opened a new pull request #804: HBASE-23228 Allow for jdk8 specific modules on branch-1 in precommit/nightly testing

2019-11-07 Thread GitBox
busbey opened a new pull request #804: HBASE-23228 Allow for jdk8 specific 
modules on branch-1 in precommit/nightly testing
URL: https://github.com/apache/hbase/pull/804
 
 
   Testing locally with this version of our yetus personality and #251 for 
HBASE-22114
   
   * using jdk7 properly ignores the added module
   * using jdk8 properly shows the issues present in it according to precommit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] ndimiduk commented on issue #775: HBASE-23230 Enforce member visibility in HRegionServer

2019-11-07 Thread GitBox
ndimiduk commented on issue #775: HBASE-23230 Enforce member visibility in 
HRegionServer
URL: https://github.com/apache/hbase/pull/775#issuecomment-551280797
 
 
   Test suite passes for me when I run locally, flakies excluded, and `-fae 
-Dsurefire.rerunFailingTestsCount=2`. Feeling comfortable to commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #804: HBASE-23228 Allow for jdk8 specific modules on branch-1 in precommit/nightly testing

2019-11-07 Thread GitBox
Apache-HBase commented on issue #804: HBASE-23228 Allow for jdk8 specific 
modules on branch-1 in precommit/nightly testing
URL: https://github.com/apache/hbase/pull/804#issuecomment-551284626
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | :blue_heart: |  reexec  |   0m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :blue_heart: |  shelldocs  |   0m  0s |  Shelldocs was not available.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   ||| _ master Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 36s |  Maven dependency ordering for branch  |
   ||| _ Patch Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 10s |  Maven dependency ordering for patch  |
   | :broken_heart: |  shellcheck  |   0m  4s |  The patch generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3)  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   ||| _ Other Tests _ |
   | :blue_heart: |  asflicense  |   0m  0s |  ASF License check generated no output?  |
   |  |   |   2m 22s |   |
   
   
   | Subsystem | Report/Notes |
   |--------------:|:-------------|
   | Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-804/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/804 |
   | Optional Tests | dupname asflicense shellcheck shelldocs |
   | uname | Linux 1e5c4c6ee979 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-804/out/precommit/personality/provided.sh |
   | git revision | master / f58bd4a7ac |
   | shellcheck | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-804/1/artifact/out/diff-patch-shellcheck.txt |
   | Max. process+thread count | 52 (vs. ulimit of 1) |
   | modules | C:  U:  |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-804/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (HBASE-23272) Fix link in Developer guide to "code review checklist"

2019-11-07 Thread Nick Dimiduk (Jira)
Nick Dimiduk created HBASE-23272:


 Summary: Fix link in Developer guide to "code review checklist"
 Key: HBASE-23272
 URL: https://issues.apache.org/jira/browse/HBASE-23272
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Nick Dimiduk


The destination of the link "code review checklist" in 
https://hbase.apache.org/book.html#_reject has been moved.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] ndimiduk opened a new pull request #805: HBASE-23272 Fix link in Developer guide to "code review checklist"

2019-11-07 Thread GitBox
ndimiduk opened a new pull request #805: HBASE-23272 Fix link in Developer 
guide to "code review checklist"
URL: https://github.com/apache/hbase/pull/805
 
 
   I assume the Hadoop project moved their wiki around a bit.




[GitHub] [hbase] Apache-HBase commented on issue #805: HBASE-23272 Fix link in Developer guide to "code review checklist"

2019-11-07 Thread GitBox
Apache-HBase commented on issue #805: HBASE-23272 Fix link in Developer guide 
to "code review checklist"
URL: https://github.com/apache/hbase/pull/805#issuecomment-551308261
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 31s |  master passed  |
   | :blue_heart: |  refguide  |   6m  5s |  branch has no errors when building 
the reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   4m 58s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :blue_heart: |  refguide  |   5m 34s |  patch has no errors when building 
the reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   ||| _ Other Tests _ |
   | :green_heart: |  asflicense  |   0m 19s |  The patch does not generate ASF 
License warnings.  |
   |  |   |  24m 12s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-805/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/805 |
   | Optional Tests | dupname asflicense refguide |
   | uname | Linux 192ee7f3ca6e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-805/out/precommit/personality/provided.sh
 |
   | git revision | master / f58bd4a7ac |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-805/1/artifact/out/branch-site/book.html
 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-805/1/artifact/out/patch-site/book.html
 |
   | Max. process+thread count | 96 (vs. ulimit of 1) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-805/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Work started] (HBASE-23272) Fix link in Developer guide to "code review checklist"

2019-11-07 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-23272 started by Nick Dimiduk.

> Fix link in Developer guide to "code review checklist"
> --
>
> Key: HBASE-23272
> URL: https://issues.apache.org/jira/browse/HBASE-23272
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
>
> The destination of the link "code review checklist" in 
> https://hbase.apache.org/book.html#_reject has been moved.





[jira] [Assigned] (HBASE-23272) Fix link in Developer guide to "code review checklist"

2019-11-07 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk reassigned HBASE-23272:


Assignee: Nick Dimiduk






[GitHub] [hbase] saintstack commented on a change in pull request #775: HBASE-23230 Enforce member visibility in HRegionServer

2019-11-07 Thread GitBox
saintstack commented on a change in pull request #775: HBASE-23230 Enforce 
member visibility in HRegionServer
URL: https://github.com/apache/hbase/pull/775#discussion_r343924825
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 ##
 @@ -272,10 +268,9 @@
   private final Cache executedRegionProcedures =
   CacheBuilder.newBuilder().expireAfterAccess(600, 
TimeUnit.SECONDS).build();
 
-  // Cache flushing
-  protected MemStoreFlusher cacheFlusher;
+  private MemStoreFlusher cacheFlusher;
 
-  protected HeapMemoryManager hMemManager;
+  private HeapMemoryManager hMemManager;
 
 Review comment:
   What did you figure reading fine-print?




[GitHub] [hbase] saintstack commented on a change in pull request #775: HBASE-23230 Enforce member visibility in HRegionServer

2019-11-07 Thread GitBox
saintstack commented on a change in pull request #775: HBASE-23230 Enforce 
member visibility in HRegionServer
URL: https://github.com/apache/hbase/pull/775#discussion_r343925239
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 ##
 @@ -3807,10 +3688,14 @@ public SecureBulkLoadManager 
getSecureBulkLoadManager() {
   }
 
   @Override
-  public EntityLock regionLock(List regionInfos, String 
description, Abortable abort)
-  throws IOException {
-return new LockServiceClient(conf, lockStub, 
asyncClusterConnection.getNonceGenerator())
-  .regionLock(regionInfos, description, abort);
+  public EntityLock regionLock(
+  final List regionInfo,
+  final String description,
+  final Abortable abort
+  ) {
 
 Review comment:
   Yeah, code looks unusual. See code around you.




[GitHub] [hbase] saintstack commented on a change in pull request #775: HBASE-23230 Enforce member visibility in HRegionServer

2019-11-07 Thread GitBox
saintstack commented on a change in pull request #775: HBASE-23230 Enforce 
member visibility in HRegionServer
URL: https://github.com/apache/hbase/pull/775#discussion_r343925457
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 ##
 @@ -272,10 +268,9 @@
   private final Cache executedRegionProcedures =
   CacheBuilder.newBuilder().expireAfterAccess(600, 
TimeUnit.SECONDS).build();
 
-  // Cache flushing
-  protected MemStoreFlusher cacheFlusher;
+  private MemStoreFlusher cacheFlusher;
 
-  protected HeapMemoryManager hMemManager;
+  private HeapMemoryManager hMemManager;
 
 Review comment:
   I see you added it on the end.




[GitHub] [hbase] ndimiduk merged pull request #775: HBASE-23230 Enforce member visibility in HRegionServer

2019-11-07 Thread GitBox
ndimiduk merged pull request #775: HBASE-23230 Enforce member visibility in 
HRegionServer
URL: https://github.com/apache/hbase/pull/775
 
 
   




[jira] [Updated] (HBASE-23230) Enforce member visibility in HRegionServer

2019-11-07 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-23230:
-
Fix Version/s: 3.0.0

> Enforce member visibility in HRegionServer
> --
>
> Key: HBASE-23230
> URL: https://issues.apache.org/jira/browse/HBASE-23230
> Project: HBase
>  Issue Type: Task
>  Components: regionserver
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0
>
>
> {{HRegionServer}} leaks member variables quite a lot. Lock down our interface 
> -- member variables should be {{private}} (unless intentionally {{protected}} 
> for subclasses) and use getters with proper accessibility constraints.



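The HBASE-23230 description above asks that fields be private and reached through getters with the narrowest visibility that works. A tiny illustrative sketch of that pattern follows; ExampleServer is a hypothetical class invented for this example, not HRegionServer.

```java
public class ExampleServer {
    // Before a cleanup like this, a field such as this one might be
    // `protected` or package-private, letting subclasses and same-package
    // classes mutate server state directly.
    private final long startTime;

    public ExampleServer(long startTime) {
        this.startTime = startTime;
    }

    // Collaborators get read-only access through a getter; no setter is
    // exposed, so "startTime never changes" is enforced by the compiler
    // rather than by convention.
    public long getStartTime() {
        return startTime;
    }

    public static void main(String[] args) {
        ExampleServer server = new ExampleServer(42L);
        System.out.println(server.getStartTime());
    }
}
```

Making the field final and private turns accidental external mutation into a compile error, which is the point of locking down member visibility.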


[GitHub] [hbase] ndimiduk merged pull request #805: HBASE-23272 Fix link in Developer guide to "code review checklist"

2019-11-07 Thread GitBox
ndimiduk merged pull request #805: HBASE-23272 Fix link in Developer guide to 
"code review checklist"
URL: https://github.com/apache/hbase/pull/805
 
 
   




[jira] [Updated] (HBASE-23272) Fix link in Developer guide to "code review checklist"

2019-11-07 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-23272:
-
Fix Version/s: 3.0.0

> Fix link in Developer guide to "code review checklist"
> --
>
> Key: HBASE-23272
> URL: https://issues.apache.org/jira/browse/HBASE-23272
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 3.0.0
>
>
> The destination of the link "code review checklist" in 
> https://hbase.apache.org/book.html#_reject has been moved.





[jira] [Resolved] (HBASE-23272) Fix link in Developer guide to "code review checklist"

2019-11-07 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-23272.
--
Resolution: Fixed






[GitHub] [hbase] Apache-HBase commented on issue #796: HBASE-23251 - Add Column Family and Table Names to HFileContext and u…

2019-11-07 Thread GitBox
Apache-HBase commented on issue #796: HBASE-23251 - Add Column Family and Table 
Names to HFileContext and u…
URL: https://github.com/apache/hbase/pull/796#issuecomment-551316254
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 3 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 34s |  Maven dependency ordering for branch 
 |
   | :green_heart: |  mvninstall  |   5m 51s |  master passed  |
   | :green_heart: |  compile  |   1m 50s |  master passed  |
   | :green_heart: |  checkstyle  |   2m 23s |  master passed  |
   | :green_heart: |  shadedjars  |   5m 10s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   1m 16s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m 44s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   6m 24s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  
|
   | :green_heart: |  mvninstall  |   5m 36s |  the patch passed  |
   | :green_heart: |  compile  |   1m 44s |  the patch passed  |
   | :green_heart: |  javac  |   1m 44s |  the patch passed  |
   | :green_heart: |  checkstyle  |   2m 13s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 57s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  17m  5s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   1m 10s |  the patch passed  |
   | :green_heart: |  findbugs  |   6m 27s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |   2m 55s |  hbase-common in the patch passed.  |
   | :broken_heart: |  unit  | 159m  6s |  hbase-server in the patch failed.  |
   | :green_heart: |  unit  |  17m 32s |  hbase-mapreduce in the patch passed.  
|
   | :green_heart: |  asflicense  |   1m 24s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 251m 44s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.master.replication.TestTransitPeerSyncReplicationStateProcedureRetry
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-796/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/796 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 6967cc139438 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-796/out/precommit/personality/provided.sh
 |
   | git revision | master / f58bd4a7ac |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-796/3/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-796/3/testReport/
 |
   | Max. process+thread count | 5228 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-server hbase-mapreduce U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-796/3/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Commented] (HBASE-23185) High cpu usage because getTable()#put() gets config value every time

2019-11-07 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969658#comment-16969658
 ] 

Michael Stack commented on HBASE-23185:
---

TestCatalogJanitor is still in the flaky list. ... it is added as a flaky test 
when 1.3 runs ... 
https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/1029/consoleFull 
... but there is no mention of it here 
https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-1.3/lastSuccessfulBuild/artifact/dashboard.html
 which is odd. The test is still in the code base. Will let it percolate 
a while longer, then will dig in.



> High cpu usage because getTable()#put() gets config value every time
> 
>
> Key: HBASE-23185
> URL: https://issues.apache.org/jira/browse/HBASE-23185
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.5.0, 1.4.10, 1.2.12, 1.3.5
>Reporter: Shinya Yoshida
>Assignee: Shinya Yoshida
>Priority: Major
>  Labels: performance
> Fix For: 1.6.0
>
> Attachments: Screenshot from 2019-10-18 12-38-14.png, Screenshot from 
> 2019-10-18 13-03-24.png
>
>
> When we analyzed the performance of our hbase application with many puts, we 
> found that Configuration methods use many CPU resources:
> !Screenshot from 2019-10-18 12-38-14.png|width=460,height=205!
> As you can see, getTable().put() calls Configuration methods, which trigger 
> regex matching or synchronization on Hashtable.
> This should not happen in 0.99.2 because 
> https://issues.apache.org/jira/browse/HBASE-12128 addressed such an issue.
>  However, it has resurfaced through bugs or leakages introduced by the many 
> code evolutions between 0.9x and 1.x.
>  # 
> [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java#L369-L374]
>  ** finishSetup is called every new HTable() e.g. every con.getTable()
>  ** So getInt is called every time, and it runs a regex
>  # 
> [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/BufferedMutatorImpl.java#L115]
>  ** BufferedMutatorImpl is created every first put for HTable e.g. 
> con.getTable().put()
>  ** Create ConnectionConf every time in BufferedMutatorImpl constructor
>  ** ConnectionConf gets config value in the constructor
>  # 
> [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java#L326]
>  ** AsyncProcess is created in BufferedMutatorImpl constructor, so new 
> AsyncProcess is created by con.getTable().put()
>  ** AsyncProcess parses many configuration values
> So con.getTable().put() is a CPU-heavy operation because of repeated config 
> value lookups.
>  
> With in-house patch for this issue, we observed about 10% improvement on 
> max-throughput (e.g. CPU usage) at client-side:
> !Screenshot from 2019-10-18 13-03-24.png|width=508,height=223!
>  
> Seems branch-2 is not affected because client implementation has been changed 
> dramatically.
>   



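The description above explains that con.getTable().put() re-parses configuration values (regex matching, Hashtable synchronization) on every call, because HTable, BufferedMutatorImpl, and AsyncProcess are rebuilt each time. The following minimal, self-contained sketch illustrates that cost and the HBASE-12128-style fix (parse once, cache the primitive). DemoConfig, NaiveTable, and CachingTable are hypothetical stand-ins invented for this example, not HBase classes.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ConfigCachingDemo {

    // Stand-in for a Hadoop-style Configuration.getInt(): every call does a
    // map lookup plus variable-substitution regex work, which is the cost
    // described in the issue. parseCalls counts how often that path runs.
    static class DemoConfig {
        private static final Pattern VAR = Pattern.compile("\\$\\{[^}]+\\}");
        private final Map<String, String> props = new HashMap<>();
        int parseCalls = 0;

        void set(String key, String value) { props.put(key, value); }

        int getInt(String key, int defaultValue) {
            parseCalls++;
            String raw = props.get(key);
            if (raw == null) {
                return defaultValue;
            }
            Matcher m = VAR.matcher(raw);  // regex runs on every lookup
            return Integer.parseInt(m.replaceAll("").trim());
        }
    }

    // Anti-pattern: the config value is re-read on every put(), analogous
    // to rebuilding HTable/BufferedMutatorImpl per con.getTable().put().
    static class NaiveTable {
        private final DemoConfig conf;
        NaiveTable(DemoConfig conf) { this.conf = conf; }
        int put() { return conf.getInt("demo.writeBufferSize", 2097152); }
    }

    // Fix in the spirit of HBASE-12128: parse once in the constructor and
    // reuse the cached primitive on the hot path.
    static class CachingTable {
        private final int writeBufferSize;
        CachingTable(DemoConfig conf) {
            this.writeBufferSize = conf.getInt("demo.writeBufferSize", 2097152);
        }
        int put() { return writeBufferSize; }
    }

    // Returns {parse count with the naive table, parse count with the
    // caching table} after 1000 puts each.
    static int[] measure() {
        DemoConfig conf = new DemoConfig();
        conf.set("demo.writeBufferSize", "4194304");

        NaiveTable naive = new NaiveTable(conf);
        for (int i = 0; i < 1000; i++) {
            naive.put();
        }
        int naiveParses = conf.parseCalls;

        conf.parseCalls = 0;
        CachingTable caching = new CachingTable(conf);
        for (int i = 0; i < 1000; i++) {
            caching.put();
        }
        int cachedParses = conf.parseCalls;

        return new int[] { naiveParses, cachedParses };
    }

    public static void main(String[] args) {
        int[] r = measure();
        System.out.println(r[0] + " parses without caching, " + r[1] + " with caching");
    }
}
```

Caching moves the parsing cost from the per-operation hot path to object construction, which is the general shape of the fix described above.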


[jira] [Commented] (HBASE-23185) High cpu usage because getTable()#put() gets config value every time

2019-11-07 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969661#comment-16969661
 ] 

Sean Busbey commented on HBASE-23185:
-

If it's not in that list at all, it probably passed every time. Check the 
Jenkins test summary for the job to confirm.






[jira] [Commented] (HBASE-23185) High cpu usage because getTable()#put() gets config value every time

2019-11-07 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969662#comment-16969662
 ] 

Sean Busbey commented on HBASE-23185:
-

Yeah all passes

https://builds.apache.org/job/HBase-Flaky-Tests/job/branch-1.3/test_results_analyzer/






[jira] [Commented] (HBASE-23185) High cpu usage because getTable()#put() gets config value every time

2019-11-07 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969664#comment-16969664
 ] 

Michael Stack commented on HBASE-23185:
---

That's a nice tool. Thanks for the intercession, [~busbey], and the pointer.






[jira] [Resolved] (HBASE-23185) High cpu usage because getTable()#put() gets config value every time

2019-11-07 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-23185.
---
Fix Version/s: 1.5.2
   1.3.7
   1.4.12
   Resolution: Fixed

Resolving as done. Thanks for the patch [~lineyshinya]

> High cpu usage because getTable()#put() gets config value every time
> 
>
> Key: HBASE-23185
> URL: https://issues.apache.org/jira/browse/HBASE-23185
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.5.0, 1.4.10, 1.2.12, 1.3.5
>Reporter: Shinya Yoshida
>Assignee: Shinya Yoshida
>Priority: Major
>  Labels: performance
> Fix For: 1.6.0, 1.4.12, 1.3.7, 1.5.2
>
> Attachments: Screenshot from 2019-10-18 12-38-14.png, Screenshot from 
> 2019-10-18 13-03-24.png
>
>
> When we analyzed the performance of our hbase application with many puts, we 
> found that Configuration methods use many CPU resources:
> !Screenshot from 2019-10-18 12-38-14.png|width=460,height=205!
> As you can see, getTable().put() is calling Configuration methods which cause 
> regex or synchronization by Hashtable.
> This should not happen in 0.99.2 because 
> https://issues.apache.org/jira/browse/HBASE-12128 addressed such an issue.
>  However, it's reproducing nowadays by bugs or leakages after many code 
> evoluations between 0.9x and 1.x.
>  # 
> [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java#L369-L374]
>  ** finishSetup is called every new HTable() e.g. every con.getTable()
>  ** So getInt is called everytime and it does regex
>  # 
> [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/BufferedMutatorImpl.java#L115]
>  ** BufferedMutatorImpl is created every first put for HTable e.g. 
> con.getTable().put()
>  ** Create ConnectionConf every time in BufferedMutatorImpl constructor
>  ** ConnectionConf gets config value in the constructor
>  # 
> [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java#L326]
>  ** An AsyncProcess is created in the BufferedMutatorImpl constructor, so a new 
> AsyncProcess is created by every con.getTable().put()
>  ** AsyncProcess parses many configurations
> So, con.getTable().put() is a CPU-heavy operation because of the repeated config 
> value parsing.
>  
> With in-house patch for this issue, we observed about 10% improvement on 
> max-throughput (e.g. CPU usage) at client-side:
> !Screenshot from 2019-10-18 13-03-24.png|width=508,height=223!
>  
> Seems branch-2 is not affected because client implementation has been changed 
> dramatically.
>   
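The per-put config parsing described above can be avoided by parsing once per connection and sharing the pre-parsed values with every table and mutator. A minimal, self-contained sketch of that pattern follows; class names and config keys (ConnectionConfig, client.write.buffer, etc.) are illustrative, not HBase's actual API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: parse configuration exactly once, at connection setup, so the
// per-operation hot path (getTable().put()) never touches the raw config.
final class ConnectionConfig {
  final int writeBufferSize;
  final int rpcTimeoutMs;

  ConnectionConfig(Map<String, String> rawConf) {
    // All expensive parsing happens here, once per connection.
    this.writeBufferSize = parseInt(rawConf, "client.write.buffer", 2 * 1024 * 1024);
    this.rpcTimeoutMs = parseInt(rawConf, "client.rpc.timeout", 60_000);
  }

  private static int parseInt(Map<String, String> conf, String key, int dflt) {
    String v = conf.get(key);
    return v == null ? dflt : Integer.parseInt(v.trim());
  }
}

final class Connection {
  final ConnectionConfig config; // shared, immutable, parsed once

  Connection(Map<String, String> rawConf) {
    this.config = new ConnectionConfig(rawConf);
  }

  Table getTable(String name) {
    // Cheap: no config parsing on this hot path.
    return new Table(name, config);
  }
}

final class Table {
  final String name;
  final ConnectionConfig config;

  Table(String name, ConnectionConfig config) {
    this.name = name;
    this.config = config;
  }
}
```

The design point is simply that the immutable parsed object is created once and handed by reference to every short-lived client-side object, instead of each of them re-deriving the same values.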



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] ndimiduk opened a new pull request #806: HBASE-23230 Enforce member visibility in HRegionServer

2019-11-07 Thread GitBox
ndimiduk opened a new pull request #806: HBASE-23230 Enforce member visibility 
in HRegionServer
URL: https://github.com/apache/hbase/pull/806
 
 
   A backport of #775 to `branch-2`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] gjacoby126 commented on issue #796: HBASE-23251 - Add Column Family and Table Names to HFileContext and u…

2019-11-07 Thread GitBox
gjacoby126 commented on issue #796: HBASE-23251 - Add Column Family and Table 
Names to HFileContext and u…
URL: https://github.com/apache/hbase/pull/796#issuecomment-551332896
 
 
   Test failure doesn't look related:
   [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 
126.395 s <<< FAILURE! - in 
org.apache.hadoop.hbase.master.replication.TestTransitPeerSyncReplicationStateProcedureRetry
   [ERROR] 
testRecoveryAndDoubleExecution(org.apache.hadoop.hbase.master.replication.TestTransitPeerSyncReplicationStateProcedureRetry)
  Time elapsed: 15.558 s  <<< ERROR!
   java.lang.IllegalArgumentException: run queue not empty
at 
org.apache.hadoop.hbase.master.replication.TestTransitPeerSyncReplicationStateProcedureRetry.testRecoveryAndDoubleExecution(TestTransitPeerSyncReplicationStateProcedureRetry.java:93)
   




[jira] [Commented] (HBASE-23268) Remove disable/enable operations from doc when altering schema

2019-11-07 Thread Daisuke Kobayashi (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969694#comment-16969694
 ] 

Daisuke Kobayashi commented on HBASE-23268:
---

Oh I was not aware of that. Was it even a case with 2.x?

> Remove disable/enable operations from doc when altering schema
> --
>
> Key: HBASE-23268
> URL: https://issues.apache.org/jira/browse/HBASE-23268
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Attachments: HBASE-23268.master.001.patch
>
>
> Per HBASE-15989, we always allow users to alter a schema without disabling 
> the table. We should remove the steps before and after {{alter}} command from 
> the doc appropriately.





[jira] [Comment Edited] (HBASE-23268) Remove disable/enable operations from doc when altering schema

2019-11-07 Thread Daisuke Kobayashi (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969694#comment-16969694
 ] 

Daisuke Kobayashi edited comment on HBASE-23268 at 11/8/19 1:07 AM:


{quote}Is it that alter command now performs enabling/disabling behind the 
scenes?
{quote}
No, it is now always an online change.
{quote}command had failed midway and left the given table disabled
{quote}
Oh I was not aware of that. Was it even a case with 2.x?


was (Author: daisuke.kobayashi):
Oh I was not aware of that. Was it even a case with 2.x?

> Remove disable/enable operations from doc when altering schema
> --
>
> Key: HBASE-23268
> URL: https://issues.apache.org/jira/browse/HBASE-23268
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Attachments: HBASE-23268.master.001.patch
>
>
> Per HBASE-15989, we always allow users to alter a schema without disabling 
> the table. We should remove the steps before and after {{alter}} command from 
> the doc appropriately.





[jira] [Commented] (HBASE-22980) HRegionPartioner getPartition() method incorrectly partitions the regions of the table.

2019-11-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969718#comment-16969718
 ] 

Hudson commented on HBASE-22980:


Results for branch branch-2.2
[build #686 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/686/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/686//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/686//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/686//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> HRegionPartioner getPartition() method incorrectly partitions the regions of 
> the table.
> ---
>
> Key: HBASE-22980
> URL: https://issues.apache.org/jira/browse/HBASE-22980
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Shardul Singh
>Assignee: Shardul Singh
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> *Problem:*
> The partitioner class HRegionPartitioner used in HBase MapReduce jobs has a 
> method getPartition(). getPartition() contains a check for the case where 
> there are fewer reducers than regions. That check is off by one: for a rowKey 
> in the last region (say the nth region), getPartition() should return n-1, 
> but instead the call falls into the "fewer reducers than regions" branch and 
> returns an arbitrary hashed value. 
> So if a client uses this class as the partitioner in an HBase MapReduce job, 
> regions are partitioned incorrectly, because rowKeys in the last region are 
> not routed to the last reducer.
> [https://github.com/apache/hbase/blob/fbd5b5e32753104f88600b0f4c803ab5659bce64/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.java#L92]
> Consider the following scenario:
> there are 5 regions in the table, so partitions = 5, and the number of 
> reducers is also 5.
> In this case the "reducers < regions" check should not be true.
> But for the last region, when i=4 (the 5th region), getPartition should 
> return 4; instead it returns 2, because the condition evaluates to true even 
> though reducers = regions, and the hashed fallback is taken. The condition is 
> therefore incorrect.
>  
> *Solution:*
> Instead of
>   {code} if (i >= numPartitions-1) {code} 
> It should be
>{code} if (i >= numPartitions) {code}
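The off-by-one can be seen in a stripped-down model of the boundary check. The real getPartition() looks up the region for the row key and hashes the key in the fallback branch; here the region index and key hash are passed in directly, purely for illustration:

```java
// Toy model of the boundary condition in HRegionPartitioner#getPartition.
// `regionIndex` is the index of the region holding the row key,
// `numPartitions` the number of reducers, `rowKeyHash` a stand-in for the
// hash of the row key used by the fallback branch.
final class PartitionSketch {
  // Buggy variant: `i >= numPartitions - 1` sends the LAST in-range region
  // into the hashed fallback even when reducers == regions.
  static int getPartitionBuggy(int regionIndex, int numPartitions, int rowKeyHash) {
    if (regionIndex >= numPartitions - 1) {
      return Math.abs(rowKeyHash) % numPartitions; // arbitrary hashed value
    }
    return regionIndex;
  }

  // Fixed variant: the fallback triggers only when the index is out of range.
  static int getPartitionFixed(int regionIndex, int numPartitions, int rowKeyHash) {
    if (regionIndex >= numPartitions) {
      return Math.abs(rowKeyHash) % numPartitions;
    }
    return regionIndex;
  }
}
```

With 5 regions, 5 reducers, and a key hash of 12, the buggy variant sends region 4 to reducer 12 % 5 = 2, while the fixed variant returns 4 as expected.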





[jira] [Commented] (HBASE-23248) hbase openjdk11 compile error

2019-11-07 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969721#comment-16969721
 ] 

jackylau commented on HBASE-23248:
--

hi [~busbey]

I have met two problems:

1) OpenJDK 11 removed the annotation plugin, which I solved by adding a 
dependency.

2) maven-compiler-plugin: I followed the suggestions at 
[https://stackoverflow.com/questions/49398894/unable-to-compile-simple-java-10-java-11-project-with-maven/55047110#55047110]
but it still did not solve the problem.

> hbase openjdk11 compile error 
> --
>
> Key: HBASE-23248
> URL: https://issues.apache.org/jira/browse/HBASE-23248
> Project: HBase
>  Issue Type: Bug
>  Components: build, java
>Reporter: jackylau
>Priority: Major
> Attachments: log
>
>
> I found this 
> [https://stackoverflow.com/questions/49398894/unable-to-compile-simple-java-10-java-11-project-with-maven/55047110#55047110],
>  but it still does not solve the problem
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (default) on project 
> hbase-protocol-shaded: Error creating shaded jar: null: 
> IllegalArgumentException -> [Help 1][ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (default) on project 
> hbase-protocol-shaded: Error creating shaded jar: null: 
> IllegalArgumentException -> [Help 
> 1]org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (default) on 
> project hbase-protocol-shaded: Error creating shaded jar: null at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
>  at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>  at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>  at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
>  at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
>  at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
>  at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
>  at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307) at 
> org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193) at 
> org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106) at 
> org.apache.maven.cli.MavenCli.execute(MavenCli.java:863) at 
> org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288) at 
> org.apache.maven.cli.MavenCli.main(MavenCli.java:199) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
>  at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229) 
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
>  at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)Caused
>  by: org.apache.maven.plugin.MojoExecutionException: Error creating shaded 
> jar: null at 
> org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:546) at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
>  at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
>  ... 20 moreCaused by: java.lang.IllegalArgumentException at 
> org.objectweb.asm.ClassReader.(Unknown Source) at 
> org.objectweb.asm.ClassReader.(Unknown Source) at 
> org.objectweb.asm.ClassReader.(Unknown Source) at 
> org.vafer.jdependency.Clazzpath.addClazzpathUnit(Clazzpath.java:201) at 
> org.vafer.jdependency.Clazzpath.addClazzpathUnit(Clazzpath.java:132) at 
> org.apache.maven.plugins.shade.filter.MinijarFilter.(MinijarFilter.java:95)
>  at 
> org.apache.maven.plugins.shade.mojo.ShadeMojo.getFilters(ShadeMojo.java:826) 
> at org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:434) 
> ... 22 more





[jira] [Updated] (HBASE-22480) Get block from BlockCache once and return this block to BlockCache twice make ref count error.

2019-11-07 Thread Lijin Bin (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijin Bin updated HBASE-22480:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Get block from BlockCache once and return this block to BlockCache twice make 
> ref count error.
> --
>
> Key: HBASE-22480
> URL: https://issues.apache.org/jira/browse/HBASE-22480
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.2.2
>Reporter: Lijin Bin
>Assignee: Lijin Bin
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: HBASE-22480-branch-2.2-v1.patch, 
> HBASE-22480-branch-2.2-v1.patch, HBASE-22480-branch-2.2-v1.patch, 
> HBASE-22480-branch-2.2-v2.patch, HBASE-22480-master-v1.patch, 
> HBASE-22480-master-v2.patch, HBASE-22480-master-v3.patch, 
> HBASE-22480-master-v4.patch, HBASE-22480-master-v5.patch, 
> HBASE-22480-master-v6.patch, HBASE-22480-master-v6.patch, 
> HBASE-22480-master-v6.patch, HBASE-22480-master-v7.patch, 
> HBASE-22480-master-v7.patch
>
>
> After debugging HBASE-22433, I found that the problem is that we get a block 
> from the BucketCache once but return it to the BucketCache twice, corrupting 
> the ref count; sometimes the refCount can even go negative.
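The failure mode, one get followed by two returns, can be modeled with a toy reference-counted block. This is an illustrative sketch only, not HBase's actual RefCnt implementation:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of a cached block whose reference count is decremented twice
// for a single retain, reproducing the "refCount can be negative" symptom.
final class CachedBlock {
  private final AtomicInteger refCount = new AtomicInteger(0);

  // Caller obtained the block from the cache.
  int retain() { return refCount.incrementAndGet(); }

  // Caller returned the block to the cache.
  int release() { return refCount.decrementAndGet(); }

  int refCount() { return refCount.get(); }
}
```

A single retain paired with two releases drives the count to -1, which is exactly the invariant violation the issue describes: release must be called exactly once per retain.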





[jira] [Updated] (HBASE-22480) Get block from BlockCache once and return this block to BlockCache twice make ref count error.

2019-11-07 Thread Lijin Bin (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijin Bin updated HBASE-22480:
--
Fix Version/s: 2.2.3
   2.3.0
   3.0.0
Affects Version/s: 2.2.2

> Get block from BlockCache once and return this block to BlockCache twice make 
> ref count error.
> --
>
> Key: HBASE-22480
> URL: https://issues.apache.org/jira/browse/HBASE-22480
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.2.2
>Reporter: Lijin Bin
>Assignee: Lijin Bin
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: HBASE-22480-branch-2.2-v1.patch, 
> HBASE-22480-branch-2.2-v1.patch, HBASE-22480-branch-2.2-v1.patch, 
> HBASE-22480-branch-2.2-v2.patch, HBASE-22480-master-v1.patch, 
> HBASE-22480-master-v2.patch, HBASE-22480-master-v3.patch, 
> HBASE-22480-master-v4.patch, HBASE-22480-master-v5.patch, 
> HBASE-22480-master-v6.patch, HBASE-22480-master-v6.patch, 
> HBASE-22480-master-v6.patch, HBASE-22480-master-v7.patch, 
> HBASE-22480-master-v7.patch
>
>
> After debugging HBASE-22433, I found that the problem is that we get a block 
> from the BucketCache once but return it to the BucketCache twice, corrupting 
> the ref count; sometimes the refCount can even go negative.





[jira] [Commented] (HBASE-22480) Get block from BlockCache once and return this block to BlockCache twice make ref count error.

2019-11-07 Thread Lijin Bin (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969727#comment-16969727
 ] 

Lijin Bin commented on HBASE-22480:
---

 [~anoop.hbase] Thanks very much for the review.

> Get block from BlockCache once and return this block to BlockCache twice make 
> ref count error.
> --
>
> Key: HBASE-22480
> URL: https://issues.apache.org/jira/browse/HBASE-22480
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.2.2
>Reporter: Lijin Bin
>Assignee: Lijin Bin
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: HBASE-22480-branch-2.2-v1.patch, 
> HBASE-22480-branch-2.2-v1.patch, HBASE-22480-branch-2.2-v1.patch, 
> HBASE-22480-branch-2.2-v2.patch, HBASE-22480-master-v1.patch, 
> HBASE-22480-master-v2.patch, HBASE-22480-master-v3.patch, 
> HBASE-22480-master-v4.patch, HBASE-22480-master-v5.patch, 
> HBASE-22480-master-v6.patch, HBASE-22480-master-v6.patch, 
> HBASE-22480-master-v6.patch, HBASE-22480-master-v7.patch, 
> HBASE-22480-master-v7.patch
>
>
> After debugging HBASE-22433, I found that the problem is that we get a block 
> from the BucketCache once but return it to the BucketCache twice, corrupting 
> the ref count; sometimes the refCount can even go negative.





[jira] [Commented] (HBASE-22480) Get block from BlockCache once and return this block to BlockCache twice make ref count error.

2019-11-07 Thread Zheng Hu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969730#comment-16969730
 ] 

Zheng Hu commented on HBASE-22480:
--

The UT failure is unrelated to the patch, +1 to commit the patch. [~binlijin]

> Get block from BlockCache once and return this block to BlockCache twice make 
> ref count error.
> --
>
> Key: HBASE-22480
> URL: https://issues.apache.org/jira/browse/HBASE-22480
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.2.2
>Reporter: Lijin Bin
>Assignee: Lijin Bin
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: HBASE-22480-branch-2.2-v1.patch, 
> HBASE-22480-branch-2.2-v1.patch, HBASE-22480-branch-2.2-v1.patch, 
> HBASE-22480-branch-2.2-v2.patch, HBASE-22480-master-v1.patch, 
> HBASE-22480-master-v2.patch, HBASE-22480-master-v3.patch, 
> HBASE-22480-master-v4.patch, HBASE-22480-master-v5.patch, 
> HBASE-22480-master-v6.patch, HBASE-22480-master-v6.patch, 
> HBASE-22480-master-v6.patch, HBASE-22480-master-v7.patch, 
> HBASE-22480-master-v7.patch
>
>
> After debugging HBASE-22433, I found that the problem is that we get a block 
> from the BucketCache once but return it to the BucketCache twice, corrupting 
> the ref count; sometimes the refCount can even go negative.





[GitHub] [hbase] binlijin merged pull request #799: HBASE-23262 Cannot load Master UI

2019-11-07 Thread GitBox
binlijin merged pull request #799: HBASE-23262 Cannot load Master UI
URL: https://github.com/apache/hbase/pull/799
 
 
   




[GitHub] [hbase] binlijin merged pull request #800: HBASE-23263 NPE in Quotas.jsp

2019-11-07 Thread GitBox
binlijin merged pull request #800: HBASE-23263 NPE in Quotas.jsp
URL: https://github.com/apache/hbase/pull/800
 
 
   




[jira] [Resolved] (HBASE-23263) NPE in Quotas.jsp

2019-11-07 Thread Lijin Bin (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijin Bin resolved HBASE-23263.
---
Resolution: Fixed

> NPE in Quotas.jsp
> -
>
> Key: HBASE-23263
> URL: https://issues.apache.org/jira/browse/HBASE-23263
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> QuotaManager is started after master initialization. If there are no online 
> regionservers, the master never finishes initializing, and accessing the Quota 
> page throws an NPE. 
>  
> [http://172.26.70.200:16010/quotas.jsp]
> {code:java}
> HTTP ERROR 500
> Problem accessing /quotas.jsp. 
> Reason:    
>  Server Error
> Caused by:java.lang.NullPointerException
>  at 
> org.apache.hadoop.hbase.generated.master.quotas_jsp._jspService(quotas_jsp.java:58)
>  at 
> org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>  at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
> {code}
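A minimal sketch of the kind of guard that avoids this NPE: render a notice while the master is still initializing instead of dereferencing a not-yet-created QuotaManager. The class and method names here are illustrative, not the actual quotas.jsp code:

```java
// Sketch: a page renderer that checks master state before touching the
// quota manager, so an uninitialized master yields a message, not an NPE.
final class QuotasPageSketch {
  static String render(Object quotaManager, boolean masterInitialized) {
    if (!masterInitialized || quotaManager == null) {
      // Guard clause: the quota manager only exists after initialization.
      return "Master is initializing; quota information is not yet available.";
    }
    return "quota table for " + quotaManager;
  }
}
```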





[jira] [Updated] (HBASE-23263) NPE in Quotas.jsp

2019-11-07 Thread Lijin Bin (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijin Bin updated HBASE-23263:
--
Fix Version/s: 2.2.3
   2.1.8
   2.3.0
   3.0.0

> NPE in Quotas.jsp
> -
>
> Key: HBASE-23263
> URL: https://issues.apache.org/jira/browse/HBASE-23263
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> QuotaManager is started after master initialization. If there are no online 
> regionservers, the master never finishes initializing, and accessing the Quota 
> page throws an NPE. 
>  
> [http://172.26.70.200:16010/quotas.jsp]
> {code:java}
> HTTP ERROR 500
> Problem accessing /quotas.jsp. 
> Reason:    
>  Server Error
> Caused by:java.lang.NullPointerException
>  at 
> org.apache.hadoop.hbase.generated.master.quotas_jsp._jspService(quotas_jsp.java:58)
>  at 
> org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>  at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
> {code}





[jira] [Resolved] (HBASE-23262) Cannot load Master UI

2019-11-07 Thread Lijin Bin (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijin Bin resolved HBASE-23262.
---
Resolution: Fixed

> Cannot load Master UI
> -
>
> Key: HBASE-23262
> URL: https://issues.apache.org/jira/browse/HBASE-23262
> Project: HBase
>  Issue Type: Bug
>  Components: master, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> If there are no online regionservers, the master UI can't be opened. This issue 
> occurs when using the RSGroupAdminEndpoint coprocessor (RSGrouping). The master 
> home page tries to load rsgroup info from the "hbase:rsgroup" table, but no 
> regionservers are up and running. 
>  
> [http://172.26.70.200:16010|http://172.26.70.200:16010/]
> {code:java}
> HTTP ERROR 500
> Problem accessing /master-status. 
> Reason:    
>  Server Error
> Caused by:
> java.io.UncheckedIOException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=2, exceptions:
> ...
> Tue Nov
> 05 23:58:51 UTC 2019, , org.apache.hadoop.hbase.exceptions.TimeoutIOException:
> Timeout(9450ms) waiting for region location for hbase:rsgroup, row='',
> replicaId=0
> at
> org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:55)
> at
> org.apache.hadoop.hbase.RSGroupTableAccessor.getAllRSGroupInfo(RSGroupTableAccessor.java:59)
> at
> org.apache.hadoop.hbase.tmpl.master.RSGroupListTmplImpl.renderNoFlush(RSGroupListTmplImpl.java:58)
> at
> org.apache.hadoop.hbase.tmpl.master.RSGroupListTmpl.renderNoFlush(RSGroupListTmpl.java:150)
> at
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmplImpl.renderNoFlush(MasterStatusTmplImpl.java:346)
> at 
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.renderNoFlush(MasterStatusTmpl.java:397)
> at
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.render(MasterStatusTmpl.java:388)
> at
> org.apache.hadoop.hbase.master.MasterStatusServlet.doGet(MasterStatusServlet.java:79)
> {code}





[jira] [Updated] (HBASE-23262) Cannot load Master UI

2019-11-07 Thread Lijin Bin (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijin Bin updated HBASE-23262:
--
Fix Version/s: 2.2.3
   2.1.8
   2.3.0
   3.0.0

> Cannot load Master UI
> -
>
> Key: HBASE-23262
> URL: https://issues.apache.org/jira/browse/HBASE-23262
> Project: HBase
>  Issue Type: Bug
>  Components: master, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> If there are no online regionservers, the master UI can't be opened. This issue 
> occurs when using the RSGroupAdminEndpoint coprocessor (RSGrouping). The master 
> home page tries to load rsgroup info from the "hbase:rsgroup" table, but no 
> regionservers are up and running. 
>  
> [http://172.26.70.200:16010|http://172.26.70.200:16010/]
> {code:java}
> HTTP ERROR 500
> Problem accessing /master-status. 
> Reason:    
>  Server Error
> Caused by:
> java.io.UncheckedIOException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=2, exceptions:
> ...
> Tue Nov
> 05 23:58:51 UTC 2019, , org.apache.hadoop.hbase.exceptions.TimeoutIOException:
> Timeout(9450ms) waiting for region location for hbase:rsgroup, row='',
> replicaId=0
> at
> org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:55)
> at
> org.apache.hadoop.hbase.RSGroupTableAccessor.getAllRSGroupInfo(RSGroupTableAccessor.java:59)
> at
> org.apache.hadoop.hbase.tmpl.master.RSGroupListTmplImpl.renderNoFlush(RSGroupListTmplImpl.java:58)
> at
> org.apache.hadoop.hbase.tmpl.master.RSGroupListTmpl.renderNoFlush(RSGroupListTmpl.java:150)
> at
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmplImpl.renderNoFlush(MasterStatusTmplImpl.java:346)
> at 
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.renderNoFlush(MasterStatusTmpl.java:397)
> at
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.render(MasterStatusTmpl.java:388)
> at
> org.apache.hadoop.hbase.master.MasterStatusServlet.doGet(MasterStatusServlet.java:79)
> {code}





[jira] [Updated] (HBASE-23263) NPE in Quotas.jsp

2019-11-07 Thread Lijin Bin (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijin Bin updated HBASE-23263:
--
Fix Version/s: (was: 2.1.8)

> NPE in Quotas.jsp
> -
>
> Key: HBASE-23263
> URL: https://issues.apache.org/jira/browse/HBASE-23263
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> QuotaManager is started after master initialization. If there are no online 
> regionservers, the master never finishes initializing, and accessing the Quota 
> page throws an NPE. 
>  
> [http://172.26.70.200:16010/quotas.jsp]
> {code:java}
> HTTP ERROR 500
> Problem accessing /quotas.jsp. 
> Reason:    
>  Server Error
> Caused by:java.lang.NullPointerException
>  at 
> org.apache.hadoop.hbase.generated.master.quotas_jsp._jspService(quotas_jsp.java:58)
>  at 
> org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>  at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
> {code}





[GitHub] [hbase] openinx commented on a change in pull request #796: HBASE-23251 - Add Column Family and Table Names to HFileContext and u…

2019-11-07 Thread GitBox
openinx commented on a change in pull request #796: HBASE-23251 - Add Column 
Family and Table Names to HFileContext and u…
URL: https://github.com/apache/hbase/pull/796#discussion_r343961431
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
 ##
 @@ -1267,6 +1267,8 @@ HFileBlock getBlockForCaching(CacheConfig cacheConf) {
 .withCompressTags(fileContext.isCompressTags())
 .withIncludesMvcc(fileContext.isIncludesMvcc())
 .withIncludesTags(fileContext.isIncludesTags())
+
.withColumnFamily(fileContext.getColumnFamily())
+.withTableName(fileContext.getTableName())
 
 Review comment:
   I see some other paths that also use the HFileContextBuilder to 
build a new HFileContext; should we also attach the cf & tableName there? 




[GitHub] [hbase] openinx commented on a change in pull request #796: HBASE-23251 - Add Column Family and Table Names to HFileContext and u…

2019-11-07 Thread GitBox
openinx commented on a change in pull request #796: HBASE-23251 - Add Column 
Family and Table Names to HFileContext and u…
URL: https://github.com/apache/hbase/pull/796#discussion_r343960296
 
 

 ##
 File path: 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
 ##
 @@ -233,6 +255,14 @@ public String toString() {
   sb.append(", name=");
   sb.append(hfileName);
 }
+if (tableName != null) {
+  sb.append(", tableName=");
+  sb.append(Bytes.toString(tableName));
 
 Review comment:
   Better to use the Bytes.toStringBinary(tableName) ? 




[GitHub] [hbase] openinx commented on a change in pull request #796: HBASE-23251 - Add Column Family and Table Names to HFileContext and u…

2019-11-07 Thread GitBox
openinx commented on a change in pull request #796: HBASE-23251 - Add Column 
Family and Table Names to HFileContext and u…
URL: https://github.com/apache/hbase/pull/796#discussion_r343960468
 
 

 ##
 File path: 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
 ##
 @@ -233,6 +255,14 @@ public String toString() {
   sb.append(", name=");
   sb.append(hfileName);
 }
+if (tableName != null) {
+  sb.append(", tableName=");
+  sb.append(Bytes.toString(tableName));
+}
+if (columnFamily != null) {
+  sb.append(", columnFamily=");
+  sb.append(Bytes.toString(columnFamily));
 
 Review comment:
   Also better to use Bytes.toStringBinary(columnFamily)? There may be 
unprintable chars in the byte[]. 
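The difference matters because toStringBinary escapes unprintable bytes instead of emitting them raw. Below is a simplified stand-in that mimics the escaping behavior of HBase's Bytes.toStringBinary (printable ASCII kept as-is, everything else as \xNN); it is a sketch, not the actual HBase implementation:

```java
// Simplified stand-in for Bytes.toStringBinary: keeps printable ASCII,
// escapes everything else as \xNN so toString()/log output stays readable
// even when the byte[] contains unprintable characters.
final class BytesSketch {
  static String toStringBinary(byte[] b) {
    StringBuilder sb = new StringBuilder();
    for (byte v : b) {
      int ch = v & 0xFF;
      if (ch >= ' ' && ch <= '~' && ch != '\\') {
        sb.append((char) ch); // printable: emit as-is
      } else {
        sb.append(String.format("\\x%02X", ch)); // unprintable: escape
      }
    }
    return sb.toString();
  }
}
```

For a column family of {'c', 'f', 0x00, 0x01}, plain UTF-8 decoding would embed raw control bytes in the string, whereas this escaping yields the readable "cf\x00\x01".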




[GitHub] [hbase] binlijin merged pull request #798: HBASE-23257: Track clusterID in stand by masters

2019-11-07 Thread GitBox
binlijin merged pull request #798: HBASE-23257: Track clusterID in stand by 
masters
URL: https://github.com/apache/hbase/pull/798
 
 
   




[jira] [Updated] (HBASE-23257) Track ClusterID in stand by masters

2019-11-07 Thread Lijin Bin (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijin Bin updated HBASE-23257:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Track ClusterID in stand by masters
> ---
>
> Key: HBASE-23257
> URL: https://issues.apache.org/jira/browse/HBASE-23257
> Project: HBase
>  Issue Type: Sub-task
>  Components: master
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
> Fix For: 3.0.0
>
>
> Currently, only active master tracks the cluster ID. As a part of removing 
> client dependency on ZK (HBASE-18095), it was noted that having stand by 
> masters serve ClusterID will help load balance the client requests instead of 
> hot-spotting the active master. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23257) Track ClusterID in stand by masters

2019-11-07 Thread Lijin Bin (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijin Bin updated HBASE-23257:
--
Fix Version/s: 3.0.0

> Track ClusterID in stand by masters
> ---
>
> Key: HBASE-23257
> URL: https://issues.apache.org/jira/browse/HBASE-23257
> Project: HBase
>  Issue Type: Sub-task
>  Components: master
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
> Fix For: 3.0.0
>
>
> Currently, only active master tracks the cluster ID. As a part of removing 
> client dependency on ZK (HBASE-18095), it was noted that having stand by 
> masters serve ClusterID will help load balance the client requests instead of 
> hot-spotting the active master. 





[jira] [Commented] (HBASE-22980) HRegionPartioner getPartition() method incorrectly partitions the regions of the table.

2019-11-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969786#comment-16969786
 ] 

Hudson commented on HBASE-22980:


Results for branch branch-2
[build #2347 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2347/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2347//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2347//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2347//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> HRegionPartioner getPartition() method incorrectly partitions the regions of 
> the table.
> ---
>
> Key: HBASE-22980
> URL: https://issues.apache.org/jira/browse/HBASE-22980
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Shardul Singh
>Assignee: Shardul Singh
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> *Problem:*
> Partitioner class HRegionPartitioner in a HBase MapReduce job has a method 
> getPartition(). In getPartition(), there is a check for the case of fewer 
> reducers than regions. This check is incorrect: for a rowKey in the last 
> region (say the nth region), getPartition() should return (n-1), but it 
> does not, because the last region wrongly falls into the "fewer reducers 
> than regions" case and an arbitrary hashed value is returned. 
> So if a client uses this class as the partitioner in an HBase MapReduce 
> job, regions are partitioned incorrectly: rowKeys present in the last 
> region do not map to the last partition.
> [https://github.com/apache/hbase/blob/fbd5b5e32753104f88600b0f4c803ab5659bce64/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.java#L92]
> Consider the following scenario:
> if there are 5 regions for the table, partitions = 5 and number of reducers 
> is also 5.
> So in this case above check for reducers < regions should not return true.
> But for the last region (i=4, the 5th region), getPartition should return 4, 
> yet it returns 2, because the condition wrongly evaluates to true and takes 
> the "fewer reducers than regions" branch even though reducers = regions. So 
> the condition is incorrect.
>  
> *Solution:*
> Instead of
>   {code} if (i >= numPartitions-1) {code} 
> It should be
>{code} if (i >= numPartitions){  {code}
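The arithmetic in the report can be checked with a small, self-contained sketch. The two methods below are illustrative simplifications of the check in HRegionPartitioner, not its real signature; `regionIndex` stands for the index `i` of the region matching the row key:

```java
public class PartitionCheck {
  // Simplified model of the buggy check: with 5 regions and 5 reducers the
  // last region (i = 4) satisfies i >= numPartitions - 1 and gets hashed.
  static int partitionBuggy(int regionIndex, int numPartitions) {
    if (regionIndex >= numPartitions - 1) {
      return Math.abs(Integer.toString(regionIndex).hashCode() % numPartitions);
    }
    return regionIndex;
  }

  // Proposed fix: the hash fallback fires only when reducers < regions.
  static int partitionFixed(int regionIndex, int numPartitions) {
    if (regionIndex >= numPartitions) {
      return Math.abs(Integer.toString(regionIndex).hashCode() % numPartitions);
    }
    return regionIndex;
  }

  public static void main(String[] args) {
    System.out.println(partitionBuggy(4, 5)); // 2 ("4".hashCode() == 52, 52 % 5 == 2)
    System.out.println(partitionFixed(4, 5)); // 4
  }
}
```

This reproduces the reported behavior: the buggy check sends the last region's rows to partition 2, while the fixed check returns 4 as expected.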





[jira] [Commented] (HBASE-23212) Provide config reload for Auto Region Reopen based on storeFile ref count

2019-11-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969785#comment-16969785
 ] 

Hudson commented on HBASE-23212:


Results for branch branch-2
[build #2347 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2347/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2347//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2347//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2347//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Provide config reload for Auto Region Reopen based on storeFile ref count
> -
>
> Key: HBASE-23212
> URL: https://issues.apache.org/jira/browse/HBASE-23212
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.3.0, 1.6.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 1.6.0
>
> Attachments: HBASE-23212.branch-1.000.patch, 
> HBASE-23212.branch-1.000.patch, HBASE-23212.branch-2.000.patch, 
> HBASE-23212.branch-2.000.patch
>
>
> We should provide flexibility to tune max storeFile Ref Count threshold that 
> is considered for auto region reopen as it represents leak on store file. 
> While running some perf tests, user can bring ref count very high if 
> required, but this config change should be dynamic and should not require 
> HMaster restart.





[GitHub] [hbase] Apache-HBase commented on issue #775: HBASE-23230 Enforce member visibility in HRegionServer

2019-11-07 Thread GitBox
Apache-HBase commented on issue #775: HBASE-23230 Enforce member visibility in 
HRegionServer
URL: https://github.com/apache/hbase/pull/775#issuecomment-551365887
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 46s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  1s |  The patch appears to include 14 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 50s |  master passed  |
   | :green_heart: |  compile  |   1m  0s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 30s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 46s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 39s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m 34s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m 31s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 12s |  the patch passed  |
   | :green_heart: |  compile  |   0m 55s |  the patch passed  |
   | :broken_heart: |  javac  |   0m 55s |  hbase-server generated 1 new + 3 
unchanged - 3 fixed = 4 total (was 6)  |
   | :broken_heart: |  checkstyle  |   1m 23s |  hbase-server: The patch 
generated 4 new + 266 unchanged - 41 fixed = 270 total (was 307)  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 33s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  16m 30s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   | :green_heart: |  findbugs  |   4m 18s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  | 166m 49s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 36s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 226m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-775/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/775 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 6f264290adb3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-775/out/precommit/personality/provided.sh
 |
   | git revision | master / f58bd4a7ac |
   | Default Java | 1.8.0_181 |
   | javac | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-775/5/artifact/out/diff-compile-javac-hbase-server.txt
 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-775/5/artifact/out/diff-checkstyle-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-775/5/testReport/
 |
   | Max. process+thread count | 4243 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-775/5/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Commented] (HBASE-22980) HRegionPartioner getPartition() method incorrectly partitions the regions of the table.

2019-11-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969802#comment-16969802
 ] 

Hudson commented on HBASE-22980:


Results for branch branch-2.1
[build #1703 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1703/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1703//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1703//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1703//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> HRegionPartioner getPartition() method incorrectly partitions the regions of 
> the table.
> ---
>
> Key: HBASE-22980
> URL: https://issues.apache.org/jira/browse/HBASE-22980
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Shardul Singh
>Assignee: Shardul Singh
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> *Problem:*
> Partitioner class HRegionPartitioner in a HBase MapReduce job has a method 
> getPartition(). In getPartition(), there is a check for the case of fewer 
> reducers than regions. This check is incorrect: for a rowKey in the last 
> region (say the nth region), getPartition() should return (n-1), but it 
> does not, because the last region wrongly falls into the "fewer reducers 
> than regions" case and an arbitrary hashed value is returned. 
> So if a client uses this class as the partitioner in an HBase MapReduce 
> job, regions are partitioned incorrectly: rowKeys present in the last 
> region do not map to the last partition.
> [https://github.com/apache/hbase/blob/fbd5b5e32753104f88600b0f4c803ab5659bce64/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.java#L92]
> Consider the following scenario:
> if there are 5 regions for the table, partitions = 5 and number of reducers 
> is also 5.
> So in this case above check for reducers < regions should not return true.
> But for the last region (i=4, the 5th region), getPartition should return 4, 
> yet it returns 2, because the condition wrongly evaluates to true and takes 
> the "fewer reducers than regions" branch even though reducers = regions. So 
> the condition is incorrect.
>  
> *Solution:*
> Instead of
>   {code} if (i >= numPartitions-1) {code} 
> It should be
>{code} if (i >= numPartitions){  {code}





[jira] [Commented] (HBASE-23248) hbase openjdk11 compile error

2019-11-07 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969804#comment-16969804
 ] 

Sean Busbey commented on HBASE-23248:
-

bq. [WARNING] Some problems were encountered while building the effective model 
for org.apache.hbase:hbase-build-configuration:pom:2.1.0-cdh6.3.0

you're building with a vendor's fork of the project. AFAIK you can't build it 
with jdk11. You should use the vendor's support mechanisms if you need to get 
compiling with jdk11 to work for some reason.

The first release from the apache hbase project that is expected to build with 
jdk11 will be the upcoming Apache HBase 2.3.0.

> hbase openjdk11 compile error 
> --
>
> Key: HBASE-23248
> URL: https://issues.apache.org/jira/browse/HBASE-23248
> Project: HBase
>  Issue Type: Bug
>  Components: build, java
>Reporter: jackylau
>Priority: Major
> Attachments: log
>
>
> I found this 
> [https://stackoverflow.com/questions/49398894/unable-to-compile-simple-java-10-java-11-project-with-maven/55047110#55047110],
>  but it still does not solve the problem.
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (default) on project 
> hbase-protocol-shaded: Error creating shaded jar: null: 
> IllegalArgumentException -> [Help 1][ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (default) on project 
> hbase-protocol-shaded: Error creating shaded jar: null: 
> IllegalArgumentException -> [Help 
> 1]org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-shade-plugin:3.0.0:shade (default) on 
> project hbase-protocol-shaded: Error creating shaded jar: null at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
>  at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>  at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>  at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
>  at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
>  at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
>  at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
>  at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307) at 
> org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193) at 
> org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106) at 
> org.apache.maven.cli.MavenCli.execute(MavenCli.java:863) at 
> org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288) at 
> org.apache.maven.cli.MavenCli.main(MavenCli.java:199) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566) at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
>  at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229) 
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
>  at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)Caused
>  by: org.apache.maven.plugin.MojoExecutionException: Error creating shaded 
> jar: null at 
> org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:546) at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
>  at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
>  ... 20 moreCaused by: java.lang.IllegalArgumentException at 
> org.objectweb.asm.ClassReader.(Unknown Source) at 
> org.objectweb.asm.ClassReader.(Unknown Source) at 
> org.objectweb.asm.ClassReader.(Unknown Source) at 
> org.vafer.jdependency.Clazzpath.addClazzpathUnit(Clazzpath.java:201) at 
> org.vafer.jdependency.Clazzpath.addClazzpathUnit(Clazzpath.java:132) at 
> org.apache.maven.plugins.shade.filter.MinijarFilter.(MinijarFilter.java:95)
>  at 
> org.apache.maven.plugins.shade.mojo.ShadeMojo.getFilters(ShadeMojo.java:826) 
> at org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:434) 
> ... 22 more





[jira] [Commented] (HBASE-23262) Cannot load Master UI

2019-11-07 Thread Lijin Bin (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969809#comment-16969809
 ] 

Lijin Bin commented on HBASE-23262:
---

Pushed to branch-2.1+.
Thanks  [~kpalanisamy] for contributing.

> Cannot load Master UI
> -
>
> Key: HBASE-23262
> URL: https://issues.apache.org/jira/browse/HBASE-23262
> Project: HBase
>  Issue Type: Bug
>  Components: master, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>
> If no online regionservers then master UI can't be opened. This issue occurs 
> when using RSGroupAdminEndpoint coprocessor(RSGrouping).  The master home 
> page tries to load rsgroup info from "hbase:rsgroup" table but currently no 
> regionservers up and running. 
>  
> [http://172.26.70.200:16010|http://172.26.70.200:16010/]
> {code:java}
> HTTP ERROR 500
> Problem accessing /master-status. 
> Reason:    
>  Server Error
> Caused by:
> java.io.UncheckedIOException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=2, exceptions:
> ...
> Tue Nov
> 05 23:58:51 UTC 2019, , org.apache.hadoop.hbase.exceptions.TimeoutIOException:
> Timeout(9450ms) waiting for region location for hbase:rsgroup, row='',
> replicaId=0
> at
> org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:55)
> at
> org.apache.hadoop.hbase.RSGroupTableAccessor.getAllRSGroupInfo(RSGroupTableAccessor.java:59)
> at
> org.apache.hadoop.hbase.tmpl.master.RSGroupListTmplImpl.renderNoFlush(RSGroupListTmplImpl.java:58)
> at
> org.apache.hadoop.hbase.tmpl.master.RSGroupListTmpl.renderNoFlush(RSGroupListTmpl.java:150)
> at
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmplImpl.renderNoFlush(MasterStatusTmplImpl.java:346)
> at 
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.renderNoFlush(MasterStatusTmpl.java:397)
> at
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.render(MasterStatusTmpl.java:388)
> at
> org.apache.hadoop.hbase.master.MasterStatusServlet.doGet(MasterStatusServlet.java:79)
> {code}





[jira] [Commented] (HBASE-23257) Track ClusterID in stand by masters

2019-11-07 Thread Lijin Bin (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969813#comment-16969813
 ] 

Lijin Bin commented on HBASE-23257:
---

Pushed to master
Thanks [~bharathv] for contributing.

> Track ClusterID in stand by masters
> ---
>
> Key: HBASE-23257
> URL: https://issues.apache.org/jira/browse/HBASE-23257
> Project: HBase
>  Issue Type: Sub-task
>  Components: master
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
> Fix For: 3.0.0
>
>
> Currently, only active master tracks the cluster ID. As a part of removing 
> client dependency on ZK (HBASE-18095), it was noted that having stand by 
> masters serve ClusterID will help load balance the client requests instead of 
> hot-spotting the active master. 





[jira] [Commented] (HBASE-23263) NPE in Quotas.jsp

2019-11-07 Thread Lijin Bin (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969810#comment-16969810
 ] 

Lijin Bin commented on HBASE-23263:
---

Pushed to branch-2.2+.
Thanks [~kpalanisamy] for contributing.

> NPE in Quotas.jsp
> -
>
> Key: HBASE-23263
> URL: https://issues.apache.org/jira/browse/HBASE-23263
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> QuotaManager will be started after master initialization. If no online 
> regionservers then master will not be initialized and will throw NPE over 
> accessing Quota page. 
>  
> [http://172.26.70.200:16010/quotas.jsp]
> {code:java}
> HTTP ERROR 500
> Problem accessing /quotas.jsp. 
> Reason:    
>  Server Error
> Caused by:java.lang.NullPointerException
>  at 
> org.apache.hadoop.hbase.generated.master.quotas_jsp._jspService(quotas_jsp.java:58)
>  at 
> org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>  at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
> {code}





[GitHub] [hbase] Apache-HBase commented on issue #796: HBASE-23251 - Add Column Family and Table Names to HFileContext and u…

2019-11-07 Thread GitBox
Apache-HBase commented on issue #796: HBASE-23251 - Add Column Family and Table 
Names to HFileContext and u…
URL: https://github.com/apache/hbase/pull/796#issuecomment-551373862
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 4 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 33s |  Maven dependency ordering for branch 
 |
   | :green_heart: |  mvninstall  |   5m 39s |  master passed  |
   | :green_heart: |  compile  |   1m 46s |  master passed  |
   | :green_heart: |  checkstyle  |   2m 16s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 57s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   1m 13s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m 29s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   6m  3s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 23s |  Maven dependency ordering for patch  
|
   | :green_heart: |  mvninstall  |   6m  3s |  the patch passed  |
   | :green_heart: |  compile  |   2m  2s |  the patch passed  |
   | :green_heart: |  javac  |   2m  2s |  the patch passed  |
   | :green_heart: |  checkstyle  |   2m 19s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   5m  0s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  17m  4s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   1m 10s |  the patch passed  |
   | :green_heart: |  findbugs  |   6m 25s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |   2m 55s |  hbase-common in the patch passed.  |
   | :green_heart: |  unit  | 163m 41s |  hbase-server in the patch passed.  |
   | :green_heart: |  unit  |  17m 11s |  hbase-mapreduce in the patch passed.  
|
   | :green_heart: |  asflicense  |   1m 23s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 255m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-796/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/796 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux ee7498281e1d 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-796/out/precommit/personality/provided.sh
 |
   | git revision | master / 34d5b3bf05 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-796/4/testReport/
 |
   | Max. process+thread count | 5243 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-server hbase-mapreduce U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-796/4/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Commented] (HBASE-22480) Get block from BlockCache once and return this block to BlockCache twice make ref count error.

2019-11-07 Thread chenxu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969817#comment-16969817
 ] 

chenxu commented on HBASE-22480:


After reading the code, I have some doubt about 
HFileScannerImpl#updateCurrBlockRef
{code:java}
void updateCurrBlockRef(HFileBlock block) {
  if (block != null && curBlock != null && block.getOffset() == 
curBlock.getOffset()) {
return;
  }
  if (this.curBlock != null && this.curBlock.isSharedMem()) {
prevBlocks.add(this.curBlock);
  }
  this.curBlock = block;
}
{code}
I know it has little to do with the current JIRA; it's just a question.
Should we put the new block into prevBlocks even when it has the same offset 
as curBlock? Otherwise, we can't release it.
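The concern can be made concrete with a hypothetical ref-counted block. CountedBlock and the simplified updateCurrBlockRef below are illustrative, not the HBase HFileBlock API: the point is that when the early return drops the second handle for the same offset, that handle's reference is never released.

```java
import java.util.ArrayList;
import java.util.List;

public class RefCountSketch {
  // Hypothetical minimal ref-counted block: acquired with refCnt = 1,
  // expected to be released exactly once when the scanner is done with it.
  static final class CountedBlock {
    final long offset;
    int refCnt = 1;
    CountedBlock(long offset) { this.offset = offset; }
    void release() { refCnt--; }
  }

  static CountedBlock curBlock;
  static final List<CountedBlock> prevBlocks = new ArrayList<>();

  // Mirrors the early-return shape of updateCurrBlockRef: a new handle with
  // the same offset as curBlock is neither tracked nor released.
  static void updateCurrBlockRef(CountedBlock block) {
    if (block != null && curBlock != null && block.offset == curBlock.offset) {
      return; // `block` is dropped here with refCnt still 1 -> leaked
    }
    if (curBlock != null) {
      prevBlocks.add(curBlock);
    }
    curBlock = block;
  }

  public static void main(String[] args) {
    curBlock = new CountedBlock(100L);
    CountedBlock duplicate = new CountedBlock(100L); // second lookup, same offset
    updateCurrBlockRef(duplicate);
    System.out.println(duplicate.refCnt); // 1 -> nothing will ever release it
  }
}
```

Under these assumptions, the duplicate handle ends with refCnt still at 1 and absent from prevBlocks, which is exactly the "can't release it" situation the question describes.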

> Get block from BlockCache once and return this block to BlockCache twice make 
> ref count error.
> --
>
> Key: HBASE-22480
> URL: https://issues.apache.org/jira/browse/HBASE-22480
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.2.2
>Reporter: Lijin Bin
>Assignee: Lijin Bin
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: HBASE-22480-branch-2.2-v1.patch, 
> HBASE-22480-branch-2.2-v1.patch, HBASE-22480-branch-2.2-v1.patch, 
> HBASE-22480-branch-2.2-v2.patch, HBASE-22480-master-v1.patch, 
> HBASE-22480-master-v2.patch, HBASE-22480-master-v3.patch, 
> HBASE-22480-master-v4.patch, HBASE-22480-master-v5.patch, 
> HBASE-22480-master-v6.patch, HBASE-22480-master-v6.patch, 
> HBASE-22480-master-v6.patch, HBASE-22480-master-v7.patch, 
> HBASE-22480-master-v7.patch
>
>
> After debugging HBASE-22433, i find the problem it is that we get a block 
> from BucketCache once and return this block to BucketCache twice and make the 
> ref count error, sometimes the refCount can be negative.





[GitHub] [hbase] Apache-HBase commented on issue #806: HBASE-23230 Enforce member visibility in HRegionServer

2019-11-07 Thread GitBox
Apache-HBase commented on issue #806: HBASE-23230 Enforce member visibility in 
HRegionServer
URL: https://github.com/apache/hbase/pull/806#issuecomment-551377616
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 50s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 13 
new or modified test files.  |
   ||| _ branch-2 Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m 48s |  branch-2 passed  |
   | :green_heart: |  compile  |   1m 10s |  branch-2 passed  |
   | :green_heart: |  checkstyle  |   1m 47s |  branch-2 passed  |
   | :green_heart: |  shadedjars  |   5m 18s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 43s |  branch-2 passed  |
   | :blue_heart: |  spotbugs  |   4m  5s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m  1s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m 29s |  the patch passed  |
   | :green_heart: |  compile  |   1m  7s |  the patch passed  |
   | :broken_heart: |  javac  |   1m  7s |  hbase-server generated 1 new + 3 
unchanged - 3 fixed = 4 total (was 6)  |
   | :broken_heart: |  checkstyle  |   1m 44s |  hbase-server: The patch 
generated 5 new + 286 unchanged - 41 fixed = 291 total (was 327)  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   5m 25s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  20m 21s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 40s |  the patch passed  |
   | :green_heart: |  findbugs  |   4m  7s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  | 172m 46s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 34s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 242m  1s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-806/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/806 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 2ca3fa813acc 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-806/out/precommit/personality/provided.sh
 |
   | git revision | branch-2 / d1864ae8af |
   | Default Java | 1.8.0_181 |
   | javac | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-806/1/artifact/out/diff-compile-javac-hbase-server.txt
 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-806/1/artifact/out/diff-checkstyle-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-806/1/testReport/
 |
   | Max. process+thread count | 4251 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-806/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (HBASE-23273) Table header is not correct on table.jsp when table name is hbase:meta

2019-11-07 Thread Baiqiang Zhao (Jira)
Baiqiang Zhao created HBASE-23273:
-

 Summary: Table header is not correct on table.jsp when table name 
is hbase:meta
 Key: HBASE-23273
 URL: https://issues.apache.org/jira/browse/HBASE-23273
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 1.4.11
Reporter: Baiqiang Zhao
Assignee: Baiqiang Zhao
 Attachments: WX20191107-161810.png

When viewing table.jsp?name=hbase:meta, the table header in "Table Regions" is 
not correct. As shown in the figure below:

!WX20191107-161810.png!





[jira] [Assigned] (HBASE-23270) Inter-cluster replication is unaware destination peer cluster's RSGroup to push the WALEdits

2019-11-07 Thread ramkrishna.s.vasudevan (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan reassigned HBASE-23270:
--

Assignee: Pradeep

> Inter-cluster replication is unaware destination peer cluster's RSGroup to 
> push the WALEdits
> 
>
> Key: HBASE-23270
> URL: https://issues.apache.org/jira/browse/HBASE-23270
> Project: HBase
>  Issue Type: Bug
>Reporter: Pradeep
>Assignee: Pradeep
>Priority: Major
>
> In a source RSGroup-enabled HBase cluster where replication is enabled to
> another RSGroup-enabled destination cluster, the replication stream of
> List<WAL.Edit> goes to any node in the destination cluster without
> RSGroup awareness and is then routed to the appropriate node where the
> region is hosted. This extra hop, where the data is received and routed, could
> be on any node in the cluster; no restriction exists to select a node
> within the same RSGroup.
> Implications: an RSGroup owner in a multi-tenant HBase cluster can see
> performance and throughput deviations because of this unpredictability caused
> by replication.
> Potential fix options:
> a) Select a destination node with RSGroup awareness.
> b) Group the WAL.Edit list by region and then by the region servers to
> which the regions are assigned in the destination. Pass the WAL.Edit list
> directly to the region server to avoid the extra intermediate hop in the
> destination cluster during the replication process.
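Option (b) above amounts to bucketing replicated entries by the destination region server before shipping. A minimal sketch of that grouping step follows; the `Entry` class, the region-to-server lookup, and all names here are hypothetical stand-ins, not HBase's replication API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class RsGroupAwareBatcher {
    /** Minimal stand-in for a replicated WAL entry. */
    static class Entry {
        final String encodedRegionName;
        Entry(String encodedRegionName) { this.encodedRegionName = encodedRegionName; }
    }

    /**
     * Groups entries by the region server currently hosting their region.
     * regionToServer stands in for whatever locator the sink cluster
     * exposes (e.g. a cached region-to-server mapping). Each resulting
     * batch could then be pushed directly to its server, avoiding the
     * extra intermediate hop.
     */
    static Map<String, List<Entry>> groupByServer(
            List<Entry> entries, Function<String, String> regionToServer) {
        Map<String, List<Entry>> batches = new HashMap<>();
        for (Entry e : entries) {
            String server = regionToServer.apply(e.encodedRegionName);
            batches.computeIfAbsent(server, k -> new ArrayList<>()).add(e);
        }
        return batches;
    }
}
```

Note this sketch assumes the region-to-server mapping is fresh; in practice a stale location would still need the receive-and-route fallback described in the issue.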





[jira] [Updated] (HBASE-23273) Table header is not correct on table.jsp when table name is hbase:meta

2019-11-07 Thread Baiqiang Zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Baiqiang Zhao updated HBASE-23273:
--
Attachment: HBASE-23273.branch-1.0001.patch

> Table header is not correct on table.jsp when table name is hbase:meta
> --
>
> Key: HBASE-23273
> URL: https://issues.apache.org/jira/browse/HBASE-23273
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.4.11
>Reporter: Baiqiang Zhao
>Assignee: Baiqiang Zhao
>Priority: Minor
> Attachments: HBASE-23273.branch-1.0001.patch, WX20191107-161810.png
>
>
> When viewing table.jsp?name=hbase:meta, the table header in "Table Regions" 
> is not correct. As shown in the figure below:
> !WX20191107-161810.png!





[jira] [Updated] (HBASE-23273) Table header is not correct on table.jsp when table name is hbase:meta

2019-11-07 Thread Baiqiang Zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Baiqiang Zhao updated HBASE-23273:
--
Status: Patch Available  (was: Open)

> Table header is not correct on table.jsp when table name is hbase:meta
> --
>
> Key: HBASE-23273
> URL: https://issues.apache.org/jira/browse/HBASE-23273
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.4.11
>Reporter: Baiqiang Zhao
>Assignee: Baiqiang Zhao
>Priority: Minor
> Attachments: HBASE-23273.branch-1.0001.patch, WX20191107-161810.png
>
>
> When viewing table.jsp?name=hbase:meta, the table header in "Table Regions" 
> is not correct. As shown in the figure below:
> !WX20191107-161810.png!





[jira] [Commented] (HBASE-23270) Inter-cluster replication is unaware destination peer cluster's RSGroup to push the WALEdits

2019-11-07 Thread chenxu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969832#comment-16969832
 ] 

chenxu commented on HBASE-23270:


bq. Select a destination node having RSGroup awareness
+1
Not only ReplicationSink; ReplicationSource should also be aware of the
source rsgroup, right?

> Inter-cluster replication is unaware destination peer cluster's RSGroup to 
> push the WALEdits
> 
>
> Key: HBASE-23270
> URL: https://issues.apache.org/jira/browse/HBASE-23270
> Project: HBase
>  Issue Type: Bug
>Reporter: Pradeep
>Assignee: Pradeep
>Priority: Major
>
> In a source RSGroup-enabled HBase cluster where replication is enabled to
> another RSGroup-enabled destination cluster, the replication stream of
> List<WAL.Edit> goes to any node in the destination cluster without
> RSGroup awareness and is then routed to the appropriate node where the
> region is hosted. This extra hop, where the data is received and routed, could
> be on any node in the cluster; no restriction exists to select a node
> within the same RSGroup.
> Implications: an RSGroup owner in a multi-tenant HBase cluster can see
> performance and throughput deviations because of this unpredictability caused
> by replication.
> Potential fix options:
> a) Select a destination node with RSGroup awareness.
> b) Group the WAL.Edit list by region and then by the region servers to
> which the regions are assigned in the destination. Pass the WAL.Edit list
> directly to the region server to avoid the extra intermediate hop in the
> destination cluster during the replication process.





[GitHub] [hbase] Apache9 closed pull request #797: HBASE-23236 test yetus 0.11.1

2019-11-07 Thread GitBox
Apache9 closed pull request #797: HBASE-23236 test yetus 0.11.1
URL: https://github.com/apache/hbase/pull/797
 
 
   




  1   2   >