[jira] [Created] (HBASE-12852) Tests from hbase-it that use ChaosMonkey don't fail if SSH commands fail

2015-01-13 Thread Dima Spivak (JIRA)
Dima Spivak created HBASE-12852:
---

 Summary: Tests from hbase-it that use ChaosMonkey don't fail if 
SSH commands fail
 Key: HBASE-12852
 URL: https://issues.apache.org/jira/browse/HBASE-12852
 Project: HBase
  Issue Type: Bug
  Components: integration tests
Affects Versions: 0.98.6
Reporter: Dima Spivak
Assignee: Dima Spivak


I've just started rolling my sleeves up and playing about with hbase-it (at the 
moment, only on 0.98.6), but wanted to begin filing JIRAs for issues I 
encounter so that I don't forget to get to them. First up: tests run with 
ChaosMonkey don't fail when the ChaosMonkey itself fails to work. As an 
example, while running IntegrationTestIngest with a slowDeterministic CM, I 
forgot to set up SSH properly and saw the following:
{code}
15/01/14 07:36:53 WARN hbase.ClusterManager: Remote command: ps aux | grep 
proc_regionserver | grep -v grep | tr -s ' ' | cut -d ' ' -f2 | xargs kill -s 
SIGKILL , hostname:node-5.internal failed at attempt 4. Retrying until 
maxAttempts: 5. Exception: stderr: Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password).
, stdout: 
15/01/14 07:36:53 INFO util.RetryCounter: Sleeping 16000ms before retry #4...
15/01/14 07:36:53 INFO zookeeper.ZooKeeper: Session: 0x14ae74d7bac006b closed
15/01/14 07:36:53 INFO policies.Policy: Sleeping for: 59541
15/01/14 07:36:53 INFO zookeeper.ClientCnxn: EventThread shut down
Failed to write keys: 0
Key range: [15..15]
Batch updates: false
Percent of keys to update: 60
Updater threads: 10
Ignore nonce conflicts: true
Regions per server: 5
15/01/14 07:36:56 INFO util.LoadTestTool: Starting to mutate data...
Starting to mutate data...
15/01/14 07:36:57 INFO policies.Policy: Sleeping for: 88816
15/01/14 07:37:01 INFO util.MultiThreadedAction: [U:10] Keys=471, cols=5.7 K, 
time=00:00:05 Overall: [keys/s= 94, latency=102 ms] Current: [keys/s=94, 
latency=102 ms], wroteUpTo=14
15/01/14 07:37:06 INFO util.MultiThreadedAction: [U:10] Keys=908, cols=11.0 K, 
time=00:00:10 Overall: [keys/s= 90, latency=90 ms] Current: [keys/s=87, 
latency=77 ms], wroteUpTo=14
15/01/14 07:37:09 INFO hbase.ClusterManager: Executing remote command: ps aux | 
grep proc_regionserver | grep -v grep | tr -s ' ' | cut -d ' ' -f2 | xargs kill 
-s SIGKILL , hostname:node-5.internal
15/01/14 07:37:09 INFO util.Shell: Executing full command [/usr/bin/ssh  
node-5.internal "ps aux | grep proc_regionserver | grep -v grep | tr -s ' ' | 
cut -d ' ' -f2 | xargs kill -s SIGKILL"]
15/01/14 07:37:09 WARN policies.Policy: Exception occured during performing 
action: ExitCodeException exitCode=255: stderr: Permission denied, please try 
again.
Permission denied, please try again.
Permission denied (publickey,password).
, stdout: 
at 
org.apache.hadoop.hbase.HBaseClusterManager.exec(HBaseClusterManager.java:208)
at 
org.apache.hadoop.hbase.HBaseClusterManager.execWithRetries(HBaseClusterManager.java:223)
at 
org.apache.hadoop.hbase.HBaseClusterManager.signal(HBaseClusterManager.java:268)
at org.apache.hadoop.hbase.ClusterManager.kill(ClusterManager.java:97)
at 
org.apache.hadoop.hbase.DistributedHBaseCluster.killRegionServer(DistributedHBaseCluster.java:110)
at org.apache.hadoop.hbase.chaos.actions.Action.killRs(Action.java:84)
at 
org.apache.hadoop.hbase.chaos.actions.RestartActionBaseAction.restartRs(RestartActionBaseAction.java:50)
at 
org.apache.hadoop.hbase.chaos.actions.RestartRsHoldingMetaAction.perform(RestartRsHoldingMetaAction.java:38)
at 
org.apache.hadoop.hbase.chaos.policies.DoActionsOncePolicy.runOneIteration(DoActionsOncePolicy.java:50)
at 
org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41)
at 
org.apache.hadoop.hbase.chaos.policies.CompositeSequentialPolicy.run(CompositeSequentialPolicy.java:42)
at java.lang.Thread.run(Thread.java:745)
{code}

Seems to me that tests should fail in these instances rather than just toss a 
warning. Was this just an oversight, [~enis] and [~ndimiduk], or is this by 
design?
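The fix this report points at could look something like the following: a retrying executor that throws once attempts are exhausted instead of only logging a warning, so the failure propagates up to the test. This is an illustrative, self-contained sketch; the names (RemoteCommandRunner, CommandResult, execWithRetries) are stand-ins, not the actual HBaseClusterManager API.

```java
import java.io.IOException;
import java.util.function.Supplier;

/**
 * Hypothetical sketch of the suggested behavior: a retrying command runner
 * that surfaces the failure to the caller after maxAttempts, rather than
 * swallowing it with a WARN as ClusterManager currently does.
 */
public class RemoteCommandRunner {

  /** Minimal stand-in for a remote (SSH) command result. */
  public static final class CommandResult {
    public final int exitCode;
    public CommandResult(int exitCode) { this.exitCode = exitCode; }
  }

  /**
   * Runs the command up to maxAttempts times; throws IOException if every
   * attempt fails, so a test driving ChaosMonkey actions would fail too.
   */
  public static CommandResult execWithRetries(Supplier<CommandResult> command,
                                              int maxAttempts) throws IOException {
    CommandResult last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      last = command.get();
      if (last.exitCode == 0) {
        return last;                       // success: stop retrying
      }
      System.err.println("WARN: attempt " + attempt + " failed, exit="
          + last.exitCode);
    }
    // Instead of only warning, surface the exhausted retries to the caller.
    throw new IOException("Remote command failed after " + maxAttempts
        + " attempts, last exit=" + (last == null ? -1 : last.exitCode));
  }

  public static void main(String[] args) {
    try {
      // Simulate SSH that always fails with exit code 255 (as in the log above).
      execWithRetries(() -> new CommandResult(255), 3);
      System.out.println("no exception");
    } catch (IOException e) {
      System.out.println("threw: " + e.getMessage());
    }
  }
}
```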



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276606#comment-14276606
 ] 

Hadoop QA commented on HBASE-5878:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692160/HBASE-5878-v4.patch
  against master branch at commit 9b7f36b8cf521bcc01ac6476349a9d2f34be8bb3.
  ATTACHMENT ID: 12692160

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.curator.test.TestingZooKeeperMain.runFromConfig(TestingZooKeeperMain.java:73)
at 
org.apache.curator.test.TestingZooKeeperServer$1.run(TestingZooKeeperServer.java:134)
at org.apache.oozie.test.MiniHCatServer$1.run(MiniHCatServer.java:137)
at 
org.apache.oozie.test.XTestCase$MiniClusterShutdownMonitor.run(XTestCase.java:1071)
at org.apache.oozie.test.XTestCase.waitFor(XTestCase.java:692)
at org.apache.oozie.test.XTestCase.sleep(XTestCase.java:710)
at 
org.apache.oozie.service.TestZKXLogStreamingService.testDisableLogOverWS(TestZKXLogStreamingService.java:88)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12458//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12458//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12458//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12458//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12458//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12458//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12458//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12458//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12458//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12458//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12458//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12458//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12458//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12458//console

This message is automatically generated.

> Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
> ---
>
> Key: HBASE-5878
> URL: https://issues.apache.org/jira/browse/HBASE-5878
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Uma Maheswara Rao G
>Assignee: Ashish Singhi
> Fix For: 1.0.0, 2.0.0
>
> Attachments: HBASE-5878-v2.patch, HBASE-5878-v3.patch, 
> HBASE-

[jira] [Commented] (HBASE-11144) Filter to support scanning multiple row key ranges

2015-01-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276585#comment-14276585
 ] 

Lars Hofhansl commented on HBASE-11144:
---

bq. We need to add this as a client feature[...]?

Maybe. It's not really that hard to issue a few scans.

Finding a small sub-range out of a very large set of rows is precisely what 
HBase is good at, so I am a bit surprised we need this.
A filter like this implementing skip-scans is good for the equivalent of an IN 
(v1, v2, v3, v4, ...) query, i.e. many point queries (or Gets) that can now be 
executed in a single RPC. AFAIK that is what Phoenix uses its filter for. Maybe 
it'll work too if the individual ranges are small.
Once the retrieved ranges approach a certain size (maybe 1000's or 10000's of 
rows) I doubt this will be better than multiple scan RPCs, especially when 
those are farmed out in parallel (as Phoenix does).

Note that Phoenix parallelizes scan requests (so some of the perf comes from 
using more resources of the cluster).
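The alternative being discussed — several plain scans with proper start/stop keys instead of one skip-scan filter — can be sketched against a sorted map standing in for a table. This is illustrative only; real code would issue one HBase Scan per range, and the class and method names here are made up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

/**
 * Illustrative sketch: reading several row key ranges as separate range
 * scans over a sorted map (a stand-in for an HBase table). Each String[]
 * range is {startInclusive, stopExclusive}, mirroring Scan start/stop rows.
 */
public class MultiRangeScan {

  public static List<String> scanRanges(TreeMap<String, String> table,
                                        List<String[]> ranges) {
    List<String> rows = new ArrayList<>();
    for (String[] range : ranges) {
      // One "scan RPC" per range: only keys in [start, stop) are visited.
      rows.addAll(table.subMap(range[0], range[1]).keySet());
    }
    return rows;
  }

  public static void main(String[] args) {
    TreeMap<String, String> table = new TreeMap<>();
    for (String row : new String[] {"a1", "a2", "b1", "c1", "c2", "d1"}) {
      table.put(row, "value");
    }
    List<String[]> ranges = new ArrayList<>();
    ranges.add(new String[] {"a", "b"});   // matches a1, a2
    ranges.add(new String[] {"c", "d"});   // matches c1, c2
    System.out.println(scanRanges(table, ranges));  // [a1, a2, c1, c2]
  }
}
```

The trade-off in the thread maps onto this sketch: each loop iteration is a separate round trip, whereas a skip-scan filter does all the ranges inside one server-side scan.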


> Filter to support scanning multiple row key ranges
> --
>
> Key: HBASE-11144
> URL: https://issues.apache.org/jira/browse/HBASE-11144
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE_11144_4.patch, HBASE_11144_V10.patch, 
> HBASE_11144_V11.patch, HBASE_11144_V12.patch, HBASE_11144_V13.patch, 
> HBASE_11144_V14.patch, HBASE_11144_V15.patch, HBASE_11144_V16.patch, 
> HBASE_11144_V17.patch, HBASE_11144_V18.patch, HBASE_11144_V5.patch, 
> HBASE_11144_V6.patch, HBASE_11144_V7.patch, HBASE_11144_V9.patch, 
> MultiRowRangeFilter.patch, MultiRowRangeFilter2.patch, 
> MultiRowRangeFilter3.patch, hbase_11144_V8.patch
>
>
> HBase is quite efficient when scanning only one small row key range. If a 
> user needs to specify multiple row key ranges in one scan, the typical 
> solutions are: 1. a FilterList composed of row key Filters, or 2. a SQL 
> layer over HBase, such as Hive or Phoenix, that joins two tables. However, 
> both solutions are inefficient: neither can use the range info to 
> fast-forward during the scan, which is quite time consuming. If the number 
> of ranges is quite big (e.g. millions), a join is a proper solution, though 
> it is slow. However, there are cases where the user wants to scan only a 
> small number of ranges (e.g. <1000). Neither solution provides satisfactory 
> performance in such cases. 
> We provide this filter (MultiRowRangeFilter) to support this use case (scan 
> multiple row key ranges): it constructs the row key ranges from a 
> user-specified list and performs fast-forwarding during the scan. Thus, the 
> scan will be quite efficient. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12393) The regionserver web will throw exception if we disable block cache

2015-01-13 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-12393:
--
Attachment: HBASE-12393-master.patch

> The regionserver web will throw exception if we disable block cache
> ---
>
> Key: HBASE-12393
> URL: https://issues.apache.org/jira/browse/HBASE-12393
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, UI
>Affects Versions: 0.98.7
> Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12393-master.patch, HBASE-12393-v2.patch, 
> HBASE-12393.patch
>
>
> CacheConfig.getBlockCache() returns null when hfile.block.cache.size is 
> set to zero.
> BlockCacheTmpl.jamon does not check for a null block cache.
> {code}
> <%if cacheConfig == null %>
> CacheConfig is null
> <%else>
> <table>
>   <tr>
>     <th>Attribute</th>
>     <th>Value</th>
>     <th>Description</th>
>   </tr>
>   <tr>
>     <td>Size</td>
>     <td><% StringUtils.humanReadableInt(cacheConfig.getBlockCache().size()) %></td>
>     <td>Total size of Block Cache (bytes)</td>
>   </tr>
>   <tr>
>     <td>Free</td>
>     <td><% StringUtils.humanReadableInt(cacheConfig.getBlockCache().getFreeSize()) %></td>
>     <td>Free space in Block Cache (bytes)</td>
>   </tr>
>   <tr>
>     <td>Count</td>
>     <td><% String.format("%,d", cacheConfig.getBlockCache().getBlockCount()) %></td>
>     <td>Number of blocks in Block Cache</td>
>   </tr>
>   <tr>
>     <td>Evicted</td>
>     <td><% String.format("%,d", cacheConfig.getBlockCache().getStats().getEvictedCount()) %></td>
>     <td>Number of blocks evicted</td>
>   </tr>
>   <tr>
>     <td>Evictions</td>
>     <td><% String.format("%,d", cacheConfig.getBlockCache().getStats().getEvictionCount()) %></td>
>     <td>Number of times an eviction occurred</td>
>   </tr>
>   <tr>
>     <td>Hits</td>
>     <td><% String.format("%,d", cacheConfig.getBlockCache().getStats().getHitCount()) %></td>
>     <td>Number of requests that were cache hits</td>
>   </tr>
>   <tr>
>     <td>Hits Caching</td>
>     <td><% String.format("%,d", cacheConfig.getBlockCache().getStats().getHitCachingCount()) %></td>
>     <td>Cache hit block requests but only requests set to use Block Cache</td>
>   </tr>
>   <tr>
>     <td>Misses</td>
>     <td><% String.format("%,d", cacheConfig.getBlockCache().getStats().getMissCount()) %></td>
>     <td>Number of requests that were cache misses</td>
>   </tr>
>   <tr>
>     <td>Misses Caching</td>
>     <td><% String.format("%,d", cacheConfig.getBlockCache().getStats().getMissCount()) %></td>
>     <td>Block requests that were cache misses but only requests set to use Block Cache</td>
>   </tr>
>   <tr>
>     <td>Hit Ratio</td>
>     <td><% String.format("%,.2f", cacheConfig.getBlockCache().getStats().getHitRatio() * 100) %><% "%" %></td>
>     <td>Hit Count divided by total requests count</td>
>   </tr>
> </table>
> </%if>
> {code}
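The fix amounts to guarding the block cache dereference before rendering any stats. A minimal null-check sketch in plain Java follows; CacheConfig and BlockCache here are hypothetical stand-ins for the template's inputs, not the HBase classes.

```java
/**
 * Minimal sketch of the guard HBASE-12393 needs: check for a null block
 * cache before rendering stats. CacheConfig/BlockCache are stand-ins.
 */
public class BlockCacheView {

  interface BlockCache { long size(); }

  static final class CacheConfig {
    private final BlockCache cache;            // null when cache size is 0
    CacheConfig(BlockCache cache) { this.cache = cache; }
    BlockCache getBlockCache() { return cache; }
  }

  /** Renders a one-line summary instead of throwing an NPE. */
  static String render(CacheConfig config) {
    if (config == null || config.getBlockCache() == null) {
      return "Block cache is disabled";     // what the page should show
    }
    return "Size: " + config.getBlockCache().size();
  }

  public static void main(String[] args) {
    System.out.println(render(new CacheConfig(null)));       // disabled case
    System.out.println(render(new CacheConfig(() -> 1024L)));  // normal case
  }
}
```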



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-7541) Convert all tests that use HBaseTestingUtility.createMultiRegions to HBA.createTable

2015-01-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7541:
-
   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Nice one Jonathan. Thanks for the patch.  Pushed to master. If you make a 
version for branch-1, I will push it there too.  Thanks.

> Convert all tests that use HBaseTestingUtility.createMultiRegions to 
> HBA.createTable
> 
>
> Key: HBASE-7541
> URL: https://issues.apache.org/jira/browse/HBASE-7541
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>Assignee: Jonathan Lawlor
> Fix For: 2.0.0
>
> Attachments: HBASE7541_patch_v1.txt, HBASE_7541_v2.txt, 
> HBASE_7541_v2.txt
>
>
> Like I discussed in HBASE-7534, {{HBaseTestingUtility.createMultiRegions}} 
> should disappear and not come back. There's about 25 different places in the 
> code that rely on it that need to be changed the same way I changed 
> TestReplication.
> Perfect for someone that wants to get started with HBase dev :)
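The conversion described above boils down to pre-splitting the table at creation time (e.g. by handing split keys to createTable) instead of manufacturing regions with a test helper. As a hedged illustration, here is a small self-contained helper that builds evenly spaced single-byte split keys; the HBase calls themselves are omitted, and the class/method names are made up for this sketch.

```java
/**
 * Sketch of the idea behind the HBASE-7541 conversion: produce split keys
 * up front so createTable can pre-split the table into N regions.
 */
public class SplitKeys {

  /** Returns numRegions - 1 split keys spread across the 0x00-0xFF space. */
  public static byte[][] evenSplits(int numRegions) {
    if (numRegions < 2) {
      return new byte[0][];                 // one region needs no splits
    }
    byte[][] splits = new byte[numRegions - 1][];
    for (int i = 1; i < numRegions; i++) {
      splits[i - 1] = new byte[] { (byte) (i * 256 / numRegions) };
    }
    return splits;
  }

  public static void main(String[] args) {
    byte[][] splits = evenSplits(4);
    System.out.println("splits: " + splits.length);   // 3
    for (byte[] s : splits) {
      System.out.println(s[0] & 0xFF);                // 64, 128, 192
    }
  }
}
```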



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12831) Changing the set of vis labels a user has access to doesn't generate an audit log event

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276535#comment-14276535
 ] 

Hadoop QA commented on HBASE-12831:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692151/HBASE-12831v4.patch
  against master branch at commit 9b7f36b8cf521bcc01ac6476349a9d2f34be8bb3.
  ATTACHMENT ID: 12692151

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12457//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12457//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12457//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12457//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12457//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12457//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12457//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12457//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12457//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12457//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12457//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12457//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12457//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12457//console

This message is automatically generated.

> Changing the set of vis labels a user has access to doesn't generate an audit 
> log event
> ---
>
> Key: HBASE-12831
> URL: https://issues.apache.org/jira/browse/HBASE-12831
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 2.0.0, 0.98.6
>Reporter: Sean Busbey
>Assignee: Ashish Singhi
>  Labels: audit
> Fix For: 1.0.1, 0.98.11
>
> Attachments: HBASE-12831-v2.patch, HBASE-12831-v3.patch, 
> HBASE-12831.patch, HBASE-12831v4.patch
>
>
> Right now, the AccessController makes sure that (when users care about audit 
> events) we generate an audit log event for any access change, like granting 
> or removing a permission from a user.
> When the set of labels a user has access to is altered, it gets handled by 
> the VisibilityLabelService and we don't log anything to the audit log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11144) Filter to support scanning multiple row key ranges

2015-01-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276529#comment-14276529
 ] 

stack commented on HBASE-11144:
---

bq. A bit late to this party, but have we compared this to issuing 100 
individual scans with the proper start and stop keys set?


We need to add this as a client feature [~lhofhansl]?

> Filter to support scanning multiple row key ranges
> --
>
> Key: HBASE-11144
> URL: https://issues.apache.org/jira/browse/HBASE-11144
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE_11144_4.patch, HBASE_11144_V10.patch, 
> HBASE_11144_V11.patch, HBASE_11144_V12.patch, HBASE_11144_V13.patch, 
> HBASE_11144_V14.patch, HBASE_11144_V15.patch, HBASE_11144_V16.patch, 
> HBASE_11144_V17.patch, HBASE_11144_V18.patch, HBASE_11144_V5.patch, 
> HBASE_11144_V6.patch, HBASE_11144_V7.patch, HBASE_11144_V9.patch, 
> MultiRowRangeFilter.patch, MultiRowRangeFilter2.patch, 
> MultiRowRangeFilter3.patch, hbase_11144_V8.patch
>
>
> HBase is quite efficient when scanning only one small row key range. If a 
> user needs to specify multiple row key ranges in one scan, the typical 
> solutions are: 1. a FilterList composed of row key Filters, or 2. a SQL 
> layer over HBase, such as Hive or Phoenix, that joins two tables. However, 
> both solutions are inefficient: neither can use the range info to 
> fast-forward during the scan, which is quite time consuming. If the number 
> of ranges is quite big (e.g. millions), a join is a proper solution, though 
> it is slow. However, there are cases where the user wants to scan only a 
> small number of ranges (e.g. <1000). Neither solution provides satisfactory 
> performance in such cases. 
> We provide this filter (MultiRowRangeFilter) to support this use case (scan 
> multiple row key ranges): it constructs the row key ranges from a 
> user-specified list and performs fast-forwarding during the scan. Thus, the 
> scan will be quite efficient. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11144) Filter to support scanning multiple row key ranges

2015-01-13 Thread Jiajia Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276522#comment-14276522
 ] 

Jiajia Li commented on HBASE-11144:
---

I've only tested the FilterList against the MultiRowRangeFilter; the RowFilter 
is what the FilterList uses.

> Filter to support scanning multiple row key ranges
> --
>
> Key: HBASE-11144
> URL: https://issues.apache.org/jira/browse/HBASE-11144
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE_11144_4.patch, HBASE_11144_V10.patch, 
> HBASE_11144_V11.patch, HBASE_11144_V12.patch, HBASE_11144_V13.patch, 
> HBASE_11144_V14.patch, HBASE_11144_V15.patch, HBASE_11144_V16.patch, 
> HBASE_11144_V17.patch, HBASE_11144_V18.patch, HBASE_11144_V5.patch, 
> HBASE_11144_V6.patch, HBASE_11144_V7.patch, HBASE_11144_V9.patch, 
> MultiRowRangeFilter.patch, MultiRowRangeFilter2.patch, 
> MultiRowRangeFilter3.patch, hbase_11144_V8.patch
>
>
> HBase is quite efficient when scanning only one small row key range. If a 
> user needs to specify multiple row key ranges in one scan, the typical 
> solutions are: 1. a FilterList composed of row key Filters, or 2. a SQL 
> layer over HBase, such as Hive or Phoenix, that joins two tables. However, 
> both solutions are inefficient: neither can use the range info to 
> fast-forward during the scan, which is quite time consuming. If the number 
> of ranges is quite big (e.g. millions), a join is a proper solution, though 
> it is slow. However, there are cases where the user wants to scan only a 
> small number of ranges (e.g. <1000). Neither solution provides satisfactory 
> performance in such cases. 
> We provide this filter (MultiRowRangeFilter) to support this use case (scan 
> multiple row key ranges): it constructs the row key ranges from a 
> user-specified list and performs fast-forwarding during the scan. Thus, the 
> scan will be quite efficient. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11677) Make Logger instance modifiers consistent

2015-01-13 Thread Usha Kuchibhotla (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276519#comment-14276519
 ] 

Usha Kuchibhotla commented on HBASE-11677:
--

Thanks for the review [~busbey]. I have made the changes you suggested. Could 
you have a look at the patch and let me know if it can be committed?

> Make Logger instance modifiers consistent
> -
>
> Key: HBASE-11677
> URL: https://issues.apache.org/jira/browse/HBASE-11677
> Project: HBase
>  Issue Type: Task
>Reporter: Sean Busbey
>Priority: Minor
>  Labels: beginner, sonar
> Attachments: HBASE-11677-v1.patch, HBASE-11677-v2.patch, 
> HBASE-11677-v3.patch, HBASE-11677.patch
>
>
> We have some instances of Logger that are missing one or more of private, 
> static, and final.
> e.g. from HealthChecker.java, missing final
> {code}
> private static Log LOG = LogFactory.getLog(HealthChecker.class);
> {code}
> * Clean up where possible by making {{private static final}}
> * If we can't, add a non-javadoc note about why
> One way to look for problematic instances is to grep for initial assignment 
> for the commonly used LOG member, e.g.
> * missing final: {{grep -r "LOG =" * | grep -v "final"}}
> * missing static: {{grep -r "LOG =" * | grep -v "static"}}
> * missing private: {{grep -r "LOG =" * | grep -v "private"}}
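The corrected pattern the ticket asks for can be sketched as below. The actual HBase code uses commons-logging's Log/LogFactory; this sketch substitutes java.util.logging so it is self-contained, and the class name is borrowed from the example above only for illustration.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.logging.Logger;

/**
 * The fix pattern for HBASE-11677: the logger field should be private,
 * static, and final - one immutable logger shared by all instances.
 */
public class HealthCheckerExample {
  // private static final, as the ticket requires.
  private static final Logger LOG =
      Logger.getLogger(HealthCheckerExample.class.getName());

  public static void main(String[] args) throws Exception {
    // Verify the modifiers reflectively, the same property the greps check.
    Field log = HealthCheckerExample.class.getDeclaredField("LOG");
    int mods = log.getModifiers();
    System.out.println("private=" + Modifier.isPrivate(mods)
        + " static=" + Modifier.isStatic(mods)
        + " final=" + Modifier.isFinal(mods));
  }
}
```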



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11144) Filter to support scanning multiple row key ranges

2015-01-13 Thread Brian Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276517#comment-14276517
 ] 

Brian Johnson commented on HBASE-11144:
---

I'm surprised by the modest speed increase. We ended up using Phoenix to get a 
similar capability and saw a speedup of several orders of magnitude vs a 
filter list on a similarly sized data set, but we were retrieving a much 
smaller subset of the data from the ~100 ranges (thousands of records). 

> Filter to support scanning multiple row key ranges
> --
>
> Key: HBASE-11144
> URL: https://issues.apache.org/jira/browse/HBASE-11144
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE_11144_4.patch, HBASE_11144_V10.patch, 
> HBASE_11144_V11.patch, HBASE_11144_V12.patch, HBASE_11144_V13.patch, 
> HBASE_11144_V14.patch, HBASE_11144_V15.patch, HBASE_11144_V16.patch, 
> HBASE_11144_V17.patch, HBASE_11144_V18.patch, HBASE_11144_V5.patch, 
> HBASE_11144_V6.patch, HBASE_11144_V7.patch, HBASE_11144_V9.patch, 
> MultiRowRangeFilter.patch, MultiRowRangeFilter2.patch, 
> MultiRowRangeFilter3.patch, hbase_11144_V8.patch
>
>
> HBase is quite efficient when scanning only one small row key range. If a 
> user needs to specify multiple row key ranges in one scan, the typical 
> solutions are: 1. a FilterList composed of row key Filters, or 2. a SQL 
> layer over HBase, such as Hive or Phoenix, that joins two tables. However, 
> both solutions are inefficient: neither can use the range info to 
> fast-forward during the scan, which is quite time consuming. If the number 
> of ranges is quite big (e.g. millions), a join is a proper solution, though 
> it is slow. However, there are cases where the user wants to scan only a 
> small number of ranges (e.g. <1000). Neither solution provides satisfactory 
> performance in such cases. 
> We provide this filter (MultiRowRangeFilter) to support this use case (scan 
> multiple row key ranges): it constructs the row key ranges from a 
> user-specified list and performs fast-forwarding during the scan. Thus, the 
> scan will be quite efficient. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2015-01-13 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-5878:
-
Attachment: HBASE-5878-v4.patch

Thanks [~eclark] for the review.
bq. Will it ever enter the top of the if statement?
Nope, it wasn't. My bad :(
Corrected the logic in the v4 patch. Please review.

> Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
> ---
>
> Key: HBASE-5878
> URL: https://issues.apache.org/jira/browse/HBASE-5878
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Uma Maheswara Rao G
>Assignee: Ashish Singhi
> Fix For: 1.0.0, 2.0.0
>
> Attachments: HBASE-5878-v2.patch, HBASE-5878-v3.patch, 
> HBASE-5878-v4.patch, HBASE-5878.patch
>
>
> SequenceFileLogReader: 
> Currently HBase uses the getFileLength api from the DFSInputStream class via 
> reflection. DFSInputStream is not exposed as public, so this may change in 
> the future. HDFS now exposes HdfsDataInputStream as a public API.
> We can make use of it, falling back to looking up the getFileLength api on 
> DFSInputStream in an else condition, so that we will not have any sudden 
> surprise like we are facing today.
> Also, the current code just logs one warn message and proceeds if getting 
> the length throws any exception. I think we should re-throw the exception, 
> because there is no point in continuing toward data loss.
> {code}
> long adjust = 0;
> try {
>   Field fIn = FilterInputStream.class.getDeclaredField("in");
>   fIn.setAccessible(true);
>   Object realIn = fIn.get(this.in);
>   // In hadoop 0.22, DFSInputStream is a standalone class.  Before this,
>   // it was an inner class of DFSClient.
>   if (realIn.getClass().getName().endsWith("DFSInputStream")) {
>     Method getFileLength = realIn.getClass().
>       getDeclaredMethod("getFileLength", new Class[] {});
>     getFileLength.setAccessible(true);
>     long realLength = ((Long) getFileLength.
>       invoke(realIn, new Object[] {})).longValue();
>     assert(realLength >= this.length);
>     adjust = realLength - this.length;
>   } else {
>     LOG.info("Input stream class: " + realIn.getClass().getName() +
>       ", not adjusting length");
>   }
> } catch (Exception e) {
>   SequenceFileLogReader.LOG.warn(
>     "Error while trying to get accurate file length.  " +
>     "Truncation / data loss may occur if RegionServers die.", e);
> }
> return adjust + super.getPos();
> {code}
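The lookup order the description proposes — prefer the public API, fall back to reflection only when it is absent — can be sketched as follows. The stream classes here are hypothetical stand-ins for HdfsDataInputStream/DFSInputStream, so the sketch stays self-contained; only the shape of the dispatch is the point.

```java
import java.lang.reflect.Method;

/**
 * Sketch of preferring a public length accessor over reflective access,
 * per the HBASE-5878 description. PublicStream/PrivateStream are stand-ins.
 */
public class VisibleLengthLookup {

  /** Stand-in for a stream exposing a public length accessor. */
  public static class PublicStream {
    public long getVisibleLength() { return 42L; }
  }

  /** Stand-in for a stream where only a non-public accessor exists. */
  public static class PrivateStream {
    @SuppressWarnings("unused")
    private long getFileLength() { return 7L; }
  }

  public static long visibleLength(Object stream) throws Exception {
    if (stream instanceof PublicStream) {
      // Preferred path: stable public API, no reflection needed.
      return ((PublicStream) stream).getVisibleLength();
    }
    // Fallback: reflective access, which may break if internals change.
    Method m = stream.getClass().getDeclaredMethod("getFileLength");
    m.setAccessible(true);
    return (Long) m.invoke(stream);
  }

  public static void main(String[] args) throws Exception {
    System.out.println(visibleLength(new PublicStream()));   // 42
    System.out.println(visibleLength(new PrivateStream()));  // 7
  }
}
```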



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11144) Filter to support scanning multiple row key ranges

2015-01-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276500#comment-14276500
 ] 

Lars Hofhansl commented on HBASE-11144:
---

bq. {noformat}
The test uses MultiRowRangeFilter versus a FilterList of row key Filters on a 
7-node cluster; each node has 32 CPUs and 90GB of memory. There are 4 rounds 
of testing; each round scans 100 row key ranges in a table with 100 million 
records, returning 153437898 results. The results are below; the average time 
is computed without the max and min values.

                          1        2        3        4        Avg
FilterList          8693479  8641336  8644194  8647838  8646016 (ms)
MultiRowRangeFilter 1264502  1263921  1262744  1252947  1263332.5 (ms)

Speed up to 6.84 times.
{noformat}

A bit late to this party, but have we compared this to issuing 100 individual 
scans with the proper start and stop keys set?


> Filter to support scanning multiple row key ranges
> --
>
> Key: HBASE-11144
> URL: https://issues.apache.org/jira/browse/HBASE-11144
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE_11144_4.patch, HBASE_11144_V10.patch, 
> HBASE_11144_V11.patch, HBASE_11144_V12.patch, HBASE_11144_V13.patch, 
> HBASE_11144_V14.patch, HBASE_11144_V15.patch, HBASE_11144_V16.patch, 
> HBASE_11144_V17.patch, HBASE_11144_V18.patch, HBASE_11144_V5.patch, 
> HBASE_11144_V6.patch, HBASE_11144_V7.patch, HBASE_11144_V9.patch, 
> MultiRowRangeFilter.patch, MultiRowRangeFilter2.patch, 
> MultiRowRangeFilter3.patch, hbase_11144_V8.patch
>
>
> HBase is quite efficient when scanning only one small row key range. If a user 
> needs to specify multiple row key ranges in one scan, the typical solutions 
> are: 1. a FilterList of row key Filters, or 2. a SQL layer over HBase, such as 
> Hive or Phoenix, that joins two tables. However, both solutions are 
> inefficient: neither can use the range info to fast-forward during the scan, 
> which is quite time consuming. If the number of ranges is quite big (e.g. 
> millions), a join is a proper solution, though it is slow. However, there are 
> cases where a user wants to scan a small number of ranges (e.g. <1000). 
> Neither solution provides satisfactory performance in such cases. 
> We provide this filter (MultiRowRangeFilter) to support this use case (scan 
> multiple row key ranges). It constructs the row key ranges from a 
> user-specified list and fast-forwards during the scan, so the scan is quite 
> efficient. 
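The fast-forward idea behind the filter can be sketched without any HBase types. This is a pure-JDK illustration (class and method names are hypothetical, not the MultiRowRangeFilter API): given sorted, non-overlapping [start, stop) ranges, a key outside every range yields a seek hint to the next range's start, so the scanner can skip rows instead of filtering them one at a time.

```java
import java.util.Arrays;
import java.util.List;

public class RangeFastForward {
    static final String INCLUDE = "INCLUDE";

    // Returns INCLUDE if key falls in some range, the next range's start key
    // to seek to if not, or null once the key is past the last range.
    static String hint(List<String[]> ranges, String key) {
        for (String[] r : ranges) {            // r = {startInclusive, stopExclusive}
            if (key.compareTo(r[1]) < 0) {
                return key.compareTo(r[0]) >= 0 ? INCLUDE : r[0]; // seek hint
            }
        }
        return null; // past the last range: the scan can stop early
    }

    public static void main(String[] args) {
        List<String[]> ranges = Arrays.asList(
            new String[]{"b", "d"}, new String[]{"m", "p"});
        System.out.println(hint(ranges, "c"));  // INCLUDE
        System.out.println(hint(ranges, "f"));  // m  (fast-forward past the gap)
        System.out.println(hint(ranges, "q"));  // null (stop the scan)
    }
}
```

A plain FilterList of row-key filters can only say include/exclude per row; it is the seek hint and the early stop that give the ~6.8x speedup reported above.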





[jira] [Commented] (HBASE-12108) HBaseConfiguration

2015-01-13 Thread Aniket Bhatnagar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276477#comment-14276477
 ] 

Aniket Bhatnagar commented on HBASE-12108:
--

I have submitted a pull request on github for this - 
https://github.com/apache/hbase/pull/10

> HBaseConfiguration
> --
>
> Key: HBASE-12108
> URL: https://issues.apache.org/jira/browse/HBASE-12108
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Aniket Bhatnagar
>Priority: Minor
>
> In a setup wherein the HBase jars are loaded in a child classloader whose 
> parent has loaded the hadoop-common jar, HBaseConfiguration.create() throws an 
> "hbase-default.xml file seems to be for and old version of HBase (null)... " 
> exception. The ClassLoader should be set on the Hadoop conf object before 
> calling the addHbaseResources method
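The failure mode can be demonstrated with the JDK alone (no HBase; the file and class names here are only illustrative): a resource visible only to a child classloader is invisible to the parent, which is why a Configuration that resolves resources through the parent loader cannot find hbase-default.xml unless it is told which loader to use.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class ChildLoaderResource {
    public static void main(String[] args) throws Exception {
        // Simulate a "child classpath" containing hbase-default.xml.
        Path dir = Files.createTempDirectory("child-cp");
        Files.write(dir.resolve("hbase-default.xml"),
            "<configuration/>".getBytes());

        ClassLoader parent = ClassLoader.getSystemClassLoader();
        try (URLClassLoader child =
                 new URLClassLoader(new URL[]{dir.toUri().toURL()}, parent)) {
            // The parent cannot see the child-only resource; the child can.
            System.out.println(parent.getResource("hbase-default.xml")); // null
            System.out.println(child.getResource("hbase-default.xml") != null);
        }
    }
}
```

This suggests the fix sketched in the report: point the Hadoop Configuration at the loader that actually holds the HBase resources (e.g. via its setClassLoader method) before the HBase resources are added.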





[jira] [Updated] (HBASE-12831) Changing the set of vis labels a user has access to doesn't generate an audit log event

2015-01-13 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-12831:
--
Attachment: HBASE-12831v4.patch

Thanks for review [~busbey].
Attaching the patch addressing your comments.

> Changing the set of vis labels a user has access to doesn't generate an audit 
> log event
> ---
>
> Key: HBASE-12831
> URL: https://issues.apache.org/jira/browse/HBASE-12831
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 2.0.0, 0.98.6
>Reporter: Sean Busbey
>Assignee: Ashish Singhi
>  Labels: audit
> Fix For: 1.0.1, 0.98.11
>
> Attachments: HBASE-12831-v2.patch, HBASE-12831-v3.patch, 
> HBASE-12831.patch, HBASE-12831v4.patch
>
>
> Right now, the AccessController makes sure that (when users care about audit 
> events) we generate an audit log event for any access change, like granting 
> or removing a permission from a user.
> When the set of labels a user has access to is altered, it gets handled by 
> the VisibilityLabelService and we don't log anything to the audit log.





[jira] [Commented] (HBASE-12847) TestZKLessSplitOnCluster frequently times out in 0.98 builds

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276384#comment-14276384
 ] 

Hadoop QA commented on HBASE-12847:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692090/HBASE-12847.patch
  against master branch at commit 9b7f36b8cf521bcc01ac6476349a9d2f34be8bb3.
  ATTACHMENT ID: 12692090

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12455//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12455//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12455//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12455//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12455//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12455//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12455//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12455//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12455//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12455//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12455//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12455//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12455//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12455//console

This message is automatically generated.

> TestZKLessSplitOnCluster frequently times out in 0.98 builds
> 
>
> Key: HBASE-12847
> URL: https://issues.apache.org/jira/browse/HBASE-12847
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12847.patch, HBASE-12847_98.patch, 
> HBASE-12847_branch-1.patch, test.log.bad.gz, test.log.good.gz
>
>
> Gets hung up in testSSHCleanupDaugtherRegionsOfAbortedSplit waiting on 
> deleteTable
> {noformat}
> "Thread-334" prio=10 tid=0x7f15382da800 nid=0x40ae in Object.wait() [0x7f1315f5d000]
>    java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0x0007e1b525b8> (a org.apache.hadoop.hbase.ipc.RpcClient$Call)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1452)
> - locked <0x0007e1b525b8> (a org.apache.hadoop.hbase.ipc.RpcClient$Call)
> at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
> at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod

[jira] [Updated] (HBASE-11144) Filter to support scanning multiple row key ranges

2015-01-13 Thread Jiajia Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiajia Li updated HBASE-11144:
--
Release Note: MultiRowRangeFilter is a filter to support scanning multiple 
row key ranges.

> Filter to support scanning multiple row key ranges
> --
>
> Key: HBASE-11144
> URL: https://issues.apache.org/jira/browse/HBASE-11144
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE_11144_4.patch, HBASE_11144_V10.patch, 
> HBASE_11144_V11.patch, HBASE_11144_V12.patch, HBASE_11144_V13.patch, 
> HBASE_11144_V14.patch, HBASE_11144_V15.patch, HBASE_11144_V16.patch, 
> HBASE_11144_V17.patch, HBASE_11144_V18.patch, HBASE_11144_V5.patch, 
> HBASE_11144_V6.patch, HBASE_11144_V7.patch, HBASE_11144_V9.patch, 
> MultiRowRangeFilter.patch, MultiRowRangeFilter2.patch, 
> MultiRowRangeFilter3.patch, hbase_11144_V8.patch
>
>
> HBase is quite efficient when scanning only one small row key range. If a user 
> needs to specify multiple row key ranges in one scan, the typical solutions 
> are: 1. a FilterList of row key Filters, or 2. a SQL layer over HBase, such as 
> Hive or Phoenix, that joins two tables. However, both solutions are 
> inefficient: neither can use the range info to fast-forward during the scan, 
> which is quite time consuming. If the number of ranges is quite big (e.g. 
> millions), a join is a proper solution, though it is slow. However, there are 
> cases where a user wants to scan a small number of ranges (e.g. <1000). 
> Neither solution provides satisfactory performance in such cases. 
> We provide this filter (MultiRowRangeFilter) to support this use case (scan 
> multiple row key ranges). It constructs the row key ranges from a 
> user-specified list and fast-forwards during the scan, so the scan is quite 
> efficient. 





[jira] [Commented] (HBASE-11983) HRegion constructors should not create HLog

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276339#comment-14276339
 ] 

Hudson commented on HBASE-11983:


SUCCESS: Integrated in HBase-TRUNK #6020 (See 
[https://builds.apache.org/job/HBase-TRUNK/6020/])
HBASE-11983 HRegion constructors should not create HLog (ndimiduk: rev 
9b7f36b8cf521bcc01ac6476349a9d2f34be8bb3)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionInfo.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestIntraRowPagination.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingTTL.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultCompactSelection.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPrefixFilter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTree.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanner.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotHelper.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingKeyRange.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWithBloomError.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMergeTable.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestDependentColumnFilter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestColumnSeeking.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPerColumnFamilyFlush.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/ModifyRegionUtils.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestFSHLog.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeepDeletes.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestGetClosestAtOrBefore.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWideScanner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMergeTool.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransaction.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestInvocationRecordFilter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiColumnScanner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestAtomicOperation.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorInterface.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinVersions.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestBlocksRead.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestResettingCounters.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSeekOptimizations.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverStacking.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestJoinedScanners.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestMultipleColumnPrefixFilter.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java


> HRegion constructors should not create HLog 
> 
>
> Key: HBASE-11983
> URL: https://issues.apache.org/jira/browse/HBASE-11983
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Enis Soztutar
>Assignee: Nick Dimiduk
>  Labels: beginner
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE

[jira] [Commented] (HBASE-8386) deprecate TableMapReduce.addDependencyJars(Configuration, class ...)

2015-01-13 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276338#comment-14276338
 ] 

Nick Dimiduk commented on HBASE-8386:
-

Yeah, this should probably be taken care of. Are you volunteering? ;)

> deprecate TableMapReduce.addDependencyJars(Configuration, class ...)
> ---
>
> Key: HBASE-8386
> URL: https://issues.apache.org/jira/browse/HBASE-8386
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Reporter: Nick Dimiduk
>
> We expose two public static methods named {{addDependencyJars}}. One of them, 
> {{void addDependencyJars(Job)}}, is very helpful -- it goes out of its way to 
> detect job dependencies as well as shipping all the necessary HBase 
> dependencies. The other is shifty and nefarious: {{void 
> addDependencyJars(Configuration, Class...)}} only adds exactly what the user 
> requests, forcing them to resolve dependencies themselves and giving a false 
> sense of security. We should deprecate the latter and throw a big giant 
> warning when people use it. The handy functionality of providing help when 
> our heuristics fail can be added via a new method signature, something like 
> {{void addDependencyJars(Job, Class...)}}. This method would do everything 
> {{void addDependencyJars(Job)}} does, plus let the user specify arbitrary 
> additional classes. That way HBase can still help the user, but also gives 
> them super-powers to compensate for when our heuristics fail.
> For reference, this appears to be the reason why HBase + Pig doesn't really 
> work out of the box. See 
> [HBaseStorage.java|https://github.com/apache/pig/blob/trunk/src/org/apache/pig/backend/hadoop/hbase/HBaseStorage.java#L730]
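The proposed overload pattern can be sketched abstractly. This is a hypothetical pure-Java illustration of the API shape, not HBase's mapreduce code (there is no `Job` here, and the jar names are stand-ins): keep the heuristic-driven entry point, and add a varargs variant that runs the same heuristics first and then ships caller-supplied classes on top.

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class DependencyJars {
    // Stand-in for the "smart" addDependencyJars(Job): detects the
    // dependencies HBase knows it needs.
    static Set<String> detectedJars() {
        Set<String> jars = new LinkedHashSet<>();
        jars.add("hbase-client.jar");
        jars.add("hbase-common.jar");
        return jars;
    }

    // Stand-in for the proposed addDependencyJars(Job, Class...):
    // heuristics plus extras, so callers keep HBase's help and can still
    // patch its blind spots.
    static Set<String> withExtras(Class<?>... extras) {
        Set<String> jars = detectedJars();
        for (Class<?> c : extras) {
            jars.add(c.getSimpleName() + ".jar");
        }
        return jars;
    }

    public static void main(String[] args) {
        System.out.println(withExtras(String.class));
        // [hbase-client.jar, hbase-common.jar, String.jar]
    }
}
```

Contrast with the deprecated shape, which would be only the caller-supplied loop with no `detectedJars()` call -- exactly the false sense of security described above.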





[jira] [Commented] (HBASE-5401) PerformanceEvaluation generates 10x the number of expected mappers

2015-01-13 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276336#comment-14276336
 ] 

Nick Dimiduk commented on HBASE-5401:
-

I don't know why the 10x multiplier is there. I usually run in --nomapred mode, 
so I haven't thought about this much. If you want to work out a patch we can 
get it committed.

> PerformanceEvaluation generates 10x the number of expected mappers
> --
>
> Key: HBASE-5401
> URL: https://issues.apache.org/jira/browse/HBASE-5401
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Oliver Meyn
>
> With a command line like 'hbase org.apache.hadoop.hbase.PerformanceEvaluation 
> randomWrite 10' there are 100 mappers spawned, rather than the expected 10.  
> The culprit appears to be the outer loop in writeInputFile which sets up 10 
> splits for every "asked-for client".  I think the fix is just to remove that 
> outer loop.





[jira] [Commented] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276326#comment-14276326
 ] 

Hadoop QA commented on HBASE-12848:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692079/12848-v3.patch
  against master branch at commit 9b7f36b8cf521bcc01ac6476349a9d2f34be8bb3.
  ATTACHMENT ID: 12692079

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12454//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12454//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12454//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12454//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12454//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12454//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12454//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12454//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12454//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12454//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12454//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12454//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12454//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12454//console

This message is automatically generated.

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12848-v1.patch, 12848-v2.patch, 12848-v3.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash, e.g. 2 flash drives and 4 traditional 
> drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.





[jira] [Commented] (HBASE-12847) TestZKLessSplitOnCluster frequently times out in 0.98 builds

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276325#comment-14276325
 ] 

Hadoop QA commented on HBASE-12847:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692113/HBASE-12847_98.patch
  against master branch at commit 9b7f36b8cf521bcc01ac6476349a9d2f34be8bb3.
  ATTACHMENT ID: 12692113

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12456//console

This message is automatically generated.

> TestZKLessSplitOnCluster frequently times out in 0.98 builds
> 
>
> Key: HBASE-12847
> URL: https://issues.apache.org/jira/browse/HBASE-12847
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12847.patch, HBASE-12847_98.patch, 
> HBASE-12847_branch-1.patch, test.log.bad.gz, test.log.good.gz
>
>
> Gets hung up in testSSHCleanupDaugtherRegionsOfAbortedSplit waiting on 
> deleteTable
> {noformat}
> "Thread-334" prio=10 tid=0x7f15382da800 nid=0x40ae in Object.wait() [0x7f1315f5d000]
>    java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0x0007e1b525b8> (a org.apache.hadoop.hbase.ipc.RpcClient$Call)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1452)
> - locked <0x0007e1b525b8> (a org.apache.hadoop.hbase.ipc.RpcClient$Call)
> at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
> at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
> at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.disableTable(MasterProtos.java:43749)
> at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$5.disableTable(HConnectionManager.java:1995)
> at org.apache.hadoop.hbase.client.HBaseAdmin$5.call(HBaseAdmin.java:947)
> at org.apache.hadoop.hbase.client.HBaseAdmin$5.call(HBaseAdmin.java:942)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:117)
> - locked <0x0007dfffe938> (a org.apache.hadoop.hbase.client.RpcRetryingCaller)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:93)
> - locked <0x0007dfffe938> (a org.apache.hadoop.hbase.client.RpcRetryingCaller)
> at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3398)
> at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:942)
> at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:974)
> at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1532)
> at org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster.testSSHCleanupDaugtherRegionsOfAbortedSplit(TestSplitTransactionOnCluster.java:1172)
> {noformat}
> See attached test.log.good.gz and test.log.bad.gz.
> In test.log.bad at 2015-01-13 08:02:45,947 we acquire a lock on 
> testSSHCleanupDaugtherRegionsOfAbortedSplit to start a disable but do nothing 
> afterward. Nothing happens for one minute. Then looks like the client makes 
> another request but it can't get the table lock. There's no progress until 
> timeout.
> {noformat}
> 2015-01-13 08:02:45,947 INFO  [Thread-334] client.HBaseAdmin$5(945): Started 
> disable of testSSHCleanupDaugtherRegionsOfAbortedSplit
> 2015-01-13 08:02:45,948 INFO  [FifoRpcScheduler.handler1-thread-4] 
> master.HMaster(2213): Client=apurtell//10.40.8.95 disable 
> testSSHCleanupDaugtherRegionsOfAbortedSplit
> 2015-01-13 08:02:45,950 DEBUG [FifoRpcScheduler.handler1-thread-4] 
> lock.ZKInterProcessLockBase(226): Acquired a lock for 
> /hbase/table-lock/testSSHCleanupDaugtherRegionsOfAbortedSplit/write-master:3677401
> 2015-01-13 08:03:44,585 DEBUG 
> [ip-10-40-8-95.us-west-2.compute.internal,36774,1421136162827-BalancerChore] 
> master.HMaster(1553): Not running balancer because 3 region(s) in transition: 
> {e4f79ac2b4711f7d906a291d94302d6e={e4f79ac2b4711f7d906a291d94302d6e 
> state=SPLITTING, ts=1421136165837, 
> server=ip-10-40-8-95.us-west-2.compute.intern

[jira] [Updated] (HBASE-12847) TestZKLessSplitOnCluster frequently times out in 0.98 builds

2015-01-13 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated HBASE-12847:

Attachment: HBASE-12847_98.patch
HBASE-12847_branch-1.patch

Patches for 0.98 and branch-1. 
[~apurtell] I will commit if it's ok. 

> TestZKLessSplitOnCluster frequently times out in 0.98 builds
> 
>
> Key: HBASE-12847
> URL: https://issues.apache.org/jira/browse/HBASE-12847
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12847.patch, HBASE-12847_98.patch, 
> HBASE-12847_branch-1.patch, test.log.bad.gz, test.log.good.gz
>
>
> Gets hung up in testSSHCleanupDaugtherRegionsOfAbortedSplit waiting on 
> deleteTable
> {noformat}
> "Thread-334" prio=10 tid=0x7f15382da800 nid=0x40ae in Object.wait() [0x7f1315f5d000]
>    java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0x0007e1b525b8> (a org.apache.hadoop.hbase.ipc.RpcClient$Call)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1452)
> - locked <0x0007e1b525b8> (a org.apache.hadoop.hbase.ipc.RpcClient$Call)
> at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
> at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
> at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.disableTable(MasterProtos.java:43749)
> at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$5.disableTable(HConnectionManager.java:1995)
> at org.apache.hadoop.hbase.client.HBaseAdmin$5.call(HBaseAdmin.java:947)
> at org.apache.hadoop.hbase.client.HBaseAdmin$5.call(HBaseAdmin.java:942)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:117)
> - locked <0x0007dfffe938> (a org.apache.hadoop.hbase.client.RpcRetryingCaller)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:93)
> - locked <0x0007dfffe938> (a org.apache.hadoop.hbase.client.RpcRetryingCaller)
> at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3398)
> at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:942)
> at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:974)
> at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1532)
> at org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster.testSSHCleanupDaugtherRegionsOfAbortedSplit(TestSplitTransactionOnCluster.java:1172)
> {noformat}
> See attached test.log.good.gz and test.log.bad.gz.
> In test.log.bad, at 2015-01-13 08:02:45,947, we acquire a lock on 
> testSSHCleanupDaugtherRegionsOfAbortedSplit to start a disable but do nothing 
> afterward. Nothing happens for one minute. Then it looks like the client makes 
> another request, but it can't get the table lock. There's no progress until 
> timeout.
> {noformat}
> 2015-01-13 08:02:45,947 INFO  [Thread-334] client.HBaseAdmin$5(945): Started disable of testSSHCleanupDaugtherRegionsOfAbortedSplit
> 2015-01-13 08:02:45,948 INFO  [FifoRpcScheduler.handler1-thread-4] master.HMaster(2213): Client=apurtell//10.40.8.95 disable testSSHCleanupDaugtherRegionsOfAbortedSplit
> 2015-01-13 08:02:45,950 DEBUG [FifoRpcScheduler.handler1-thread-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /hbase/table-lock/testSSHCleanupDaugtherRegionsOfAbortedSplit/write-master:3677401
> 2015-01-13 08:03:44,585 DEBUG [ip-10-40-8-95.us-west-2.compute.internal,36774,1421136162827-BalancerChore] master.HMaster(1553): Not running balancer because 3 region(s) in transition: {e4f79ac2b4711f7d906a291d94302d6e={e4f79ac2b4711f7d906a291d94302d6e state=SPLITTING, ts=1421136165837, server=ip-10-40-8-95.us-west-2.compute.internal,47769,1421136163047}, ebbcf66ec09960d42a2a49252599bb6d={ebbcf66ec09960d42a2a49252599bb6d state=SPLITTI...
> 2015-01-13 08:03:46,210 INFO  [Thread-334] client.HBaseAdmin$5(945): Started disable of testSSHCleanupDaugtherRegionsOfAbortedSplit
> 2015-01-13 08:03:46,211 INFO  [FifoRpcScheduler.handler1-thread-3] master.HMaster(2213): Client=apurtell//10.40.8.95 disable testSSHCleanupDaugtherRegionsOfAbortedSplit
> 2015-01-13 08:03:46,214 DEBUG [FifoRpcScheduler.handler1-thread-3] master.TableLockManager$ZKTableLockM

[jira] [Commented] (HBASE-12393) The regionserver web will throw exception if we disable block cache

2015-01-13 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276321#comment-14276321
 ] 

Nick Dimiduk commented on HBASE-12393:
--

Looks like your patch v2 isn't against master. Can you rebase it onto master and 
reattach? Also, it looks like it does not include the changes from your first 
patch.

> The regionserver web will throw exception if we disable block cache
> ---
>
> Key: HBASE-12393
> URL: https://issues.apache.org/jira/browse/HBASE-12393
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, UI
>Affects Versions: 0.98.7
> Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12393-v2.patch, HBASE-12393.patch
>
>
> CacheConfig.getBlockCache() returns null when hfile.block.cache.size is set to 
> zero, but BlockCacheTmpl.jamon does not check for a null block cache.
> {code}
> <%if cacheConfig == null %>
> CacheConfig is null
> <%else>
> Attribute
> Value
> Description
> Size
> <% StringUtils.humanReadableInt(cacheConfig.getBlockCache().size()) %>
> Total size of Block Cache (bytes)
> Free
> <% StringUtils.humanReadableInt(cacheConfig.getBlockCache().getFreeSize()) %>
> Free space in Block Cache (bytes)
> Count
> <% String.format("%,d", cacheConfig.getBlockCache().getBlockCount()) %>
> Number of blocks in Block Cache
> Evicted
> <% String.format("%,d", cacheConfig.getBlockCache().getStats().getEvictedCount()) %>
> Number of blocks evicted
> Evictions
> <% String.format("%,d", cacheConfig.getBlockCache().getStats().getEvictionCount()) %>
> Number of times an eviction occurred
> Hits
> <% String.format("%,d", cacheConfig.getBlockCache().getStats().getHitCount()) %>
> Number requests that were cache hits
> Hits Caching
> <% String.format("%,d", cacheConfig.getBlockCache().getStats().getHitCachingCount()) %>
> Cache hit block requests but only requests set to use Block Cache
> Misses
> <% String.format("%,d", cacheConfig.getBlockCache().getStats().getMissCount()) %>
> Number of requests that were cache misses
> Misses Caching
> <% String.format("%,d", cacheConfig.getBlockCache().getStats().getMissCount()) %>
> Block requests that were cache misses but only requests set to use Block Cache
> Hit Ratio
> <% String.format("%,.2f", cacheConfig.getBlockCache().getStats().getHitRatio() * 100) %><% "%" %>
> Hit Count divided by total requests count
> {code}
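A minimal guard against the NPE described above could extend the template's existing null check to cover the block cache itself. This is only a sketch against the excerpt above, with an assumed `</%if>` close; it is not the attached patch:

{code}
<%if cacheConfig == null || cacheConfig.getBlockCache() == null %>
CacheConfig is null
<%else>
... existing Block Cache rows, unchanged ...
</%if>
{code}

With this guard, the `cacheConfig.getBlockCache().…` expressions in the table rows are only evaluated when a block cache actually exists.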



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-7541) Convert all tests that use HBaseTestingUtility.createMultiRegions to HBA.createTable

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276314#comment-14276314
 ] 

Hadoop QA commented on HBASE-7541:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692067/HBASE_7541_v2.txt
  against master branch at commit 4ac457a7bc909cc92e0a1a0cab21ed0ce6bae893.
  ATTACHMENT ID: 12692067

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 109 
new or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12451//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12451//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12451//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12451//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12451//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12451//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12451//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12451//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12451//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12451//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12451//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12451//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12451//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12451//console

This message is automatically generated.

> Convert all tests that use HBaseTestingUtility.createMultiRegions to 
> HBA.createTable
> 
>
> Key: HBASE-7541
> URL: https://issues.apache.org/jira/browse/HBASE-7541
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>Assignee: Jonathan Lawlor
> Attachments: HBASE7541_patch_v1.txt, HBASE_7541_v2.txt, 
> HBASE_7541_v2.txt
>
>
> Like I discussed in HBASE-7534, {{HBaseTestingUtility.createMultiRegions}} 
> should disappear and not come back. There's about 25 different places in the 
> code that rely on it that need to be changed the same way I changed 
> TestReplication.
> Perfect for someone that wants to get started with HBase dev :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-8329) Limit compaction speed

2015-01-13 Thread zhangduo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangduo updated HBASE-8329:

Release Note: 
Adds a compaction throughput limit mechanism. (The word "throttle" is already 
used when choosing the compaction thread pool, so a different word is used here 
to avoid ambiguity.) The default is 
org.apache.hadoop.hbase.regionserver.compactions.DefaultThroughputController, 
which limits throughput as follows:
1. In off-peak hours, a fixed limit, 
"hbase.hstore.compaction.throughput.offpeak" (default Long.MAX_VALUE, which 
means no limit), is used.
2. In normal hours, the limit is tuned between 
"hbase.hstore.compaction.throughput.lower.bound" (default 20MB/sec) and 
"hbase.hstore.compaction.throughput.higher.bound" (default 10MB/sec), using the 
formula "lower + (higher - lower) * param", where param is in the range 
[0.0, 1.0] and is calculated from the store file count on this regionserver.
3. If some stores have too many store files (storefilesCount > 
blockingFileCount), there is no limit, whether peak or off-peak.
You can set "hbase.regionserver.throughput.controller" to 
org.apache.hadoop.hbase.regionserver.compactions.NoLimitThroughputController to 
disable throughput control.
We have also implemented ConfigurationObserver, so all of the configurations 
above can be changed without restarting the cluster.

  was:
Adds a compaction throughput limit mechanism. (The word "throttle" is already 
used when choosing the compaction thread pool, so a different word is used here 
to avoid ambiguity.) The default is 
org.apache.hadoop.hbase.regionserver.compactions.DefaultThroughputController, 
which limits throughput as follows:
1. In off-peak hours, a fixed limit, 
"hbase.hstore.compaction.throughput.offpeak" (default 40MB/sec), is used.
2. In normal hours, the limit is tuned between 
"hbase.hstore.compaction.throughput.lower.bound" (default 20MB/sec) and 
"hbase.hstore.compaction.throughput.higher.bound" (default 10MB/sec), using the 
formula "lower + (higher - lower) * param", where param is in the range 
[0.0, 1.0] and is calculated from the store file count on this regionserver.
3. If some stores have too many store files (storefilesCount > 
blockingFileCount), there is no limit, whether peak or off-peak.
You can set "hbase.regionserver.throughput.controller" to 
org.apache.hadoop.hbase.regionserver.compactions.NoLimitThroughputController to 
disable throughput control.
We have also implemented ConfigurationObserver, so all of the configurations 
above can be changed without restarting the cluster.
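The interpolation in item 2 of the release note can be sketched as a tiny standalone calculation. The class and method names below are illustrative only, not the actual DefaultThroughputController code:

```java
public class ThroughputSketch {
    // Illustrative only: linearly interpolate the compaction throughput limit
    // between the configured bounds, per the release note's formula
    // "lower + (higher - lower) * param", with param in [0.0, 1.0].
    // In the real controller, param is derived from the store file count.
    static long limit(long lowerBytesPerSec, long higherBytesPerSec, double param) {
        return (long) (lowerBytesPerSec + (higherBytesPerSec - lowerBytesPerSec) * param);
    }

    public static void main(String[] args) {
        long lower = 10L * 1024 * 1024;   // hypothetical 10MB/sec bound
        long higher = 20L * 1024 * 1024;  // hypothetical 20MB/sec bound
        System.out.println(limit(lower, higher, 0.5)); // prints 15728640
    }
}
```

With param = 0.0 the limit is the lower bound, and with param = 1.0 it is the higher bound; intermediate store file counts land in between.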


> Limit compaction speed
> --
>
> Key: HBASE-8329
> URL: https://issues.apache.org/jira/browse/HBASE-8329
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: binlijin
>Assignee: zhangduo
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE-8329-10.patch, HBASE-8329-11.patch, 
> HBASE-8329-12.patch, HBASE-8329-2-trunk.patch, HBASE-8329-3-trunk.patch, 
> HBASE-8329-4-trunk.patch, HBASE-8329-5-trunk.patch, HBASE-8329-6-trunk.patch, 
> HBASE-8329-7-trunk.patch, HBASE-8329-8-trunk.patch, HBASE-8329-9-trunk.patch, 
> HBASE-8329-trunk.patch, HBASE-8329_13.patch, HBASE-8329_14.patch
>
>
> There is no speed or resource limit for compaction. I think we should add this 
> feature, especially for request bursts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-8329) Limit compaction speed

2015-01-13 Thread zhangduo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangduo updated HBASE-8329:

Release Note: 
Adds a compaction throughput limit mechanism. (The word "throttle" is already 
used when choosing the compaction thread pool, so a different word is used here 
to avoid ambiguity.) The default is 
org.apache.hadoop.hbase.regionserver.compactions.DefaultThroughputController, 
which limits throughput as follows:
1. In off-peak hours, a fixed limit, 
"hbase.hstore.compaction.throughput.offpeak" (default 40MB/sec), is used.
2. In normal hours, the limit is tuned between 
"hbase.hstore.compaction.throughput.lower.bound" (default 20MB/sec) and 
"hbase.hstore.compaction.throughput.higher.bound" (default 10MB/sec), using the 
formula "lower + (higher - lower) * param", where param is in the range 
[0.0, 1.0] and is calculated from the store file count on this regionserver.
3. If some stores have too many store files (storefilesCount > 
blockingFileCount), there is no limit, whether peak or off-peak.
You can set "hbase.regionserver.throughput.controller" to 
org.apache.hadoop.hbase.regionserver.compactions.NoLimitThroughputController to 
disable throughput control.
We have also implemented ConfigurationObserver, so all of the configurations 
above can be changed without restarting the cluster.

> Limit compaction speed
> --
>
> Key: HBASE-8329
> URL: https://issues.apache.org/jira/browse/HBASE-8329
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: binlijin
>Assignee: zhangduo
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE-8329-10.patch, HBASE-8329-11.patch, 
> HBASE-8329-12.patch, HBASE-8329-2-trunk.patch, HBASE-8329-3-trunk.patch, 
> HBASE-8329-4-trunk.patch, HBASE-8329-5-trunk.patch, HBASE-8329-6-trunk.patch, 
> HBASE-8329-7-trunk.patch, HBASE-8329-8-trunk.patch, HBASE-8329-9-trunk.patch, 
> HBASE-8329-trunk.patch, HBASE-8329_13.patch, HBASE-8329_14.patch
>
>
> There is no speed or resource limit for compaction. I think we should add this 
> feature, especially for request bursts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12833) [shell] table.rb leaks connections

2015-01-13 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276264#comment-14276264
 ] 

Nick Dimiduk commented on HBASE-12833:
--

On closer inspection, it looks like SecurityAdmin and VisibilityLabelsAdmin 
need to be updated with this new style as well. ReplicationAdmin appears to 
manage its own connection by design, though the rationale eludes me.

> [shell] table.rb leaks connections
> --
>
> Key: HBASE-12833
> URL: https://issues.apache.org/jira/browse/HBASE-12833
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 1.0.0, 2.0.0, 1.1.0
>Reporter: Nick Dimiduk
>Assignee: Solomon Duskis
> Fix For: 1.0.0, 2.0.0, 1.1.0
>
> Attachments: HBASE-12833.patch
>
>
> TestShell is erring out (timeout) consistently for me. The culprit is an OOM: 
> cannot create native thread. It looks to me like test_table.rb and 
> hbase/table.rb are prone to leaking connections. table calls 
> ConnectionFactory.createConnection() for every table but provides no close() 
> method to clean it up, and test_table creates a new table with every test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276258#comment-14276258
 ] 

Ted Yu commented on HBASE-12848:


w.r.t. @param, I think a javadoc warning would be produced if only one parameter 
is documented in the javadoc.

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12848-v1.patch, 12848-v2.patch, 12848-v3.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash, e.g. 2 flash drives and 4 traditional 
> drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12836) Tip of branch 0.98 has some binary incompatibilities with 0.98.0

2015-01-13 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276245#comment-14276245
 ] 

Dima Spivak commented on HBASE-12836:
-

+1 from me, as well. Good work on including a test, too, [~srikanth235].

> Tip of branch 0.98 has some binary incompatibilities with 0.98.0
> 
>
> Key: HBASE-12836
> URL: https://issues.apache.org/jira/browse/HBASE-12836
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.10
>Reporter: Dima Spivak
> Attachments: HBASE-12836.patch, HBASE-12836_copy_table.patch, 
> HBASE-12836_copy_table_v2.patch, HBASE-12836_v2.patch
>
>
> In working on HBASE-12808, I ran a scan between the 0.98.0 tag and the tip of 
> branch 0.98 and found a handful of binary incompatibilities that are probably 
> worth addressing:
> - org.apache.hadoop.hbase.security.access.AccessControlClient.grant and 
> org.apache.hadoop.hbase.security.access.AccessControlClient.revoke had their 
> return types and parameter lists changed in HBASE-12161. cc: [~srikanth235] 
> and [~mbertozzi].
> - org.apache.hadoop.hbase.mapreduce.CopyTable.createSubmittableJob is no 
> longer static and its parameter list changed in HBASE-11997. cc: 
> [~daviddengcn] and [~tedyu].
> - getBlockSize was added to the org.apache.hadoop.hbase.io.crypto.Encryptor 
> interface in HBASE-11446, which may lead to an AbstractMethodError exception 
> in a 0.98.0 client that doesn't have this implemented. I suspect this one is 
> worth living with? cc: [~apurtell].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12847) TestZKLessSplitOnCluster frequently times out in 0.98 builds

2015-01-13 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated HBASE-12847:

Fix Version/s: 1.1.0
   2.0.0
   1.0.0
 Assignee: Rajeshbabu Chintaguntla
   Status: Patch Available  (was: Open)

> TestZKLessSplitOnCluster frequently times out in 0.98 builds
> 
>
> Key: HBASE-12847
> URL: https://issues.apache.org/jira/browse/HBASE-12847
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12847.patch, test.log.bad.gz, test.log.good.gz
>
>
> Gets hung up in testSSHCleanupDaugtherRegionsOfAbortedSplit waiting on 
> deleteTable
> {noformat}
> "Thread-334" prio=10 tid=0x7f15382da800 nid=0x40ae in Object.wait() [0x7f1315f5d000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0x0007e1b525b8> (a org.apache.hadoop.hbase.ipc.RpcClient$Call)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1452)
> - locked <0x0007e1b525b8> (a org.apache.hadoop.hbase.ipc.RpcClient$Call)
> at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
> at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
> at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.disableTable(MasterProtos.java:43749)
> at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$5.disableTable(HConnectionManager.java:1995)
> at org.apache.hadoop.hbase.client.HBaseAdmin$5.call(HBaseAdmin.java:947)
> at org.apache.hadoop.hbase.client.HBaseAdmin$5.call(HBaseAdmin.java:942)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:117)
> - locked <0x0007dfffe938> (a org.apache.hadoop.hbase.client.RpcRetryingCaller)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:93)
> - locked <0x0007dfffe938> (a org.apache.hadoop.hbase.client.RpcRetryingCaller)
> at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3398)
> at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:942)
> at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:974)
> at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1532)
> at org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster.testSSHCleanupDaugtherRegionsOfAbortedSplit(TestSplitTransactionOnCluster.java:1172)
> {noformat}
> See attached test.log.good.gz and test.log.bad.gz.
> In test.log.bad, at 2015-01-13 08:02:45,947, we acquire a lock on 
> testSSHCleanupDaugtherRegionsOfAbortedSplit to start a disable but do nothing 
> afterward. Nothing happens for one minute. Then it looks like the client makes 
> another request, but it can't get the table lock. There's no progress until 
> timeout.
> {noformat}
> 2015-01-13 08:02:45,947 INFO  [Thread-334] client.HBaseAdmin$5(945): Started disable of testSSHCleanupDaugtherRegionsOfAbortedSplit
> 2015-01-13 08:02:45,948 INFO  [FifoRpcScheduler.handler1-thread-4] master.HMaster(2213): Client=apurtell//10.40.8.95 disable testSSHCleanupDaugtherRegionsOfAbortedSplit
> 2015-01-13 08:02:45,950 DEBUG [FifoRpcScheduler.handler1-thread-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /hbase/table-lock/testSSHCleanupDaugtherRegionsOfAbortedSplit/write-master:3677401
> 2015-01-13 08:03:44,585 DEBUG [ip-10-40-8-95.us-west-2.compute.internal,36774,1421136162827-BalancerChore] master.HMaster(1553): Not running balancer because 3 region(s) in transition: {e4f79ac2b4711f7d906a291d94302d6e={e4f79ac2b4711f7d906a291d94302d6e state=SPLITTING, ts=1421136165837, server=ip-10-40-8-95.us-west-2.compute.internal,47769,1421136163047}, ebbcf66ec09960d42a2a49252599bb6d={ebbcf66ec09960d42a2a49252599bb6d state=SPLITTI...
> 2015-01-13 08:03:46,210 INFO  [Thread-334] client.HBaseAdmin$5(945): Started disable of testSSHCleanupDaugtherRegionsOfAbortedSplit
> 2015-01-13 08:03:46,211 INFO  [FifoRpcScheduler.handler1-thread-3] master.HMaster(2213): Client=apurtell//10.40.8.95 disable testSSHCleanupDaugtherRegionsOfAbortedSplit
> 2015-01-13 08:03:46,214 DEBUG [FifoRpcScheduler.handler1-thread-3] master.TableLockManager$ZKTableLockManager$1(242): Table is locked by [t

[jira] [Updated] (HBASE-12847) TestZKLessSplitOnCluster frequently times out in 0.98 builds

2015-01-13 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated HBASE-12847:

Attachment: HBASE-12847.patch

Here is the patch: it kills the RS holding the region whose split failed. This 
change makes the test pass consistently.

> TestZKLessSplitOnCluster frequently times out in 0.98 builds
> 
>
> Key: HBASE-12847
> URL: https://issues.apache.org/jira/browse/HBASE-12847
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
> Fix For: 0.98.10
>
> Attachments: HBASE-12847.patch, test.log.bad.gz, test.log.good.gz
>
>
> Gets hung up in testSSHCleanupDaugtherRegionsOfAbortedSplit waiting on 
> deleteTable
> {noformat}
> "Thread-334" prio=10 tid=0x7f15382da800 nid=0x40ae in Object.wait() [0x7f1315f5d000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0x0007e1b525b8> (a org.apache.hadoop.hbase.ipc.RpcClient$Call)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1452)
> - locked <0x0007e1b525b8> (a org.apache.hadoop.hbase.ipc.RpcClient$Call)
> at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
> at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
> at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.disableTable(MasterProtos.java:43749)
> at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$5.disableTable(HConnectionManager.java:1995)
> at org.apache.hadoop.hbase.client.HBaseAdmin$5.call(HBaseAdmin.java:947)
> at org.apache.hadoop.hbase.client.HBaseAdmin$5.call(HBaseAdmin.java:942)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:117)
> - locked <0x0007dfffe938> (a org.apache.hadoop.hbase.client.RpcRetryingCaller)
> at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:93)
> - locked <0x0007dfffe938> (a org.apache.hadoop.hbase.client.RpcRetryingCaller)
> at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3398)
> at org.apache.hadoop.hbase.client.HBaseAdmin.disableTableAsync(HBaseAdmin.java:942)
> at org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:974)
> at org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1532)
> at org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster.testSSHCleanupDaugtherRegionsOfAbortedSplit(TestSplitTransactionOnCluster.java:1172)
> {noformat}
> See attached test.log.good.gz and test.log.bad.gz.
> In test.log.bad, at 2015-01-13 08:02:45,947, we acquire a lock on 
> testSSHCleanupDaugtherRegionsOfAbortedSplit to start a disable but do nothing 
> afterward. Nothing happens for one minute. Then it looks like the client makes 
> another request, but it can't get the table lock. There's no progress until 
> timeout.
> {noformat}
> 2015-01-13 08:02:45,947 INFO  [Thread-334] client.HBaseAdmin$5(945): Started disable of testSSHCleanupDaugtherRegionsOfAbortedSplit
> 2015-01-13 08:02:45,948 INFO  [FifoRpcScheduler.handler1-thread-4] master.HMaster(2213): Client=apurtell//10.40.8.95 disable testSSHCleanupDaugtherRegionsOfAbortedSplit
> 2015-01-13 08:02:45,950 DEBUG [FifoRpcScheduler.handler1-thread-4] lock.ZKInterProcessLockBase(226): Acquired a lock for /hbase/table-lock/testSSHCleanupDaugtherRegionsOfAbortedSplit/write-master:3677401
> 2015-01-13 08:03:44,585 DEBUG [ip-10-40-8-95.us-west-2.compute.internal,36774,1421136162827-BalancerChore] master.HMaster(1553): Not running balancer because 3 region(s) in transition: {e4f79ac2b4711f7d906a291d94302d6e={e4f79ac2b4711f7d906a291d94302d6e state=SPLITTING, ts=1421136165837, server=ip-10-40-8-95.us-west-2.compute.internal,47769,1421136163047}, ebbcf66ec09960d42a2a49252599bb6d={ebbcf66ec09960d42a2a49252599bb6d state=SPLITTI...
> 2015-01-13 08:03:46,210 INFO  [Thread-334] client.HBaseAdmin$5(945): Started disable of testSSHCleanupDaugtherRegionsOfAbortedSplit
> 2015-01-13 08:03:46,211 INFO  [FifoRpcScheduler.handler1-thread-3] master.HMaster(2213): Client=apurtell//10.40.8.95 disable testSSHCleanupDaugtherRegionsOfAbortedSplit
> 2015-01-13 08:03:46,214 DEBUG [FifoRpcScheduler.handler1-thread-3] master.TableLockManager$ZKTableLockManager$1(242): Table is locked by [tableName=default+testSSHCleanupDaugtherRegionsOfAbortedSplit, lockOwner=i

[jira] [Commented] (HBASE-12728) buffered writes substantially less useful after removal of HTablePool

2015-01-13 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276224#comment-14276224
 ] 

Enis Soztutar commented on HBASE-12728:
---

Great patch. 
BulkMutator looks good, except that I, too, would prefer BufferedMutator. Table 
already does bulk puts via the put(List) interface; this is more like 
buffered/async puts.

Shouldn't this take the BulkMutator interface instead of the implementation 
class?
{code}
public void onException(RetriesExhaustedWithDetailsException exception, 
HBulkMutator hBulkMutator)
{code}

BulkMutatorParameters -> BulkMutatorConfig(uration). We usually suffix these 
kinds of objects with Config (see TableConfiguration). Also, can we make the 
setXXX() methods builder-style? This class seems to exist only to pass args to 
the method. Do we really need it?

Why are we exposing Lock to users? Shouldn't this be a boolean for whether you 
want thread safety or not?

Table.close -> BulkMutator.close() below:
{code}
+   * The caller is responsible for calling {@link Table#close()} on the 
returned bulkMutator
{code}

HBulkMutator -> the H prefix is old school. Let's use an Impl suffix. 
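For context, the buffered-write pattern under discussion, using the BufferedMutator naming suggested above, could look roughly like this. The interface name and method signatures here follow the proposal in this thread and are assumptions, not a finalized API; the snippet also needs a running HBase cluster, so it is a sketch rather than a runnable test:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedWriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // The mutator owns a long-lived write buffer, decoupled from any
    // short-lived Table instance, so puts batch up until flush/close.
    try (Connection connection = ConnectionFactory.createConnection(conf);
         BufferedMutator mutator =
             connection.getBufferedMutator(TableName.valueOf("my_table"))) {
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      mutator.mutate(put);   // buffered, not an immediate RPC
      mutator.flush();       // explicit flush; close() also flushes
    }
  }
}
```

The key design point is that buffering lives with the mutator's lifecycle, not with each per-operation Table, which addresses the issue described in this JIRA.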



> buffered writes substantially less useful after removal of HTablePool
> -
>
> Key: HBASE-12728
> URL: https://issues.apache.org/jira/browse/HBASE-12728
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 0.98.0
>Reporter: Aaron Beppu
>Assignee: Solomon Duskis
>Priority: Blocker
> Fix For: 1.0.0, 2.0.0, 1.1.0
>
> Attachments: 12728.connection-owns-buffers.example.branch-1.0.patch, 
> HBASE-12728.patch, bulk-mutator.patch
>
>
> In previous versions of HBase, when use of HTablePool was encouraged, HTable 
> instances were long-lived in that pool, and for that reason, if autoFlush was 
> set to false, the table instance could accumulate a full buffer of writes 
> before a flush was triggered. Writes from the client to the cluster could 
> then be substantially larger and less frequent than without buffering.
> However, when HTablePool was deprecated, the primary justification seems to 
> have been that creating HTable instances is cheap, so long as the connection 
> and executor service being passed to it are pre-provided. A use pattern was 
> encouraged where users should create a new HTable instance for every 
> operation, using an existing connection and executor service, and then close 
> the table. In this pattern, buffered writes are substantially less useful; 
> writes are as small and as frequent as they would have been with 
> autoflush=true, except the synchronous write is moved from the operation 
> itself to the table close call which immediately follows.
> More concretely :
> ```
> // Given these two helpers ...
> private HTableInterface getAutoFlushTable(String tableName) throws 
> IOException {
>   // (autoflush is true by default)
>   return storedConnection.getTable(tableName, executorService);
> }
> private HTableInterface getBufferedTable(String tableName) throws IOException 
> {
>   HTableInterface table = getAutoFlushTable(tableName);
>   table.setAutoFlush(false);
>   return table;
> }
> // It's my contention that these two methods would behave almost identically,
> // except the first will hit a synchronous flush during the put call, and the
> // second will flush during the (hidden) close call on table.
> private void writeAutoFlushed(Put somePut) throws IOException {
>   try (HTableInterface table = getAutoFlushTable(tableName)) {
> table.put(somePut); // will do synchronous flush
>   }
> }
> private void writeBuffered(Put somePut) throws IOException {
>   try (HTableInterface table = getBufferedTable(tableName)) {
> table.put(somePut);
>   } // auto-close will trigger synchronous flush
> }
> ```
> For buffered writes to actually provide a performance benefit to users, one 
> of two things must happen:
> - The writeBuffer itself shouldn't live, flush, and die with the lifecycle of 
> its HTable instance. If the writeBuffer were managed elsewhere and had a long 
> lifespan, this could cease to be an issue. However, if the same writeBuffer 
> is appended to by multiple tables, then some additional concurrency control 
> will be needed around it.
> - Alternatively, there should be some pattern for having long-lived HTable 
> instances. However, since HTable is not thread-safe, we'd need multiple 
> instances, and a mechanism for leasing them out safely -- which sure sounds a 
> lot like the old HTablePool to me.
> See discussion on mailing list here : 
> http://mail-archives.apache.org/mod_mbox/hbase-user/201412.mbox/%3CCAPdJLkEzmUQZ_kvD%3D8mrxi4V%3DhCmUp3g9MUZsddD%2Bmon%2BAvNtg%40mail.gmail.com%3E
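The first option above (a long-lived writeBuffer with its own concurrency control, shared by short-lived table handles) can be sketched in a self-contained way. SharedWriteBuffer and its methods are hypothetical illustrations of the idea, not HBase API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical shared write buffer: many short-lived "table" handles append
// to one long-lived buffer, which flushes once enough writes accumulate.
// The synchronized methods are the extra concurrency control the email
// points out would be needed once multiple tables share one buffer.
public class SharedWriteBuffer {
    private final List<String> pending = new ArrayList<>();
    private final int flushThreshold;
    private int flushCount = 0;

    public SharedWriteBuffer(int flushThreshold) {
        this.flushThreshold = flushThreshold;
    }

    // Called by any table handle; guarded so concurrent appends are safe.
    public synchronized void append(String mutation) {
        pending.add(mutation);
        if (pending.size() >= flushThreshold) {
            flush();
        }
    }

    // In a real client this would issue one batched RPC to the cluster.
    public synchronized void flush() {
        if (!pending.isEmpty()) {
            pending.clear();
            flushCount++;
        }
    }

    public synchronized int getFlushCount() {
        return flushCount;
    }

    public synchronized int getPendingSize() {
        return pending.size();
    }
}
```

Because the buffer outlives any single table handle, writes batch up across operations instead of flushing on every table close.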



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12836) Tip of branch 0.98 has some binary incompatibilities with 0.98.0

2015-01-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276217#comment-14276217
 ] 

Andrew Purtell commented on HBASE-12836:


+1

> Tip of branch 0.98 has some binary incompatibilities with 0.98.0
> 
>
> Key: HBASE-12836
> URL: https://issues.apache.org/jira/browse/HBASE-12836
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.10
>Reporter: Dima Spivak
> Attachments: HBASE-12836.patch, HBASE-12836_copy_table.patch, 
> HBASE-12836_copy_table_v2.patch, HBASE-12836_v2.patch
>
>
> In working on HBASE-12808, I ran a scan between the 0.98.0 tag and the tip of 
> branch 0.98 and found a handful of binary incompatibilities that are probably 
> worth addressing:
> - org.apache.hadoop.hbase.security.access.AccessControlClient.grant and 
> org.apache.hadoop.hbase.security.access.AccessControlClient.revoke had their 
> return types and parameter lists changed in HBASE-12161. cc: [~srikanth235] 
> and [~mbertozzi].
> - org.apache.hadoop.hbase.mapreduce.CopyTable.createSubmittableJob is no 
> longer static and its parameter list changed in HBASE-11997. cc: 
> [~daviddengcn] and [~tedyu].
> - getBlockSize was added to the org.apache.hadoop.hbase.io.crypto.Encryptor 
> interface in HBASE-11446, which may lead to an AbstractMethodError exception 
> in a 0.98.0 client that doesn't have this implemented. I suspect this one is 
> worth living with? cc: [~apurtell].





[jira] [Updated] (HBASE-12836) Tip of branch 0.98 has some binary incompatibilities with 0.98.0

2015-01-13 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-12836:

Attachment: HBASE-12836_copy_table_v2.patch

> Tip of branch 0.98 has some binary incompatibilities with 0.98.0
> 
>
> Key: HBASE-12836
> URL: https://issues.apache.org/jira/browse/HBASE-12836
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.10
>Reporter: Dima Spivak
> Attachments: HBASE-12836.patch, HBASE-12836_copy_table.patch, 
> HBASE-12836_copy_table_v2.patch, HBASE-12836_v2.patch
>
>
> In working on HBASE-12808, I ran a scan between the 0.98.0 tag and the tip of 
> branch 0.98 and found a handful of binary incompatibilities that are probably 
> worth addressing:
> - org.apache.hadoop.hbase.security.access.AccessControlClient.grant and 
> org.apache.hadoop.hbase.security.access.AccessControlClient.revoke had their 
> return types and parameter lists changed in HBASE-12161. cc: [~srikanth235] 
> and [~mbertozzi].
> - org.apache.hadoop.hbase.mapreduce.CopyTable.createSubmittableJob is no 
> longer static and its parameter list changed in HBASE-11997. cc: 
> [~daviddengcn] and [~tedyu].
> - getBlockSize was added to the org.apache.hadoop.hbase.io.crypto.Encryptor 
> interface in HBASE-11446, which may lead to an AbstractMethodError exception 
> in a 0.98.0 client that doesn't have this implemented. I suspect this one is 
> worth living with? cc: [~apurtell].





[jira] [Updated] (HBASE-12836) Tip of branch 0.98 has some binary incompatibilities with 0.98.0

2015-01-13 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-12836:

Attachment: HBASE-12836_copy_table_v2.patch

How about the attached v2? (Credit for the variable naming and comment 
suggestions goes to [~dimaspivak].)

> Tip of branch 0.98 has some binary incompatibilities with 0.98.0
> 
>
> Key: HBASE-12836
> URL: https://issues.apache.org/jira/browse/HBASE-12836
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.10
>Reporter: Dima Spivak
> Attachments: HBASE-12836.patch, HBASE-12836_copy_table.patch, 
> HBASE-12836_v2.patch
>
>
> In working on HBASE-12808, I ran a scan between the 0.98.0 tag and the tip of 
> branch 0.98 and found a handful of binary incompatibilities that are probably 
> worth addressing:
> - org.apache.hadoop.hbase.security.access.AccessControlClient.grant and 
> org.apache.hadoop.hbase.security.access.AccessControlClient.revoke had their 
> return types and parameter lists changed in HBASE-12161. cc: [~srikanth235] 
> and [~mbertozzi].
> - org.apache.hadoop.hbase.mapreduce.CopyTable.createSubmittableJob is no 
> longer static and its parameter list changed in HBASE-11997. cc: 
> [~daviddengcn] and [~tedyu].
> - getBlockSize was added to the org.apache.hadoop.hbase.io.crypto.Encryptor 
> interface in HBASE-11446, which may lead to an AbstractMethodError exception 
> in a 0.98.0 client that doesn't have this implemented. I suspect this one is 
> worth living with? cc: [~apurtell].





[jira] [Updated] (HBASE-12836) Tip of branch 0.98 has some binary incompatibilities with 0.98.0

2015-01-13 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-12836:

Attachment: (was: HBASE-12836_copy_table_v2.patch)

> Tip of branch 0.98 has some binary incompatibilities with 0.98.0
> 
>
> Key: HBASE-12836
> URL: https://issues.apache.org/jira/browse/HBASE-12836
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.10
>Reporter: Dima Spivak
> Attachments: HBASE-12836.patch, HBASE-12836_copy_table.patch, 
> HBASE-12836_v2.patch
>
>
> In working on HBASE-12808, I ran a scan between the 0.98.0 tag and the tip of 
> branch 0.98 and found a handful of binary incompatibilities that are probably 
> worth addressing:
> - org.apache.hadoop.hbase.security.access.AccessControlClient.grant and 
> org.apache.hadoop.hbase.security.access.AccessControlClient.revoke had their 
> return types and parameter lists changed in HBASE-12161. cc: [~srikanth235] 
> and [~mbertozzi].
> - org.apache.hadoop.hbase.mapreduce.CopyTable.createSubmittableJob is no 
> longer static and its parameter list changed in HBASE-11997. cc: 
> [~daviddengcn] and [~tedyu].
> - getBlockSize was added to the org.apache.hadoop.hbase.io.crypto.Encryptor 
> interface in HBASE-11446, which may lead to an AbstractMethodError exception 
> in a 0.98.0 client that doesn't have this implemented. I suspect this one is 
> worth living with? cc: [~apurtell].





[jira] [Created] (HBASE-12851) AccessController tests were killed on ASF Jenkins

2015-01-13 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-12851:
--

 Summary: AccessController tests were killed on ASF Jenkins
 Key: HBASE-12851
 URL: https://issues.apache.org/jira/browse/HBASE-12851
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
 Fix For: 0.98.10


For example, in https://builds.apache.org/job/HBase-0.98/794:
{noformat}
Running org.apache.hadoop.hbase.security.access.TestCellACLs
Running org.apache.hadoop.hbase.security.access.TestAccessController
Killed
Killed
{noformat}
Watching the surefire forked runner for each of these, TestAccessController at 
peak uses 321 threads and 675 MB of heap, and TestCellACLs uses 227 threads and 
557 MB of heap. I couldn't figure out how to launch both in the same JVM 
(which was the case on Jenkins). The surefire runner might have been killed by 
the OOM killer or a segfault. 

We've already started breaking out access controller tests from 
TestAccessController into TestAccessController2; we could do more of this. 

I'll keep this issue open for a while to track.





[jira] [Commented] (HBASE-12833) [shell] table.rb leaks connections

2015-01-13 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276204#comment-14276204
 ] 

Nick Dimiduk commented on HBASE-12833:
--

bq. the interface changes and the managed/unmanaged connection issues are 
separate concerns

Agreed.

I also think that the implementation details of the shell *should not* be 
considered a part of our public API contract. The point remains that this is 
code we've shipped that folks may be relying on. I asked the question on dev@ 
yesterday. If no one chimes in, I think we can go forward with the close and 
initialize changes for branch-1.0+ (so long as the RM is in agreement).

bq. Do you see any advantages to keeping a mix of managed and unmanaged 
connections around?

Not long term, no. However, we will continue to support both for the 1.x 
series as a backward-compatibility consideration.

> [shell] table.rb leaks connections
> --
>
> Key: HBASE-12833
> URL: https://issues.apache.org/jira/browse/HBASE-12833
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 1.0.0, 2.0.0, 1.1.0
>Reporter: Nick Dimiduk
>Assignee: Solomon Duskis
> Fix For: 1.0.0, 2.0.0, 1.1.0
>
> Attachments: HBASE-12833.patch
>
>
> TestShell is erring out (timeout) consistently for me. The culprit is an OOM: 
> cannot create native thread. It looks to me like test_table.rb and 
> hbase/table.rb are built in a way that leaks connections: table calls 
> ConnectionFactory.createConnection() for every table but provides no close() 
> method to clean it up, and test_table creates a new table with every test.
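The leak pattern described above can be illustrated with a self-contained sketch (all class names here are stand-ins for illustration, not the shell's actual Ruby code or HBase API): each table opens its own connection, and the fix is to give the table a close() that releases it, then use try-with-resources.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Self-contained illustration of the leak: a table wrapper that opens a
// "connection" per table would never release it without a close() method.
public class LeakDemo {
    static final AtomicInteger OPEN_CONNECTIONS = new AtomicInteger();

    // Stand-in for ConnectionFactory.createConnection(): counts opens/closes.
    public static class Connection implements AutoCloseable {
        Connection() { OPEN_CONNECTIONS.incrementAndGet(); }
        @Override public void close() { OPEN_CONNECTIONS.decrementAndGet(); }
    }

    // Mirrors the table wrapper: each table grabs its own connection.
    public static class Table implements AutoCloseable {
        private final Connection conn = new Connection();
        // The missing piece in the leaky version: propagate close().
        @Override public void close() { conn.close(); }
    }

    public static void main(String[] args) {
        // With close() wired through, try-with-resources releases everything,
        // even when a new table is created for every operation.
        for (int i = 0; i < 100; i++) {
            try (Table t = new Table()) {
                // ... use the table ...
            }
        }
        System.out.println("open connections: " + OPEN_CONNECTIONS.get());
    }
}
```

Without the close() method, the counter would end at 100 instead of 0, which is exactly the native-thread/OOM buildup the test run hit.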





[jira] [Commented] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276197#comment-14276197
 ] 

Sean Busbey commented on HBASE-12848:
-

+1. One nit left; fixing on push is fine by me.

{code}
+  /*
+   * Sets storage policy for given path according to config setting
+   * @param fs
+   * @param conf
+   * @param path the Path whose storage policy is to be set
+   * @param policyKey
+   * @param defaultPolicy
+   */
{code}

nit: leave off the @param javadocs for everything except path since they don't 
add any information.

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12848-v1.patch, 12848-v2.patch, 12848-v3.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash, e.g. 2 flash drives and 4 traditional 
> drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.





[jira] [Updated] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12848:
---
Attachment: 12848-v3.patch

Patch v3 for review.

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12848-v1.patch, 12848-v2.patch, 12848-v3.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash, e.g. 2 flash drives and 4 traditional 
> drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.





[jira] [Updated] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12848:
---
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12848-v1.patch, 12848-v2.patch, 12848-v3.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash, e.g. 2 flash drives and 4 traditional 
> drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.





[jira] [Commented] (HBASE-12836) Tip of branch 0.98 has some binary incompatibilities with 0.98.0

2015-01-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276172#comment-14276172
 ] 

Andrew Purtell commented on HBASE-12836:


Minor nit:
{code}
@@ -69,9 +69,167 @@ public class CopyTable extends Configured implements Tool {
 
   private final static String JOB_NAME_CONF_KEY = "mapreduce.job.name";
 
+  static long deprecatedStartTime = 0;
+  static long deprecatedEndTime = 0;
+  static int deprecatedVersions = -1;
+  static String deprecatedTableName = null;
+  static String deprecatedStartRow = null;
+  static String deprecatedStopRow = null;
+  static String deprecatedNewTableName = null;
+  static String deprecatedPeerAddress = null;
+  static String deprecatedFamilies = null;
+  static boolean deprecatedAllCells = false;
{code}
Rather than name these deprecated$FOO, can these be annotated?

> Tip of branch 0.98 has some binary incompatibilities with 0.98.0
> 
>
> Key: HBASE-12836
> URL: https://issues.apache.org/jira/browse/HBASE-12836
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.10
>Reporter: Dima Spivak
> Attachments: HBASE-12836.patch, HBASE-12836_copy_table.patch, 
> HBASE-12836_v2.patch
>
>
> In working on HBASE-12808, I ran a scan between the 0.98.0 tag and the tip of 
> branch 0.98 and found a handful of binary incompatibilities that are probably 
> worth addressing:
> - org.apache.hadoop.hbase.security.access.AccessControlClient.grant and 
> org.apache.hadoop.hbase.security.access.AccessControlClient.revoke had their 
> return types and parameter lists changed in HBASE-12161. cc: [~srikanth235] 
> and [~mbertozzi].
> - org.apache.hadoop.hbase.mapreduce.CopyTable.createSubmittableJob is no 
> longer static and its parameter list changed in HBASE-11997. cc: 
> [~daviddengcn] and [~tedyu].
> - getBlockSize was added to the org.apache.hadoop.hbase.io.crypto.Encryptor 
> interface in HBASE-11446, which may lead to an AbstractMethodError exception 
> in a 0.98.0 client that doesn't have this implemented. I suspect this one is 
> worth living with? cc: [~apurtell].





[jira] [Updated] (HBASE-12836) Tip of branch 0.98 has some binary incompatibilities with 0.98.0

2015-01-13 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-12836:

Attachment: HBASE-12836_copy_table.patch

Had an offline chat with [~dimaspivak]. The suggestion is to introduce static 
counterparts of the class variables and restore the old method. Attaching a 
patch for the same.

> Tip of branch 0.98 has some binary incompatibilities with 0.98.0
> 
>
> Key: HBASE-12836
> URL: https://issues.apache.org/jira/browse/HBASE-12836
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.10
>Reporter: Dima Spivak
> Attachments: HBASE-12836.patch, HBASE-12836_copy_table.patch, 
> HBASE-12836_v2.patch
>
>
> In working on HBASE-12808, I ran a scan between the 0.98.0 tag and the tip of 
> branch 0.98 and found a handful of binary incompatibilities that are probably 
> worth addressing:
> - org.apache.hadoop.hbase.security.access.AccessControlClient.grant and 
> org.apache.hadoop.hbase.security.access.AccessControlClient.revoke had their 
> return types and parameter lists changed in HBASE-12161. cc: [~srikanth235] 
> and [~mbertozzi].
> - org.apache.hadoop.hbase.mapreduce.CopyTable.createSubmittableJob is no 
> longer static and its parameter list changed in HBASE-11997. cc: 
> [~daviddengcn] and [~tedyu].
> - getBlockSize was added to the org.apache.hadoop.hbase.io.crypto.Encryptor 
> interface in HBASE-11446, which may lead to an AbstractMethodError exception 
> in a 0.98.0 client that doesn't have this implemented. I suspect this one is 
> worth living with? cc: [~apurtell].





[jira] [Commented] (HBASE-11983) HRegion constructors should not create HLog

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276164#comment-14276164
 ] 

Hadoop QA commented on HBASE-11983:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12692072/HBASE-11983.00-branch-1.patch
  against master branch at commit 9b7f36b8cf521bcc01ac6476349a9d2f34be8bb3.
  ATTACHMENT ID: 12692072

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 177 
new or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12453//console

This message is automatically generated.

> HRegion constructors should not create HLog 
> 
>
> Key: HBASE-11983
> URL: https://issues.apache.org/jira/browse/HBASE-11983
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Enis Soztutar
>Assignee: Nick Dimiduk
>  Labels: beginner
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE-11983.00-branch-1.patch, HBASE-11983.00.patch, 
> HBASE-11983.01.patch, HBASE-11983.02.patch, HBASE-11983.03.patch, 
> HBASE-11983.03.patch, HBASE-11983.04.patch, HBASE-11983.05.patch
>
>
> We should get rid of HRegion creating its own HLog. It should ALWAYS get the 
> log from outside. 
> I think this was added for unit tests, but we should refrain from such 
> practice in the future (adding UT constructors always leads to weird and 
> critical bugs down the road). See recent: HBASE-11982, HBASE-11654. 
> Get rid of weird things like ignoreHLog:
> {code}
>   /**
>* @param ignoreHLog - true to skip generate new hlog if it is null, mostly 
> for createTable
>*/
>   public static HRegion createHRegion(final HRegionInfo info, final Path 
> rootDir,
>   final Configuration conf,
>   final HTableDescriptor hTableDescriptor,
>   final HLog hlog,
>   final boolean initialize, final boolean 
> ignoreHLog)
> {code}
> We can unify all the createXX and newXX methods and separate creating a 
> region in the file system vs opening a region. 





[jira] [Updated] (HBASE-11983) HRegion constructors should not create HLog

2015-01-13 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-11983:
-
Attachment: HBASE-11983.00-branch-1.patch

Here's the patch back ported to branch-1. SmallTests pass, running the full 
suite now.

> HRegion constructors should not create HLog 
> 
>
> Key: HBASE-11983
> URL: https://issues.apache.org/jira/browse/HBASE-11983
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Enis Soztutar
>Assignee: Nick Dimiduk
>  Labels: beginner
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE-11983.00-branch-1.patch, HBASE-11983.00.patch, 
> HBASE-11983.01.patch, HBASE-11983.02.patch, HBASE-11983.03.patch, 
> HBASE-11983.03.patch, HBASE-11983.04.patch, HBASE-11983.05.patch
>
>
> We should get rid of HRegion creating its own HLog. It should ALWAYS get the 
> log from outside. 
> I think this was added for unit tests, but we should refrain from such 
> practice in the future (adding UT constructors always leads to weird and 
> critical bugs down the road). See recent: HBASE-11982, HBASE-11654. 
> Get rid of weird things like ignoreHLog:
> {code}
>   /**
>* @param ignoreHLog - true to skip generate new hlog if it is null, mostly 
> for createTable
>*/
>   public static HRegion createHRegion(final HRegionInfo info, final Path 
> rootDir,
>   final Configuration conf,
>   final HTableDescriptor hTableDescriptor,
>   final HLog hlog,
>   final boolean initialize, final boolean 
> ignoreHLog)
> {code}
> We can unify all the createXX and newXX methods and separate creating a 
> region in the file system vs opening a region. 





[jira] [Commented] (HBASE-11983) HRegion constructors should not create HLog

2015-01-13 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276152#comment-14276152
 ] 

Nick Dimiduk commented on HBASE-11983:
--

Checkstyle worked out, so I pushed to master to avoid further bit-rot. Thanks 
for the reviews, folks.

> HRegion constructors should not create HLog 
> 
>
> Key: HBASE-11983
> URL: https://issues.apache.org/jira/browse/HBASE-11983
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Enis Soztutar
>Assignee: Nick Dimiduk
>  Labels: beginner
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE-11983.00.patch, HBASE-11983.01.patch, 
> HBASE-11983.02.patch, HBASE-11983.03.patch, HBASE-11983.03.patch, 
> HBASE-11983.04.patch, HBASE-11983.05.patch
>
>
> We should get rid of HRegion creating its own HLog. It should ALWAYS get the 
> log from outside. 
> I think this was added for unit tests, but we should refrain from such 
> practice in the future (adding UT constructors always leads to weird and 
> critical bugs down the road). See recent: HBASE-11982, HBASE-11654. 
> Get rid of weird things like ignoreHLog:
> {code}
>   /**
>* @param ignoreHLog - true to skip generate new hlog if it is null, mostly 
> for createTable
>*/
>   public static HRegion createHRegion(final HRegionInfo info, final Path 
> rootDir,
>   final Configuration conf,
>   final HTableDescriptor hTableDescriptor,
>   final HLog hlog,
>   final boolean initialize, final boolean 
> ignoreHLog)
> {code}
> We can unify all the createXX and newXX methods and separate creating a 
> region in the file system vs opening a region. 





[jira] [Commented] (HBASE-12393) The regionserver web will throw exception if we disable block cache

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276149#comment-14276149
 ] 

Hadoop QA commented on HBASE-12393:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692066/HBASE-12393-v2.patch
  against master branch at commit 4ac457a7bc909cc92e0a1a0cab21ed0ce6bae893.
  ATTACHMENT ID: 12692066

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12452//console

This message is automatically generated.

> The regionserver web will throw exception if we disable block cache
> ---
>
> Key: HBASE-12393
> URL: https://issues.apache.org/jira/browse/HBASE-12393
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, UI
>Affects Versions: 0.98.7
> Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12393-v2.patch, HBASE-12393.patch
>
>
> CacheConfig.getBlockCache() will return null when hfile.block.cache.size is 
> set to zero, and BlockCacheTmpl.jamon doesn't check for a null block cache.
> {code}
> <%if cacheConfig == null %>
> CacheConfig is null
> <%else>
> 
> 
> Attribute
> Value
> Description
> 
> 
> Size
> <% 
> StringUtils.humanReadableInt(cacheConfig.getBlockCache().size()) %>
> Total size of Block Cache (bytes)
> 
> 
> Free
> <% 
> StringUtils.humanReadableInt(cacheConfig.getBlockCache().getFreeSize()) 
> %>
> Free space in Block Cache (bytes)
> 
> 
> Count
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getBlockCount()) %>
> Number of blocks in Block Cache
> 
> 
> Evicted
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getEvictedCount()) %>
> Number of blocks evicted
> 
> 
> Evictions
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getEvictionCount()) %>
> Number of times an eviction occurred
> 
> 
> Hits
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getHitCount()) %>
> Number requests that were cache hits
> 
> 
> Hits Caching
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getHitCachingCount()) %>
> Cache hit block requests but only requests set to use Block 
> Cache
> 
> 
> Misses
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getMissCount()) %>
> Number of requests that were cache misses
> 
> 
> Misses Caching
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getMissCount()) %>
> Block requests that were cache misses but only requests set to 
> use Block Cache
> 
> 
> Hit Ratio
> <% String.format("%,.2f", 
> cacheConfig.getBlockCache().getStats().getHitRatio() * 100) %><% "%" %>
> Hit Count divided by total requests count
> 
> {code}





[jira] [Updated] (HBASE-7541) Convert all tests that use HBaseTestingUtility.createMultiRegions to HBA.createTable

2015-01-13 Thread Jonathan Lawlor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Lawlor updated HBASE-7541:
---
Attachment: HBASE_7541_v2.txt

Re-attaching to trigger the build.

> Convert all tests that use HBaseTestingUtility.createMultiRegions to 
> HBA.createTable
> 
>
> Key: HBASE-7541
> URL: https://issues.apache.org/jira/browse/HBASE-7541
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>Assignee: Jonathan Lawlor
> Attachments: HBASE7541_patch_v1.txt, HBASE_7541_v2.txt, 
> HBASE_7541_v2.txt
>
>
> Like I discussed in HBASE-7534, {{HBaseTestingUtility.createMultiRegions}} 
> should disappear and not come back. There's about 25 different places in the 
> code that rely on it that need to be changed the same way I changed 
> TestReplication.
> Perfect for someone that wants to get started with HBase dev :)





[jira] [Updated] (HBASE-12393) The regionserver web will throw exception if we disable block cache

2015-01-13 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-12393:
--
Attachment: HBASE-12393-v2.patch

> The regionserver web will throw exception if we disable block cache
> ---
>
> Key: HBASE-12393
> URL: https://issues.apache.org/jira/browse/HBASE-12393
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, UI
>Affects Versions: 0.98.7
> Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12393-v2.patch, HBASE-12393.patch
>
>
> CacheConfig.getBlockCache() will return null when hfile.block.cache.size is 
> set to zero, and BlockCacheTmpl.jamon doesn't check for a null block cache.
> {code}
> <%if cacheConfig == null %>
> CacheConfig is null
> <%else>
> 
> 
> Attribute
> Value
> Description
> 
> 
> Size
> <% 
> StringUtils.humanReadableInt(cacheConfig.getBlockCache().size()) %>
> Total size of Block Cache (bytes)
> 
> 
> Free
> <% 
> StringUtils.humanReadableInt(cacheConfig.getBlockCache().getFreeSize()) 
> %>
> Free space in Block Cache (bytes)
> 
> 
> Count
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getBlockCount()) %>
> Number of blocks in Block Cache
> 
> 
> Evicted
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getEvictedCount()) %>
> Number of blocks evicted
> 
> 
> Evictions
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getEvictionCount()) %>
> Number of times an eviction occurred
> 
> 
> Hits
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getHitCount()) %>
> Number requests that were cache hits
> 
> 
> Hits Caching
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getHitCachingCount()) %>
> Cache hit block requests but only requests set to use Block 
> Cache
> 
> 
> Misses
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getMissCount()) %>
> Number of requests that were cache misses
> 
> 
> Misses Caching
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getMissCount()) %>
> Block requests that were cache misses but only requests set to 
> use Block Cache
> 
> 
> Hit Ratio
> <% String.format("%,.2f", 
> cacheConfig.getBlockCache().getStats().getHitRatio() * 100) %><% "%" %>
> Hit Count divided by total requests count
> 
> {code}





[jira] [Commented] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276141#comment-14276141
 ] 

Sean Busbey commented on HBASE-12848:
-

DistributedFileSystem is imported in FSHLog and the reflection code is in 
FSUtils, so I don't think it's needed in FSHLog.

I think there will be a substantial amount of time where the storage policy for 
WALs will be different from the one for HFiles, so it's worth breaking things 
out. If you want to keep the Configuration lookup inside the FSUtils method, 
it could take a parameter name and a default to use.

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12848-v1.patch, 12848-v2.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash, e.g. 2 flash drives and 4 traditional 
> drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.





[jira] [Commented] (HBASE-12480) Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master failover

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276126#comment-14276126
 ] 

Hudson commented on HBASE-12480:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #757 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/757/])
HBASE-12480 Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master 
failover (virag: rev 3b4b1de3ca387a0b720bf4c61d8f5a9ba08da78f)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java


> Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master failover 
> ---
>
> Key: HBASE-12480
> URL: https://issues.apache.org/jira/browse/HBASE-12480
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Virag Kothari
>Assignee: Virag Kothari
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12480-0.98.patch, HBASE-12480-branch_1.patch, 
> HBASE-12480.patch, HBASE-12480_v2.patch
>
>
> For zk assignment, we used to process these regions. For zk-less assignment, 
> we should do the same.





[jira] [Commented] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276117#comment-14276117
 ] 

Ted Yu commented on HBASE-12848:


Thanks for the detailed review, Sean.

w.r.t. the import of DistributedFileSystem, it is needed - see the reflection code.
This is what I am thinking: I started this patch targeting the WAL, but storage 
policy is an HDFS concept which may later be used by other components.
How about removing WAL_ from the constants' names?

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12848-v1.patch, 12848-v2.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash, e.g. 2 flash drives and 4 traditional 
> drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.





[jira] [Updated] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12848:
---
Status: Open  (was: Patch Available)

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12848-v1.patch, 12848-v2.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash, e.g. 2 flash drives and 4 traditional 
> drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.





[jira] [Updated] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12848:
---
Fix Version/s: 1.1.0
   2.0.0
   Status: Patch Available  (was: Open)

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12848-v1.patch, 12848-v2.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash, e.g. 2 flash drives and 4 traditional 
> drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.





[jira] [Commented] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276111#comment-14276111
 ] 

Sean Busbey commented on HBASE-12848:
-

{code}
+  /** Configuration name of HLog storage policy */
{code}

Should be "WAL storage policy" and not HLog.

{code}
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
index 1fad93d..62ef364 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
@@ -82,6 +82,7 @@ import org.apache.hadoop.hbase.wal.WALKey;
 import org.apache.hadoop.hbase.wal.WALPrettyPrinter;
 import org.apache.hadoop.hbase.wal.WALProvider.Writer;
 import org.apache.hadoop.hbase.wal.WALSplitter;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
{code}

nit: this import is no longer needed in FSHLog.

{code}
+   * @param fs the FileSystem
+   * @param conf the Configuration
{code}

nit: just leave out these javadocs since they aren't adding any info.

{code}
+String storagePolicy = conf.get(HConstants.WAL_STORAGE_POLICY,
+  HConstants.DEFAULT_WAL_STORAGE_POLICY).toUpperCase();
+if (!storagePolicy.equals(HConstants.DEFAULT_WAL_STORAGE_POLICY) &&
+fs instanceof DistributedFileSystem) {
{code}

Pull this part into FSHLog and make the FSUtils method just take a storage 
policy as a param. That will allow the method to be reused as-is for non-WAL 
paths.

{code}
+String storagePolicy = conf.get(HConstants.WAL_STORAGE_POLICY,
+  HConstants.DEFAULT_WAL_STORAGE_POLICY).toUpperCase();
{code}

nit: should be indented 4 spaces for line continuation.

{code}
+m = dfsClass.getDeclaredMethod("setStoragePolicy",
+  new Class[] { Path.class, String.class });
{code}

nit: should be indented 4 spaces for line continuation.

{code}
+  LOG.info("setting " + storagePolicy + " for " + path);
{code}

nit: use "set" instead of "setting" so it's clear the action succeeded at this 
point.
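Taken together, these review comments amount to a small reflection helper that accepts an already-resolved policy string, leaving the Configuration lookup to the caller. A hedged sketch of that shape (names are illustrative; the real DistributedFileSystem method takes a Path, and the surrounding HBase code differs):

```java
import java.lang.reflect.Method;

public class StoragePolicyDemo {
  /** Stand-in for org.apache.hadoop.hdfs.DistributedFileSystem (illustrative). */
  static class FakeDfs {
    String lastPolicy;
    public void setStoragePolicy(String path, String policy) {
      this.lastPolicy = policy;
    }
  }

  /**
   * Looks up setStoragePolicy by name so the caller still compiles and runs
   * against Hadoop versions that predate HDFS-7228. The policy string is a
   * parameter, which keeps the Configuration lookup in the caller (e.g.
   * FSHLog) and lets the helper be reused as-is for non-WAL paths.
   */
  static boolean applyPolicy(Object fs, String path, String policy) {
    try {
      Method m = fs.getClass().getDeclaredMethod(
          "setStoragePolicy", String.class, String.class);
      m.invoke(fs, path, policy);
      // Per the review nit: log "set", not "setting", once the call succeeded.
      System.out.println("set " + policy + " for " + path);
      return true;
    } catch (NoSuchMethodException e) {
      return false;  // method absent on this FileSystem: quietly skip
    } catch (ReflectiveOperationException e) {
      System.out.println("unable to set storage policy: " + e);
      return false;
    }
  }

  public static void main(String[] args) {
    applyPolicy(new FakeDfs(), "/hbase/WALs", "ONE_SSD");
  }
}
```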

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12848-v1.patch, 12848-v2.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash, e.g. 2 flash drives and 4 traditional 
> drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.





[jira] [Commented] (HBASE-11983) HRegion constructors should not create HLog

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276108#comment-14276108
 ] 

Hadoop QA commented on HBASE-11983:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692040/HBASE-11983.05.patch
  against master branch at commit 4ac457a7bc909cc92e0a1a0cab21ed0ce6bae893.
  ATTACHMENT ID: 12692040

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 154 
new or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at org.apache.hadoop.hdfs.TestPread.testMaxOutHedgedReadPool(TestPread.java:420)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12450//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12450//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12450//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12450//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12450//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12450//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12450//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12450//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12450//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12450//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12450//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12450//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12450//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12450//console

This message is automatically generated.

> HRegion constructors should not create HLog 
> 
>
> Key: HBASE-11983
> URL: https://issues.apache.org/jira/browse/HBASE-11983
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Enis Soztutar
>Assignee: Nick Dimiduk
>  Labels: beginner
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE-11983.00.patch, HBASE-11983.01.patch, 
> HBASE-11983.02.patch, HBASE-11983.03.patch, HBASE-11983.03.patch, 
> HBASE-11983.04.patch, HBASE-11983.05.patch
>
>
> We should get rid of HRegion creating its own HLog. It should ALWAYS get the 
> log from outside. 
> I think this was added for unit tests, but we should refrain from such 
> practice in the future (adding UT constructors always leads to weird and 
> critical bugs down the road). See recent: HBASE-11982, HBASE-11654. 
> Get rid of weird things like ignoreHLog:
> {code}
>   /**
>* @param ignoreHLog - true to skip generate new hlog if it is null, mostly 
> for createTable
>*/
>   public static HRegion createHRegion(final HRegionInfo info, final Path 
> rootDir,
>   final Configuration conf,
>

[jira] [Commented] (HBASE-12836) Tip of branch 0.98 has some binary incompatibilities with 0.98.0

2015-01-13 Thread Srikanth Srungarapu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276075#comment-14276075
 ] 

Srikanth Srungarapu commented on HBASE-12836:
-

[~mbertozzi] The grant/revoke methods in ProtobufUtil return void, whereas the 
older methods return GrantResponse/RevokeResponse. So I just stuck to bringing 
back the old methods as they were...

> Tip of branch 0.98 has some binary incompatibilities with 0.98.0
> 
>
> Key: HBASE-12836
> URL: https://issues.apache.org/jira/browse/HBASE-12836
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.10
>Reporter: Dima Spivak
> Attachments: HBASE-12836.patch, HBASE-12836_v2.patch
>
>
> In working on HBASE-12808, I ran a scan between the 0.98.0 tag and the tip of 
> branch 0.98 and found a handful of binary incompatibilities that are probably 
> worth addressing:
> - org.apache.hadoop.hbase.security.access.AccessControlClient.grant and 
> org.apache.hadoop.hbase.security.access.AccessControlClient.revoke had their 
> return types and parameter lists changed in HBASE-12161. cc: [~srikanth235] 
> and [~mbertozzi].
> - org.apache.hadoop.hbase.mapreduce.CopyTable.createSubmittableJob is no 
> longer static and its parameter list changed in HBASE-11997. cc: 
> [~daviddengcn] and [~tedyu].
> - getBlockSize was added to the org.apache.hadoop.hbase.io.crypto.Encryptor 
> interface in HBASE-11446, which may lead to an AbstractMethodError exception 
> in a 0.98.0 client that doesn't have this implemented. I suspect this one is 
> worth living with? cc: [~apurtell].





[jira] [Commented] (HBASE-12728) buffered writes substantially less useful after removal of HTablePool

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276074#comment-14276074
 ] 

Hadoop QA commented on HBASE-12728:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692039/HBASE-12728.patch
  against master branch at commit 4ac457a7bc909cc92e0a1a0cab21ed0ce6bae893.
  ATTACHMENT ID: 12692039

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 102 
new or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
13 warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
2080 checkstyle errors (more than the master's current 2075 errors).

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12449//console

This message is automatically generated.

> buffered writes substantially less useful after removal of HTablePool
> -
>
> Key: HBASE-12728
> URL: https://issues.apache.org/jira/browse/HBASE-12728
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 0.98.0
>Reporter: Aaron Beppu
>Assignee: Solomon Duskis
>Priority: Blocker
> Fix For: 1.0.0, 2.0.0, 1.1.0
>
> Attachments: 12728.connection-owns-buffers.example.branch-1.0.patch, 
> HBASE-12728.patch, bulk-mutator.patch
>
>
> In previous versions of HBase, when use of HTablePool was encouraged, HTable 
> instances were long-lived in that pool, and for that reason, if autoFlush was 
> set to false, the table instance could accumulate a full buffer of writes 
> before a flush was triggered. Writes from the client to the cluster could 
> then be substantially larger and less frequent than without buffering.
> However, when HTablePool was deprecated, the primary justification seems to 
> have been that creating HTable instances is cheap, so long as the connection 
> and executor service being passed to it are pre-provided. 
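The buffering behavior at stake can be pictured with a tiny sketch (illustrative only, not HBase's actual client API): a long-lived buffer that turns many small puts into a few large batched sends.

```java
import java.util.ArrayList;
import java.util.List;

public class WriteBufferDemo {
  private final List<String> buffer = new ArrayList<>();
  private final int flushThreshold;
  int flushes = 0;  // how many batched sends occurred

  WriteBufferDemo(int flushThreshold) {
    this.flushThreshold = flushThreshold;
  }

  /** Writes accumulate client-side until the buffer reaches the threshold. */
  void put(String mutation) {
    buffer.add(mutation);
    if (buffer.size() >= flushThreshold) {
      flush();
    }
  }

  /** One batched send; in HBase this would be one large multi-put RPC. */
  void flush() {
    if (!buffer.isEmpty()) {
      flushes++;
      buffer.clear();
    }
  }
}
```

With short-lived table instances, the buffer dies with each instance before it fills, so every write effectively becomes its own small send — which is the regression the issue describes.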

[jira] [Commented] (HBASE-8026) HBase Shell docs for scan command don't reference VERSIONS

2015-01-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276070#comment-14276070
 ] 

Sean Busbey commented on HBASE-8026:


1) Okay, that looks good.

2) yes please raise a follow on jira for the test if there isn't one.

> HBase Shell docs for scan command don't reference VERSIONS
> --
>
> Key: HBASE-8026
> URL: https://issues.apache.org/jira/browse/HBASE-8026
> Project: HBase
>  Issue Type: Bug
>Reporter: Jonathan Natkins
>Assignee: Amit Kabra
>  Labels: beginner
> Fix For: 0.98.8
>
> Attachments: HBASE-8026.patch
>
>
> hbase(main):046:0> help 'scan'
> Scan a table; pass table name and optionally a dictionary of scanner
> specifications.  Scanner specifications may include one or more of:
> TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, TIMESTAMP, MAXLENGTH,
> or COLUMNS, CACHE
> VERSIONS should be mentioned somewhere here.





[jira] [Updated] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12848:
---
Attachment: 12848-v2.patch

See if patch v2 is better.

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 12848-v1.patch, 12848-v2.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash, e.g. 2 flash drives and 4 traditional 
> drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.





[jira] [Commented] (HBASE-12480) Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master failover

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276056#comment-14276056
 ] 

Hudson commented on HBASE-12480:


FAILURE: Integrated in HBase-0.98 #794 (See 
[https://builds.apache.org/job/HBase-0.98/794/])
HBASE-12480 Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master 
failover (virag: rev 3b4b1de3ca387a0b720bf4c61d8f5a9ba08da78f)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java


> Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master failover 
> ---
>
> Key: HBASE-12480
> URL: https://issues.apache.org/jira/browse/HBASE-12480
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Virag Kothari
>Assignee: Virag Kothari
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12480-0.98.patch, HBASE-12480-branch_1.patch, 
> HBASE-12480.patch, HBASE-12480_v2.patch
>
>
> For zk assignment, we used to process these regions. For zk-less assignment, 
> we should do the same.





[jira] [Updated] (HBASE-10528) DefaultBalancer selects plans to move regions onto draining nodes

2015-01-13 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-10528:
---
Fix Version/s: 0.98.10

> DefaultBalancer selects plans to move regions onto draining nodes
> -
>
> Key: HBASE-10528
> URL: https://issues.apache.org/jira/browse/HBASE-10528
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.5
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: 10528-1.0.addendum, HBASE-10528-0.94.patch, 
> HBASE-10528-0.98.patch, HBASE-10528-0.98.v2.patch, HBASE-10528.patch, 
> HBASE-10528.v2.patch
>
>
> We have quite a large cluster (> 100k regions), and we needed to isolate a 
> region that was very hot until we could push a patch.  We put this region on 
> its own regionserver and set it in the draining state.  The default balancer 
> was still selecting regions to move to this server in its region plans.
> It just so happened that for other tables, the default load balancer was 
> creating plans for the draining servers, even though they were not available 
> to move regions to.  Thus we were closing regions, then attempting to move 
> them to the draining server, only to find out it was draining.
> We had to disable the balancer to resolve this issue.
> There are some approaches we can take here:
> 1. Exclude draining servers altogether; don't even pass them into the load 
> balancer from HMaster.
> 2. Exclude draining servers from the ceiling and floor calculations, where we 
> could potentially skip load balancing because those draining servers won't be 
> represented when deciding whether to balance.
> 3. Along with #2, when assigning regions, skip plans that assign regions to 
> those draining servers.
> I am in favor of #1, which simply removes servers as candidates for balancing 
> if they are in the draining state.
> But I would love to hear what everyone else thinks.
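Approach #1 can be pictured as a simple filter applied before any region plans are generated. A hedged sketch with illustrative names (not the actual LoadBalancer API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class DrainingFilterDemo {
  /**
   * Approach #1 in miniature: drop draining servers before the balancer
   * ever considers them, so no region plan can target one.
   */
  static List<String> candidateServers(List<String> online, Set<String> draining) {
    List<String> candidates = new ArrayList<>();
    for (String server : online) {
      if (!draining.contains(server)) {
        candidates.add(server);  // only non-draining servers receive plans
      }
    }
    return candidates;
  }
}
```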





[jira] [Commented] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276037#comment-14276037
 ] 

Sean Busbey commented on HBASE-12848:
-

The InterfaceAudience annotation allows down-grading member access, right? 
Maybe leave them in HConstants but mark them as LimitedPrivate(CONFIG)? Then 
again, we have other config properties present in the IA.Private parts of the 
WAL. They're just considered advanced and don't show up in config docs, largely 
because we assume we'll have a good default for many workloads. Maybe the same 
is true in this case?

Definitely don't put the reflection stuff in WALUtil. That's just for helpers 
for the region server to make use of the WAL. It's only supposed to touch the 
normal WAL interfaces; the dependency isn't meant to go the other way. How 
about hbase/util/FSUtils?

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 12848-v1.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash, e.g. 2 flash drives and 4 traditional 
> drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.





[jira] [Commented] (HBASE-12480) Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master failover

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276032#comment-14276032
 ] 

Hudson commented on HBASE-12480:


FAILURE: Integrated in HBase-1.0 #656 (See 
[https://builds.apache.org/job/HBase-1.0/656/])
HBASE-12480 Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master 
failover (virag: rev 4f78e07bc719ea2c7f207b225ede7d4198410a31)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java


> Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master failover 
> ---
>
> Key: HBASE-12480
> URL: https://issues.apache.org/jira/browse/HBASE-12480
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Virag Kothari
>Assignee: Virag Kothari
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12480-0.98.patch, HBASE-12480-branch_1.patch, 
> HBASE-12480.patch, HBASE-12480_v2.patch
>
>
> For zk assignment, we used to process these regions. For zk-less assignment, 
> we should do the same.





[jira] [Commented] (HBASE-12480) Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master failover

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276021#comment-14276021
 ] 

Hudson commented on HBASE-12480:


FAILURE: Integrated in HBase-1.1 #80 (See 
[https://builds.apache.org/job/HBase-1.1/80/])
HBASE-12480 Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master 
failover (virag: rev 4ff742742be53d8c6a08fb4ce37bd80f2988abac)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java


> Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master failover 
> ---
>
> Key: HBASE-12480
> URL: https://issues.apache.org/jira/browse/HBASE-12480
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Virag Kothari
>Assignee: Virag Kothari
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12480-0.98.patch, HBASE-12480-branch_1.patch, 
> HBASE-12480.patch, HBASE-12480_v2.patch
>
>
> For zk assignment, we used to process these regions. For zk-less assignment, 
> we should do the same.





[jira] [Commented] (HBASE-12480) Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master failover

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276014#comment-14276014
 ] 

Hudson commented on HBASE-12480:


SUCCESS: Integrated in HBase-TRUNK #6019 (See 
[https://builds.apache.org/job/HBase-TRUNK/6019/])
HBASE-12480 Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master 
failover (virag: rev 4ac457a7bc909cc92e0a1a0cab21ed0ce6bae893)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java


> Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master failover 
> ---
>
> Key: HBASE-12480
> URL: https://issues.apache.org/jira/browse/HBASE-12480
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Virag Kothari
>Assignee: Virag Kothari
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12480-0.98.patch, HBASE-12480-branch_1.patch, 
> HBASE-12480.patch, HBASE-12480_v2.patch
>
>
> For zk assignment, we used to process these regions. For zk-less assignment, 
> we should do the same.





[jira] [Commented] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276002#comment-14276002
 ] 

Ted Yu commented on HBASE-12848:


DefaultWALProvider is marked @InterfaceAudience.Private. I looked at other 
classes under hbase-server/src/main/java/org/apache/hadoop/hbase/wal, such as 
WALFactory.java - they're marked as Private as well.

Maybe put the constants in FSHLog.java?

w.r.t. the reflection code, how about adding a method in the WALUtil class?

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 12848-v1.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash, e.g. 2 flash drives and 4 traditional 
> drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.





[jira] [Commented] (HBASE-10528) DefaultBalancer selects plans to move regions onto draining nodes

2015-01-13 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275986#comment-14275986
 ] 

churro morales commented on HBASE-10528:


Hey guys, 

I ran the tests locally and things seem to pass. I rewrote the tests and code 
and will submit a 0.98 patch right now.

> DefaultBalancer selects plans to move regions onto draining nodes
> -
>
> Key: HBASE-10528
> URL: https://issues.apache.org/jira/browse/HBASE-10528
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.5
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 1.0.0, 2.0.0, 1.1.0
>
> Attachments: 10528-1.0.addendum, HBASE-10528-0.94.patch, 
> HBASE-10528-0.98.patch, HBASE-10528-0.98.v2.patch, HBASE-10528.patch, 
> HBASE-10528.v2.patch
>
>
> We have quite a large cluster (> 100k regions), and we needed to isolate a 
> region that was very hot until we could push a patch.  We put this region on 
> its own regionserver and set that server in the draining state.  The default 
> balancer was still selecting regions to move to this server in its region 
> plans.  
> It just so happened that for other tables, the default load balancer was 
> creating plans for the draining servers, even though they were not available 
> to move regions to.  Thus we were closing regions, then attempting to move 
> them to the draining server, then finding out it's draining. 
> We had to disable the balancer to resolve this issue.
> There are some approaches we can take here.
> 1. Exclude draining servers altogether; don't even pass those into the load 
> balancer from HMaster.
> 2. We could exclude draining servers from ceiling and floor calculations, 
> where we could potentially skip load balancing because those draining 
> servers won't be represented when deciding whether to balance.
> 3. Along with #2, when assigning regions, we would skip plans to assign 
> regions to those draining servers.
> I am in favor of #1, which simply removes servers as candidates for 
> balancing if they are in the draining state.
> But I would love to hear what everyone else thinks.





[jira] [Updated] (HBASE-10528) DefaultBalancer selects plans to move regions onto draining nodes

2015-01-13 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-10528:
---
Attachment: HBASE-10528-0.98.v2.patch

> DefaultBalancer selects plans to move regions onto draining nodes
> -
>
> Key: HBASE-10528
> URL: https://issues.apache.org/jira/browse/HBASE-10528
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.5
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 1.0.0, 2.0.0, 1.1.0
>
> Attachments: 10528-1.0.addendum, HBASE-10528-0.94.patch, 
> HBASE-10528-0.98.patch, HBASE-10528-0.98.v2.patch, HBASE-10528.patch, 
> HBASE-10528.v2.patch
>
>
> We have quite a large cluster (> 100k regions), and we needed to isolate a 
> region that was very hot until we could push a patch.  We put this region on 
> its own regionserver and set that server in the draining state.  The default 
> balancer was still selecting regions to move to this server in its region 
> plans.  
> It just so happened that for other tables, the default load balancer was 
> creating plans for the draining servers, even though they were not available 
> to move regions to.  Thus we were closing regions, then attempting to move 
> them to the draining server, then finding out it's draining. 
> We had to disable the balancer to resolve this issue.
> There are some approaches we can take here.
> 1. Exclude draining servers altogether; don't even pass those into the load 
> balancer from HMaster.
> 2. We could exclude draining servers from ceiling and floor calculations, 
> where we could potentially skip load balancing because those draining 
> servers won't be represented when deciding whether to balance.
> 3. Along with #2, when assigning regions, we would skip plans to assign 
> regions to those draining servers.
> I am in favor of #1, which simply removes servers as candidates for 
> balancing if they are in the draining state.
> But I would love to hear what everyone else thinks.





[jira] [Commented] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275978#comment-14275978
 ] 

Sean Busbey commented on HBASE-12848:
-

+1 on reorganizing to a subtask. The WAL is the obvious first step towards 
HBase leveraging the feature.

Patch comments:

{code}
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
index eee5e83..ffc660b 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
@@ -918,6 +918,14 @@ public final class HConstants {
   public static final String ENABLE_WAL_COMPRESSION =
 "hbase.regionserver.wal.enablecompression";
 
+  /** Configuration name of HLog storage policy */
+  public static final String WAL_STORAGE_POLICY = "hbase.wal.storage.policy";
+  public static final String DEFAULT_WAL_STORAGE_POLICY = "NONE";
+  /** place only one replica in SSD and the remaining in default storage */
+  public static final String WAL_STORAGE_POLICY_ONE_SSD = "ONE_SSD";
+  /** place all replica on SSD */
+  public static final String WAL_STORAGE_POLICY_ALL_SSD = "ALL_SSD";
+
{code}

Please put these somewhere other than HConstants so that they can be scoped to 
LimitedPrivate(CONFIG) instead of public. Do we want to flag them as unstable 
pending some benchmarks?

Maybe in DefaultWALProvider, since it already talks about being FS-based? That 
would also be a good place to add some javadocs about using it. We should 
include a note that we only allow setting one policy for all FS-based WALs 
(and not e.g. one policy for meta and another for user data).

Also, it would be good to add NONE as a policy so that folks could expressly 
stick to not using SSD for the WAL. (I agree that doing NONE as the default 
for now is also good.)
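For reference, a user would then opt in via hbase-site.xml using the key from 
the patch above (a sketch; this assumes the NONE default is kept, so omitting 
the property leaves the WAL on default storage):

```xml
<property>
  <name>hbase.wal.storage.policy</name>
  <!-- ONE_SSD: one WAL replica on SSD, remaining replicas on default storage -->
  <value>ONE_SSD</value>
</property>
```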

{code}
+      DistributedFileSystem dfs = (DistributedFileSystem)fs;
+      Class dfsClass = dfs.getClass();
+      Method m = null;
+      try {
+        m = dfsClass.getDeclaredMethod("setStoragePolicy",
+          new Class[] { Path.class, String.class });
+        m.setAccessible(true);
+      } catch (NoSuchMethodException e) {
+        LOG.info("FileSystem doesn't support"
+          + " setStoragePolicy; --HDFS-7228 not available");
+      } catch (SecurityException e) {
+        LOG.info("Doesn't have access to setStoragePolicy on "
+          + "FileSystems --HDFS-7228 not available", e);
+        m = null; // could happen on setAccessible()
+      }
+      if (m != null) {
+        try {
+          m.invoke(dfs, this.fullPathLogDir, storagePolicy);
+          LOG.info("setting " + storagePolicy + " for " + this.fullPathLogDir);
+        } catch (Exception e) {
+          LOG.warn("Unable to set " + storagePolicy + " for " + this.fullPathLogDir, e);
+        }
+      }
{code}

Can we move the reflection into one of the filesystem utility classes so that 
other parts of HBASE-6572 could make use of it later?
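As an illustration of the pattern under discussion, here is a minimal, 
self-contained sketch of a reflective "call it if it exists" helper. The 
FakeFs class and the trySetStoragePolicy name are hypothetical, not from the 
patch; the real utility would operate on a Hadoop FileSystem and log instead 
of returning a boolean.

```java
import java.lang.reflect.Method;

public final class ReflectionSketch {
  // Hypothetical stand-in for a filesystem that may expose setStoragePolicy.
  static class FakeFs {
    String lastPolicy;
    @SuppressWarnings("unused")
    private void setStoragePolicy(String path, String policy) {
      lastPolicy = policy;
    }
  }

  /**
   * Invokes setStoragePolicy(path, policy) reflectively if the target class
   * declares it; returns false if the method is missing or inaccessible,
   * i.e. when running against a pre-HDFS-7228 filesystem.
   */
  static boolean trySetStoragePolicy(Object fs, String path, String policy) {
    try {
      Method m = fs.getClass().getDeclaredMethod("setStoragePolicy",
          String.class, String.class);
      m.setAccessible(true); // the method may not be public
      m.invoke(fs, path, policy);
      return true;
    } catch (ReflectiveOperationException | SecurityException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    FakeFs fs = new FakeFs();
    System.out.println(trySetStoragePolicy(fs, "/hbase/WALs", "ONE_SSD")); // true
    System.out.println(fs.lastPolicy);                                     // ONE_SSD
    System.out.println(trySetStoragePolicy(new Object(), "/x", "ALL_SSD")); // false
  }
}
```

Keeping this in one utility method means callers never reference the HDFS-7228 
API directly, so the code still compiles and runs against older Hadoop versions.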

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 12848-v1.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash, e.g. 2 flash drives and 4 traditional 
> drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.





[jira] [Commented] (HBASE-11983) HRegion constructors should not create HLog

2015-01-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275934#comment-14275934
 ] 

Sean Busbey commented on HBASE-11983:
-

Yeah, I've wanted that for a while, and it's probably a better use of time than 
trying to bring the number down in one fell swoop. Filed HBASE-12850 against 
myself.

FWIW, I've done this manually before by looking at the check-patch util; it'll 
tell you which file to save and then diff.

> HRegion constructors should not create HLog 
> 
>
> Key: HBASE-11983
> URL: https://issues.apache.org/jira/browse/HBASE-11983
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Enis Soztutar
>Assignee: Nick Dimiduk
>  Labels: beginner
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE-11983.00.patch, HBASE-11983.01.patch, 
> HBASE-11983.02.patch, HBASE-11983.03.patch, HBASE-11983.03.patch, 
> HBASE-11983.04.patch, HBASE-11983.05.patch
>
>
> We should get rid of HRegion creating its own HLog. It should ALWAYS get the 
> log from outside. 
> I think this was added for unit tests, but we should refrain from such 
> practice in the future (adding UT constructors always leads to weird and 
> critical bugs down the road). See recent: HBASE-11982, HBASE-11654. 
> Get rid of weird things like ignoreHLog:
> {code}
>   /**
>* @param ignoreHLog - true to skip generate new hlog if it is null, mostly 
> for createTable
>*/
>   public static HRegion createHRegion(final HRegionInfo info, final Path 
> rootDir,
>   final Configuration conf,
>   final HTableDescriptor hTableDescriptor,
>   final HLog hlog,
>   final boolean initialize, final boolean 
> ignoreHLog)
> {code}
> We can unify all the createXX and newXX methods and separate creating a 
> region in the file system vs opening a region. 





[jira] [Created] (HBASE-12850) Patch check should output list of X new checkstyle warnings

2015-01-13 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-12850:
---

 Summary: Patch check should output list of X new checkstyle 
warnings
 Key: HBASE-12850
 URL: https://issues.apache.org/jira/browse/HBASE-12850
 Project: HBase
  Issue Type: Sub-task
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor


The patch check says how many new checkstyle warnings there are, but doesn't 
provide any sane way of finding out what they are.

We should output the first 10 or so of them, as we do for long lines and such, 
and give a command-line flag for folks who want them all.

With this change, I'd be in favor of closing as wontfix/later the remaining 
proactive cleanup subtasks.
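The intended behavior could be sketched in shell along these lines (the file 
names and the SHOW_ALL flag are illustrative; the real check produces its 
warning lists differently):

```shell
# Illustrative before/after warning lists (the real check would generate these).
printf '%s\n' 'Foo.java:10: unused import' 'Bar.java:3: line too long' > before.txt
printf '%s\n' 'Foo.java:10: unused import' 'Bar.java:3: line too long' \
              'Baz.java:7: missing javadoc' > after.txt

# Show the first 10 new warnings; set SHOW_ALL=1 to list every one.
LIMIT=$([ -n "$SHOW_ALL" ] && echo 100000 || echo 10)
sort before.txt > before.sorted
sort after.txt  > after.sorted
# comm -13 keeps lines that appear only in the post-patch (second) list
comm -13 before.sorted after.sorted | head -n "$LIMIT"
```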





[jira] [Commented] (HBASE-12843) TestAssignmentManager.testOpenCloseRegionRPCIntendedForPreviousServer failing frequently in 0.98 builds

2015-01-13 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275929#comment-14275929
 ] 

churro morales commented on HBASE-12843:


Dima, thanks for bringing this up.  I will look into this test right now.



> TestAssignmentManager.testOpenCloseRegionRPCIntendedForPreviousServer failing 
> frequently in 0.98 builds
> ---
>
> Key: HBASE-12843
> URL: https://issues.apache.org/jira/browse/HBASE-12843
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>
> TestAssignmentManager.testOpenCloseRegionRPCIntendedForPreviousServer has 
> started failing intermittently in 0.98 builds:
> {noformat}
> java.lang.AssertionError: expected: but 
> was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hbase.master.TestAssignmentManager.testOpenCloseRegionRPCIntendedForPreviousServer(TestAssignmentManager.java:1425)
> {noformat}
> For example, in 
> https://builds.apache.org/job/HBase-0.98/789/testReport/junit/org.apache.hadoop.hbase.master/TestAssignmentManager/testOpenCloseRegionRPCIntendedForPreviousServer/





[jira] [Commented] (HBASE-12849) LoadIncrementalHFiles should use unmanaged connection in branch-1

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275919#comment-14275919
 ] 

Hadoop QA commented on HBASE-12849:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692000/12849-master.patch
  against master branch at commit e5f3dd682fb8884a947b40b4348bd5d1386a6470.
  ATTACHMENT ID: 12692000

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.


{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12446//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12446//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12446//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12446//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12446//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12446//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12446//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12446//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12446//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12446//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12446//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12446//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12446//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12446//console

This message is automatically generated.

> LoadIncrementalHFiles should use unmanaged connection in branch-1
> -
>
> Key: HBASE-12849
> URL: https://issues.apache.org/jira/browse/HBASE-12849
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12849-1.1-v2.patch, 12849-1.1.patch, 12849-master.patch
>
>
> From 
> https://builds.apache.org/job/HBase-1.1/78/testReport/org.apache.hadoop.hbase.mapreduce/TestLoadIncrementalHFiles/testSimpleLoad/
>  :
> {code}
> java.io.IOException: The connection has to be unmanaged.
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getAdmin(ConnectionManager.java:715)
>   at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:239)
>   at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:936)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:255)
>   at 
> org.apache.hadoop.hbase.mapreduc

[jira] [Commented] (HBASE-12831) Changing the set of vis labels a user has access to doesn't generate an audit log event

2015-01-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275910#comment-14275910
 ] 

Sean Busbey commented on HBASE-12831:
-

{code}
+      } catch (AccessDeniedException e) {
+        logResult(false, "addLabels", e.getMessage(), null, labels, null);
+        LOG.error(e);
+        setExceptionResults(visLabels.size(), e, response);
{code}

In places where we're writing to the normal log, please include more of a 
message than just the exception object.

{code}
+      } catch (IOException e) {
+        // TODO Auto-generated catch block
+        e.printStackTrace();
+      }
{code}

Please log this as a WARN.
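For instance (a sketch using java.util.logging for self-containment; the 
actual logger field and message in the patch would differ):

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class WarnLogSketch {
  private static final Logger LOG = Logger.getLogger(WarnLogSketch.class.getName());

  /** Returns true when the failure was caught and logged rather than swallowed. */
  static boolean demo() {
    try {
      throw new IOException("simulated failure");
    } catch (IOException e) {
      // Instead of e.printStackTrace(): a descriptive message at WARN,
      // with the exception attached so the stack trace reaches the log.
      LOG.log(Level.WARNING, "Failed to update visibility labels", e);
      return true;
    }
  }

  public static void main(String[] args) {
    System.out.println(demo()); // true
  }
}
```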

> Changing the set of vis labels a user has access to doesn't generate an audit 
> log event
> ---
>
> Key: HBASE-12831
> URL: https://issues.apache.org/jira/browse/HBASE-12831
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 2.0.0, 0.98.6
>Reporter: Sean Busbey
>Assignee: Ashish Singhi
>  Labels: audit
> Fix For: 1.0.1, 0.98.11
>
> Attachments: HBASE-12831-v2.patch, HBASE-12831-v3.patch, 
> HBASE-12831.patch
>
>
> Right now, the AccessController makes sure that (when users care about audit 
> events) we generate an audit log event for any access change, like granting 
> or removing a permission from a user.
> When the set of labels a user has access to is altered, it gets handled by 
> the VisibilityLabelService and we don't log anything to the audit log.





[jira] [Commented] (HBASE-11295) Long running scan produces OutOfOrderScannerNextException

2015-01-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275904#comment-14275904
 ] 

Lars Hofhansl commented on HBASE-11295:
---

Neither do I :)

> Long running scan produces OutOfOrderScannerNextException
> -
>
> Key: HBASE-11295
> URL: https://issues.apache.org/jira/browse/HBASE-11295
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.96.0
>Reporter: Jeff Cunningham
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: OutOfOrderScannerNextException.tar.gz
>
>
> Attached Files:
> HRegionServer.java - instrumented from 0.96.1.1-cdh5.0.0
> HBaseLeaseTimeoutIT.java - reproducing JUnit 4 test
> WaitFilter.java - Scan filter (extends FilterBase) that overrides 
> filterRowKey() to sleep during invocation
> SpliceFilter.proto - Protobuf definition for WaitFilter.java
> OutOfOrderScann_InstramentedServer.log - instrumented server log
> Steps.txt - this note
> Set up:
> In HBaseLeaseTimeoutIT, create a scan, set the given filter (which sleeps in 
> overridden filterRowKey() method) and set it on the scan, and scan the table.
> This is done in test client_0x0_server_15x10().
> Here's what I'm seeing (see also attached log):
> A new request comes into the server (ID 1940798815214593802 - 
> RpcServer.handler=96) and a RegionScanner is created for it, cached by ID, 
> immediately looked up again, and the cached RegionScannerHolder's nextCallSeq 
> incremented (now at 1).
> The RegionScan thread goes to sleep in WaitFilter#filterRowKey().
> A short (variable) period later, another request comes into the server (ID 
> 8946109289649235722 - RpcServer.handler=98) and the same series of events 
> happen to this request.
> At this point both RegionScanner threads are sleeping in 
> WaitFilter.filterRowKey(). After another period, the client retries another 
> scan request which thinks its next_call_seq is 0.  However, HRegionServer's 
> cached RegionScannerHolder thinks the matching RegionScanner's nextCallSeq 
> should be 1.





[jira] [Commented] (HBASE-12849) LoadIncrementalHFiles should use unmanaged connection in branch-1

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275893#comment-14275893
 ] 

Hudson commented on HBASE-12849:


FAILURE: Integrated in HBase-1.1 #79 (See 
[https://builds.apache.org/job/HBase-1.1/79/])
HBASE-12849 LoadIncrementalHFiles should use unmanaged connection in branch-1 
(tedyu: rev 908779b88701a6d06366a3693d688c1ad1a40417)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java


> LoadIncrementalHFiles should use unmanaged connection in branch-1
> -
>
> Key: HBASE-12849
> URL: https://issues.apache.org/jira/browse/HBASE-12849
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12849-1.1-v2.patch, 12849-1.1.patch, 12849-master.patch
>
>
> From 
> https://builds.apache.org/job/HBase-1.1/78/testReport/org.apache.hadoop.hbase.mapreduce/TestLoadIncrementalHFiles/testSimpleLoad/
>  :
> {code}
> java.io.IOException: The connection has to be unmanaged.
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getAdmin(ConnectionManager.java:715)
>   at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:239)
>   at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:936)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:255)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:229)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:216)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:206)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testSimpleLoad(TestLoadIncrementalHFiles.java:102)
> {code}
> LoadIncrementalHFiles should use unmanaged connection.





[jira] [Updated] (HBASE-11983) HRegion constructors should not create HLog

2015-01-13 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-11983:
-
Attachment: HBASE-11983.05.patch

Ugh. Finding my own checkstyle warnings is a needle-in-a-haystack problem. We 
need a checkstyle-results.xml diff utility. Cleaned up the obvious unused 
import culprits; let's see how it goes.

> HRegion constructors should not create HLog 
> 
>
> Key: HBASE-11983
> URL: https://issues.apache.org/jira/browse/HBASE-11983
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Enis Soztutar
>Assignee: Nick Dimiduk
>  Labels: beginner
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE-11983.00.patch, HBASE-11983.01.patch, 
> HBASE-11983.02.patch, HBASE-11983.03.patch, HBASE-11983.03.patch, 
> HBASE-11983.04.patch, HBASE-11983.05.patch
>
>
> We should get rid of HRegion creating its own HLog. It should ALWAYS get the 
> log from outside. 
> I think this was added for unit tests, but we should refrain from such 
> practice in the future (adding UT constructors always leads to weird and 
> critical bugs down the road). See recent: HBASE-11982, HBASE-11654. 
> Get rid of weird things like ignoreHLog:
> {code}
>   /**
>* @param ignoreHLog - true to skip generate new hlog if it is null, mostly 
> for createTable
>*/
>   public static HRegion createHRegion(final HRegionInfo info, final Path 
> rootDir,
>   final Configuration conf,
>   final HTableDescriptor hTableDescriptor,
>   final HLog hlog,
>   final boolean initialize, final boolean 
> ignoreHLog)
> {code}
> We can unify all the createXX and newXX methods and separate creating a 
> region in the file system vs opening a region. 





[jira] [Commented] (HBASE-11295) Long running scan produces OutOfOrderScannerNextException

2015-01-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275889#comment-14275889
 ] 

Andrew Purtell commented on HBASE-11295:


I don't have a strong opinion either way. We have two votes to change, one to 
close. 

> Long running scan produces OutOfOrderScannerNextException
> -
>
> Key: HBASE-11295
> URL: https://issues.apache.org/jira/browse/HBASE-11295
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.96.0
>Reporter: Jeff Cunningham
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: OutOfOrderScannerNextException.tar.gz
>
>
> Attached Files:
> HRegionServer.java - instrumented from 0.96.1.1-cdh5.0.0
> HBaseLeaseTimeoutIT.java - reproducing JUnit 4 test
> WaitFilter.java - Scan filter (extends FilterBase) that overrides 
> filterRowKey() to sleep during invocation
> SpliceFilter.proto - Protobuf definition for WaitFilter.java
> OutOfOrderScann_InstramentedServer.log - instrumented server log
> Steps.txt - this note
> Set up:
> In HBaseLeaseTimeoutIT, create a scan, set the given filter (which sleeps in 
> overridden filterRowKey() method) and set it on the scan, and scan the table.
> This is done in test client_0x0_server_15x10().
> Here's what I'm seeing (see also attached log):
> A new request comes into the server (ID 1940798815214593802 - 
> RpcServer.handler=96) and a RegionScanner is created for it, cached by ID, 
> immediately looked up again, and the cached RegionScannerHolder's nextCallSeq 
> incremented (now at 1).
> The RegionScan thread goes to sleep in WaitFilter#filterRowKey().
> A short (variable) period later, another request comes into the server (ID 
> 8946109289649235722 - RpcServer.handler=98) and the same series of events 
> happen to this request.
> At this point both RegionScanner threads are sleeping in 
> WaitFilter.filterRowKey(). After another period, the client retries another 
> scan request which thinks its next_call_seq is 0.  However, HRegionServer's 
> cached RegionScannerHolder thinks the matching RegionScanner's nextCallSeq 
> should be 1.





[jira] [Commented] (HBASE-12849) LoadIncrementalHFiles should use unmanaged connection in branch-1

2015-01-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275884#comment-14275884
 ] 

Hudson commented on HBASE-12849:


SUCCESS: Integrated in HBase-TRUNK #6018 (See 
[https://builds.apache.org/job/HBase-TRUNK/6018/])
HBASE-12849 LoadIncrementalHFiles should use unmanaged connection in branch-1 
(tedyu: rev 72a6a670ace9061e45136b19ce34b83c4dbca11f)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java


> LoadIncrementalHFiles should use unmanaged connection in branch-1
> -
>
> Key: HBASE-12849
> URL: https://issues.apache.org/jira/browse/HBASE-12849
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12849-1.1-v2.patch, 12849-1.1.patch, 12849-master.patch
>
>
> From 
> https://builds.apache.org/job/HBase-1.1/78/testReport/org.apache.hadoop.hbase.mapreduce/TestLoadIncrementalHFiles/testSimpleLoad/
>  :
> {code}
> java.io.IOException: The connection has to be unmanaged.
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getAdmin(ConnectionManager.java:715)
>   at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:239)
>   at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:936)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:255)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:229)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:216)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:206)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testSimpleLoad(TestLoadIncrementalHFiles.java:102)
> {code}
> LoadIncrementalHFiles should use unmanaged connection.





[jira] [Updated] (HBASE-12728) buffered writes substantially less useful after removal of HTablePool

2015-01-13 Thread Solomon Duskis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Solomon Duskis updated HBASE-12728:
---
Status: Patch Available  (was: Open)

> buffered writes substantially less useful after removal of HTablePool
> -
>
> Key: HBASE-12728
> URL: https://issues.apache.org/jira/browse/HBASE-12728
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 0.98.0
>Reporter: Aaron Beppu
>Assignee: Solomon Duskis
>Priority: Blocker
> Fix For: 1.0.0, 2.0.0, 1.1.0
>
> Attachments: 12728.connection-owns-buffers.example.branch-1.0.patch, 
> HBASE-12728.patch, bulk-mutator.patch
>
>
> In previous versions of HBase, when use of HTablePool was encouraged, HTable 
> instances were long-lived in that pool, and for that reason, if autoFlush was 
> set to false, the table instance could accumulate a full buffer of writes 
> before a flush was triggered. Writes from the client to the cluster could 
> then be substantially larger and less frequent than without buffering.
> However, when HTablePool was deprecated, the primary justification seems to 
> have been that creating HTable instances is cheap, so long as the connection 
> and executor service being passed to it are pre-provided. A use pattern was 
> encouraged where users should create a new HTable instance for every 
> operation, using an existing connection and executor service, and then close 
> the table. In this pattern, buffered writes are substantially less useful; 
> writes are as small and as frequent as they would have been with 
> autoflush=true, except the synchronous write is moved from the operation 
> itself to the table close call which immediately follows.
> More concretely :
> ```
> // Given these two helpers ...
> private HTableInterface getAutoFlushTable(String tableName) throws 
> IOException {
>   // (autoflush is true by default)
>   return storedConnection.getTable(tableName, executorService);
> }
> private HTableInterface getBufferedTable(String tableName) throws IOException 
> {
>   HTableInterface table = getAutoFlushTable(tableName);
>   table.setAutoFlush(false);
>   return table;
> }
> // it's my contention that these two methods would behave almost identically,
> // except the first will hit a synchronous flush during the put call, and the
> // second will flush during the (hidden) close call on table.
> private void writeAutoFlushed(Put somePut) throws IOException {
>   try (HTableInterface table = getAutoFlushTable(tableName)) {
> table.put(somePut); // will do synchronous flush
>   }
> }
> private void writeBuffered(Put somePut) throws IOException {
>   try (HTableInterface table = getBufferedTable(tableName)) {
> table.put(somePut);
>   } // auto-close will trigger synchronous flush
> }
> ```
> For buffered writes to actually provide a performance benefit to users, one 
> of two things must happen:
> - The writeBuffer itself shouldn't live, flush and die with the lifecycle of 
> its HTable instance. If the writeBuffer were managed elsewhere and had a long 
> lifespan, this could cease to be an issue. However, if the same writeBuffer 
> is appended to by multiple tables, then some additional concurrency control 
> will be needed around it.
> - Alternatively, there should be some pattern for having long-lived HTable 
> instances. However, since HTable is not thread-safe, we'd need multiple 
> instances, and a mechanism for leasing them out safely -- which sure sounds a 
> lot like the old HTablePool to me.
> See discussion on mailing list here : 
> http://mail-archives.apache.org/mod_mbox/hbase-user/201412.mbox/%3CCAPdJLkEzmUQZ_kvD%3D8mrxi4V%3DhCmUp3g9MUZsddD%2Bmon%2BAvNtg%40mail.gmail.com%3E
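The first option above (a buffer that outlives any single table handle, plus the concurrency control that implies) can be sketched with plain JDK types. All names below are hypothetical; this is not the HBase API, only the pattern the description argues for: a connection-owned buffer that flushes on size rather than on table close.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a connection-owned write buffer shared by many
// short-lived "table" handles. Synchronized because handles may live
// on different threads; flushes at a size threshold, not on close.
final class SharedWriteBuffer {
    private final List<String> buffer = new ArrayList<>();
    private final int flushThreshold;
    private int flushCount = 0;

    SharedWriteBuffer(int flushThreshold) {
        this.flushThreshold = flushThreshold;
    }

    // Called by any table handle.
    synchronized void add(String mutation) {
        buffer.add(mutation);
        if (buffer.size() >= flushThreshold) {
            flush();
        }
    }

    synchronized void flush() {
        if (!buffer.isEmpty()) {
            // A real client would issue one batched RPC here.
            buffer.clear();
            flushCount++;
        }
    }

    synchronized int flushes() {
        return flushCount;
    }
}
```

With this shape, the writeBuffered() helper from the example would add to the shared buffer and return immediately; closing the table no longer forces a synchronous flush.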



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12728) buffered writes substantially less useful after removal of HTablePool

2015-01-13 Thread Solomon Duskis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Solomon Duskis updated HBASE-12728:
---
Attachment: HBASE-12728.patch

I implemented BulkMutator and removed autoflush from Table.

There's more to do in terms of documentation, but I figured that this is good 
enough for further review.

> buffered writes substantially less useful after removal of HTablePool
> -
>
> Key: HBASE-12728
> URL: https://issues.apache.org/jira/browse/HBASE-12728
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 0.98.0
>Reporter: Aaron Beppu
>Assignee: Solomon Duskis
>Priority: Blocker
> Fix For: 1.0.0, 2.0.0, 1.1.0
>
> Attachments: 12728.connection-owns-buffers.example.branch-1.0.patch, 
> HBASE-12728.patch, bulk-mutator.patch
>
>
> In previous versions of HBase, when use of HTablePool was encouraged, HTable 
> instances were long-lived in that pool, and for that reason, if autoFlush was 
> set to false, the table instance could accumulate a full buffer of writes 
> before a flush was triggered. Writes from the client to the cluster could 
> then be substantially larger and less frequent than without buffering.
> However, when HTablePool was deprecated, the primary justification seems to 
> have been that creating HTable instances is cheap, so long as the connection 
> and executor service being passed to it are pre-provided. A use pattern was 
> encouraged where users should create a new HTable instance for every 
> operation, using an existing connection and executor service, and then close 
> the table. In this pattern, buffered writes are substantially less useful; 
> writes are as small and as frequent as they would have been with 
> autoflush=true, except the synchronous write is moved from the operation 
> itself to the table close call which immediately follows.
> More concretely :
> ```
> // Given these two helpers ...
> private HTableInterface getAutoFlushTable(String tableName) throws 
> IOException {
>   // (autoflush is true by default)
>   return storedConnection.getTable(tableName, executorService);
> }
> private HTableInterface getBufferedTable(String tableName) throws IOException 
> {
>   HTableInterface table = getAutoFlushTable(tableName);
>   table.setAutoFlush(false);
>   return table;
> }
> // it's my contention that these two methods would behave almost identically,
> // except the first will hit a synchronous flush during the put call, and the
> // second will flush during the (hidden) close call on table.
> private void writeAutoFlushed(Put somePut) throws IOException {
>   try (HTableInterface table = getAutoFlushTable(tableName)) {
> table.put(somePut); // will do synchronous flush
>   }
> }
> private void writeBuffered(Put somePut) throws IOException {
>   try (HTableInterface table = getBufferedTable(tableName)) {
> table.put(somePut);
>   } // auto-close will trigger synchronous flush
> }
> ```
> For buffered writes to actually provide a performance benefit to users, one 
> of two things must happen:
> - The writeBuffer itself shouldn't live, flush and die with the lifecycle of 
> its HTable instance. If the writeBuffer were managed elsewhere and had a long 
> lifespan, this could cease to be an issue. However, if the same writeBuffer 
> is appended to by multiple tables, then some additional concurrency control 
> will be needed around it.
> - Alternatively, there should be some pattern for having long-lived HTable 
> instances. However, since HTable is not thread-safe, we'd need multiple 
> instances, and a mechanism for leasing them out safely -- which sure sounds a 
> lot like the old HTablePool to me.
> See discussion on mailing list here : 
> http://mail-archives.apache.org/mod_mbox/hbase-user/201412.mbox/%3CCAPdJLkEzmUQZ_kvD%3D8mrxi4V%3DhCmUp3g9MUZsddD%2Bmon%2BAvNtg%40mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12266) Slow Scan can cause dead loop in ClientScanner

2015-01-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275864#comment-14275864
 ] 

Lars Hofhansl commented on HBASE-12266:
---

[~tianq], you mean never retry after OutOfOrderScannerNextException?

> Slow Scan can cause dead loop in ClientScanner 
> ---
>
> Key: HBASE-12266
> URL: https://issues.apache.org/jira/browse/HBASE-12266
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 0.96.0
>Reporter: Qiang Tian
>Priority: Minor
> Attachments: 12266-v2.txt, HBASE-12266-master.patch
>
>
> see http://search-hadoop.com/m/DHED45SVsC1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11295) Long running scan produces OutOfOrderScannerNextException

2015-01-13 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275853#comment-14275853
 ] 

Lars Hofhansl commented on HBASE-11295:
---

I agree now. We should close this one.

> Long running scan produces OutOfOrderScannerNextException
> -
>
> Key: HBASE-11295
> URL: https://issues.apache.org/jira/browse/HBASE-11295
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.96.0
>Reporter: Jeff Cunningham
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: OutOfOrderScannerNextException.tar.gz
>
>
> Attached Files:
> HRegionServer.java - instrumented from 0.96.1.1-cdh5.0.0
> HBaseLeaseTimeoutIT.java - reproducing JUnit 4 test
> WaitFilter.java - Scan filter (extends FilterBase) that overrides 
> filterRowKey() to sleep during invocation
> SpliceFilter.proto - Protobuf definition for WaitFilter.java
> OutOfOrderScann_InstramentedServer.log - instrumented server log
> Steps.txt - this note
> Set up:
> In HBaseLeaseTimeoutIT, create a scan, set the given filter (which sleeps in 
> overridden filterRowKey() method) and set it on the scan, and scan the table.
> This is done in test client_0x0_server_15x10().
> Here's what I'm seeing (see also attached log):
> A new request comes into server (ID 1940798815214593802 - 
> RpcServer.handler=96) and a RegionScanner is created for it, cached by ID, 
> immediately looked up again and cached RegionScannerHolder's nextCallSeq 
> incremented (now at 1).
> The RegionScan thread goes to sleep in WaitFilter#filterRowKey().
> A short (variable) period later, another request comes into the server (ID 
> 8946109289649235722 - RpcServer.handler=98) and the same series of events 
> happen to this request.
> At this point both RegionScanner threads are sleeping in 
> WaitFilter.filterRowKey(). After another period, the client retries another 
> scan request which thinks its next_call_seq is 0.  However, HRegionServer's 
> cached RegionScannerHolder thinks the matching RegionScanner's nextCallSeq 
> should be 1.
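The sequence mismatch described above can be sketched in isolation (hypothetical names; the real check lives in the region server's RPC code): the server records a per-scanner call sequence and rejects a call whose number does not match.

```java
// Sketch of the nextCallSeq handshake described above. A real server
// would throw OutOfOrderScannerNextException on a mismatch; here we
// just return false. Names are illustrative only.
final class ScannerHolder {
    private long nextCallSeq = 0;

    boolean checkAndIncrement(long clientSeq) {
        if (clientSeq != nextCallSeq) {
            return false; // client retried with a stale sequence number
        }
        nextCallSeq++;
        return true;
    }
}
```

In the scenario above, the client's retry arrives with sequence 0 while the cached holder already expects 1, so the retry is rejected.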



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-12849) LoadIncrementalHFiles should use unmanaged connection in branch-1

2015-01-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-12849.

Resolution: Fixed

TestLoadIncrementalHFiles passed in branch-1 build #79

> LoadIncrementalHFiles should use unmanaged connection in branch-1
> -
>
> Key: HBASE-12849
> URL: https://issues.apache.org/jira/browse/HBASE-12849
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12849-1.1-v2.patch, 12849-1.1.patch, 12849-master.patch
>
>
> From 
> https://builds.apache.org/job/HBase-1.1/78/testReport/org.apache.hadoop.hbase.mapreduce/TestLoadIncrementalHFiles/testSimpleLoad/
>  :
> {code}
> java.io.IOException: The connection has to be unmanaged.
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getAdmin(ConnectionManager.java:715)
>   at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:239)
>   at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:936)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:255)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:229)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:216)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:206)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testSimpleLoad(TestLoadIncrementalHFiles.java:102)
> {code}
> LoadIncrementalHFiles should use unmanaged connection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12480) Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master failover

2015-01-13 Thread Virag Kothari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virag Kothari updated HBASE-12480:
--
   Resolution: Fixed
Fix Version/s: 1.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to all branches. Thanks Jimmy and Ted for reviews.

> Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master failover 
> ---
>
> Key: HBASE-12480
> URL: https://issues.apache.org/jira/browse/HBASE-12480
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Virag Kothari
>Assignee: Virag Kothari
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12480-0.98.patch, HBASE-12480-branch_1.patch, 
> HBASE-12480.patch, HBASE-12480_v2.patch
>
>
> For ZK-based assignment, we used to process these regions. For ZK-less 
> assignment, we should do the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12393) The regionserver web will throw exception if we disable block cache

2015-01-13 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-12393:
-
Fix Version/s: 1.1.0
   0.98.10
   2.0.0
   1.0.0

> The regionserver web will throw exception if we disable block cache
> ---
>
> Key: HBASE-12393
> URL: https://issues.apache.org/jira/browse/HBASE-12393
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, UI
>Affects Versions: 0.98.7
> Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12393.patch
>
>
> CacheConfig.getBlockCache() will return null when we set 
> hfile.block.cache.size to zero.
> The BlockCacheTmpl.jamon template doesn't check for a null block cache.
> {code}
> <%if cacheConfig == null %>
> CacheConfig is null
> <%else>
> 
> 
> Attribute
> Value
> Description
> 
> 
> Size
> <% 
> StringUtils.humanReadableInt(cacheConfig.getBlockCache().size()) %>
> Total size of Block Cache (bytes)
> 
> 
> Free
> <% 
> StringUtils.humanReadableInt(cacheConfig.getBlockCache().getFreeSize()) 
> %>
> Free space in Block Cache (bytes)
> 
> 
> Count
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getBlockCount()) %>
> Number of blocks in Block Cache
> 
> 
> Evicted
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getEvictedCount()) %>
> Number of blocks evicted
> 
> 
> Evictions
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getEvictionCount()) %>
> Number of times an eviction occurred
> 
> 
> Hits
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getHitCount()) %>
> Number requests that were cache hits
> 
> 
> Hits Caching
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getHitCachingCount()) %>
> Cache hit block requests but only requests set to use Block 
> Cache
> 
> 
> Misses
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getMissCount()) %>
> Number of requests that were cache misses
> 
> 
> Misses Caching
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getMissCount()) %>
> Block requests that were cache misses but only requests set to 
> use Block Cache
> 
> 
> Hit Ratio
> <% String.format("%,.2f", 
> cacheConfig.getBlockCache().getStats().getHitRatio() * 100) %><% "%" %>
> Hit Count divided by total requests count
> 
> {code}
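The fix amounts to guarding the block cache itself, not only the CacheConfig, before dereferencing it. A minimal sketch of the missing branch, in plain Java rather than Jamon and with hypothetical types:

```java
// Sketch of the guard the template above is missing: when
// hfile.block.cache.size is 0, getBlockCache() returns null even
// though the CacheConfig itself is non-null. Types are stand-ins.
final class BlockCacheView {
    static String render(Object cacheConfig, Object blockCache) {
        if (cacheConfig == null) {
            return "CacheConfig is null";
        }
        if (blockCache == null) {
            return "Block cache is disabled"; // the case the template misses
        }
        return "render stats table";
    }
}
```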



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12393) The regionserver web will throw exception if we disable block cache

2015-01-13 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275816#comment-14275816
 ] 

Nick Dimiduk commented on HBASE-12393:
--

hi [~chia7712]. We have a test called TestRSStatusServlet. I think it makes 
sense to add a test there. You should be able to use Mockito or reflection to 
force CacheConfig.getBlockCache() to return null. Mind taking a stab at adding 
such a test? While you're at it, you can add the LOG.warn statements JM 
mentioned above. I've assigned the ticket to you.

FYI, we always apply patches to master first and then backport them to the various 
release branches. So in this case, a fix will be needed for master, branch-1, 
branch-1.0, and 0.98. Let us know if you have trouble; I/we are happy to help.

> The regionserver web will throw exception if we disable block cache
> ---
>
> Key: HBASE-12393
> URL: https://issues.apache.org/jira/browse/HBASE-12393
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, UI
>Affects Versions: 0.98.7
> Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: HBASE-12393.patch
>
>
> CacheConfig.getBlockCache() will return null when we set 
> hfile.block.cache.size to zero.
> The BlockCacheTmpl.jamon template doesn't check for a null block cache.
> {code}
> <%if cacheConfig == null %>
> CacheConfig is null
> <%else>
> 
> 
> Attribute
> Value
> Description
> 
> 
> Size
> <% 
> StringUtils.humanReadableInt(cacheConfig.getBlockCache().size()) %>
> Total size of Block Cache (bytes)
> 
> 
> Free
> <% 
> StringUtils.humanReadableInt(cacheConfig.getBlockCache().getFreeSize()) 
> %>
> Free space in Block Cache (bytes)
> 
> 
> Count
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getBlockCount()) %>
> Number of blocks in Block Cache
> 
> 
> Evicted
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getEvictedCount()) %>
> Number of blocks evicted
> 
> 
> Evictions
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getEvictionCount()) %>
> Number of times an eviction occurred
> 
> 
> Hits
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getHitCount()) %>
> Number requests that were cache hits
> 
> 
> Hits Caching
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getHitCachingCount()) %>
> Cache hit block requests but only requests set to use Block 
> Cache
> 
> 
> Misses
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getMissCount()) %>
> Number of requests that were cache misses
> 
> 
> Misses Caching
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getMissCount()) %>
> Block requests that were cache misses but only requests set to 
> use Block Cache
> 
> 
> Hit Ratio
> <% String.format("%,.2f", 
> cacheConfig.getBlockCache().getStats().getHitRatio() * 100) %><% "%" %>
> Hit Count divided by total requests count
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12844) ServerManager.isServerReacable() should sleep between retries

2015-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275787#comment-14275787
 ] 

Hadoop QA commented on HBASE-12844:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12692022/HBASE-12844-0.98.patch
  against master branch at commit 72a6a670ace9061e45136b19ce34b83c4dbca11f.
  ATTACHMENT ID: 12692022

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12447//console

This message is automatically generated.

> ServerManager.isServerReacable() should sleep between retries
> -
>
> Key: HBASE-12844
> URL: https://issues.apache.org/jira/browse/HBASE-12844
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 1.0.0, 2.0.0, 1.1.0
>
> Attachments: HBASE-12844-0.98.patch, HBASE-12844-0.98.patch, 
> hbase-12844_v1.patch
>
>
> There is a fundamental problem with the way assignment manager and cluster 
> membership works. Basically,  the root cause of most of the complexity and 
> root cause for many bugs is that we do have multiple "cluster membership" 
> sources. This causes problems when they diverge from each other. 
> Master's in-memory ServerManager class keeps track of what servers are online 
> and what servers are considered dead. We have online and dead servers list in 
> ServerManager, and a separate dead servers list in RegionStates. 
> There are at least 3 ways that a server can join into the dead list. First is 
> the zookeeper session. If a server loses its zk session, the master gets 
> notification and expires the server. This is the regular way. 
> Second is calls through ServerManager.expireServer(). On master this is 
> mostly through master rejoining the cluster. Master waits for some time for 
> RS's to heartbeat, then expires all others and processes them as dead servers.  
> This method has the potential to hijack the regions in a region server 
> without  the region server knowing about it (and thus can cause multi homing 
> of regions for reads etc). 
> Third is the RegionStates calling ServerManager.isServerReachable() and if 
> not adding the server to its own dead list, but not to the dead list of 
> ServerManager. 
> Obviously, as in the region assignment case as well as this, we should fix 
> the "state is kept in multiple places" syndrome, but not in this issue (we 
> already have HBASE-5487, etc for that). 
> In this issue we should at least solve the following case: 
> When a region server is starting up, it will throw exceptions when we want to 
> ping:
> {code}
> 2015-01-10 00:23:10,369 DEBUG [AM.-pool1-t5] master.ServerManager: Couldn't 
> reach os-enis-hbase-1.0-test-1.hw.com,16020,1420849386091, try=0 of 10
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: 
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not 
> running yet
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:886)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getServerInfo(RSRpcServices.java:1155)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:20886)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2028)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException

[jira] [Updated] (HBASE-12844) ServerManager.isServerReacable() should sleep between retries

2015-01-13 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12844:
---
Attachment: HBASE-12844-0.98.patch

The change is almost identical now. Only the hunks for the imports and constructor 
were rejected, and they were trivial to fix up.

> ServerManager.isServerReacable() should sleep between retries
> -
>
> Key: HBASE-12844
> URL: https://issues.apache.org/jira/browse/HBASE-12844
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 1.0.0, 2.0.0, 1.1.0
>
> Attachments: HBASE-12844-0.98.patch, HBASE-12844-0.98.patch, 
> hbase-12844_v1.patch
>
>
> There is a fundamental problem with the way assignment manager and cluster 
> membership works. Basically,  the root cause of most of the complexity and 
> root cause for many bugs is that we do have multiple "cluster membership" 
> sources. This causes problems when they diverge from each other. 
> Master's in-memory ServerManager class keeps track of what servers are online 
> and what servers are considered dead. We have online and dead servers list in 
> ServerManager, and a separate dead servers list in RegionStates. 
> There are at least 3 ways that a server can join into the dead list. First is 
> the zookeeper session. If a server loses its zk session, the master gets 
> notification and expires the server. This is the regular way. 
> Second is calls through ServerManager.expireServer(). On master this is 
> mostly through master rejoining the cluster. Master waits for some time for 
> RS's to heartbeat, then expires all others and processes them as dead servers.  
> This method has the potential to hijack the regions in a region server 
> without  the region server knowing about it (and thus can cause multi homing 
> of regions for reads etc). 
> Third is the RegionStates calling ServerManager.isServerReachable() and if 
> not adding the server to its own dead list, but not to the dead list of 
> ServerManager. 
> Obviously, as in the region assignment case as well as this, we should fix 
> the "state is kept in multiple places" syndrome, but not in this issue (we 
> already have HBASE-5487, etc for that). 
> In this issue we should at least solve the following case: 
> When a region server is starting up, it will throw exceptions when we want to 
> ping:
> {code}
> 2015-01-10 00:23:10,369 DEBUG [AM.-pool1-t5] master.ServerManager: Couldn't 
> reach os-enis-hbase-1.0-test-1.hw.com,16020,1420849386091, try=0 of 10
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: 
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not 
> running yet
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:886)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getServerInfo(RSRpcServices.java:1155)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:20886)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2028)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:309)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1794)
> at 
> org.apache.hadoop.hbase.master.ServerManager.isServerReachable(ServerManager.java:810)
> at 
> org.apache.hadoop.hbase.master.RegionStates.isServerDeadAndNotProcessed(RegionStates.java:756)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.forceRegionStateToOffline(AssignmentManager.java:1952)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1559)
> at 
> org.apache.hadoop.hbase.master.AssignCallable.call(AssignCallable.java:48)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
>

[jira] [Updated] (HBASE-12393) The regionserver web will throw exception if we disable block cache

2015-01-13 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-12393:
-
Assignee: ChiaPing Tsai

> The regionserver web will throw exception if we disable block cache
> ---
>
> Key: HBASE-12393
> URL: https://issues.apache.org/jira/browse/HBASE-12393
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, UI
>Affects Versions: 0.98.7
> Environment: ubuntu 12.04 64bits, hadoop-2.2.0, hbase-0.98.7-hadoop2
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Attachments: HBASE-12393.patch
>
>
> CacheConfig.getBlockCache() will return null when we set 
> hfile.block.cache.size to zero.
> The BlockCacheTmpl.jamon template doesn't check for a null block cache.
> {code}
> <%if cacheConfig == null %>
> CacheConfig is null
> <%else>
> 
> 
> Attribute
> Value
> Description
> 
> 
> Size
> <% 
> StringUtils.humanReadableInt(cacheConfig.getBlockCache().size()) %>
> Total size of Block Cache (bytes)
> 
> 
> Free
> <% 
> StringUtils.humanReadableInt(cacheConfig.getBlockCache().getFreeSize()) 
> %>
> Free space in Block Cache (bytes)
> 
> 
> Count
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getBlockCount()) %>
> Number of blocks in Block Cache
> 
> 
> Evicted
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getEvictedCount()) %>
> Number of blocks evicted
> 
> 
> Evictions
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getEvictionCount()) %>
> Number of times an eviction occurred
> 
> 
> Hits
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getHitCount()) %>
> Number requests that were cache hits
> 
> 
> Hits Caching
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getHitCachingCount()) %>
> Cache hit block requests but only requests set to use Block 
> Cache
> 
> 
> Misses
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getMissCount()) %>
> Number of requests that were cache misses
> 
> 
> Misses Caching
> <% String.format("%,d", 
> cacheConfig.getBlockCache().getStats().getMissCount()) %>
> Block requests that were cache misses but only requests set to 
> use Block Cache
> 
> 
> Hit Ratio
> <% String.format("%,.2f", 
> cacheConfig.getBlockCache().getStats().getHitRatio() * 100) %><% "%" %>
> Hit Count divided by total requests count
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12844) ServerManager.isServerReacable() should sleep between retries

2015-01-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275760#comment-14275760
 ] 

Andrew Purtell commented on HBASE-12844:


I didn't realize we had RetryCounter and RetryCounterFactory in 0.98. Looked 
around and I see them now. Let me put up another patch, just a sec. 

> ServerManager.isServerReacable() should sleep between retries
> -
>
> Key: HBASE-12844
> URL: https://issues.apache.org/jira/browse/HBASE-12844
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 1.0.0, 2.0.0, 1.1.0
>
> Attachments: HBASE-12844-0.98.patch, hbase-12844_v1.patch
>
>
> There is a fundamental problem with the way the assignment manager and 
> cluster membership work. Basically, the root cause of most of the complexity, 
> and of many bugs, is that we have multiple "cluster membership" sources. This 
> causes problems when they diverge from each other. 
> The master's in-memory ServerManager class keeps track of which servers are 
> online and which are considered dead. We have online and dead server lists in 
> ServerManager, and a separate dead servers list in RegionStates. 
> There are at least three ways that a server can end up on the dead list. The 
> first is the ZooKeeper session: if a server loses its ZK session, the master 
> gets a notification and expires the server. This is the regular way. 
> The second is calls through ServerManager.expireServer(). On the master this 
> is mostly due to the master rejoining the cluster: the master waits for some 
> time for RSs to heartbeat, expires all others, and processes them as dead 
> servers. This method has the potential to hijack the regions on a region 
> server without the region server knowing about it (and thus can cause 
> multi-homing of regions for reads, etc.). 
> The third is RegionStates calling ServerManager.isServerReachable() and, if 
> the server is unreachable, adding it to its own dead list, but not to the 
> dead list of ServerManager. 
> Obviously, here as in the region assignment case, we should fix the "state 
> is kept in multiple places" syndrome, but not in this issue (we already have 
> HBASE-5487, etc. for that). 
> In this issue we should at least solve the following case: 
> when a region server is starting up, it throws exceptions when we try to 
> ping it:
> {code}
> 2015-01-10 00:23:10,369 DEBUG [AM.-pool1-t5] master.ServerManager: Couldn't 
> reach os-enis-hbase-1.0-test-1.hw.com,16020,1420849386091, try=0 of 10
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: 
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not 
> running yet
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:886)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getServerInfo(RSRpcServices.java:1155)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:20886)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2028)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:309)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1794)
> at 
> org.apache.hadoop.hbase.master.ServerManager.isServerReachable(ServerManager.java:810)
> at 
> org.apache.hadoop.hbase.master.RegionStates.isServerDeadAndNotProcessed(RegionStates.java:756)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.forceRegionStateToOffline(AssignmentManager.java:1952)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1559)
> at 
> org.apache.hadoop.hbase.master.AssignCallable.call(AssignCallable.java:48)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)

[jira] [Commented] (HBASE-12602) ResultScanner should implement Iterator

2015-01-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275758#comment-14275758
 ] 

stack commented on HBASE-12602:
---

[~brfrn169] Anything we can do to make it easier to do?  What was hard?  Thanks.

> ResultScanner should implement Iterator
> ---
>
> Key: HBASE-12602
> URL: https://issues.apache.org/jira/browse/HBASE-12602
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Toshihiro Suzuki
>Priority: Minor
>
> Currently, we can't call hasNext() from ResultScanner directly. I think it 
> would be convenient if ResultScanner implemented Iterator.
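For illustration, a minimal sketch of the adaptation being asked for (SimpleScanner and ScannerIterator are hypothetical names, not HBase's API): a next()-returns-null scanner like ResultScanner can back a java.util.Iterator by buffering one element so hasNext() can look ahead.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Hypothetical stand-in for ResultScanner's next()-returns-null-at-end contract.
interface SimpleScanner<T> {
    T next(); // returns null once the scanner is exhausted
}

/**
 * Sketch (not HBase's API): adapt a next()-until-null scanner to
 * java.util.Iterator by buffering one element ahead.
 */
class ScannerIterator<T> implements Iterator<T> {
    private final SimpleScanner<T> scanner;
    private T buffered;

    ScannerIterator(SimpleScanner<T> scanner) {
        this.scanner = scanner;
    }

    @Override
    public boolean hasNext() {
        if (buffered == null) {
            buffered = scanner.next(); // look ahead; stays null at the end
        }
        return buffered != null;
    }

    @Override
    public T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        T out = buffered;
        buffered = null;
        return out;
    }
}
```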





[jira] [Resolved] (HBASE-12602) ResultScanner should implement Iterator

2015-01-13 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-12602.

Resolution: Won't Fix
  Assignee: (was: Toshihiro Suzuki)

Resolved as requested.

> ResultScanner should implement Iterator
> ---
>
> Key: HBASE-12602
> URL: https://issues.apache.org/jira/browse/HBASE-12602
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Toshihiro Suzuki
>Priority: Minor
>
> Currently, we can't call hasNext() from ResultScanner directly. I think it 
> would be convenient if ResultScanner implemented Iterator.





[jira] [Commented] (HBASE-12848) Utilize Flash storage for WAL

2015-01-13 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275736#comment-14275736
 ] 

Andrew Purtell commented on HBASE-12848:


Maybe we can combine this with HBASE-6572? If that one is renamed to "Tiered 
storage", then this could be a subtask of it, and figuring out what would be 
useful to do with the SSD storage policy for HFiles would be the next step.

> Utilize Flash storage for WAL
> -
>
> Key: HBASE-12848
> URL: https://issues.apache.org/jira/browse/HBASE-12848
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 12848-v1.patch
>
>
> One way to improve data ingestion rate is to make use of Flash storage.
> HDFS is doing the heavy lifting - see HDFS-7228.
> We assume an environment where:
> 1. Some servers have a mix of flash and traditional storage, e.g. 2 flash 
> drives and 4 traditional drives.
> 2. Some servers have all traditional storage.
> 3. RegionServers are deployed on both profiles within one HBase cluster.
> This JIRA allows WAL to be managed on flash in a mixed-profile environment.





[jira] [Commented] (HBASE-12844) ServerManager.isServerReacable() should sleep between retries

2015-01-13 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275731#comment-14275731
 ] 

Enis Soztutar commented on HBASE-12844:
---

Andrew, there is exponential backoff policy with the RetryCounter used in the 
master patch. Do we want the same thing?
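As a sketch of the idea under discussion (hypothetical names; HBase's actual RetryCounter and RetryCounterFactory differ in detail), sleeping between retries with capped exponential backoff might look like:

```java
import java.util.function.BooleanSupplier;

/**
 * Minimal sketch of sleeping between retries with capped exponential backoff.
 * Class and method names are hypothetical, for illustration only.
 */
class BackoffRetry {
    // Sleep doubles on each attempt, capped at maxMillis.
    static long backoffMillis(long baseMillis, int attempt, long maxMillis) {
        long sleep = baseMillis << Math.min(attempt, 30);
        return Math.min(sleep, maxMillis);
    }

    // Probe until it succeeds or maxAttempts is exhausted, sleeping in between.
    static boolean retry(BooleanSupplier probe, int maxAttempts,
                         long baseMillis, long maxMillis) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (probe.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(backoffMillis(baseMillis, attempt, maxMillis));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // give up if interrupted
                return false;
            }
        }
        return false;
    }
}
```

The fixed-sleep variant in the 0.98 patch is the same loop with backoffMillis returning a constant.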

> ServerManager.isServerReacable() should sleep between retries
> -
>
> Key: HBASE-12844
> URL: https://issues.apache.org/jira/browse/HBASE-12844
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 1.0.0, 2.0.0, 1.1.0
>
> Attachments: HBASE-12844-0.98.patch, hbase-12844_v1.patch
>
>
> There is a fundamental problem with the way the assignment manager and 
> cluster membership work. Basically, the root cause of most of the complexity, 
> and of many bugs, is that we have multiple "cluster membership" sources. This 
> causes problems when they diverge from each other. 
> The master's in-memory ServerManager class keeps track of which servers are 
> online and which are considered dead. We have online and dead server lists in 
> ServerManager, and a separate dead servers list in RegionStates. 
> There are at least three ways that a server can end up on the dead list. The 
> first is the ZooKeeper session: if a server loses its ZK session, the master 
> gets a notification and expires the server. This is the regular way. 
> The second is calls through ServerManager.expireServer(). On the master this 
> is mostly due to the master rejoining the cluster: the master waits for some 
> time for RSs to heartbeat, expires all others, and processes them as dead 
> servers. This method has the potential to hijack the regions on a region 
> server without the region server knowing about it (and thus can cause 
> multi-homing of regions for reads, etc.). 
> The third is RegionStates calling ServerManager.isServerReachable() and, if 
> the server is unreachable, adding it to its own dead list, but not to the 
> dead list of ServerManager. 
> Obviously, here as in the region assignment case, we should fix the "state 
> is kept in multiple places" syndrome, but not in this issue (we already have 
> HBASE-5487, etc. for that). 
> In this issue we should at least solve the following case: 
> when a region server is starting up, it throws exceptions when we try to 
> ping it:
> {code}
> 2015-01-10 00:23:10,369 DEBUG [AM.-pool1-t5] master.ServerManager: Couldn't 
> reach os-enis-hbase-1.0-test-1.hw.com,16020,1420849386091, try=0 of 10
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: 
> org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not 
> running yet
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:886)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getServerInfo(RSRpcServices.java:1155)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:20886)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2028)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:309)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getServerInfo(ProtobufUtil.java:1794)
> at 
> org.apache.hadoop.hbase.master.ServerManager.isServerReachable(ServerManager.java:810)
> at 
> org.apache.hadoop.hbase.master.RegionStates.isServerDeadAndNotProcessed(RegionStates.java:756)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.forceRegionStateToOffline(AssignmentManager.java:1952)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1559)
> at 
> org.apache.hadoop.hbase.master.AssignCallable.call(AssignCallable.java:48)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.con

[jira] [Commented] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2015-01-13 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275720#comment-14275720
 ] 

Elliott Clark commented on HBASE-5878:
--

Will it ever enter the top branch of the if statement?

Here DFS is uppercase:
{code}.endsWith("DFSInputStream"){code}

However, here it's lowercase dfs:
{code}hdfsDataInputStream = (HdfsDataInputStream) this.getWrappedStream();{code}

> Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
> ---
>
> Key: HBASE-5878
> URL: https://issues.apache.org/jira/browse/HBASE-5878
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Uma Maheswara Rao G
>Assignee: Ashish Singhi
> Fix For: 1.0.0, 2.0.0
>
> Attachments: HBASE-5878-v2.patch, HBASE-5878-v3.patch, 
> HBASE-5878.patch
>
>
> SequenceFileLogReader: 
> Currently HBase uses the getFileLength API from the DFSInputStream class via 
> reflection. DFSInputStream is not exposed as public, so this may change in 
> the future. HDFS now exposes HdfsDataInputStream as a public API.
> We can fall back to it in an else branch when we cannot find the 
> getFileLength API on DFSInputStream, so that we will not face any sudden 
> surprise like the one we are facing today.
> Also, the code just logs one warn message and proceeds if getting the length 
> throws an exception. I think we should re-throw the exception, because there 
> is no point in continuing with data loss.
> {code}
> long adjust = 0;
>   try {
> Field fIn = FilterInputStream.class.getDeclaredField("in");
> fIn.setAccessible(true);
> Object realIn = fIn.get(this.in);
> // In hadoop 0.22, DFSInputStream is a standalone class.  Before 
> this,
> // it was an inner class of DFSClient.
> if (realIn.getClass().getName().endsWith("DFSInputStream")) {
>   Method getFileLength = realIn.getClass().
> getDeclaredMethod("getFileLength", new Class []{});
>   getFileLength.setAccessible(true);
>   long realLength = ((Long)getFileLength.
> invoke(realIn, new Object []{})).longValue();
>   assert(realLength >= this.length);
>   adjust = realLength - this.length;
> } else {
>   LOG.info("Input stream class: " + realIn.getClass().getName() +
>   ", not adjusting length");
> }
>   } catch(Exception e) {
> SequenceFileLogReader.LOG.warn(
>   "Error while trying to get accurate file length.  " +
>   "Truncation / data loss may occur if RegionServers die.", e);
>   }
>   return adjust + super.getPos();
> {code}
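The reflective lookup at the heart of the snippet above can be isolated as a small helper; LengthProbe and reflectiveLong are hypothetical names for illustration, and returning null on failure lets the caller decide whether to re-throw rather than merely log.

```java
import java.lang.reflect.Method;

/**
 * Illustrative sketch (hypothetical names): reflectively invoke a no-arg
 * long-returning method on an arbitrary object, returning null on any
 * failure so the caller can choose to re-throw instead of just warning.
 */
class LengthProbe {
    static Long reflectiveLong(Object target, String methodName) {
        try {
            Method m = target.getClass().getDeclaredMethod(methodName);
            m.setAccessible(true); // the method may be private, as in DFSInputStream
            return (Long) m.invoke(target);
        } catch (Exception e) {
            return null;
        }
    }
}
```

A null result is exactly the "method not found" case where falling back to the public HdfsDataInputStream accessor would apply.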





[jira] [Updated] (HBASE-12849) LoadIncrementalHFiles should use unmanaged connection in branch-1

2015-01-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12849:
---
Attachment: 12849-1.1-v2.patch

> LoadIncrementalHFiles should use unmanaged connection in branch-1
> -
>
> Key: HBASE-12849
> URL: https://issues.apache.org/jira/browse/HBASE-12849
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12849-1.1-v2.patch, 12849-1.1.patch, 12849-master.patch
>
>
> From 
> https://builds.apache.org/job/HBase-1.1/78/testReport/org.apache.hadoop.hbase.mapreduce/TestLoadIncrementalHFiles/testSimpleLoad/
>  :
> {code}
> java.io.IOException: The connection has to be unmanaged.
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getAdmin(ConnectionManager.java:715)
>   at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:239)
>   at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:936)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:255)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:229)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:216)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:206)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testSimpleLoad(TestLoadIncrementalHFiles.java:102)
> {code}
> LoadIncrementalHFiles should use unmanaged connection.





[jira] [Updated] (HBASE-12849) LoadIncrementalHFiles should use unmanaged connection in branch-1

2015-01-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12849:
---
Status: Open  (was: Patch Available)

> LoadIncrementalHFiles should use unmanaged connection in branch-1
> -
>
> Key: HBASE-12849
> URL: https://issues.apache.org/jira/browse/HBASE-12849
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12849-1.1-v2.patch, 12849-1.1.patch, 12849-master.patch
>
>
> From 
> https://builds.apache.org/job/HBase-1.1/78/testReport/org.apache.hadoop.hbase.mapreduce/TestLoadIncrementalHFiles/testSimpleLoad/
>  :
> {code}
> java.io.IOException: The connection has to be unmanaged.
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getAdmin(ConnectionManager.java:715)
>   at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:239)
>   at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:936)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:255)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:229)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:216)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:206)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testSimpleLoad(TestLoadIncrementalHFiles.java:102)
> {code}
> LoadIncrementalHFiles should use unmanaged connection.





[jira] [Updated] (HBASE-12849) LoadIncrementalHFiles should use unmanaged connection in branch-1

2015-01-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12849:
---
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed

Integrated into branch-1 and the master branch.

TestLoadIncrementalHFiles passes in both branches.

Will resolve after the next branch-1 build comes out.

> LoadIncrementalHFiles should use unmanaged connection in branch-1
> -
>
> Key: HBASE-12849
> URL: https://issues.apache.org/jira/browse/HBASE-12849
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.1.0
>
> Attachments: 12849-1.1.patch, 12849-master.patch
>
>
> From 
> https://builds.apache.org/job/HBase-1.1/78/testReport/org.apache.hadoop.hbase.mapreduce/TestLoadIncrementalHFiles/testSimpleLoad/
>  :
> {code}
> java.io.IOException: The connection has to be unmanaged.
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getAdmin(ConnectionManager.java:715)
>   at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:239)
>   at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:936)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:255)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:229)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:216)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.runTest(TestLoadIncrementalHFiles.java:206)
>   at 
> org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testSimpleLoad(TestLoadIncrementalHFiles.java:102)
> {code}
> LoadIncrementalHFiles should use unmanaged connection.


