[jira] [Updated] (HBASE-12989) region_mover.rb unloadRegions method uses ArrayList concurrently resulting in errors

2015-02-09 Thread Abhishek Singh Chouhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated HBASE-12989:
---
Description: 
While working on HBASE-12822 ran into 
{quote}
15/02/04 13:16:44 [main] INFO  reflect.GeneratedMethodAccessor35(?): Pool 
completed
NoMethodError: undefined method `toByteArray' for nil:NilClass
__for__ at region_mover.rb:270
   each at 
file:/home/sfdc/installed/bigdata-hbase__hbase.6_prod__9707645_Linux.x86_64.prod.runtime.bigdata-hbase_bigdata-hbase/hbase/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7
  writeFile at region_mover.rb:269
  unloadRegions at region_mover.rb:347
 (root) at region_mover.rb:501
{quote}
This is because 
movedRegions = java.util.ArrayList.new()
is being used concurrently in unloadRegions.

  was:
While working on HBASE-12822 ran into 
15/02/04 13:16:44 [main] INFO  reflect.GeneratedMethodAccessor35(?): Pool 
completed
NoMethodError: undefined method `toByteArray' for nil:NilClass
__for__ at region_mover.rb:270
   each at 
file:/home/sfdc/installed/bigdata-hbase__hbase.6_prod__9707645_Linux.x86_64.prod.runtime.bigdata-hbase_bigdata-hbase/hbase/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7
  writeFile at region_mover.rb:269
  unloadRegions at region_mover.rb:347
 (root) at region_mover.rb:501

This is because 
movedRegions = java.util.ArrayList.new()
is being used concurrently in unloadRegions.


 region_mover.rb unloadRegions method uses ArrayList concurrently resulting in 
 errors
 

 Key: HBASE-12989
 URL: https://issues.apache.org/jira/browse/HBASE-12989
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.98.10
Reporter: Abhishek Singh Chouhan
Assignee: Abhishek Singh Chouhan
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11


 While working on HBASE-12822 ran into 
 {quote}
 15/02/04 13:16:44 [main] INFO  reflect.GeneratedMethodAccessor35(?): Pool 
 completed
 NoMethodError: undefined method `toByteArray' for nil:NilClass
 __for__ at region_mover.rb:270
each at 
 file:/home/sfdc/installed/bigdata-hbase__hbase.6_prod__9707645_Linux.x86_64.prod.runtime.bigdata-hbase_bigdata-hbase/hbase/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7
   writeFile at region_mover.rb:269
   unloadRegions at region_mover.rb:347
  (root) at region_mover.rb:501
 {quote}
 This is because 
 movedRegions = java.util.ArrayList.new()
 is being used concurrently in unloadRegions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12989) region_mover.rb unloadRegions method uses ArrayList concurrently resulting in errors

2015-02-09 Thread Abhishek Singh Chouhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Singh Chouhan updated HBASE-12989:
---
Attachment: HBASE-12989.patch

Simple patch that uses synchronizedList instead of ArrayList
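
The fix can be illustrated with a self-contained Java sketch (thread count and names are illustrative, not the actual region_mover.rb code): wrapping the ArrayList with Collections.synchronizedList serializes each method call, so concurrent adds from mover threads are not lost.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SynchronizedListDemo {

    // Four "mover" threads each append 1000 entries to the shared list,
    // mimicking the concurrent writes unloadRegions performs.
    static int concurrentAdds() {
        // A bare ArrayList can lose adds or leave null slots under
        // concurrent mutation; the synchronized wrapper prevents that.
        List<String> movedRegions =
            Collections.synchronizedList(new ArrayList<String>());

        Thread[] movers = new Thread[4];
        for (int t = 0; t < movers.length; t++) {
            final int id = t;
            movers[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) {
                    movedRegions.add("region-" + id + "-" + i);
                }
            });
            movers[t].start();
        }
        for (Thread m : movers) {
            try {
                m.join();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
        return movedRegions.size();
    }

    public static void main(String[] args) {
        System.out.println(concurrentAdds()); // 4000 with the synchronized wrapper
    }
}
```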

 region_mover.rb unloadRegions method uses ArrayList concurrently resulting in 
 errors
 

 Key: HBASE-12989
 URL: https://issues.apache.org/jira/browse/HBASE-12989
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.98.10
Reporter: Abhishek Singh Chouhan
Assignee: Abhishek Singh Chouhan
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-12989.patch


 While working on HBASE-12822 ran into 
 {quote}
 15/02/04 13:16:44 [main] INFO  reflect.GeneratedMethodAccessor35(?): Pool 
 completed
 NoMethodError: undefined method `toByteArray' for nil:NilClass
 __for__ at region_mover.rb:270
each at 
 file:/home/sfdc/installed/bigdata-hbase__hbase.6_prod__9707645_Linux.x86_64.prod.runtime.bigdata-hbase_bigdata-hbase/hbase/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7
   writeFile at region_mover.rb:269
   unloadRegions at region_mover.rb:347
  (root) at region_mover.rb:501
 {quote}
 This is because 
 movedRegions = java.util.ArrayList.new()
 is being used concurrently in unloadRegions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12992) TestChoreService doesn't close services, that can break test on slow virtual hosts.

2015-02-09 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-12992:
-
Attachment: HBASE-12992.patch

Also, a couple of tests used a very small period (10ms); changed them to 100ms 
as in the other tests.

 TestChoreService doesn't close services, that can break test on slow virtual 
 hosts.
 ---

 Key: HBASE-12992
 URL: https://issues.apache.org/jira/browse/HBASE-12992
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Andrey Stepachev
Assignee: Andrey Stepachev
 Attachments: HBASE-12992.patch, HBASE-12992.patch


 On my slow virtual machine this test can quite possibly fail due to the 
 enormous number of active threads at the end of the test.
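
The kind of cleanup being added can be sketched generically (an illustrative sketch using a plain ScheduledExecutorService rather than HBase's ChoreService): any periodic service a test starts must be shut down and awaited in teardown, or its threads accumulate.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ChoreCleanupDemo {

    // Start a periodic task, then shut the service down and wait for its
    // threads to exit, as a test's teardown should.
    static boolean runAndShutdown() {
        ScheduledExecutorService chores = Executors.newScheduledThreadPool(1);
        chores.scheduleAtFixedRate(() -> { /* periodic chore body */ },
            0, 100, TimeUnit.MILLISECONDS);
        chores.shutdownNow();
        try {
            // true once all worker threads have actually stopped
            return chores.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(runAndShutdown()); // true
    }
}
```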





[jira] [Updated] (HBASE-12992) TestChoreService doesn't close services, that can break test on slow virtual hosts.

2015-02-09 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-12992:
-
Status: Patch Available  (was: Open)

 TestChoreService doesn't close services, that can break test on slow virtual 
 hosts.
 ---

 Key: HBASE-12992
 URL: https://issues.apache.org/jira/browse/HBASE-12992
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Andrey Stepachev
Assignee: Andrey Stepachev
 Attachments: HBASE-12992.patch, HBASE-12992.patch


 On my slow virtual machine this test can quite possibly fail due to the 
 enormous number of active threads at the end of the test.





[jira] [Updated] (HBASE-12035) Client does an RPC to master everytime a region is relocated

2015-02-09 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-12035:
-
Status: Patch Available  (was: Open)

 Client does an RPC to master everytime a region is relocated
 

 Key: HBASE-12035
 URL: https://issues.apache.org/jira/browse/HBASE-12035
 Project: HBase
  Issue Type: Improvement
  Components: Client, master
Affects Versions: 2.0.0
Reporter: Enis Soztutar
Assignee: Andrey Stepachev
Priority: Critical
 Fix For: 2.0.0

 Attachments: 12035v2.txt, HBASE-12035 (1) (1).patch, HBASE-12035 (1) 
 (1).patch, HBASE-12035 (1).patch, HBASE-12035 (2).patch, HBASE-12035 
 (2).patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch


 HBASE-7767 moved table enabled|disabled state to be kept in hdfs instead of 
 zookeeper. isTableDisabled() which is used in 
 HConnectionImplementation.relocateRegion() now became a master RPC call 
 rather than a zookeeper client call. Since we do relocateRegion() calls 
 every time we want to relocate a region (region moved, RS down, etc.), this 
 implies that when the master is down, some of the clients for uncached 
 regions will be affected. 
 See HBASE-7767 and HBASE-11974 for some more background. 
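
One direction (a hypothetical sketch, not the attached patches) is to cache the table state client-side so relocateRegion() only pays the master RPC once per table, invalidating on enable/disable:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class TableStateCache {
    private final Map<String, Boolean> disabledByTable = new ConcurrentHashMap<>();
    private final Function<String, Boolean> masterLookup; // stands in for the master RPC

    TableStateCache(Function<String, Boolean> masterLookup) {
        this.masterLookup = masterLookup;
    }

    // Only the first call per table pays the RPC cost; later relocations hit the cache.
    boolean isTableDisabled(String table) {
        return disabledByTable.computeIfAbsent(table, masterLookup);
    }

    // Must be called when a table is enabled/disabled so stale state is dropped.
    void invalidate(String table) {
        disabledByTable.remove(table);
    }

    public static void main(String[] args) {
        AtomicInteger rpcCount = new AtomicInteger();
        TableStateCache cache = new TableStateCache(t -> {
            rpcCount.incrementAndGet(); // pretend this is the expensive master RPC
            return false;
        });
        cache.isTableDisabled("t1");
        cache.isTableDisabled("t1"); // served from cache, no second RPC
        System.out.println(rpcCount.get()); // 1
    }
}
```

The hard part, as the description notes, is invalidation when the state changes behind the client's back.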





[jira] [Updated] (HBASE-12035) Client does an RPC to master everytime a region is relocated

2015-02-09 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-12035:
-
Attachment: HBASE-12035.patch

 Client does an RPC to master everytime a region is relocated
 

 Key: HBASE-12035
 URL: https://issues.apache.org/jira/browse/HBASE-12035
 Project: HBase
  Issue Type: Improvement
  Components: Client, master
Affects Versions: 2.0.0
Reporter: Enis Soztutar
Assignee: Andrey Stepachev
Priority: Critical
 Fix For: 2.0.0

 Attachments: 12035v2.txt, HBASE-12035 (1) (1).patch, HBASE-12035 (1) 
 (1).patch, HBASE-12035 (1).patch, HBASE-12035 (2).patch, HBASE-12035 
 (2).patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch, 
 HBASE-12035.patch, HBASE-12035.patch, HBASE-12035.patch


 HBASE-7767 moved table enabled|disabled state to be kept in hdfs instead of 
 zookeeper. isTableDisabled() which is used in 
 HConnectionImplementation.relocateRegion() now became a master RPC call 
 rather than a zookeeper client call. Since we do relocateRegion() calls 
 every time we want to relocate a region (region moved, RS down, etc.), this 
 implies that when the master is down, some of the clients for uncached 
 regions will be affected. 
 See HBASE-7767 and HBASE-11974 for some more background. 





[jira] [Created] (HBASE-12993) Use HBase 1.0 interfaces in hbase-thrift

2015-02-09 Thread Solomon Duskis (JIRA)
Solomon Duskis created HBASE-12993:
--

 Summary: Use HBase 1.0 interfaces in hbase-thrift
 Key: HBASE-12993
 URL: https://issues.apache.org/jira/browse/HBASE-12993
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 1.0.1


hbase-thrift uses HTable and HBaseAdmin.  It also uses HTablePool, which is an 
outdated concept.

As per [~ndimiduk] on the user group:

{quote}
I believe HTablePool is completely eclipsed by the modern Connection
implementation. We'll need to keep the map of UserName => Connection (or
maybe the ConnectionCache handles this?) Probably a single Connection (per
user) with a large thread pool will do the trick.
{quote}





[jira] [Updated] (HBASE-12991) Use HBase 1.0 interfaces in hbase-rest

2015-02-09 Thread Solomon Duskis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Solomon Duskis updated HBASE-12991:
---
Status: Patch Available  (was: Open)

 Use HBase 1.0 interfaces in hbase-rest
 --

 Key: HBASE-12991
 URL: https://issues.apache.org/jira/browse/HBASE-12991
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 1.0.1

 Attachments: HBASE-12991.patch


 hbase-rest uses HTable and HBaseAdmin under the covers.  They should use the 
 new hbase 1.0 interfaces instead.





[jira] [Commented] (HBASE-12993) Use HBase 1.0 interfaces in hbase-thrift

2015-02-09 Thread Solomon Duskis (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312373#comment-14312373
 ] 

Solomon Duskis commented on HBASE-12993:


hbase-rest and hbase-thrift both share some code that needs to be updated.  
HBASE-12991 will change the common code.  HBASE-12993 will track the changes 
unique to hbase-thrift.

 Use HBase 1.0 interfaces in hbase-thrift
 

 Key: HBASE-12993
 URL: https://issues.apache.org/jira/browse/HBASE-12993
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 1.0.1


 hbase-thrift uses HTable and HBaseAdmin.  It also uses HTablePool, which is 
 an outdated concept.
 As per [~ndimiduk] on the user group:
 {quote}
 I believe HTablePool is completely eclipsed by the modern Connection
 implementation. We'll need to keep the map of UserName => Connection (or
 maybe the ConnectionCache handles this?) Probably a single Connection (per
 user) with a large thread pool will do the trick.
 {quote}





[jira] [Commented] (HBASE-8329) Limit compaction speed

2015-02-09 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312454#comment-14312454
 ] 

Andrew Purtell commented on HBASE-8329:
---

The compilation errors were fixed by the addendum. What I see now has already 
been reported as BUILDS-49

 Limit compaction speed
 --

 Key: HBASE-8329
 URL: https://issues.apache.org/jira/browse/HBASE-8329
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: binlijin
Assignee: zhangduo
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-8329-0.98-addendum.patch, HBASE-8329-0.98.patch, 
 HBASE-8329-10.patch, HBASE-8329-11.patch, HBASE-8329-12.patch, 
 HBASE-8329-2-trunk.patch, HBASE-8329-3-trunk.patch, HBASE-8329-4-trunk.patch, 
 HBASE-8329-5-trunk.patch, HBASE-8329-6-trunk.patch, HBASE-8329-7-trunk.patch, 
 HBASE-8329-8-trunk.patch, HBASE-8329-9-trunk.patch, 
 HBASE-8329-branch-1.patch, HBASE-8329-trunk.patch, HBASE-8329_13.patch, 
 HBASE-8329_14.patch, HBASE-8329_15.patch, HBASE-8329_16.patch, 
 HBASE-8329_17.patch


 There is no speed or resource limit for compaction. I think we should add 
 this feature, especially for request bursts.
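
A minimal sketch of the idea (illustrative, not the attached patches): compare the time a compaction write actually took against the time it should have taken at a configured throughput cap, and sleep for the difference.

```java
public class CompactionThrottle {

    // How long a compaction writer should pause so that its observed
    // throughput does not exceed the configured cap.
    static long requiredDelayMs(long bytesWritten, long elapsedMs, long bytesPerSecond) {
        // time the write *should* have taken at the capped rate
        long expectedMs = bytesWritten * 1000 / bytesPerSecond;
        return Math.max(0, expectedMs - elapsedMs);
    }

    public static void main(String[] args) {
        // Wrote 10 MB in 500 ms, but the cap is 10 MB/s (1000 ms expected),
        // so sleep another 500 ms before the next write.
        System.out.println(requiredDelayMs(10_000_000, 500, 10_000_000)); // 500
    }
}
```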





[jira] [Updated] (HBASE-12991) Use HBase 1.0 interfaces in hbase-rest

2015-02-09 Thread Solomon Duskis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Solomon Duskis updated HBASE-12991:
---
Attachment: HBASE-12991.patch

 Use HBase 1.0 interfaces in hbase-rest
 --

 Key: HBASE-12991
 URL: https://issues.apache.org/jira/browse/HBASE-12991
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 1.0.1

 Attachments: HBASE-12991.patch


 hbase-rest uses HTable and HBaseAdmin under the covers.  They should use the 
 new hbase 1.0 interfaces instead.





[jira] [Assigned] (HBASE-12999) Make foreground_start return the correct exit code

2015-02-09 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark reassigned HBASE-12999:
-

Assignee: Elliott Clark

 Make foreground_start return the correct exit code
 --

 Key: HBASE-12999
 URL: https://issues.apache.org/jira/browse/HBASE-12999
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Elliott Clark
Assignee: Elliott Clark







[jira] [Created] (HBASE-12999) Make foreground_start return the correct exit code

2015-02-09 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-12999:
-

 Summary: Make foreground_start return the correct exit code
 Key: HBASE-12999
 URL: https://issues.apache.org/jira/browse/HBASE-12999
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Elliott Clark








[jira] [Updated] (HBASE-12102) Duplicate keys in HBase.RegionServer metrics JSON

2015-02-09 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-12102:
--
Fix Version/s: (was: 1.0.0)
   1.1.0
   1.0.1

 Duplicate keys in HBase.RegionServer metrics JSON
 -

 Key: HBASE-12102
 URL: https://issues.apache.org/jira/browse/HBASE-12102
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 0.98.10
Reporter: Andrew Purtell
Assignee: Ravi Kishore Valeti
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-12102.patch


 The JSON returned by /jmx on the RegionServer contains duplicate 
 'tag.Context' keys for various HBase.RegionServer metrics. 
 Regions:
 {noformat}
 {
   "name" : "Hadoop:service=HBase,name=RegionServer,sub=Regions",
   "modelerType" : "RegionServer,sub=Regions",
   "tag.Context" : "regionserver",
   "tag.Context" : "regionserver",
   "tag.Hostname" : "some.host.name",
   ...
 }
 {noformat}
 Server:
 {noformat}
 {
   "name" : "Hadoop:service=HBase,name=RegionServer,sub=Server",
   "modelerType" : "RegionServer,sub=Server",
   "tag.Context" : "regionserver",
   "tag.zookeeperQuorum" : "some.zookeeper.quorum.peers",
   "tag.serverName" : "some.server.name",
   "tag.clusterId" : "88c186ea-2308-4713-8b5f-5a3e829cbb10",
   "tag.Context" : "regionserver",
   ...
 }
 {noformat}
 IPC:
 {noformat}
 {
   "name" : "Hadoop:service=HBase,name=IPC,sub=IPC",
   "modelerType" : "IPC,sub=IPC",
   "tag.Context" : "ipc",
   "tag.Context" : "ipc",
   "tag.Hostname" : "some.host.name",
   ...
 }
 {noformat}
 This can cause issues with some JSON parsers. We should avoid emitting 
 duplicate keys if it is under our control.
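
Why duplicate keys break consumers (a sketch of typical parser behavior, not HBase code): most JSON parsers map each object onto a dictionary, so one of the duplicate values is silently dropped, and stricter parsers reject the document outright.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DuplicateKeyDemo {

    // Model what a map-backed JSON parser does with the /jmx output above:
    // the duplicate "tag.Context" key collapses to a single entry.
    static int distinctKeys() {
        Map<String, String> metrics = new LinkedHashMap<>();
        metrics.put("tag.Context", "regionserver");
        metrics.put("tag.Context", "regionserver"); // duplicate key, overwrites the first
        metrics.put("tag.Hostname", "some.host.name");
        return metrics.size();
    }

    public static void main(String[] args) {
        System.out.println(distinctKeys()); // 2, not 3
    }
}
```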





[jira] [Updated] (HBASE-12973) RegionCoprocessorEnvironment should provide HRegionInfo directly

2015-02-09 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-12973:
--
Fix Version/s: (was: 1.0.1)
   1.0.0

 RegionCoprocessorEnvironment should provide HRegionInfo directly
 

 Key: HBASE-12973
 URL: https://issues.apache.org/jira/browse/HBASE-12973
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12973-0.98.patch, HBASE-12973-branch-1.patch, 
 HBASE-12973.patch


 A coprocessor must go through RegionCoprocessorEnvironment#getRegion in order 
 to retrieve HRegionInfo for its associated region. It should be possible to 
 get HRegionInfo directly from RegionCoprocessorEnvironment. (Or Region, see 
 HBASE-12972)





[jira] [Assigned] (HBASE-13000) Backport HBASE-11240 (Print hdfs pipeline when hlog's sync is slow) to 0.98

2015-02-09 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-13000:
---

Assignee: Sean Busbey  (was: Andrew Purtell)

 Backport HBASE-11240 (Print hdfs pipeline when hlog's sync is slow) to 0.98
 ---

 Key: HBASE-13000
 URL: https://issues.apache.org/jira/browse/HBASE-13000
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: Sean Busbey
Priority: Minor
 Fix For: 0.98.11


 Would be useful to learn about abnormal datanodes and slow seeks from logs in 
 an 0.98 install too. Implement for 0.98, incorporating addendums. 





[jira] [Commented] (HBASE-11910) Document Preemptive Call Me Maybe HBase findings in the online manual

2015-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313665#comment-14313665
 ] 

Hadoop QA commented on HBASE-11910:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697652/HBASE-11910.patch
  against master branch at commit 9283b93e225edfaddbb8b24dd1b8214bcd328e97.
  ATTACHMENT ID: 12697652

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12748//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12748//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12748//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12748//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12748//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12748//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12748//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12748//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12748//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12748//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12748//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12748//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12748//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12748//console

This message is automatically generated.

 Document Preemptive Call Me Maybe HBase findings in the online manual
 

 Key: HBASE-11910
 URL: https://issues.apache.org/jira/browse/HBASE-11910
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Misty Stanley-Jones
  Labels: documentation
 Fix For: 2.0.0

 Attachments: HBASE-11910.patch


 Document the Preemptive Call Me Maybe HBase findings in the online manual.





[jira] [Updated] (HBASE-12070) Add an option to hbck to fix ZK inconsistencies

2015-02-09 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-12070:
---
Status: In Progress  (was: Patch Available)

 Add an option to hbck to fix ZK inconsistencies
 ---

 Key: HBASE-12070
 URL: https://issues.apache.org/jira/browse/HBASE-12070
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 1.1.0
Reporter: Sudarshan Kadambi
Assignee: Stephen Yuan Jiang
 Fix For: 1.1.0

 Attachments: HBASE-12070.v1-branch-1.patch, 
 HBASE-12070.v2-branch-1.patch


 If the HMaster bounces in the middle of table creation, we could be left in a 
 state where a znode exists for the table, but that hasn't percolated into 
 META or to HDFS. We've run into this a couple times on our clusters. Once the 
 table is in this state, the only fix is to rm the znode using the 
 zookeeper-client. Doing this manually looks a bit error prone. Could an 
 option be added to hbck to catch and fix such inconsistencies?
 A more general issue I'd like comment on is whether it makes sense for 
 HMaster to be maintaining its own write-ahead log? The idea would be that on 
 a bounce, the master would discover it was in the middle of creating a table 
 and either rollback or complete that operation? An issue that we observed 
 recently was that a table that was in DISABLING state before a bounce was not 
 in that state after. A write-ahead log to persist table state changes seems 
 useful. Now, all of this state could be in ZK instead of the WAL - it doesn't 
 matter where it gets persisted as long as it does.
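
The consistency check being asked for reduces to set logic (an illustrative sketch, not actual hbck code): a table znode with no matching META entry is an orphan that hbck could flag or remove.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class OrphanTableCheck {

    // Tables that have a znode in ZooKeeper but never made it into META
    // are the orphans left behind by a mid-creation HMaster bounce.
    static Set<String> orphanZnodes(Set<String> tablesInZk, Set<String> tablesInMeta) {
        Set<String> orphans = new HashSet<>(tablesInZk);
        orphans.removeAll(tablesInMeta);
        return orphans;
    }

    public static void main(String[] args) {
        Set<String> zk = new HashSet<>(Arrays.asList("t1", "t2", "half_created"));
        Set<String> meta = new HashSet<>(Arrays.asList("t1", "t2"));
        System.out.println(orphanZnodes(zk, meta)); // [half_created]
        System.out.println(orphanZnodes(meta, zk).equals(Collections.emptySet())); // true
    }
}
```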





[jira] [Updated] (HBASE-12999) Make foreground_start return the correct exit code

2015-02-09 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12999:
--
Status: Patch Available  (was: Open)

 Make foreground_start return the correct exit code
 --

 Key: HBASE-12999
 URL: https://issues.apache.org/jira/browse/HBASE-12999
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-12999-v1.patch, HBASE-12999.patch








[jira] [Created] (HBASE-13002) Make encryption cipher configurable

2015-02-09 Thread Ashish Singhi (JIRA)
Ashish Singhi created HBASE-13002:
-

 Summary: Make encryption cipher configurable
 Key: HBASE-13002
 URL: https://issues.apache.org/jira/browse/HBASE-13002
 Project: HBase
  Issue Type: Improvement
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11


Make the encryption cipher configurable; currently it is hard-coded to AES.
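
The shape of the change can be sketched with plain java.util.Properties (the property key and class are illustrative, not HBase's actual configuration API): read the cipher name from configuration, defaulting to AES so current behavior is preserved.

```java
import java.util.Properties;

public class CipherConfigDemo {

    // Resolve the cipher name from configuration; the key name here is
    // a placeholder, not necessarily what the patch uses.
    static String cipherName(Properties conf) {
        return conf.getProperty("hbase.crypto.cipher", "AES");
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(cipherName(conf)); // AES (the default)
        conf.setProperty("hbase.crypto.cipher", "AES256");
        System.out.println(cipherName(conf)); // AES256
    }
}
```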





[jira] [Commented] (HBASE-12102) Duplicate keys in HBase.RegionServer metrics JSON

2015-02-09 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313435#comment-14313435
 ] 

Andrew Purtell commented on HBASE-12102:


Thanks [~rvaleti]. Did you want to provide a 0.98 patch also? If so we'll need 
similar changes in the hbase-hadoop1-compat module. 

 Duplicate keys in HBase.RegionServer metrics JSON
 -

 Key: HBASE-12102
 URL: https://issues.apache.org/jira/browse/HBASE-12102
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 0.98.10
Reporter: Andrew Purtell
Assignee: Ravi Kishore Valeti
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-12102.patch


 The JSON returned by /jmx on the RegionServer contains duplicate 
 'tag.Context' keys for various HBase.RegionServer metrics. 
 Regions:
 {noformat}
 {
   "name" : "Hadoop:service=HBase,name=RegionServer,sub=Regions",
   "modelerType" : "RegionServer,sub=Regions",
   "tag.Context" : "regionserver",
   "tag.Context" : "regionserver",
   "tag.Hostname" : "some.host.name",
   ...
 }
 {noformat}
 Server:
 {noformat}
 {
   "name" : "Hadoop:service=HBase,name=RegionServer,sub=Server",
   "modelerType" : "RegionServer,sub=Server",
   "tag.Context" : "regionserver",
   "tag.zookeeperQuorum" : "some.zookeeper.quorum.peers",
   "tag.serverName" : "some.server.name",
   "tag.clusterId" : "88c186ea-2308-4713-8b5f-5a3e829cbb10",
   "tag.Context" : "regionserver",
   ...
 }
 {noformat}
 IPC:
 {noformat}
 {
   "name" : "Hadoop:service=HBase,name=IPC,sub=IPC",
   "modelerType" : "IPC,sub=IPC",
   "tag.Context" : "ipc",
   "tag.Context" : "ipc",
   "tag.Hostname" : "some.host.name",
   ...
 }
 {noformat}
 This can cause issues with some JSON parsers. We should avoid emitting 
 duplicate keys if it is under our control.





[jira] [Commented] (HBASE-12996) Reversed field on Filter should be transient

2015-02-09 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313438#comment-14313438
 ] 

Enis Soztutar commented on HBASE-12996:
---

Ian or Stack, can you give some more background to this? Did you guys really 
mean {{volatile}} rather than {{transient}}? Filter does not have anything to 
do with Java serialization. 
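
For background on the distinction raised here: {{transient}} only affects Java serialization (the field is skipped and comes back as its default), while {{volatile}} affects cross-thread visibility. A minimal sketch with an illustrative Filter-like class:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class TransientDemo {

    static class Filter implements Serializable {
        private static final long serialVersionUID = 1L;
        protected transient boolean reversed = true; // skipped by Java serialization
    }

    // Serialize and deserialize; the transient field resets to its default.
    static boolean roundTrip() {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(new Filter());
            oos.flush();
            ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
            return ((Filter) ois.readObject()).reversed;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip()); // false: the true value was not serialized
    }
}
```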

 Reversed field on Filter should be transient
 

 Key: HBASE-12996
 URL: https://issues.apache.org/jira/browse/HBASE-12996
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0, 0.98.11
Reporter: Ian Friedman
Priority: Trivial
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12996.patch


 Filter has the field
 {code}
   protected boolean reversed;
 {code}
 which should be marked transient.





[jira] [Commented] (HBASE-12984) SSL cannot be used by the InfoPort after removing deprecated code in HBASE-10336

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313437#comment-14313437
 ] 

Hudson commented on HBASE-12984:


SUCCESS: Integrated in HBase-1.0 #723 (See 
[https://builds.apache.org/job/HBase-1.0/723/])
HBASE-12984: SSL cannot be used by the InfoPort in branch-1 (enis: rev 
75d3334ce6e770bf2b2df7b17fbff7eba42c08c7)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestHBaseTestingUtility.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/http/InfoServer.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/http/HttpConfig.java


 SSL cannot be used by the InfoPort after removing deprecated code in 
 HBASE-10336
 

 Key: HBASE-12984
 URL: https://issues.apache.org/jira/browse/HBASE-12984
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 1.1.0
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
Priority: Blocker
 Fix For: 1.0.0, 2.0.0, 1.1.0

 Attachments: HBASE-12984-v1.txt, HBASE-12984-v3.txt, 
 HBASE-12984-v3.txt, HBASE-12984-v4.txt


 Setting {{hbase.ssl.enabled}} to {{true}} doesn't enable SSL on the 
 InfoServer. Found that the problem is in the InfoServer and HttpConfig, in 
 how we set up the protocol in the HttpServer:
 {code}
 for (URI ep : endpoints) {
   Connector listener = null;
   String scheme = ep.getScheme();
   if ("http".equals(scheme)) {
     listener = HttpServer.createDefaultChannelConnector();
   } else if ("https".equals(scheme)) {
     SslSocketConnector c = new SslSocketConnectorSecure();
     c.setNeedClientAuth(needsClientAuth);
     c.setKeyPassword(keyPassword);
 {code}
 It depends what end points have been added by the InfoServer:
 {code}
 builder
   .setName(name)
   .addEndpoint(URI.create("http://" + bindAddress + ":" + port))
   .setAppDir(HBASE_APP_DIR).setFindPort(findPort).setConf(c);
 {code}
 Basically we always use http: we don't check via HttpConfig whether 
 {{hbase.ssl.enabled}} was set to true, and we never assign the right scheme 
 based on the configuration.
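
The missing logic amounts to deriving the scheme from the ssl flag before building the endpoint URI (a hypothetical sketch with illustrative names, not the actual InfoServer code):

```java
public class InfoServerEndpoint {

    // Pick the endpoint scheme from the ssl flag instead of hard-coding "http".
    static String endpoint(boolean sslEnabled, String bindAddress, int port) {
        String scheme = sslEnabled ? "https" : "http";
        return scheme + "://" + bindAddress + ":" + port;
    }

    public static void main(String[] args) {
        System.out.println(endpoint(false, "rs1.example.com", 16010)); // http://rs1.example.com:16010
        System.out.println(endpoint(true, "rs1.example.com", 16010));  // https://rs1.example.com:16010
    }
}
```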





[jira] [Updated] (HBASE-12999) Make foreground_start return the correct exit code

2015-02-09 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12999:
--
Attachment: HBASE-12999-v1.patch

 Make foreground_start return the correct exit code
 --

 Key: HBASE-12999
 URL: https://issues.apache.org/jira/browse/HBASE-12999
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-12999-v1.patch, HBASE-12999.patch








[jira] [Updated] (HBASE-12701) Document how to set the split policy on a given table

2015-02-09 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-12701:

Attachment: HBASE-12701-asciidoc.patch

Converted the work to Asciidoc and addressed [~ndimiduk]'s feedback. Since it 
had a +1 already, I'll commit this.

 Document how to set the split policy on a given table
 -

 Key: HBASE-12701
 URL: https://issues.apache.org/jira/browse/HBASE-12701
 Project: HBase
  Issue Type: Task
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Attachments: HBASE-12701-asciidoc.patch, HBASE-12701.patch


 Need to document in the ref guide how to set/change the region split policy 
 for a single table using the API and the HBase shell, as noted below as an 
 example.
 Using Java:
 HTableDescriptor tableDesc = new HTableDescriptor("test");
 tableDesc.setValue(HTableDescriptor.SPLIT_POLICY, 
 ConstantSizeRegionSplitPolicy.class.getName());
 tableDesc.addFamily(new HColumnDescriptor(Bytes.toBytes("cf1")));
 admin.createTable(tableDesc);
 Using HBase Shell:
 create 'test', {METHOD => 'table_att', CONFIG => {'SPLIT_POLICY' => 
 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}},
 {NAME => 'cf1'}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13001) NullPointer in master logs for table.jsp

2015-02-09 Thread Vikas Vishwakarma (JIRA)
Vikas Vishwakarma created HBASE-13001:
-

 Summary: NullPointer in master logs for table.jsp
 Key: HBASE-13001
 URL: https://issues.apache.org/jira/browse/HBASE-13001
 Project: HBase
  Issue Type: Bug
Reporter: Vikas Vishwakarma
Priority: Trivial


Seeing a NullPointer issue in master logs similar to HBASE-6607

2015-02-09 14:04:00,622 ERROR org.mortbay.log: /table.jsp
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.generated.master.table_jsp._jspService(table_jsp.java:71)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1087)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13001) NullPointer in master logs for table.jsp

2015-02-09 Thread Vikas Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikas Vishwakarma updated HBASE-13001:
--
Affects Version/s: 0.98.10

 NullPointer in master logs for table.jsp
 

 Key: HBASE-13001
 URL: https://issues.apache.org/jira/browse/HBASE-13001
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.10
Reporter: Vikas Vishwakarma
Priority: Trivial

 Seeing a NullPointer issue in master logs similar to HBASE-6607
 2015-02-09 14:04:00,622 ERROR org.mortbay.log: /table.jsp
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.generated.master.table_jsp._jspService(table_jsp.java:71)
 at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
 at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
 at 
 org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1087)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
 at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
 at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
 at 
 org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
 at 
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
 at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
 at org.mortbay.jetty.Server.handle(Server.java:326)
 at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
 at 
 org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
 at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
 at 
 org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
 at 
 org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12996) Reversed field on Filter should be transient

2015-02-09 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313447#comment-14313447
 ] 

Dave Latham commented on HBASE-12996:
-

{{transient}} is as much to document as anything else that the field is not part 
of the serialization contract of Filters, even though they don't use Java 
serialization.  Our unit tests, for example, populate fields with 
random values, serialize, deserialize, and do a deep equals comparison as an 
automatic serialization test.  The test ignores fields marked transient.
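An illustrative sketch of how such a test can honor the {{transient}} marker (not the actual HBase test code; `SampleFilter` and `comparableFields` are invented names): reflect over the declared fields and skip any with the transient modifier.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

// Sketch: collect the fields a reflection-based serialization round-trip
// test would compare, skipping anything marked transient.
public class TransientAwareFields {
    static class SampleFilter {
        protected byte[] value = new byte[] {1};
        protected transient boolean reversed; // excluded from comparison
    }

    static List<String> comparableFields(Class<?> clazz) {
        List<String> names = new ArrayList<>();
        for (Field f : clazz.getDeclaredFields()) {
            if (!Modifier.isTransient(f.getModifiers())) {
                names.add(f.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(comparableFields(SampleFilter.class));
    }
}
```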

 Reversed field on Filter should be transient
 

 Key: HBASE-12996
 URL: https://issues.apache.org/jira/browse/HBASE-12996
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0, 0.98.11
Reporter: Ian Friedman
Priority: Trivial
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12996.patch


 Filter has the field
 {code}
   protected boolean reversed;
 {code}
 which should be marked transient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9910) TestHFilePerformance and HFilePerformanceEvaluation should be merged in a single HFile performance test class.

2015-02-09 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313443#comment-14313443
 ] 

Andrew Purtell commented on HBASE-9910:
---

Those errors are probably not related to this patch [~vik.karma]

 TestHFilePerformance and HFilePerformanceEvaluation should be merged in a 
 single HFile performance test class.
 --

 Key: HBASE-9910
 URL: https://issues.apache.org/jira/browse/HBASE-9910
 Project: HBase
  Issue Type: Bug
  Components: Performance, test
Affects Versions: 2.0.0
Reporter: Jean-Marc Spaggiari
Assignee: Vikas Vishwakarma
 Fix For: 2.0.0

 Attachments: HBASE-9910.patch


 Today TestHFilePerformance and HFilePerformanceEvaluation are doing slightly 
 different kind of performance tests both for the HFile. We should consider 
 merging those 2 tests in a single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12070) Add an option to hbck to fix ZK inconsistencies

2015-02-09 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-12070:
---
Status: Patch Available  (was: In Progress)

 Add an option to hbck to fix ZK inconsistencies
 ---

 Key: HBASE-12070
 URL: https://issues.apache.org/jira/browse/HBASE-12070
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 1.1.0
Reporter: Sudarshan Kadambi
Assignee: Stephen Yuan Jiang
 Fix For: 1.1.0

 Attachments: HBASE-12070.v1-branch-1.patch, 
 HBASE-12070.v2-branch-1.patch


 If the HMaster bounces in the middle of table creation, we could be left in a 
 state where a znode exists for the table, but that hasn't percolated into 
 META or to HDFS. We've run into this a couple times on our clusters. Once the 
 table is in this state, the only fix is to rm the znode using the 
 zookeeper-client. Doing this manually looks a bit error prone. Could an 
 option be added to hbck to catch and fix such inconsistencies?
 A more general issue I'd like comment on is whether it makes sense for 
 HMaster to be maintaining its own write-ahead log? The idea would be that on 
 a bounce, the master would discover it was in the middle of creating a table 
 and either roll back or complete that operation. An issue that we observed 
 recently was that a table that was in DISABLING state before a bounce was not 
 in that state after. A write-ahead log to persist table state changes seems 
 useful. Now, all of this state could be in ZK instead of the WAL - it doesn't 
 matter where it gets persisted as long as it does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12973) RegionCoprocessorEnvironment should provide HRegionInfo directly

2015-02-09 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12973:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed, thanks for the reviews

 RegionCoprocessorEnvironment should provide HRegionInfo directly
 

 Key: HBASE-12973
 URL: https://issues.apache.org/jira/browse/HBASE-12973
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-12973-0.98.patch, HBASE-12973-branch-1.patch, 
 HBASE-12973.patch


 A coprocessor must go through RegionCoprocessorEnvironment#getRegion in order 
 to retrieve HRegionInfo for its associated region. It should be possible to 
 get HRegionInfo directly from RegionCoprocessorEnvironment. (Or Region, see 
 HBASE-12972)
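A sketch of the proposed shape of the API, using stub types rather than the real HBase classes, to show the direct accessor replacing the getRegion() hop:

```java
// Stub types for illustration only; not the real HBase classes.
class HRegionInfo {
    private final String name;
    HRegionInfo(String name) { this.name = name; }
    String getRegionNameAsString() { return name; }
}

// Proposed addition: expose the region's info directly on the environment
// instead of requiring env.getRegion().getRegionInfo().
interface RegionCoprocessorEnvironment {
    HRegionInfo getRegionInfo();
}

public class CoprocessorEnvSketch implements RegionCoprocessorEnvironment {
    private final HRegionInfo info = new HRegionInfo("test-region");
    public HRegionInfo getRegionInfo() { return info; }

    public static void main(String[] args) {
        RegionCoprocessorEnvironment env = new CoprocessorEnvSketch();
        System.out.println(env.getRegionInfo().getRegionNameAsString());
    }
}
```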



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-8329) Limit compaction speed

2015-02-09 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313382#comment-14313382
 ] 

zhangduo commented on HBASE-8329:
-

[~apurtell] My question is should we add 0.98.11 to the fix versions? Thanks~

 Limit compaction speed
 --

 Key: HBASE-8329
 URL: https://issues.apache.org/jira/browse/HBASE-8329
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: binlijin
Assignee: zhangduo
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-8329-0.98-addendum.patch, HBASE-8329-0.98.patch, 
 HBASE-8329-10.patch, HBASE-8329-11.patch, HBASE-8329-12.patch, 
 HBASE-8329-2-trunk.patch, HBASE-8329-3-trunk.patch, HBASE-8329-4-trunk.patch, 
 HBASE-8329-5-trunk.patch, HBASE-8329-6-trunk.patch, HBASE-8329-7-trunk.patch, 
 HBASE-8329-8-trunk.patch, HBASE-8329-9-trunk.patch, 
 HBASE-8329-branch-1.patch, HBASE-8329-trunk.patch, HBASE-8329_13.patch, 
 HBASE-8329_14.patch, HBASE-8329_15.patch, HBASE-8329_16.patch, 
 HBASE-8329_17.patch


 There is no speed or resource limit for compaction. I think we should add this 
 feature, especially when requests burst.
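One common shape for such a limit is byte-rate throttling: track cumulative bytes written and pause whenever the compaction runs ahead of the configured rate. A minimal deterministic sketch (illustrative names, not the HBase implementation):

```java
// Sketch of compaction throughput throttling: given cumulative bytes written
// and wall-clock time elapsed, compute how long the writer should pause so
// the effective rate never exceeds maxBytesPerSec.
public class CompactionThrottle {
    private final long maxBytesPerSec;
    private long totalBytes;

    public CompactionThrottle(long maxBytesPerSec) {
        this.maxBytesPerSec = maxBytesPerSec;
    }

    /** Milliseconds the caller should sleep after writing more bytes. */
    public long pauseMillis(long bytesWritten, long elapsedMillis) {
        totalBytes += bytesWritten;
        // Time by which totalBytes *should* have taken at the allowed rate.
        long expectedMillis = totalBytes * 1000 / maxBytesPerSec;
        return Math.max(0, expectedMillis - elapsedMillis);
    }
}
```

At 1000 bytes/sec, writing 500 bytes instantly yields a 500 ms pause; if the compaction is already behind schedule, no pause is needed.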



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-8329) Limit compaction speed

2015-02-09 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-8329:
--
Fix Version/s: 0.98.11

 Limit compaction speed
 --

 Key: HBASE-8329
 URL: https://issues.apache.org/jira/browse/HBASE-8329
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: binlijin
Assignee: zhangduo
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-8329-0.98-addendum.patch, HBASE-8329-0.98.patch, 
 HBASE-8329-10.patch, HBASE-8329-11.patch, HBASE-8329-12.patch, 
 HBASE-8329-2-trunk.patch, HBASE-8329-3-trunk.patch, HBASE-8329-4-trunk.patch, 
 HBASE-8329-5-trunk.patch, HBASE-8329-6-trunk.patch, HBASE-8329-7-trunk.patch, 
 HBASE-8329-8-trunk.patch, HBASE-8329-9-trunk.patch, 
 HBASE-8329-branch-1.patch, HBASE-8329-trunk.patch, HBASE-8329_13.patch, 
 HBASE-8329_14.patch, HBASE-8329_15.patch, HBASE-8329_16.patch, 
 HBASE-8329_17.patch


 There is no speed or resource limit for compaction. I think we should add this 
 feature, especially when requests burst.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12998) Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647

2015-02-09 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-12998:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've pushed this. Thanks Stack for review. HDFS-7756 is related and, it seems, 
might revert the change, but this jira will work either way. 

 Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647
 --

 Key: HBASE-12998
 URL: https://issues.apache.org/jira/browse/HBASE-12998
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11

 Attachments: hbase-12998-v1.patch


 HDFS-7647 changed internal API related to LocatedBlocks and DataNodeInfo.  We 
 can fix it trivially in HBase for now. 
 {code}
 [INFO] -
 [ERROR] COMPILATION ERROR : 
 [INFO] -
 [ERROR] 
 /Users/enis/projects/hbase-champlain/hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java:[175,41]
  error: incompatible types
 {code}
 Longer term, we should add an API for advanced hdfs users (like HBase) to 
 deprioritize / reorder locations for blocks based on what the client thinks. 
 [~arpitagarwal] FYI. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12989) region_mover.rb unloadRegions method uses ArrayList concurrently resulting in errors

2015-02-09 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313442#comment-14313442
 ] 

Andrew Purtell commented on HBASE-12989:


No reason we couldn't actually replace region_mover.rb with a wrapper script 
around a new Java tool. 

 region_mover.rb unloadRegions method uses ArrayList concurrently resulting in 
 errors
 

 Key: HBASE-12989
 URL: https://issues.apache.org/jira/browse/HBASE-12989
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.98.10
Reporter: Abhishek Singh Chouhan
Assignee: Abhishek Singh Chouhan
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-12989.patch


 While working on HBASE-12822 ran into 
 {quote}
 15/02/04 13:16:44 [main] INFO  reflect.GeneratedMethodAccessor35(?): Pool 
 completed
 NoMethodError: undefined method `toByteArray' for nil:NilClass
 __for__ at region_mover.rb:270
each at 
 file:/home/sfdc/installed/bigdata-hbase__hbase.6_prod__9707645_Linux.x86_64.prod.runtime.bigdata-hbase_bigdata-hbase/hbase/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7
   writeFile at region_mover.rb:269
   unloadRegions at region_mover.rb:347
  (root) at region_mover.rb:501
 {quote}
 This is because 
 movedRegions = java.util.ArrayList.new()
 is being used concurrently in unloadRegions.
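The fix direction can be sketched in plain Java (illustrative, not the actual region_mover.rb code): share a thread-safe list across the mover threads instead of a plain java.util.ArrayList, which is unsafe under concurrent add().

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: many worker threads appending moved-region names to one shared
// list, as unloadRegions does. CopyOnWriteArrayList (or a synchronized
// list) keeps every addition; a bare ArrayList could drop or corrupt entries.
public class ConcurrentMovedRegions {
    static int moveAll(int regionCount) {
        List<String> movedRegions = new CopyOnWriteArrayList<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < regionCount; i++) {
            final int id = i;
            pool.execute(() -> movedRegions.add("region-" + id));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return movedRegions.size();
    }

    public static void main(String[] args) {
        System.out.println(moveAll(100));
    }
}
```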



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12989) region_mover.rb unloadRegions method uses ArrayList concurrently resulting in errors

2015-02-09 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313440#comment-14313440
 ] 

Andrew Purtell commented on HBASE-12989:


+1

What do you think about deleting region_mover.rb and replacing it with a Java 
tool implemented with attention paid to concurrency, now that multithreaded 
operation has been introduced? It's practically Java already and we've already 
had issues due to Ruby language level things like associative arrays not being 
threadsafe.

 region_mover.rb unloadRegions method uses ArrayList concurrently resulting in 
 errors
 

 Key: HBASE-12989
 URL: https://issues.apache.org/jira/browse/HBASE-12989
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.98.10
Reporter: Abhishek Singh Chouhan
Assignee: Abhishek Singh Chouhan
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-12989.patch


 While working on HBASE-12822 ran into 
 {quote}
 15/02/04 13:16:44 [main] INFO  reflect.GeneratedMethodAccessor35(?): Pool 
 completed
 NoMethodError: undefined method `toByteArray' for nil:NilClass
 __for__ at region_mover.rb:270
each at 
 file:/home/sfdc/installed/bigdata-hbase__hbase.6_prod__9707645_Linux.x86_64.prod.runtime.bigdata-hbase_bigdata-hbase/hbase/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7
   writeFile at region_mover.rb:269
   unloadRegions at region_mover.rb:347
  (root) at region_mover.rb:501
 {quote}
 This is because 
 movedRegions = java.util.ArrayList.new()
 is being used concurrently in unloadRegions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12998) Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313453#comment-14313453
 ] 

Hudson commented on HBASE-12998:


FAILURE: Integrated in HBase-TRUNK #6108 (See 
[https://builds.apache.org/job/HBase-TRUNK/6108/])
HBASE-12998 Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647 
(enis: rev f97c00fd99609214830e68f52c1ec48c4e506c1c)
* hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java


 Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647
 --

 Key: HBASE-12998
 URL: https://issues.apache.org/jira/browse/HBASE-12998
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11

 Attachments: hbase-12998-v1.patch


 HDFS-7647 changed internal API related to LocatedBlocks and DataNodeInfo.  We 
 can fix it trivially in HBase for now. 
 {code}
 [INFO] -
 [ERROR] COMPILATION ERROR : 
 [INFO] -
 [ERROR] 
 /Users/enis/projects/hbase-champlain/hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java:[175,41]
  error: incompatible types
 {code}
 Longer term, we should add an API for advanced hdfs users (like HBase) to 
 deprioritize / reorder locations for blocks based on what the client thinks. 
 [~arpitagarwal] FYI. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12747) IntegrationTestMTTR will OOME if launched with mvn verify

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313455#comment-14313455
 ] 

Hudson commented on HBASE-12747:


FAILURE: Integrated in HBase-TRUNK #6108 (See 
[https://builds.apache.org/job/HBase-TRUNK/6108/])
HBASE-12747 IntegrationTestMTTR will OOME if launched with mvn verify (Abhishek 
Singh Chouhan) (apurtell: rev 200ec5b191262ac356639b0390d7d72ab93feef3)
* pom.xml
* hbase-it/pom.xml


 IntegrationTestMTTR will OOME if launched with mvn verify
 -

 Key: HBASE-12747
 URL: https://issues.apache.org/jira/browse/HBASE-12747
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.9
Reporter: Andrew Purtell
Assignee: Abhishek Singh Chouhan
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-12747-v1.patch, HBASE-12747.patch, 
 org.apache.hadoop.hbase.mttr.IntegrationTestMTTR-output.txt.gz


 IntegrationTestMTTR will OOME if launched like:
 {noformat}
 cd hbase-it
 mvn verify -Dit.test=IntegrationTestMTTR
 {noformat}
 Linux environment, 7u67.
 Looks like we should bump the heap on the failsafe argline in the POM. 
 {noformat}
 2014-12-22 11:24:07,725 ERROR 
 [B.DefaultRpcServer.handler=2,queue=0,port=55672] ipc.RpcServer(2067): 
 Unexpected throwable object 
 java.lang.OutOfMemoryError: Java heap space
 at 
 org.apache.hadoop.hbase.regionserver.MemStoreLAB$Chunk.init(MemStoreLAB.java:246)
 at 
 org.apache.hadoop.hbase.regionserver.MemStoreLAB.getOrMakeChunk(MemStoreLAB.java:196)
 at 
 org.apache.hadoop.hbase.regionserver.MemStoreLAB.allocateBytes(MemStoreLAB.java:114)
 at 
 org.apache.hadoop.hbase.regionserver.MemStore.maybeCloneWithAllocator(MemStore.java:274)
 at 
 org.apache.hadoop.hbase.regionserver.MemStore.add(MemStore.java:229)
 at org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:576)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.applyFamilyMapToMemstore(HRegion.java:3084)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2517)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2284)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2239)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2243)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4482)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3665)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3554)
 {noformat}
 Another minor issue: After taking the OOME, the test executor will linger 
 indefinitely as a zombie. 
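The failsafe argline bump mentioned in the description would typically look something like this in the POM (a sketch; the element names are standard maven-failsafe-plugin configuration, and the heap size here is illustrative, not the value the patch uses):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <!-- Illustrative: raise the forked JVM heap for integration tests -->
    <argLine>-Xmx4g</argLine>
  </configuration>
</plugin>
```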



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12585) Fix refguide so it does hbase 1.0 style API everywhere with callout on how we used to do it in pre-1.0

2015-02-09 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-12585:

Status: Patch Available  (was: Open)

 Fix refguide so it does hbase 1.0 style API everywhere with callout on how we 
 used to do it in pre-1.0
 --

 Key: HBASE-12585
 URL: https://issues.apache.org/jira/browse/HBASE-12585
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: stack
Assignee: Misty Stanley-Jones
 Fix For: 1.0.1, 1.1.0

 Attachments: HBASE-12585.patch


 Over in HBASE-12400, made a start on this project writing up how the new 
 HBase 1.0 API looks.  I started in on the refguide removing all HTable 
 references replacing with new style and in the hbase client chapter added 
 leadoff that has users go get a cluster Connection first
 Doing a thorough job of rinsing the doc of old style foregrounding the new 
 mode is a big job.
 [~misty] Any chance of help on this one?  Thanks boss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12585) Fix refguide so it does hbase 1.0 style API everywhere with callout on how we used to do it in pre-1.0

2015-02-09 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-12585:

Attachment: HBASE-12585.patch

Let's see if I have screwed up anything major in these examples. I'll put up a 
RB.

 Fix refguide so it does hbase 1.0 style API everywhere with callout on how we 
 used to do it in pre-1.0
 --

 Key: HBASE-12585
 URL: https://issues.apache.org/jira/browse/HBASE-12585
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: stack
Assignee: Misty Stanley-Jones
 Fix For: 1.0.1, 1.1.0

 Attachments: HBASE-12585.patch


 Over in HBASE-12400, made a start on this project writing up how the new 
 HBase 1.0 API looks.  I started in on the refguide removing all HTable 
 references replacing with new style and in the hbase client chapter added 
 leadoff that has users go get a cluster Connection first
 Doing a thorough job of rinsing the doc of old style foregrounding the new 
 mode is a big job.
 [~misty] Any chance of help on this one?  Thanks boss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12998) Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313518#comment-14313518
 ] 

Hudson commented on HBASE-12998:


FAILURE: Integrated in HBase-1.0 #724 (See 
[https://builds.apache.org/job/HBase-1.0/724/])
HBASE-12998 Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647 
(enis: rev ea431871db5a99eb64eb742558e2cccb867c355f)
* hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java


 Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647
 --

 Key: HBASE-12998
 URL: https://issues.apache.org/jira/browse/HBASE-12998
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11

 Attachments: hbase-12998-v1.patch


 HDFS-7647 changed internal API related to LocatedBlocks and DataNodeInfo.  We 
 can fix it trivially in HBase for now. 
 {code}
 [INFO] -
 [ERROR] COMPILATION ERROR : 
 [INFO] -
 [ERROR] 
 /Users/enis/projects/hbase-champlain/hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java:[175,41]
  error: incompatible types
 {code}
 Longer term, we should add an API for advanced hdfs users (like HBase) to 
 deprioritize / reorder locations for blocks based on what the client thinks. 
 [~arpitagarwal] FYI. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12747) IntegrationTestMTTR will OOME if launched with mvn verify

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313520#comment-14313520
 ] 

Hudson commented on HBASE-12747:


FAILURE: Integrated in HBase-1.0 #724 (See 
[https://builds.apache.org/job/HBase-1.0/724/])
HBASE-12747 IntegrationTestMTTR will OOME if launched with mvn verify (Abhishek 
Singh Chouhan) (apurtell: rev 71edf3ffef6a5c2f6ab5edc02a79b11b2f43)
* pom.xml
* hbase-it/pom.xml


 IntegrationTestMTTR will OOME if launched with mvn verify
 -

 Key: HBASE-12747
 URL: https://issues.apache.org/jira/browse/HBASE-12747
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.9
Reporter: Andrew Purtell
Assignee: Abhishek Singh Chouhan
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-12747-v1.patch, HBASE-12747.patch, 
 org.apache.hadoop.hbase.mttr.IntegrationTestMTTR-output.txt.gz


 IntegrationTestMTTR will OOME if launched like:
 {noformat}
 cd hbase-it
 mvn verify -Dit.test=IntegrationTestMTTR
 {noformat}
 Linux environment, 7u67.
 Looks like we should bump the heap on the failsafe argline in the POM. 
 {noformat}
 2014-12-22 11:24:07,725 ERROR 
 [B.DefaultRpcServer.handler=2,queue=0,port=55672] ipc.RpcServer(2067): 
 Unexpected throwable object 
 java.lang.OutOfMemoryError: Java heap space
 at 
 org.apache.hadoop.hbase.regionserver.MemStoreLAB$Chunk.init(MemStoreLAB.java:246)
 at 
 org.apache.hadoop.hbase.regionserver.MemStoreLAB.getOrMakeChunk(MemStoreLAB.java:196)
 at 
 org.apache.hadoop.hbase.regionserver.MemStoreLAB.allocateBytes(MemStoreLAB.java:114)
 at 
 org.apache.hadoop.hbase.regionserver.MemStore.maybeCloneWithAllocator(MemStore.java:274)
 at 
 org.apache.hadoop.hbase.regionserver.MemStore.add(MemStore.java:229)
 at org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:576)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.applyFamilyMapToMemstore(HRegion.java:3084)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2517)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2284)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2239)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2243)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4482)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3665)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3554)
 {noformat}
 Another minor issue: After taking the OOME, the test executor will linger 
 indefinitely as a zombie. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12991) Use HBase 1.0 interfaces in hbase-rest

2015-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313517#comment-14313517
 ] 

Hadoop QA commented on HBASE-12991:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697512/HBASE-12991.patch
  against master branch at commit f97c00fd99609214830e68f52c1ec48c4e506c1c.
  ATTACHMENT ID: 12697512

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1941 checkstyle errors (more than the master's current 1940 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at org.apache.reef.tests.taskresubmit.TaskResubmitTest.testTaskResubmission(TaskResubmitTest.java:67)
at org.apache.hadoop.hbase.TestAcidGuarantees.testScanAtomicity(TestAcidGuarantees.java:354)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12746//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12746//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12746//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12746//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12746//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12746//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12746//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12746//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12746//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12746//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12746//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12746//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12746//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12746//console

This message is automatically generated.

 Use HBase 1.0 interfaces in hbase-rest
 --

 Key: HBASE-12991
 URL: https://issues.apache.org/jira/browse/HBASE-12991
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 1.0.1

 Attachments: HBASE-12991.patch


 hbase-rest uses HTable and HBaseAdmin under the covers. It should use the 
 new HBase 1.0 interfaces instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12973) RegionCoprocessorEnvironment should provide HRegionInfo directly

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313521#comment-14313521
 ] 

Hudson commented on HBASE-12973:


FAILURE: Integrated in HBase-1.0 #724 (See 
[https://builds.apache.org/job/HBase-1.0/724/])
HBASE-12973 RegionCoprocessorEnvironment should provide HRegionInfo directly 
(apurtell: rev 8afd2a872440c9be789be5f000420e8cec5712c8)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestTokenAuthentication.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionCoprocessorEnvironment.java


 RegionCoprocessorEnvironment should provide HRegionInfo directly
 

 Key: HBASE-12973
 URL: https://issues.apache.org/jira/browse/HBASE-12973
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12973-0.98.patch, HBASE-12973-branch-1.patch, 
 HBASE-12973.patch


 A coprocessor must go through RegionCoprocessorEnvironment#getRegion in order 
 to retrieve HRegionInfo for its associated region. It should be possible to 
 get HRegionInfo directly from RegionCoprocessorEnvironment. (Or Region, see 
 HBASE-12972)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13000) Backport HBASE-11240 (Print hdfs pipeline when hlog's sync is slow) to 0.98

2015-02-09 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13000:

Attachment: HBASE-13000-0.98.1.patch

attaching for QA run.

 Backport HBASE-11240 (Print hdfs pipeline when hlog's sync is slow) to 0.98
 ---

 Key: HBASE-13000
 URL: https://issues.apache.org/jira/browse/HBASE-13000
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: Sean Busbey
Priority: Minor
 Fix For: 0.98.11

 Attachments: HBASE-13000-0.98.1.patch


 Would be useful to learn about abnormal datanodes and slow seeks from logs in 
 an 0.98 install too. Implement for 0.98, incorporating addendums. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13000) Backport HBASE-11240 (Print hdfs pipeline when hlog's sync is slow) to 0.98

2015-02-09 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13000:

Status: Patch Available  (was: Open)

 Backport HBASE-11240 (Print hdfs pipeline when hlog's sync is slow) to 0.98
 ---

 Key: HBASE-13000
 URL: https://issues.apache.org/jira/browse/HBASE-13000
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: Sean Busbey
Priority: Minor
 Fix For: 0.98.11

 Attachments: HBASE-13000-0.98.1.patch


 Would be useful to learn about abnormal datanodes and slow seeks from logs in 
 an 0.98 install too. Implement for 0.98, incorporating addendums. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13002) Make encryption cipher configurable

2015-02-09 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13002:
--
Description: Make encryption cipher configurable currently it is hard coded 
to AES, so that user can configure his/her own algorithm.  (was: Make 
encryption cipher configurable currently it is hard coded to AES)

 Make encryption cipher configurable
 ---

 Key: HBASE-13002
 URL: https://issues.apache.org/jira/browse/HBASE-13002
 Project: HBase
  Issue Type: Improvement
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11


 Make the encryption cipher configurable so that users can configure their 
 own algorithm; currently it is hard coded to AES.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12984) SSL cannot be used by the InfoPort after removing deprecated code in HBASE-10336

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313393#comment-14313393
 ] 

Hudson commented on HBASE-12984:


FAILURE: Integrated in HBase-TRUNK #6107 (See 
[https://builds.apache.org/job/HBase-TRUNK/6107/])
HBASE-12984: SSL cannot be used by the InfoPort in branch-1 (enis: rev 
1f830bea892df01bd657aeaab7d34926dbc372b4)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestHBaseTestingUtility.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/http/InfoServer.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/http/HttpConfig.java


 SSL cannot be used by the InfoPort after removing deprecated code in 
 HBASE-10336
 

 Key: HBASE-12984
 URL: https://issues.apache.org/jira/browse/HBASE-12984
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 1.1.0
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
Priority: Blocker
 Fix For: 1.0.0, 2.0.0, 1.1.0

 Attachments: HBASE-12984-v1.txt, HBASE-12984-v3.txt, 
 HBASE-12984-v3.txt, HBASE-12984-v4.txt


 Setting {{hbase.ssl.enabled}} to {{true}} doesn't enable SSL on the 
 InfoServer. Found that the problem is down to how the InfoServer and 
 HttpConfig set up the protocol in the HttpServer:
 {code}
 for (URI ep : endpoints) {
   Connector listener = null;
   String scheme = ep.getScheme();
   if ("http".equals(scheme)) {
     listener = HttpServer.createDefaultChannelConnector();
   } else if ("https".equals(scheme)) {
     SslSocketConnector c = new SslSocketConnectorSecure();
     c.setNeedClientAuth(needsClientAuth);
     c.setKeyPassword(keyPassword);
 {code}
 It depends on what endpoints have been added by the InfoServer:
 {code}
 builder
   .setName(name)
   .addEndpoint(URI.create("http://" + bindAddress + ":" + port))
   .setAppDir(HBASE_APP_DIR).setFindPort(findPort).setConf(c);
 {code}
 Basically we always use http; we never check via HttpConfig whether 
 {{hbase.ssl.enabled}} was set to true and assign the right scheme based on 
 the configuration.
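 The fix boils down to consulting the SSL setting before the endpoint URI is 
 built. A minimal, self-contained sketch in plain Java; the {{sslEnabled}} 
 flag and the {{endpointUri}} helper are hypothetical stand-ins for reading 
 {{hbase.ssl.enabled}} through HttpConfig, not the actual patch:

```java
public class SchemeSelection {
    // Hypothetical stand-in: the real code should consult HttpConfig
    // (backed by hbase.ssl.enabled) instead of a bare boolean.
    static String endpointUri(boolean sslEnabled, String bindAddress, int port) {
        String scheme = sslEnabled ? "https" : "http";
        return scheme + "://" + bindAddress + ":" + port;
    }

    public static void main(String[] args) {
        System.out.println(endpointUri(false, "0.0.0.0", 16010)); // http://0.0.0.0:16010
        System.out.println(endpointUri(true, "0.0.0.0", 16010));  // https://0.0.0.0:16010
    }
}
```

 With the scheme derived from configuration, the existing 
 {{"https".equals(scheme)}} branch in HttpServer would then pick the SSL 
 connector as intended.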



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12999) Make foreground_start return the correct exit code

2015-02-09 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12999:
--
Attachment: HBASE-12999.patch

I still need to check to make sure that the trap gets run for all of the cases 
that it should.

 Make foreground_start return the correct exit code
 --

 Key: HBASE-12999
 URL: https://issues.apache.org/jira/browse/HBASE-12999
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-12999.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12998) Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647

2015-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313406#comment-14313406
 ] 

Hadoop QA commented on HBASE-12998:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697587/hbase-12998-v1.patch
  against master branch at commit 9d6b237ae8676750c97dad2b9d2655dbd43f67fa.
  ATTACHMENT ID: 12697587

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12743//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12743//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12743//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12743//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12743//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12743//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12743//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12743//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12743//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12743//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12743//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12743//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12743//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12743//console

This message is automatically generated.

 Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647
 --

 Key: HBASE-12998
 URL: https://issues.apache.org/jira/browse/HBASE-12998
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11

 Attachments: hbase-12998-v1.patch


 HDFS-7647 changed internal API related to LocatedBlocks and DataNodeInfo.  We 
 can fix it trivially in HBase for now. 
 {code}
 [INFO] -
 [ERROR] COMPILATION ERROR : 
 [INFO] -
 [ERROR] 
 /Users/enis/projects/hbase-champlain/hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java:[175,41]
  error: incompatible types
 {code}
 Longer term, we should add an API for advanced hdfs users (like HBase) to 
 deprioritize / reorder locations for blocks based on what the client thinks. 
 [~arpitagarwal] FYI. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12984) SSL cannot be used by the InfoPort after removing deprecated code in HBASE-10336

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313418#comment-14313418
 ] 

Hudson commented on HBASE-12984:


FAILURE: Integrated in HBase-1.1 #158 (See 
[https://builds.apache.org/job/HBase-1.1/158/])
HBASE-12984: SSL cannot be used by the InfoPort in branch-1 (enis: rev 
93bfa26705d9d0c596b919c92fd73092b218ee16)
* hbase-server/src/main/java/org/apache/hadoop/hbase/http/InfoServer.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/http/HttpConfig.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestHBaseTestingUtility.java


 SSL cannot be used by the InfoPort after removing deprecated code in 
 HBASE-10336
 

 Key: HBASE-12984
 URL: https://issues.apache.org/jira/browse/HBASE-12984
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 1.1.0
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
Priority: Blocker
 Fix For: 1.0.0, 2.0.0, 1.1.0

 Attachments: HBASE-12984-v1.txt, HBASE-12984-v3.txt, 
 HBASE-12984-v3.txt, HBASE-12984-v4.txt


 Setting {{hbase.ssl.enabled}} to {{true}} doesn't enable SSL on the 
 InfoServer. Found that the problem is down to how the InfoServer and 
 HttpConfig set up the protocol in the HttpServer:
 {code}
 for (URI ep : endpoints) {
   Connector listener = null;
   String scheme = ep.getScheme();
   if ("http".equals(scheme)) {
     listener = HttpServer.createDefaultChannelConnector();
   } else if ("https".equals(scheme)) {
     SslSocketConnector c = new SslSocketConnectorSecure();
     c.setNeedClientAuth(needsClientAuth);
     c.setKeyPassword(keyPassword);
 {code}
 It depends on what endpoints have been added by the InfoServer:
 {code}
 builder
   .setName(name)
   .addEndpoint(URI.create("http://" + bindAddress + ":" + port))
   .setAppDir(HBASE_APP_DIR).setFindPort(findPort).setConf(c);
 {code}
 Basically we always use http; we never check via HttpConfig whether 
 {{hbase.ssl.enabled}} was set to true and assign the right scheme based on 
 the configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13000) Backport HBASE-11240 (Print hdfs pipeline when hlog's sync is slow) to 0.98

2015-02-09 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-13000:
--

 Summary: Backport HBASE-11240 (Print hdfs pipeline when hlog's 
sync is slow) to 0.98
 Key: HBASE-13000
 URL: https://issues.apache.org/jira/browse/HBASE-13000
 Project: HBase
  Issue Type: Task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.98.11


Would be useful to know about abnormal datanodes in an 0.98 install too. 
Implement for 0.98, incorporating addendums. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12997) FSHLog should print pipeline on low replication

2015-02-09 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12997:

Affects Version/s: 0.98.11
Fix Version/s: 0.98.11

 FSHLog should print pipeline on low replication
 ---

 Key: HBASE-12997
 URL: https://issues.apache.org/jira/browse/HBASE-12997
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 1.0.0, 0.98.11
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-12997.1.patch.txt


 We already have code in place for 1.0+ to print the pipeline when there are 
 slow syncs happening.
 We should also print the pipeline when we decide to roll due to low 
 replication.
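 As a rough illustration of what such a log line could carry, here is a 
 self-contained sketch; {{formatPipeline}} is a hypothetical helper, and 
 plain strings stand in for the {{DatanodeInfo[]}} pipeline the real WAL 
 code would obtain from the underlying HDFS output stream:

```java
import java.util.Arrays;

public class PipelineLog {
    // Hypothetical helper: formats the datanode pipeline for inclusion
    // in the low-replication roll-request log message.
    static String formatPipeline(String[] pipeline) {
        return "current pipeline: " + Arrays.toString(pipeline);
    }

    public static void main(String[] args) {
        String[] pipeline = {"dn1:50010", "dn2:50010", "dn3:50010"};
        System.out.println("Requesting log roll due to low replication; "
                + formatPipeline(pipeline));
    }
}
```

 Logging the pipeline at roll time makes it possible to correlate low 
 replication with specific datanodes after the fact, the same way the 
 existing slow-sync message does.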



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12971) Replication stuck due to large default value for replication.source.maxretriesmultiplier

2015-02-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313702#comment-14313702
 ] 

Lars Hofhansl commented on HBASE-12971:
---

I do not think we should invent yet another config option.
We can already configure replication.source.socketTimeoutMultiplier; it's just 
a question of a good default.

In fact with that in mind maybe the socketTimeoutMultiplier should just be 
maxRetriesMultiplier (we declared maxRetriesMultiplier to be a good maximum 
since we configured it that way, on a socket timeout it seems good to wait for 
that maximum immediately).

Everybody good with that (socketTimeoutMultiplier = maxRetriesMultiplier)?


 Replication stuck due to large default value for 
 replication.source.maxretriesmultiplier
 

 Key: HBASE-12971
 URL: https://issues.apache.org/jira/browse/HBASE-12971
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 1.0.0, 0.98.10
Reporter: Adrian Muraru
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.94.27, 0.98.11


 We are setting in hbase-site the default value of 300 for 
 {{replication.source.maxretriesmultiplier}} introduced in HBASE-11964.
 While this value works fine to recover for transient errors with remote ZK 
 quorum from the peer Hbase cluster - it proved to have side effects in the 
 code introduced in HBASE-11367 Pluggable replication endpoint, where the 
 default is much lower (10).
 See:
 1. 
 https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L169
 2. 
 https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java#L79
 The two default values are definitely conflicting - when 
 {{replication.source.maxretriesmultiplier}} is set in hbase-site to 300, 
 this will lead to a sleep time of 300*300 seconds (25h!) when a 
 SocketTimeoutException is thrown.
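 A quick check of the arithmetic behind the 25h figure, as self-contained 
 Java; it assumes, as the description says, that the socket-timeout 
 multiplier picks up the same 300 value so the two multipliers compound, 
 with a one-second base sleep:

```java
public class ReplicationSleep {
    public static void main(String[] args) {
        // Value set in hbase-site.xml per the report above.
        int maxRetriesMultiplier = 300;
        // Assumption: the endpoint's socket-timeout multiplier defaults to
        // the same value, so the effective sleep is the product of the two.
        long sleepSeconds = (long) maxRetriesMultiplier * maxRetriesMultiplier;
        System.out.println(sleepSeconds + " s = " + (sleepSeconds / 3600.0) + " h");
        // 90000 s = 25.0 h
    }
}
```

 So a single socket timeout can stall the replication source for a full day 
 with these settings, which matches the stuck-replication symptom reported.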



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12998) Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313712#comment-14313712
 ] 

Hudson commented on HBASE-12998:


SUCCESS: Integrated in HBase-0.98 #842 (See 
[https://builds.apache.org/job/HBase-0.98/842/])
HBASE-12998 Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647 
(enis: rev 433672a67b2589bc77d80007c219d03f6a6bf656)
* hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java


 Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647
 --

 Key: HBASE-12998
 URL: https://issues.apache.org/jira/browse/HBASE-12998
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11

 Attachments: hbase-12998-v1.patch


 HDFS-7647 changed internal API related to LocatedBlocks and DataNodeInfo.  We 
 can fix it trivially in HBase for now. 
 {code}
 [INFO] -
 [ERROR] COMPILATION ERROR : 
 [INFO] -
 [ERROR] 
 /Users/enis/projects/hbase-champlain/hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java:[175,41]
  error: incompatible types
 {code}
 Longer term, we should add an API for advanced hdfs users (like HBase) to 
 deprioritize / reorder locations for blocks based on what the client thinks. 
 [~arpitagarwal] FYI. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12992) TestChoreService doesn't close services, that can break test on slow virtual hosts.

2015-02-09 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-12992:
-
Status: Open  (was: Patch Available)

 TestChoreService doesn't close services, that can break test on slow virtual 
 hosts.
 ---

 Key: HBASE-12992
 URL: https://issues.apache.org/jira/browse/HBASE-12992
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Andrey Stepachev
Assignee: Andrey Stepachev
 Attachments: HBASE-12992.patch, HBASE-12992.patch


 On my slow virtual machine it is quite possible for this test to fail due 
 to the enormous number of active threads at the end of the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12996) Reversed field on Filter should be transient

2015-02-09 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312667#comment-14312667
 ] 

Ted Yu commented on HBASE-12996:


+1

 Reversed field on Filter should be transient
 

 Key: HBASE-12996
 URL: https://issues.apache.org/jira/browse/HBASE-12996
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.9
Reporter: Ian Friedman
 Attachments: 12996.txt


 Filter has the field
 {code}
   protected boolean reversed;
 {code}
 which should be marked transient.
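 For background on why {{transient}} matters here: Java serialization skips 
 transient fields, so they come back with their type's default value after a 
 round trip instead of being carried over the wire. A self-contained sketch 
 with a hypothetical ToyFilter class (not the real HBase Filter):

```java
import java.io.*;

public class TransientDemo {
    // Hypothetical stand-in for a Filter; illustrates the keyword only.
    static class ToyFilter implements Serializable {
        protected transient boolean reversed;  // skipped by serialization
        protected int versions = 3;            // serialized normally
    }

    public static void main(String[] args) throws Exception {
        ToyFilter f = new ToyFilter();
        f.reversed = true;

        // Round-trip the object through Java serialization.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(f);
        oos.flush();
        ToyFilter copy = (ToyFilter) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();

        // The transient field reverts to its default; the other survives.
        System.out.println(copy.reversed + " " + copy.versions); // false 3
    }
}
```

 Marking {{reversed}} transient keeps this client-side scan-direction state 
 out of the serialized form of the filter.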



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12996) Reversed field on Filter should be transient

2015-02-09 Thread Ian Friedman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Friedman updated HBASE-12996:
-
Attachment: (was: HBASE-12998.patch)

 Reversed field on Filter should be transient
 

 Key: HBASE-12996
 URL: https://issues.apache.org/jira/browse/HBASE-12996
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.9
Reporter: Ian Friedman
 Attachments: HBASE-12996.patch


 Filter has the field
 {code}
   protected boolean reversed;
 {code}
 which should be marked transient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12996) Reversed field on Filter should be transient

2015-02-09 Thread Ian Friedman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312673#comment-14312673
 ] 

Ian Friedman commented on HBASE-12996:
--

patch seems to apply fine on master as well

 Reversed field on Filter should be transient
 

 Key: HBASE-12996
 URL: https://issues.apache.org/jira/browse/HBASE-12996
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.9
Reporter: Ian Friedman
 Attachments: HBASE-12996.patch


 Filter has the field
 {code}
   protected boolean reversed;
 {code}
 which should be marked transient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12996) Reversed field on Filter should be transient

2015-02-09 Thread Ian Friedman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Friedman updated HBASE-12996:
-
Attachment: (was: 12996.txt)

 Reversed field on Filter should be transient
 

 Key: HBASE-12996
 URL: https://issues.apache.org/jira/browse/HBASE-12996
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.9
Reporter: Ian Friedman
 Attachments: HBASE-12996.patch, HBASE-12998.patch


 Filter has the field
 {code}
   protected boolean reversed;
 {code}
 which should be marked transient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12996) Reversed field on Filter should be transient

2015-02-09 Thread Ian Friedman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Friedman updated HBASE-12996:
-
Attachment: HBASE-12998.patch

 Reversed field on Filter should be transient
 

 Key: HBASE-12996
 URL: https://issues.apache.org/jira/browse/HBASE-12996
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.9
Reporter: Ian Friedman
 Attachments: HBASE-12996.patch, HBASE-12998.patch


 Filter has the field
 {code}
   protected boolean reversed;
 {code}
 which should be marked transient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12996) Reversed field on Filter should be transient

2015-02-09 Thread Ian Friedman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Friedman updated HBASE-12996:
-
Attachment: HBASE-12996.patch

 Reversed field on Filter should be transient
 

 Key: HBASE-12996
 URL: https://issues.apache.org/jira/browse/HBASE-12996
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.9
Reporter: Ian Friedman
 Attachments: HBASE-12996.patch, HBASE-12998.patch


 Filter has the field
 {code}
   protected boolean reversed;
 {code}
 which should be marked transient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12996) Reversed field on Filter should be transient

2015-02-09 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-12996:
---
Affects Version/s: (was: 0.98.9)
   0.98.11
   1.1.0
   2.0.0

 Reversed field on Filter should be transient
 

 Key: HBASE-12996
 URL: https://issues.apache.org/jira/browse/HBASE-12996
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0, 0.98.11
Reporter: Ian Friedman
 Attachments: HBASE-12996.patch


 Filter has the field
 {code}
   protected boolean reversed;
 {code}
 which should be marked transient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12984) SSL cannot be used by the InfoPort after removing deprecated code in HBASE-10336

2015-02-09 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312705#comment-14312705
 ] 

Enis Soztutar commented on HBASE-12984:
---

Ok, let's commit this one for the RC. 

 SSL cannot be used by the InfoPort after removing deprecated code in 
 HBASE-10336
 

 Key: HBASE-12984
 URL: https://issues.apache.org/jira/browse/HBASE-12984
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 1.1.0
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
Priority: Blocker
 Fix For: 1.0.0, 2.0.0, 1.1.0

 Attachments: HBASE-12984-v1.txt, HBASE-12984-v3.txt, 
 HBASE-12984-v3.txt, HBASE-12984-v4.txt


 Setting {{hbase.ssl.enabled}} to {{true}} doesn't enable SSL on the 
 InfoServer. Found that the problem is down to how the InfoServer and 
 HttpConfig set up the protocol in the HttpServer:
 {code}
 for (URI ep : endpoints) {
   Connector listener = null;
   String scheme = ep.getScheme();
   if ("http".equals(scheme)) {
     listener = HttpServer.createDefaultChannelConnector();
   } else if ("https".equals(scheme)) {
     SslSocketConnector c = new SslSocketConnectorSecure();
     c.setNeedClientAuth(needsClientAuth);
     c.setKeyPassword(keyPassword);
 {code}
 It depends on what endpoints have been added by the InfoServer:
 {code}
 builder
   .setName(name)
   .addEndpoint(URI.create("http://" + bindAddress + ":" + port))
   .setAppDir(HBASE_APP_DIR).setFindPort(findPort).setConf(c);
 {code}
 Basically we always use http; we never check via HttpConfig whether 
 {{hbase.ssl.enabled}} was set to true and assign the right scheme based on 
 the configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12978) hbase:meta has a row missing hregioninfo and it causes my long-running job to fail

2015-02-09 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312766#comment-14312766
 ] 

Devaraj Das commented on HBASE-12978:
-

Try doing a raw scan. Get probably hides the deleted cells even if the VERSIONS 
are specified.

 hbase:meta has a row missing hregioninfo and it causes my long-running job to 
 fail
 --

 Key: HBASE-12978
 URL: https://issues.apache.org/jira/browse/HBASE-12978
 Project: HBase
  Issue Type: Bug
Reporter: stack
 Fix For: 1.0.1


 Testing 1.0.0 trying long-running tests.
 A row in hbase:meta was missing its HRI entry. It caused the job to fail. 
 Around the time of the first task failure, there are balances of the 
 hbase:meta region and it was on a server that crashed. I tried to look at 
 what happened around time of our writing hbase:meta and I ran into another 
 issue; 20 logs of 256MBs filled with WrongRegionException written over a 
 minute or two. The actual update of hbase:meta was not in the logs, it'd been 
 rotated off.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12992) TestChoreService doesn't close services, that can break test on slow virtual hosts.

2015-02-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12992:
---
Fix Version/s: 1.1.0
   2.0.0

 TestChoreService doesn't close services, that can break test on slow virtual 
 hosts.
 ---

 Key: HBASE-12992
 URL: https://issues.apache.org/jira/browse/HBASE-12992
 Project: HBase
  Issue Type: Test
Affects Versions: 2.0.0
Reporter: Andrey Stepachev
Assignee: Andrey Stepachev
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12992.patch, HBASE-12992.patch


 On my slow virtual machine it is quite possible for this test to fail because 
 of the enormous number of threads still active at the end of the test.





[jira] [Updated] (HBASE-12992) TestChoreService doesn't close services, that can break test on slow virtual hosts.

2015-02-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12992:
---
  Issue Type: Test  (was: Improvement)
Hadoop Flags: Reviewed

TestChoreService passed in 
https://builds.apache.org/job/PreCommit-HBASE-Build/12740/console

 TestChoreService doesn't close services, that can break test on slow virtual 
 hosts.
 ---

 Key: HBASE-12992
 URL: https://issues.apache.org/jira/browse/HBASE-12992
 Project: HBase
  Issue Type: Test
Affects Versions: 2.0.0
Reporter: Andrey Stepachev
Assignee: Andrey Stepachev
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-12992.patch, HBASE-12992.patch


 On my slow virtual machine it is quite possible for this test to fail because 
 of the enormous number of threads still active at the end of the test.





[jira] [Commented] (HBASE-12978) hbase:meta has a row missing hregioninfo and it causes my long-running job to fail

2015-02-09 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14312710#comment-14312710
 ] 

Devaraj Das commented on HBASE-12978:
-

[~stack] wondering if this is happening because of the fact that the balancer 
is doing its job in the background. Scenario is:
0. The balancer computed a plan for the regions but has not executed all the 
moves yet.
1. The region in question is deleted from meta (maybe due to a split, etc.)
2. The balancer does its job and assigns the region to some RS. That RS does a 
put in the meta table when it successfully opens the region. That put recreates 
the meta entry but this time the hri won't be there...

 hbase:meta has a row missing hregioninfo and it causes my long-running job to 
 fail
 --

 Key: HBASE-12978
 URL: https://issues.apache.org/jira/browse/HBASE-12978
 Project: HBase
  Issue Type: Bug
Reporter: stack
 Fix For: 1.0.1


 Testing 1.0.0 with long-running tests.
 A row in hbase:meta was missing its HRI entry. It caused the job to fail. 
 Around the time of the first task failure, there were balances of the 
 hbase:meta region, and it was on a server that crashed. I tried to look at 
 what happened around the time of our writing hbase:meta and I ran into another 
 issue: 20 logs of 256MB each filled with WrongRegionException written over a 
 minute or two. The actual update of hbase:meta was not in the logs; it'd been 
 rotated off.





[jira] [Updated] (HBASE-12897) Minimum memstore size is a percentage

2015-02-09 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-12897:
---
Attachment: HBASE-12897.0.98.patch

the patch for 0.98, fellas; sorry for the wait.

 Minimum memstore size is a percentage
 -

 Key: HBASE-12897
 URL: https://issues.apache.org/jira/browse/HBASE-12897
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 0.98.10, 1.1.0
Reporter: churro morales
Assignee: churro morales
 Fix For: 1.0.0, 2.0.0, 1.1.0

 Attachments: HBASE-12897.0.98.patch, HBASE-12897.patch


 We have a cluster which is optimized for random reads.  Thus we have a large 
 block cache and a small memstore.  Currently our heap is 20GB and we wanted 
 to configure the memstore to take 4% or 800MB.  Right now the minimum 
 memstore size is 5%.  What do you guys think about reducing the minimum size 
 to 1%?  Suppose we log a warning if the memstore is below 5% but allow it?
 What do you folks think? 
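The arithmetic behind the request above can be sketched as follows. The class, method, and the exact warning behavior are illustrative assumptions, not the real HBase validation code; only the 5% current floor, the proposed 1% floor, and the 20GB/4% numbers come from the description.

```java
public class MemstoreSizing {
    // Proposed hard minimum memstore fraction (1%) and warning threshold (5%),
    // per the discussion above. Names and behavior are illustrative only.
    static final double HARD_MIN = 0.01;
    static final double WARN_BELOW = 0.05;

    // Compute the memstore size in bytes for a given heap and fraction,
    // rejecting fractions below the hard minimum and warning below 5%.
    static long memstoreBytes(long heapBytes, double fraction) {
        if (fraction < HARD_MIN) {
            throw new IllegalArgumentException("memstore fraction below " + HARD_MIN);
        }
        if (fraction < WARN_BELOW) {
            System.err.println("WARN: memstore fraction " + fraction
                + " is below " + WARN_BELOW);
        }
        return (long) (heapBytes * fraction);
    }

    public static void main(String[] args) {
        long twentyGb = 20L * 1024 * 1024 * 1024;
        // 4% of a 20GB heap is the ~800MB target from the description.
        System.out.println(memstoreBytes(twentyGb, 0.04) / (1024 * 1024) + " MB");
    }
}
```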





[jira] [Commented] (HBASE-8329) Limit compaction speed

2015-02-09 Thread Lars George (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311920#comment-14311920
 ] 

Lars George commented on HBASE-8329:


Ping [~apurtell] :)

 Limit compaction speed
 --

 Key: HBASE-8329
 URL: https://issues.apache.org/jira/browse/HBASE-8329
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: binlijin
Assignee: zhangduo
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-8329-0.98-addendum.patch, HBASE-8329-0.98.patch, 
 HBASE-8329-10.patch, HBASE-8329-11.patch, HBASE-8329-12.patch, 
 HBASE-8329-2-trunk.patch, HBASE-8329-3-trunk.patch, HBASE-8329-4-trunk.patch, 
 HBASE-8329-5-trunk.patch, HBASE-8329-6-trunk.patch, HBASE-8329-7-trunk.patch, 
 HBASE-8329-8-trunk.patch, HBASE-8329-9-trunk.patch, 
 HBASE-8329-branch-1.patch, HBASE-8329-trunk.patch, HBASE-8329_13.patch, 
 HBASE-8329_14.patch, HBASE-8329_15.patch, HBASE-8329_16.patch, 
 HBASE-8329_17.patch


 There is no speed or resource limit for compaction. I think we should add this 
 feature, especially for request bursts.





[jira] [Commented] (HBASE-12997) FSHLog should print pipeline on low replication

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313749#comment-14313749
 ] 

Hudson commented on HBASE-12997:


SUCCESS: Integrated in HBase-TRUNK #6110 (See 
[https://builds.apache.org/job/HBase-TRUNK/6110/])
HBASE-12997 print wal pipeline on low replication. (busbey: rev 
3d692cf044bc25327269328933299053ba19e2df)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java


 FSHLog should print pipeline on low replication
 ---

 Key: HBASE-12997
 URL: https://issues.apache.org/jira/browse/HBASE-12997
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 1.0.0, 0.98.11
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-12997.1.patch.txt


 We already have code in place for 1.0+ to print the pipeline when there are 
 slow syncs happening.
 We should also print the pipeline when we decide to roll due to low 
 replication.





[jira] [Updated] (HBASE-13002) Make encryption cipher configurable

2015-02-09 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13002:
--
Status: Patch Available  (was: Open)

Added a new configuration {{hbase.crypto.key.algorithm}} which will allow users 
to configure their own crypto algorithm, the default being 'AES'.
Please review.
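If the patch lands as described, selecting a cipher would presumably be an hbase-site.xml entry along these lines. The property name is taken from the comment above; the fragment is an illustration, not part of the attached patch.

```xml
<property>
  <name>hbase.crypto.key.algorithm</name>
  <!-- Default is AES; any cipher supported by the configured crypto provider. -->
  <value>AES</value>
</property>
```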

 Make encryption cipher configurable
 ---

 Key: HBASE-13002
 URL: https://issues.apache.org/jira/browse/HBASE-13002
 Project: HBase
  Issue Type: Improvement
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13002.patch


 Make the encryption cipher configurable so that users can configure their own 
 algorithm; currently it is hard-coded to AES.





[jira] [Updated] (HBASE-12978) hbase:meta has a row missing hregioninfo and it causes my long-running job to fail

2015-02-09 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12978:
--
Attachment: e7cadabc6e5e46c7bf6b3d445f0c53cf

Here is the last hfile that had the missing Cell in it. The missing cell is:

{code}
 754 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:regioninfo/1423438436466/Put/vlen=82/seqid=126045
{code}

If I try to get this row from this file, I get nothing back, though the file 
has a bunch of entries on this row. Here is a dump of the entries that are in 
this file for this row:

{code}
 754 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:regioninfo/1423438436466/Put/vlen=82/seqid=126045
 755 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:seqnumDuringOpen/1423442164769/Put/vlen=8/seqid=130685
 756 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:seqnumDuringOpen/1423442143845/Put/vlen=8/seqid=130562
 757 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:seqnumDuringOpen/1423442046094/Put/vlen=8/seqid=130346
 758 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:seqnumDuringOpen/1423441959921/Put/vlen=8/seqid=130285
 759 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:seqnumDuringOpen/1423441807390/Put/vlen=8/seqid=12
 760 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:seqnumDuringOpen/1423441726587/Put/vlen=8/seqid=129821
 761 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:seqnumDuringOpen/1423441661740/Put/vlen=8/seqid=129550
 762 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:seqnumDuringOpen/1423441646296/Put/vlen=8/seqid=129459
 763 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:seqnumDuringOpen/1423441601265/Put/vlen=8/seqid=129275
 764 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:seqnumDuringOpen/1423441340085/Put/vlen=8/seqid=129031
 765 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:server/1423442164769/Put/vlen=30/seqid=130685
 766 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:server/1423442143845/Put/vlen=30/seqid=130562
 767 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:server/1423442046094/Put/vlen=30/seqid=130346
 768 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:server/1423441959921/Put/vlen=30/seqid=130285
 769 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:server/1423441807390/Put/vlen=30/seqid=12
 770 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:server/1423441726587/Put/vlen=30/seqid=129821
 771 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:server/1423441661740/Put/vlen=30/seqid=129550
 772 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:server/1423441646296/Put/vlen=30/seqid=129459
 773 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:server/1423441601265/Put/vlen=30/seqid=129275
 774 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:server/1423441340085/Put/vlen=30/seqid=129031
 775 K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:serverstartcode/1423442164769/Put/vlen=8/seqid=130685
 776 K: 

[jira] [Updated] (HBASE-13002) Make encryption cipher configurable

2015-02-09 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13002:
--
Attachment: HBASE-13002.patch

 Make encryption cipher configurable
 ---

 Key: HBASE-13002
 URL: https://issues.apache.org/jira/browse/HBASE-13002
 Project: HBase
  Issue Type: Improvement
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-13002.patch


 Make the encryption cipher configurable so that users can configure their own 
 algorithm; currently it is hard-coded to AES.





[jira] [Commented] (HBASE-12747) IntegrationTestMTTR will OOME if launched with mvn verify

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313761#comment-14313761
 ] 

Hudson commented on HBASE-12747:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #800 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/800/])
HBASE-12747 IntegrationTestMTTR will OOME if launched with mvn verify (Abhishek 
Singh Chouhan) (apurtell: rev a1342abbb2e17e9bdb39205f69e9ff1f60b6fc23)
* hbase-it/pom.xml
* pom.xml


 IntegrationTestMTTR will OOME if launched with mvn verify
 -

 Key: HBASE-12747
 URL: https://issues.apache.org/jira/browse/HBASE-12747
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.9
Reporter: Andrew Purtell
Assignee: Abhishek Singh Chouhan
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11

 Attachments: HBASE-12747-v1.patch, HBASE-12747.patch, 
 org.apache.hadoop.hbase.mttr.IntegrationTestMTTR-output.txt.gz


 IntegrationTestMTTR will OOME if launched like:
 {noformat}
 cd hbase-it
 mvn verify -Dit.test=IntegrationTestMTTR
 {noformat}
 Linux environment, 7u67.
 Looks like we should bump the heap on the failsafe argline in the POM. 
 {noformat}
 2014-12-22 11:24:07,725 ERROR 
 [B.DefaultRpcServer.handler=2,queue=0,port=55672] ipc.RpcServer(2067): 
 Unexpected throwable o
 bject 
 java.lang.OutOfMemoryError: Java heap space
 at 
 org.apache.hadoop.hbase.regionserver.MemStoreLAB$Chunk.init(MemStoreLAB.java:246)
 at 
 org.apache.hadoop.hbase.regionserver.MemStoreLAB.getOrMakeChunk(MemStoreLAB.java:196)
 at 
 org.apache.hadoop.hbase.regionserver.MemStoreLAB.allocateBytes(MemStoreLAB.java:114)
 at 
 org.apache.hadoop.hbase.regionserver.MemStore.maybeCloneWithAllocator(MemStore.java:274)
 at 
 org.apache.hadoop.hbase.regionserver.MemStore.add(MemStore.java:229)
 at org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:576)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.applyFamilyMapToMemstore(HRegion.java:3084)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2517)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2284)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2239)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2243)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4482)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3665)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3554)
 {noformat}
 Another minor issue: After taking the OOME, the test executor will linger 
 indefinitely as a zombie. 
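The heap bump suggested in the description would presumably be a failsafe argLine change in hbase-it/pom.xml, roughly like the fragment below. The -Xmx value shown is an arbitrary illustration, not the value in the attached patch.

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <!-- Bump the forked JVM heap so IntegrationTestMTTR does not OOME. -->
    <argLine>-Xmx3g</argLine>
  </configuration>
</plugin>
```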





[jira] [Commented] (HBASE-12998) Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313759#comment-14313759
 ] 

Hudson commented on HBASE-12998:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #800 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/800/])
HBASE-12998 Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647 
(enis: rev 433672a67b2589bc77d80007c219d03f6a6bf656)
* hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java


 Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647
 --

 Key: HBASE-12998
 URL: https://issues.apache.org/jira/browse/HBASE-12998
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11

 Attachments: hbase-12998-v1.patch


 HDFS-7647 changed internal API related to LocatedBlocks and DataNodeInfo.  We 
 can fix it trivially in HBase for now. 
 {code}
 [INFO] -
 [ERROR] COMPILATION ERROR : 
 [INFO] -
 [ERROR] 
 /Users/enis/projects/hbase-champlain/hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java:[175,41]
  error: incompatible types
 {code}
 Longer term, we should add an API for advanced hdfs users (like HBase) to 
 deprioritize / reorder locations for blocks based on what the client thinks. 
 [~arpitagarwal] FYI. 





[jira] [Commented] (HBASE-12973) RegionCoprocessorEnvironment should provide HRegionInfo directly

2015-02-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313762#comment-14313762
 ] 

Hudson commented on HBASE-12973:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #800 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/800/])
HBASE-12973 RegionCoprocessorEnvironment should provide HRegionInfo directly 
(apurtell: rev 117a30ca0ba430bfa28e70c5a072632ce4ab)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestTokenAuthentication.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionCoprocessorEnvironment.java


 RegionCoprocessorEnvironment should provide HRegionInfo directly
 

 Key: HBASE-12973
 URL: https://issues.apache.org/jira/browse/HBASE-12973
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12973-0.98.patch, HBASE-12973-branch-1.patch, 
 HBASE-12973.patch


 A coprocessor must go through RegionCoprocessorEnvironment#getRegion in order 
 to retrieve HRegionInfo for its associated region. It should be possible to 
 get HRegionInfo directly from RegionCoprocessorEnvironment. (Or Region, see 
 HBASE-12972)





[jira] [Commented] (HBASE-12999) Make foreground_start return the correct exit code

2015-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14313769#comment-14313769
 ] 

Hadoop QA commented on HBASE-12999:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697631/HBASE-12999-v1.patch
  against master branch at commit 3d692cf044bc25327269328933299053ba19e2df.
  ATTACHMENT ID: 12697631

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.TestAcidGuarantees.testScanAtomicity(TestAcidGuarantees.java:354)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12751//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12751//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12751//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12751//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12751//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12751//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12751//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12751//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12751//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12751//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12751//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12751//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12751//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12751//console

This message is automatically generated.

 Make foreground_start return the correct exit code
 --

 Key: HBASE-12999
 URL: https://issues.apache.org/jira/browse/HBASE-12999
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-12999-v1.patch, HBASE-12999.patch








[jira] [Updated] (HBASE-12995) Document that HConnection#getTable methods do not check table existence since 0.98.1

2015-02-09 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12995:
---
Description: [~jamestaylor] mentioned that recently Phoenix discovered at 
some point the {{HConnection#getTable}} lightweight table reference methods 
stopped throwing TableNotFoundExceptions. It used to be (in 0.94 and 0.96) that 
all APIs that construct HTables would check if the table is locatable and throw 
exceptions if not. Now, if using the {{HConnection#getTable}} APIs, such 
exceptions will only be thrown at the time of the first operation submitted 
using the table reference, should a problem be detected then. We did a bisect 
and it seems this was changed in the 0.98.1 release by HBASE-10080. Since the 
change has now shipped in 10 0.98 releases in total, we should just document the 
change in the javadoc of the HConnection class (Connection in branch-1+).  
(was: [~jamestaylor] mentioned that recently Phoenix discovered at some point 
the {{HConnection#getTable}} lightweight table reference methods stopped 
throwing TableNotFoundExceptions. It used to be (in 0.94 and 0.96) that all 
APIs that construct HTables would check if the table is locatable and throw 
exceptions if not. Now, such exceptions will only be thrown at the time of the 
first operation submitted using the table reference, should a problem be 
detected then. We did a bisect and it seems this was changed in the 0.98.1 
release by HBASE-10080. Since the change has now shipped in 10 0.98 releases in 
total, we should just document the change, in the javadoc of the HConnection 
class, Connection in branch-1+. )

 Document that HConnection#getTable methods do not check table existence since 
 0.98.1
 

 Key: HBASE-12995
 URL: https://issues.apache.org/jira/browse/HBASE-12995
 Project: HBase
  Issue Type: Task
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11


 [~jamestaylor] mentioned that recently Phoenix discovered at some point the 
 {{HConnection#getTable}} lightweight table reference methods stopped 
 throwing TableNotFoundExceptions. It used to be (in 0.94 and 0.96) that all 
 APIs that construct HTables would check if the table is locatable and throw 
 exceptions if not. Now, if using the {{HConnection#getTable}} APIs, such 
 exceptions will only be thrown at the time of the first operation submitted 
 using the table reference, should a problem be detected then. We did a bisect 
 and it seems this was changed in the 0.98.1 release by HBASE-10080. Since the 
 change has now shipped in 10 0.98 releases in total, we should just document 
 the change in the javadoc of the HConnection class (Connection in branch-1+). 





[jira] [Updated] (HBASE-12995) Document that HConnection#getTable methods do not check table existence since 0.98.1

2015-02-09 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12995:
---
Summary: Document that HConnection#getTable methods do not check table 
existence since 0.98.1  (was: Document that HConnection#getTable methods do not 
throw TableNotFoundException since 0.98.1)

 Document that HConnection#getTable methods do not check table existence since 
 0.98.1
 

 Key: HBASE-12995
 URL: https://issues.apache.org/jira/browse/HBASE-12995
 Project: HBase
  Issue Type: Task
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11


 [~jamestaylor] mentioned that recently Phoenix discovered at some point the 
 {{HConnection#getTable}} lightweight table reference methods stopped 
 throwing TableNotFoundExceptions. It used to be (in 0.94 and 0.96) that all 
 APIs that construct HTables would check if the table is locatable and throw 
 exceptions if not. Now, such exceptions will only be thrown at the time of 
 the first operation submitted using the table reference, should a problem be 
 detected then. We did a bisect and it seems this was changed in the 0.98.1 
 release by HBASE-10080. Since the change has now shipped in 10 0.98 releases 
 in total, we should just document the change in the javadoc of the 
 HConnection class (Connection in branch-1+). 





[jira] [Created] (HBASE-12996) Reversed field on Filter should be transient

2015-02-09 Thread Ian Friedman (JIRA)
Ian Friedman created HBASE-12996:


 Summary: Reversed field on Filter should be transient
 Key: HBASE-12996
 URL: https://issues.apache.org/jira/browse/HBASE-12996
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.9
Reporter: Ian Friedman


Filter has the field
{code}
  protected boolean reversed;
{code}
which should be marked transient.
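A self-contained sketch of why this matters: a transient field is skipped by Java serialization and comes back as its default value on deserialization. The class below is a hypothetical stand-in, not the real Filter hierarchy.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class TransientDemo {
    // Stand-in for a Filter subclass; only the field semantics matter here.
    static class FakeFilter implements Serializable {
        private static final long serialVersionUID = 1L;
        protected transient boolean reversed; // proposed change: mark transient
        FakeFilter(boolean reversed) { this.reversed = reversed; }
    }

    // Serialize then deserialize the object, as RPC or job setup might.
    static FakeFilter roundTrip(FakeFilter f) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(f);
        oos.flush();
        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return (FakeFilter) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        FakeFilter copy = roundTrip(new FakeFilter(true));
        // The transient field is not serialized, so it resets to false.
        System.out.println(copy.reversed); // false
    }
}
```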





[jira] [Updated] (HBASE-12996) Reversed field on Filter should be transient

2015-02-09 Thread Ian Friedman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Friedman updated HBASE-12996:
-
Status: Patch Available  (was: Open)

for 0.98 branch

 Reversed field on Filter should be transient
 

 Key: HBASE-12996
 URL: https://issues.apache.org/jira/browse/HBASE-12996
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.9
Reporter: Ian Friedman

 Filter has the field
 {code}
   protected boolean reversed;
 {code}
 which should be marked transient.





[jira] [Created] (HBASE-12995) Document that HConnection#getTable methods do not throw TableNotFoundException since 0.98.1

2015-02-09 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-12995:
--

 Summary: Document that HConnection#getTable methods do not throw 
TableNotFoundException since 0.98.1
 Key: HBASE-12995
 URL: https://issues.apache.org/jira/browse/HBASE-12995
 Project: HBase
  Issue Type: Task
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11


[~jamestaylor] mentioned that recently Phoenix discovered at some point the 
{{HConnection#getTable}} lightweight table reference methods stopped throwing 
TableNotFoundExceptions. It used to be (in 0.94 and 0.96) that all APIs that 
construct HTables would check if the table is locatable and throw exceptions if 
not. Now, such exceptions will only be thrown at the time of the first 
operation submitted using the table reference, should a problem be detected 
then. We did a bisect and it seems this was changed in the 0.98.1 release by 
HBASE-10080. Since the change has now shipped in 10 0.98 releases in total, we 
should just document the change in the javadoc of the HConnection class 
(Connection in branch-1+). 





[jira] [Created] (HBASE-12994) Improve network utilization for scanning RPC requests by preloading the next set of results on the server

2015-02-09 Thread Jonathan Lawlor (JIRA)
Jonathan Lawlor created HBASE-12994:
---

 Summary: Improve network utilization for scanning RPC requests by 
preloading the next set of results on the server
 Key: HBASE-12994
 URL: https://issues.apache.org/jira/browse/HBASE-12994
 Project: HBase
  Issue Type: Improvement
Reporter: Jonathan Lawlor


As [~lhofhansl] has called out in HBASE-11544, RPC is inefficient when 
scanning. The way it currently works is the client requests a buffer worth of 
results and works through that buffer. Once that buffer of results is 
exhausted, the client realizes it has run out of results and then requests the 
next buffer. This could be improved by beginning to load the next buffer while 
the client is working through the current buffer of results.
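The overlap described above can be sketched with a single background thread that fetches batch N+1 while the caller consumes batch N. This is a hypothetical client-side illustration, not the HBase scanner code; {{fetchBatch}} stands in for the scan RPC.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.IntFunction;

public class PrefetchingScanner implements AutoCloseable {
    private final IntFunction<List<String>> fetchBatch; // stand-in for the scan RPC
    private final ExecutorService pool = Executors.newSingleThreadExecutor();
    private Future<List<String>> next;
    private int batchIndex = 0;

    PrefetchingScanner(IntFunction<List<String>> fetchBatch) {
        this.fetchBatch = fetchBatch;
        this.next = submit(batchIndex++); // start loading the first batch eagerly
    }

    private Future<List<String>> submit(int i) {
        return pool.submit(() -> fetchBatch.apply(i));
    }

    // Hand back the current batch and immediately kick off the next fetch,
    // so the RPC overlaps with the caller working through the results.
    List<String> nextBatch() throws Exception {
        List<String> current = next.get(); // blocks only if prefetch isn't done
        next = submit(batchIndex++);
        return current;
    }

    @Override public void close() { pool.shutdownNow(); }

    public static void main(String[] args) throws Exception {
        try (PrefetchingScanner s = new PrefetchingScanner(i -> List.of("row-" + i))) {
            System.out.println(s.nextBatch()); // [row-0]
            System.out.println(s.nextBatch()); // [row-1]
        }
    }
}
```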





[jira] [Commented] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2015-02-09 Thread Jonathan Lawlor (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14312533#comment-14312533
 ] 

Jonathan Lawlor commented on HBASE-11544:
-

[~lhofhansl] I see what you mean, that definitely seems like it could be 
improved. I have logged the issue in HBASE-12994 as it seems like it would be 
dealt with better separately rather than grouped into this change.

 [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
 batch even if it means OOME
 --

 Key: HBASE-11544
 URL: https://issues.apache.org/jira/browse/HBASE-11544
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Jonathan Lawlor
Priority: Critical
  Labels: beginner

 Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
 large cells.  I kept OOME'ing.
 Serverside, we should measure how much we've accumulated and return to the 
 client whatever we've gathered once we pass a certain size threshold, 
 rather than keep accumulating till we OOME.
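The size-bounded accumulation suggested above can be sketched as follows. This is a standalone illustration, not the HBase scanner code; the method and parameter names are made up, and the second limit mirrors what a `maxResultSize`-style setting would do.

```java
import java.util.ArrayList;
import java.util.List;

public class SizeBoundedScan {
    // Returns at most `caching` results, but stops early once the accumulated
    // byte size crosses maxResultSize, so large cells cannot OOME the server
    // even when the client asked for a big batch.
    static List<byte[]> gather(List<byte[]> source, int caching, long maxResultSize) {
        List<byte[]> out = new ArrayList<>();
        long accumulated = 0;
        for (byte[] cell : source) {
            if (out.size() >= caching) break;   // count limit
            out.add(cell);
            accumulated += cell.length;
            if (accumulated >= maxResultSize) break;  // size limit: partial batch
        }
        return out;
    }

    public static void main(String[] args) {
        List<byte[]> cells = List.of(new byte[400], new byte[400], new byte[400]);
        // caching=1000, but the 1000-byte budget cuts the batch short:
        System.out.println(gather(cells, 1000, 1000).size()); // 3
        System.out.println(gather(cells, 1000, 500).size());  // 2
    }
}
```

With only the count limit (the current behavior being criticized), `caching=1000` over large cells forces the server to materialize the whole batch before responding.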



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12994) Improve network utilization for scanning RPC requests by preloading the next set of results on the server

2015-02-09 Thread Jonathan Lawlor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Lawlor updated HBASE-12994:

Description: As [~lhofhansl] has called out in HBASE-11544, RPC is 
inefficient when scanning. The way it currently works is the client requests a 
buffer worth of results and works through that buffer. Once that buffer of 
results is exhausted, the client realizes it has run out of results and then 
requests the next buffer. This could be improved by beginning to load the next 
buffer on the server while the client is working through the current buffer of 
results.  (was: As [~lhofhansl] has called out in HBASE-11544, RPC is 
inefficient when scanning. The way it currently works is the client requests a 
buffer worth of results and works through that buffer. Once that buffer of 
results is exhausted, the client realizes it has run out of results and then 
requests the next buffer. This could be improved by beginning to load the next 
buffer while the client is working through the current buffer of results.)

 Improve network utilization for scanning RPC requests by preloading the next 
 set of results on the server
 -

 Key: HBASE-12994
 URL: https://issues.apache.org/jira/browse/HBASE-12994
 Project: HBase
  Issue Type: Improvement
Reporter: Jonathan Lawlor

 As [~lhofhansl] has called out in HBASE-11544, RPC is inefficient when 
 scanning. The way it currently works is the client requests a buffer worth of 
 results and works through that buffer. Once that buffer of results is 
 exhausted, the client realizes it has run out of results and then requests 
 the next buffer. This could be improved by beginning to load the next buffer 
 on the server while the client is working through the current buffer of 
 results.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12996) Reversed field on Filter should be transient

2015-02-09 Thread Ian Friedman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Friedman updated HBASE-12996:
-
Attachment: 12996.txt

 Reversed field on Filter should be transient
 

 Key: HBASE-12996
 URL: https://issues.apache.org/jira/browse/HBASE-12996
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.9
Reporter: Ian Friedman
 Attachments: 12996.txt


 Filter has the field
 {code}
   protected boolean reversed;
 {code}
 which should be marked transient.
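A standalone demonstration of what `transient` buys here under Java serialization: the field is skipped on write and comes back as the type's default value on read. The class below models the `Filter.reversed` case but is illustrative only, not the HBase Filter.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Illustrative stand-in for a Filter with a transient runtime-only flag.
class DemoFilter implements Serializable {
    private static final long serialVersionUID = 1L;
    protected transient boolean reversed;   // not part of the serialized form
    protected String name;
    DemoFilter(String name, boolean reversed) { this.name = name; this.reversed = reversed; }
}

public class TransientDemo {
    // Serialize and deserialize through an in-memory buffer.
    static DemoFilter roundTrip(DemoFilter f) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(f);
            }
            try (ObjectInputStream ois =
                     new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                return (DemoFilter) ois.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        DemoFilter copy = roundTrip(new DemoFilter("f", true));
        // The transient flag reverts to false; the ordinary field survives.
        System.out.println(copy.name + " reversed=" + copy.reversed);
    }
}
```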



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12997) FSHLog should print pipeline on low replication

2015-02-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313259#comment-14313259
 ] 

stack commented on HBASE-12997:
---

+1

 FSHLog should print pipeline on low replication
 ---

 Key: HBASE-12997
 URL: https://issues.apache.org/jira/browse/HBASE-12997
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 2.0.0, 1.0.1, 1.1.0

 Attachments: HBASE-12997.1.patch.txt


 We already have code in place for 1.0+ to print the pipeline when there are 
 slow syncs happening.
 We should also print the pipeline when we decide to roll due to low 
 replication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12891) Parallel execution for Hbck checkRegionConsistency

2015-02-09 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313268#comment-14313268
 ] 

Enis Soztutar commented on HBASE-12891:
---

[~churromorales], [~davelatham] you mind addressing the concurrency issues? I 
think this is still useful. 

 Parallel execution for Hbck checkRegionConsistency
 --

 Key: HBASE-12891
 URL: https://issues.apache.org/jira/browse/HBASE-12891
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0, 0.98.10, 1.1.0
Reporter: churro morales
Assignee: churro morales
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12891-v1.patch, HBASE-12891.98.patch, 
 HBASE-12891.patch, HBASE-12891.patch, hbase-12891-addendum1.patch


 We have a lot of regions on our cluster ~500k and noticed that hbck took 
 quite some time in checkAndFixConsistency().  [~davelatham] patched our 
 cluster to do this check in parallel to speed things up.  I'll attach the 
 patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12998) Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647

2015-02-09 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313274#comment-14313274
 ] 

Enis Soztutar commented on HBASE-12998:
---

Thanks Stack. Yes, compilation with earlier Hadoop versions is fine. See also 
HBASE-12920 (needs a +1). 

 Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647
 --

 Key: HBASE-12998
 URL: https://issues.apache.org/jira/browse/HBASE-12998
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11

 Attachments: hbase-12998-v1.patch


 HDFS-7647 changed internal API related to LocatedBlocks and DataNodeInfo.  We 
 can fix it trivially in HBase for now. 
 {code}
 [INFO] -
 [ERROR] COMPILATION ERROR : 
 [INFO] -
 [ERROR] 
 /Users/enis/projects/hbase-champlain/hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java:[175,41]
  error: incompatible types
 {code}
 Longer term, we should add an API for advanced hdfs users (like HBase) to 
 deprioritize / reorder locations for blocks based on what the client thinks. 
 [~arpitagarwal] FYI. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10216) Change HBase to support local compactions

2015-02-09 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313284#comment-14313284
 ] 

Andrew Purtell commented on HBASE-10216:


We could propose a new HDFS API that would merge files so that the merging and 
deleting can be performed on local data nodes with no file contents moving over 
the network, but does this not only push something implemented today in the 
HBase regionserver down into the HDFS datanodes? Could a merge as described be 
safely executed in parallel on multiple datanodes without coordination? No, 
because the result is not a 1:1 map of input block to output block. Therefore a 
single datanode would handle the merge procedure. From a block device and 
network perspective nothing would change.

Set the above aside. We can't push something as critical to HBase as compaction 
down into HDFS. First, the HDFS project is unlikely to accept the idea or 
implement it in the first place. Even in the unlikely event that happens, we 
would need to reimplement compaction using the new HDFS facility to take 
advantage of it, yet we would still need to support older versions of HDFS 
without the new API for a while; and if the new HDFS API ever failed to 
perfectly address the minutiae of HBase compaction, then or going forward, we 
would be back where we started. 

Let's look at the read and write aspects with an eye toward what we have today, 
and assuming no new HDFS API.

Reads: With short circuit reads enabled, recommended for all deployments, if 
file blocks are available on the local datanode then block reads are fully 
local via a file descriptor passed over a unix domain socket, we never touch a 
TCP/IP socket. The probability that a block read for an HFile is local can be 
made very high by taking care to align region placement with block placement 
and/or fix up where block locality has dropped below a threshold using an 
existing HDFS API, see HBASE-4755 and HDFS-4606 

Writes: Writers like regionservers always contact the local datanode, assuming 
colocation of datanode and regionserver, as the first hop in the write 
pipeline. The datanode will then pipeline the write over the network to 
replicas, but only the second hop in the pipeline (from local datanode to first 
remote replica) will add contention on the local NIC; the third (from remote 
replica to other remote replica) will be pipelined from the remote. It's true 
we can initially avoid second-replica network IO by writing to a local file. Or 
we can have the equivalent in HDFS by setting the initial replication factor of 
the new file to 1. In either case, after closing the file, to make the result 
robust against node loss we need to replicate all blocks of the newly written 
file immediately afterward. So now we are waiting for network IO and contending 
for the NIC anyway; we have just deferred network IO until the file was 
completely written. We are not saving a single byte in transmission on the 
local NIC. We would have to add housekeeping that ensures we don't delete older 
HFiles until the new/merged HFile is completely replicated; this makes 
something our business that today HDFS handles transparently.

For us to see any significant impact, I think the proposal on this issue must 
be replaced with one where we flush from memstore to local files and then at 
some point merge locally flushed files to a compacted file on disk. Only then 
are we really saving on IO. All of those locally flushed files represent data 
that never leaves the local node, never crosses the network, never causes reads 
or writes beyond the local node. This is the benefit *and* the nature of the 
data availability problem that follows: We can't consider locally flushed files 
as persisted data. If a node crashes before they are compacted they are lost 
(until the node comes back online... maybe), or if a local file is corrupted 
before compaction the data inside is also lost. We can only consider flushed 
data persisted after a completed compaction, only after the compaction result 
is fully replicated in HDFS. We somehow have to track all of the data in local 
flush files and ensure it has all been compacted before deleting the WALs that 
contain those edits. We somehow need to detect when local flush files are stale 
after node recovery. Etc etc. Will the savings be worth the added 
complexity and additional failure modes? Maybe, but I believe Facebook 
published a paper on this that was inconclusive. 

 Change HBase to support local compactions
 -

 Key: HBASE-10216
 URL: https://issues.apache.org/jira/browse/HBASE-10216
 Project: HBase
  Issue Type: New Feature
  Components: Compaction
 Environment: All
Reporter: David Witten

 As I understand it compactions will read data from DFS and write to DFS.  
 

[jira] [Updated] (HBASE-12984) SSL cannot be used by the InfoPort after removing deprecated code in HBASE-10336

2015-02-09 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-12984:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I have pushed this. Thanks Esteban for the patch. You mind doing follow up 
jiras for the remaining work? 

 SSL cannot be used by the InfoPort after removing deprecated code in 
 HBASE-10336
 

 Key: HBASE-12984
 URL: https://issues.apache.org/jira/browse/HBASE-12984
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 2.0.0, 1.1.0
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
Priority: Blocker
 Fix For: 1.0.0, 2.0.0, 1.1.0

 Attachments: HBASE-12984-v1.txt, HBASE-12984-v3.txt, 
 HBASE-12984-v3.txt, HBASE-12984-v4.txt


 Setting {{hbase.ssl.enabled}} to {{true}} doesn't enable SSL on the 
 InfoServer. Found that the problem is down in the InfoServer and HttpConfig, 
 in how we set up the protocol in the HttpServer:
 {code}
 for (URI ep : endpoints) {
   Connector listener = null;
   String scheme = ep.getScheme();
   if ("http".equals(scheme)) {
     listener = HttpServer.createDefaultChannelConnector();
   } else if ("https".equals(scheme)) {
     SslSocketConnector c = new SslSocketConnectorSecure();
     c.setNeedClientAuth(needsClientAuth);
     c.setKeyPassword(keyPassword);
 {code}
 It depends what end points have been added by the InfoServer:
 {code}
 builder
   .setName(name)
   .addEndpoint(URI.create("http://" + bindAddress + ":" + port))
   .setAppDir(HBASE_APP_DIR).setFindPort(findPort).setConf(c);
 {code}
 Basically we always use http; we don't check via HttpConfig whether 
 {{hbase.ssl.enabled}} was set to true and assign the right scheme based on 
 the configuration.
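The fix the description implies can be sketched as follows. This is a simplified standalone illustration: the `Map` stands in for HBase's Configuration object, and the method name is made up; the point is only that the scheme should be derived from {{hbase.ssl.enabled}} instead of being hardcoded to "http".

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

public class InfoServerEndpoint {
    // Pick the endpoint scheme from configuration rather than hardcoding it.
    static URI endpoint(Map<String, String> conf, String bindAddress, int port) {
        boolean ssl = Boolean.parseBoolean(conf.getOrDefault("hbase.ssl.enabled", "false"));
        String scheme = ssl ? "https" : "http";
        return URI.create(scheme + "://" + bindAddress + ":" + port);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("hbase.ssl.enabled", "true");
        // With SSL enabled, the https connector branch would now be taken:
        System.out.println(endpoint(conf, "0.0.0.0", 60010));
    }
}
```

With an endpoint like this, the {{https.equals(scheme)}} branch in the connector-selection loop quoted above becomes reachable.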



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12996) Reversed field on Filter should be transient

2015-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313292#comment-14313292
 ] 

Hadoop QA commented on HBASE-12996:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697543/HBASE-12996.patch
  against master branch at commit 9d6b237ae8676750c97dad2b9d2655dbd43f67fa.
  ATTACHMENT ID: 12697543

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.coprocessor.TestMasterObserver.testRegionTransitionOperations(TestMasterObserver.java:1604)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12742//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12742//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12742//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12742//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12742//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12742//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12742//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12742//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12742//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12742//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12742//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12742//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12742//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12742//console

This message is automatically generated.

 Reversed field on Filter should be transient
 

 Key: HBASE-12996
 URL: https://issues.apache.org/jira/browse/HBASE-12996
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0, 0.98.11
Reporter: Ian Friedman
Priority: Trivial
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12996.patch


 Filter has the field
 {code}
   protected boolean reversed;
 {code}
 which should be marked transient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-10216) Change HBase to support local compactions

2015-02-09 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313284#comment-14313284
 ] 

Andrew Purtell edited comment on HBASE-10216 at 2/10/15 12:52 AM:
--

We could propose a new HDFS API that would merge files so that the merging and 
deleting can be performed on local data nodes with no file contents moving over 
the network, but does this not only push something implemented today in the 
HBase regionserver down into the HDFS datanodes? Could a merge as described be 
safely executed in parallel on multiple datanodes without coordination? No, 
because the result is not a 1:1 map of input block to output block. Therefore 
in a realistic implementation (IMHO) a single datanode would handle the merge 
procedure. From a block device and network perspective nothing would change.

Set the above aside. We can't push something as critical to HBase as compaction 
down into HDFS. First, the HDFS project is unlikely to accept the idea or 
implement it in the first place. Even in the unlikely event that happens, we 
would need to reimplement compaction using the new HDFS facility to take 
advantage of it, yet we would still need to support older versions of HDFS 
without the new API for a while; and if the new HDFS API ever failed to 
perfectly address the minutiae of HBase compaction, then or going forward, we 
would be back where we started. 

Let's look at the read and write aspects with an eye toward what we have today, 
and assuming no new HDFS API.

Reads: With short circuit reads enabled, recommended for all deployments, if 
file blocks are available on the local datanode then block reads are fully 
local via a file descriptor passed over a unix domain socket, we never touch a 
TCP/IP socket. The probability that a block read for an HFile is local can be 
made very high by taking care to align region placement with block placement 
and/or fix up where block locality has dropped below a threshold using an 
existing HDFS API, see HBASE-4755 and HDFS-4606 

Writes: Writers like regionservers always contact the local datanode, assuming 
colocation of datanode and regionserver, as the first hop in the write 
pipeline. The datanode will then pipeline the write over the network to 
replicas, but only the second hop in the pipeline (from local datanode to first 
remote replica) will add contention on the local NIC; the third (from remote 
replica to other remote replica) will be pipelined from the remote. It's true 
we can initially avoid second-replica network IO by writing to a local file. Or 
we can have the equivalent in HDFS by setting the initial replication factor of 
the new file to 1. In either case, after closing the file, to make the result 
robust against node loss we need to replicate all blocks of the newly written 
file immediately afterward. So now we are waiting for network IO and contending 
for the NIC anyway; we have just deferred network IO until the file was 
completely written. We are not saving a single byte in transmission on the 
local NIC. We would have to add housekeeping that ensures we don't delete older 
HFiles until the new/merged HFile is completely replicated; this makes 
something our business that today HDFS handles transparently.

For us to see any significant impact, I think the proposal on this issue must 
be replaced with one where we flush from memstore to local files and then at 
some point merge locally flushed files to a compacted file on disk. Only then 
are we really saving on IO. All of those locally flushed files represent data 
that never leaves the local node, never crosses the network, never causes reads 
or writes beyond the local node. This is the benefit *and* the nature of the 
data availability problem that follows: We can't consider locally flushed files 
as persisted data. If a node crashes before they are compacted they are lost 
(until the node comes back online... maybe), or if a local file is corrupted 
before compaction the data inside is also lost. We can only consider flushed 
data persisted after a completed compaction, only after the compaction result 
is fully replicated in HDFS. We somehow have to track all of the data in local 
flush files and ensure it has all been compacted before deleting the WALs that 
contain those edits. We somehow need to detect when local flush files are stale 
after node recovery. Etc etc. Will the savings be worth the added 
complexity and additional failure modes? Maybe, but I believe Facebook 
published a paper on this that was inconclusive. 


was (Author: apurtell):
We could propose a new HDFS API that would merge files so that the merging and 
deleting can be performed on local data nodes with no file contents moving over 
the network, but does this not only push something implemented today in the 
HBase regionserver down into the HDFS datanodes? Could a merge as described 

[jira] [Commented] (HBASE-12920) hadoopqa should compile with different hadoop versions

2015-02-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313295#comment-14313295
 ] 

stack commented on HBASE-12920:
---

That is very nice [~enis]. Check it in. It's hard to test this w/o checking it 
in. +1.

 hadoopqa should compile with different hadoop versions 
 ---

 Key: HBASE-12920
 URL: https://issues.apache.org/jira/browse/HBASE-12920
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0

 Attachments: hbase-12920_v1.patch


 From time to time, we break compilation with hadoop-2.4 or other earlier 
 versions, and only realize that at the time of a release candidate. 
 We should fix hadoopqa to do the compilation for us. 
 What I have locally is something like this: 
 {code}
 HADOOP2_VERSIONS="2.2.0 2.3.0 2.4.0 2.5.0 2.6.0"
 function buildWithHadoop2 {
   for HADOOP2_VERSION in $HADOOP2_VERSIONS ; do
     echo ""
     echo "# BUILDING $ARTIFACT WITH HADOOP 2 VERSION $HADOOP2_VERSION #"
     echo ""
     echo "mvn clean install -DskipTests -Dhadoop-two.version=$HADOOP2_VERSION"
     mvn clean install -DskipTests -Dhadoop-two.version=$HADOOP2_VERSION
   done
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12997) FSHLog should print pipeline on low replication

2015-02-09 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313296#comment-14313296
 ] 

Elliott Clark commented on HBASE-12997:
---

+1

 FSHLog should print pipeline on low replication
 ---

 Key: HBASE-12997
 URL: https://issues.apache.org/jira/browse/HBASE-12997
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 2.0.0, 1.0.1, 1.1.0

 Attachments: HBASE-12997.1.patch.txt


 We already have code in place for 1.0+ to print the pipeline when there are 
 slow syncs happening.
 We should also print the pipeline when we decide to roll due to low 
 replication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-10216) Change HBase to support local compactions

2015-02-09 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313284#comment-14313284
 ] 

Andrew Purtell edited comment on HBASE-10216 at 2/10/15 1:01 AM:
-

We could propose a new HDFS API that would merge files so that the merging and 
deleting can be performed on local data nodes with no file contents moving over 
the network, but does this not only push something implemented today in the 
HBase regionserver down into the HDFS datanodes? Could a merge as described be 
safely executed in parallel on multiple datanodes without coordination? No, 
because the result is not a 1:1 map of input block to output block. Therefore 
in a realistic implementation (IMHO) a single datanode would handle the merge 
procedure. From a block device and network perspective nothing would change.

Set the above aside. We can't push something as critical to HBase as compaction 
down into HDFS. First, the HDFS project is unlikely to accept the idea or 
implement it in the first place. Even in the unlikely event that happens, we 
would need to reimplement compaction using the new HDFS facility to take 
advantage of it, yet we would still need to support older versions of HDFS 
without the new API for a while; and if the new HDFS API ever failed to 
perfectly address the minutiae of HBase compaction, then or going forward, we 
would be back where we started. 

Let's look at the read and write aspects with an eye toward what we have today, 
and assuming no new HDFS API.

Reads: With short circuit reads enabled, recommended for all deployments, if 
file blocks are available on the local datanode then block reads are fully 
local via a file descriptor passed over a unix domain socket, we never touch a 
TCP/IP socket. The probability that a block read for an HFile is local can be 
made very high by taking care to align region placement with block placement 
and/or fix up where block locality has dropped below a threshold using an 
existing HDFS API, see HBASE-4755 and HDFS-4606 

Writes: Writers like regionservers always contact the local datanode, assuming 
colocation of datanode and regionserver, as the first hop in the write 
pipeline. The datanode will then pipeline the write over the network to 
replicas, but only the second hop in the pipeline (from local datanode to first 
remote replica) will add contention on the local NIC; the third (from remote 
replica to other remote replica) will be pipelined from the remote. It's true 
we can initially avoid second-replica network IO by writing to a local file. Or 
we can have the equivalent in HDFS by setting the initial replication factor of 
the new file to 1. In either case, after closing the file, to make the result 
robust against node loss we need to replicate all blocks of the newly written 
file immediately afterward. So now we are waiting for network IO and contending 
for the NIC anyway; we have just deferred network IO until the file was 
completely written. We are not saving a single byte in transmission on the 
local NIC. We would have to add housekeeping that ensures we don't delete older 
HFiles until the new/merged HFile is completely replicated; this makes our 
business something that is transparent today because we don't defer writes: 
when close() completes on the file we are writing directly to HDFS, we know it 
has already been fully replicated. 

For us to see any significant impact, I think the proposal on this issue must 
be replaced with one where we flush from memstore to local files and then at 
some point merge locally flushed files to a compacted file on HDFS. Only then 
are we really saving on IO. All of those locally flushed files represent data 
that never leaves the local node, never crosses the network, never causes reads 
or writes beyond the local node. This is the benefit *and* the nature of the 
data availability problem that follows: We can't consider locally flushed files 
as persisted data. If a node crashes before they are compacted they are lost 
(until the node comes back online... maybe), or if a local file is corrupted 
before compaction the data inside is also lost. We can only consider flushed 
data persisted after a completed compaction, only after the compaction result 
is fully replicated in HDFS. We somehow have to track all of the data in local 
flush files and ensure it has all been compacted before deleting the WALs that 
contain those edits. We somehow need to detect when local flush files are stale 
after node recovery. Etc etc. Will the savings be worth the added 
complexity and additional failure modes? Maybe, but I believe Facebook 
published a paper on this that was inconclusive. 


was (Author: apurtell):
We could propose a new HDFS API that would merge files so that the merging and 
deleting can be performed on local data nodes with no file contents moving over 
the network, but does this not 

[jira] [Commented] (HBASE-12978) hbase:meta has a row missing hregioninfo and it causes my long-running job to fail

2015-02-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313312#comment-14313312
 ] 

stack commented on HBASE-12978:
---

Your scenario may be possible [~devaraj] I need to look at it more and try and 
put in measures to prevent it.

It is not the case here though.

Here are current edits in hbase:meta:

K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:seqnumDuringOpen/1423442394783/Put/vlen=8/seqid=131112
K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:seqnumDuringOpen/1423442332641/Put/vlen=8/seqid=131019
K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:server/1423442394783/Put/vlen=30/seqid=131112
K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:server/1423442332641/Put/vlen=30/seqid=131019
K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:serverstartcode/1423442394783/Put/vlen=8/seqid=131112
K: 
IntegrationTestBigLinkedList,+\x84\xFF\xFC\xE4%\xF2\x11\xDE\x97t\xF0(\xF1$\xE8,1423438433508.014990fd6eb13141c04018f19c8910c8./info:serverstartcode/1423442332641/Put/vlen=8/seqid=131019

The info:regioninfo is missing.

This region has not been split, and a split is, as best as I can tell, the only 
time we'd remove the info:regioninfo entry.

I went back over the moved-off hfiles and recovered-edits files and only see 
mention of the Put of the original info:regioninfo edit. There is no 'Delete'. 
It looks like the edit was 'dropped'. Let me see if I can find the time at 
which the edit was dropped.
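The check being done by hand above (finding a meta row that has info:server and info:seqnumDuringOpen cells but no info:regioninfo) can be sketched as follows. This is a hypothetical, self-contained illustration, not the actual hbck or hbase:meta scanning code; the row and column names are modeled on the key dump above.

```python
# Group meta cells by row and flag region rows lacking an info:regioninfo
# cell, which is the corruption this issue describes.
from collections import defaultdict

def rows_missing_regioninfo(cells):
    """cells: iterable of (row_key, column) pairs from an hbase:meta dump."""
    cols_by_row = defaultdict(set)
    for row, column in cells:
        cols_by_row[row].add(column)
    return [row for row, cols in sorted(cols_by_row.items())
            if 'info:regioninfo' not in cols]

cells = [
    ('region-A', 'info:regioninfo'),
    ('region-A', 'info:server'),
    ('region-B', 'info:server'),            # like the dump: regioninfo gone
    ('region-B', 'info:seqnumDuringOpen'),
]
print(rows_missing_regioninfo(cells))  # ['region-B']
```

In a real deployment the cells would come from a client scan of hbase:meta rather than a hardcoded list.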

 hbase:meta has a row missing hregioninfo and it causes my long-running job to 
 fail
 --

 Key: HBASE-12978
 URL: https://issues.apache.org/jira/browse/HBASE-12978
 Project: HBase
  Issue Type: Bug
Reporter: stack
 Fix For: 1.0.1


 Testing 1.0.0 trying long-running tests.
 A row in hbase:meta was missing its HRI entry. It caused the job to fail. 
 Around the time of the first task failure, there are balances of the 
 hbase:meta region and it was on a server that crashed. I tried to look at 
 what happened around time of our writing hbase:meta and I ran into another 
 issue; 20 logs of 256MBs filled with WrongRegionException written over a 
 minute or two. The actual update of hbase:meta was not in the logs, it'd been 
 rotated off.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12332) [mob] use filelink instead of retry when resolving mobfiles

2015-02-09 Thread Jiajia Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313322#comment-14313322
 ] 

Jiajia Li commented on HBASE-12332:
---

Hi, [~j...@cloudera.com], what is the plan for this jira?

 [mob] use filelink instead of retry when resolving mobfiles
 ---

 Key: HBASE-12332
 URL: https://issues.apache.org/jira/browse/HBASE-12332
 Project: HBase
  Issue Type: Sub-task
  Components: mob
Affects Versions: hbase-11339
Reporter: Jonathan Hsieh
 Fix For: hbase-11339

 Attachments: HBASE-12332-V1.diff, HBASE-12332-V2.patch, 
 HBASE-12332-V3.patch, HBASE-12332-V5.patch, hbase-12332.link.v4.patch, 
 hbase-12332.patch


 in the snapshot code, hmobstore was modified to traverse an hfile link to a 
 mob.   Ideally this should use the transparent filelink code to read the data.
 Also there will likely be some issues with the mob file cache with these 
 links.





[jira] [Commented] (HBASE-12991) Use HBase 1.0 interfaces in hbase-rest

2015-02-09 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14313324#comment-14313324
 ] 

Andrew Purtell commented on HBASE-12991:


+1 for the REST changes

This:
{code}
diff --git 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/ConnectionCache.java 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/ConnectionCache.java
index 21714af..8bcdda9 100644
--- hbase-server/src/main/java/org/apache/hadoop/hbase/util/ConnectionCache.java
+++ hbase-server/src/main/java/org/apache/hadoop/hbase/util/ConnectionCache.java
[...]
{code}

and this:
{code}
diff --git 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
index 9f23c09..2752a1a 100644
--- 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
+++ 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
[...]
{code}

are out of scope per the issue description. Or we can make this about replacing 
HBaseAdmin with Admin more generally. Perhaps sweep up more occurrences if 
there are any. 

Anyway, I'm happy to commit the patch and resolve the issue as is, minus the 
called-out changes.

 Use HBase 1.0 interfaces in hbase-rest
 --

 Key: HBASE-12991
 URL: https://issues.apache.org/jira/browse/HBASE-12991
 Project: HBase
  Issue Type: Bug
Reporter: Solomon Duskis
Assignee: Solomon Duskis
 Fix For: 2.0.0, 1.0.1

 Attachments: HBASE-12991.patch


 hbase-rest uses HTable and HBaseAdmin under the covers.  They should use the 
 new hbase 1.0 interfaces instead.





  1   2   >