[jira] [Commented] (HBASE-9631) add murmur3 hash

2013-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776043#comment-13776043
 ] 

Hadoop QA commented on HBASE-9631:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12604541/HBase-9631.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7350//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7350//console

This message is automatically generated.

> add murmur3 hash
> 
>
> Key: HBASE-9631
> URL: https://issues.apache.org/jira/browse/HBASE-9631
> Project: HBase
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 0.98.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HBase-9631.txt
>
>
> MurmurHash3 is the successor to MurmurHash2. It comes in 3 variants - a 
> 32-bit version that targets low latency for hash table use and two 128-bit 
> versions for generating unique identifiers for large blocks of data, one each 
> for x86 and x64 platforms.
> Several open source projects have already added murmur3, e.g. Cassandra and 
> Mahout.
> I just ported murmur3 from MAHOUT-862. For compatibility, let's keep the 
> default hash algorithm (murmur2) unchanged.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9631) add murmur3 hash

2013-09-23 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776040#comment-13776040
 ] 

Liang Xie commented on HBASE-9631:
--

Please see CASSANDRA-2975 as a reference.




[jira] [Commented] (HBASE-9631) add murmur3 hash

2013-09-23 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776039#comment-13776039
 ] 

Liang Xie commented on HBASE-9631:
--

I think we can recommend configuring murmur3 on newly bootstrapped clusters.
For an existing cluster, changing it on the fly would probably cause 
compatibility issues: murmur3 does not seem to guarantee the same results as 
murmur2, so hashes of existing data would likely change when switching 
algorithms. (Maybe MurmurHash3_x86_32 could return the same values as murmur2, 
but I am not sure yet.)
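The compatibility concern can be made concrete. The sketch below uses standalone ports of the public-domain 32-bit MurmurHash2 and MurmurHash3_x86_32 reference algorithms (class and method names are hypothetical; these are not HBase's own Hash/MurmurHash classes): the two algorithms produce different values for the same key and seed, so switching the default in place would invalidate any hashes already derived from existing data.

```java
// Sketch: 32-bit MurmurHash2 vs. MurmurHash3_x86_32 on the same input.
// Standalone ports of the reference algorithms, NOT HBase's util classes.
public class MurmurCompatSketch {

    // MurmurHash2 (32-bit) by Austin Appleby.
    public static int murmur2(byte[] data, int seed) {
        final int m = 0x5bd1e995;
        int h = seed ^ data.length;
        int i = 0;
        while (data.length - i >= 4) {
            int k = (data[i] & 0xff) | ((data[i + 1] & 0xff) << 8)
                  | ((data[i + 2] & 0xff) << 16) | (data[i + 3] << 24);
            i += 4;
            k *= m; k ^= k >>> 24; k *= m;
            h *= m; h ^= k;
        }
        switch (data.length - i) {           // tail bytes, falls through
            case 3: h ^= (data[i + 2] & 0xff) << 16;
            case 2: h ^= (data[i + 1] & 0xff) << 8;
            case 1: h ^= (data[i] & 0xff); h *= m;
        }
        h ^= h >>> 13; h *= m; h ^= h >>> 15;
        return h;
    }

    // MurmurHash3_x86_32, the low-latency 32-bit variant.
    public static int murmur3x86_32(byte[] data, int seed) {
        final int c1 = 0xcc9e2d51, c2 = 0x1b873593;
        int h = seed, i = 0;
        while (data.length - i >= 4) {
            int k = (data[i] & 0xff) | ((data[i + 1] & 0xff) << 8)
                  | ((data[i + 2] & 0xff) << 16) | (data[i + 3] << 24);
            i += 4;
            k *= c1; k = Integer.rotateLeft(k, 15); k *= c2;
            h ^= k; h = Integer.rotateLeft(h, 13); h = h * 5 + 0xe6546b64;
        }
        int k = 0;
        switch (data.length - i) {           // tail bytes, falls through
            case 3: k ^= (data[i + 2] & 0xff) << 16;
            case 2: k ^= (data[i + 1] & 0xff) << 8;
            case 1: k ^= (data[i] & 0xff);
                    k *= c1; k = Integer.rotateLeft(k, 15); k *= c2; h ^= k;
        }
        h ^= data.length;                    // finalization mix
        h ^= h >>> 16; h *= 0x85ebca6b; h ^= h >>> 13;
        h *= 0xc2b2ae35; h ^= h >>> 16;
        return h;
    }

    public static void main(String[] args) {
        byte[] key = "row-key-0001".getBytes();
        System.out.println("murmur2 = " + murmur2(key, 0));
        System.out.println("murmur3 = " + murmur3x86_32(key, 0));
    }
}
```

Because the outputs differ, anything persisted under the old hash would miss after an in-place switch, which is why the default stays murmur2.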




[jira] [Resolved] (HBASE-9643) HBase Table is in 'disabled' state, but the corresponding 'enable' is throwing 'TableNotDisabledException' exception

2013-09-23 Thread shankarlingayya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shankarlingayya resolved HBASE-9643.


Resolution: Duplicate

duplicate of HBASE-6469

> HBase Table is in 'disabled' state, but the corresponding 'enable' is 
> throwing 'TableNotDisabledException' exception 
> -
>
> Key: HBASE-9643
> URL: https://issues.apache.org/jira/browse/HBASE-9643
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 0.94.11
> Environment: SuSE11
>Reporter: shankarlingayya
>
> {noformat}
> HBase Table is in 'disabled' state, but the corresponding 'enable' is 
> throwing 'TableNotDisabledException' exception
> hbase(main):025:0> list
> TABLE 
>   
> t1
>   
> t2
>   
> 2 row(s) in 0.0220 seconds
> hbase(main):026:0> describe 't1'
> DESCRIPTION   
>  ENABLED  
>  't1', {NAME => 'cf1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', 
> REPLI false
>  CATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => 
> '0',  
>   TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', 
> IN_MEM  
>  ORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}  
>   
> 1 row(s) in 0.0430 seconds
> hbase(main):027:0> enable 't1'
> ERROR: org.apache.hadoop.hbase.TableNotDisabledException: 
> org.apache.hadoop.hbase.TableNotDisabledException: t1
> at 
> org.apache.hadoop.hbase.master.handler.EnableTableHandler.(EnableTableHandler.java:95)
> at 
> org.apache.hadoop.hbase.master.HMaster.enableTable(HMaster.java:1471)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1434)
> Here is some help for this command:
> Start enable of named table: e.g. "hbase> enable 't1'"
> hbase(main):028:0> 
> HMaster Log:
> 2013-09-24 11:36:20,630 DEBUG org.apache.hadoop.hbase.client.MetaScanner: 
> Scanning .META. starting at row= for max=2147483647 rows using 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@215200be
> 2013-09-24 11:36:20,649 DEBUG org.apache.hadoop.hbase.master.CatalogJanitor: 
> Scanned 348 catalog row(s) and gc'd 0 unreferenced parent region(s)
> 2013-09-24 11:37:59,428 INFO org.apache.hadoop.hbase.master.LoadBalancer: 
> Skipping load balancing because balanced cluster; servers=2 regions=347 
> average=173.5 mostloaded=174 leastloaded=173
> 2013-09-24 11:37:59,428 INFO org.apache.hadoop.hbase.master.LoadBalancer: 
> Skipping load balancing because balanced cluster; servers=2 regions=1 
> average=0.5 mostloaded=1 leastloaded=0
> 2013-09-24 11:41:11,339 DEBUG org.apache.hadoop.hbase.client.ClientScanner: 
> Creating scanner over .META. starting at key 't1,,'
> 2013-09-24 11:41:11,339 DEBUG org.apache.hadoop.hbase.client.ClientScanner: 
> Advancing internal scanner to startKey at 't1,,'
> 2013-09-24 11:41:11,342 INFO 
> org.apache.hadoop.hbase.master.handler.EnableTableHandler: Table t1 isn't 
> disabled; skipping enable
> 2013-09-24 11:41:20,630 DEBUG org.apache.hadoop.hbase.client.MetaScanner: 
> Scanning .META. starting at row= for max=2147483647 rows using 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@215200be
> 2013-09-24 11:41:20,652 DEBUG org.apache.hadoop.hbase.master.CatalogJanitor: 
> Scanned 348 catalog row(s) and gc'd 0 unreferenced parent region(s)
> {noformat}



[jira] [Commented] (HBASE-9643) HBase Table is in 'disabled' state, but the corresponding 'enable' is throwing 'TableNotDisabledException' exception

2013-09-23 Thread shankarlingayya (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776033#comment-13776033
 ] 

shankarlingayya commented on HBASE-9643:


The table is in DISABLING state; duplicate of HBASE-6469.




[jira] [Commented] (HBASE-9630) Add thread which detects JVM pauses like HADOOP's

2013-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776030#comment-13776030
 ] 

Hadoop QA commented on HBASE-9630:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12604729/HBase-9630-v2.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to cause Findbugs 
(version 1.3.9) to fail.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7349//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7349//console

This message is automatically generated.

> Add thread which detects JVM pauses like HADOOP's
> -
>
> Key: HBASE-9630
> URL: https://issues.apache.org/jira/browse/HBASE-9630
> Project: HBase
>  Issue Type: New Feature
>  Components: regionserver
>Affects Versions: 0.98.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HBase-9630.txt, HBase-9630-v2.txt
>
>
> Todd added daemon threads for the DN and NN that log when the JVM or kernel 
> caused a pause in the application; it's pretty handy for diagnosis. I thought 
> it would be great to have a similar ability in HBase.
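A minimal sketch of such a pause detector (class name and thresholds are hypothetical, loosely modeled on Hadoop's JvmPauseMonitor idea, not the patch attached here): a daemon thread sleeps for a fixed interval, measures how long the sleep actually took, and logs when the overshoot suggests a GC or kernel stall.

```java
// Sketch of a JVM pause detector: sleep, measure overshoot, classify it.
// Class name and thresholds are hypothetical, not taken from the patch.
public class PauseMonitorSketch implements Runnable {
    static final long SLEEP_MS = 500;
    static final long INFO_THRESHOLD_MS = 1_000;
    static final long WARN_THRESHOLD_MS = 10_000;

    // Classify an observed sleep overshoot; null means "no notable pause".
    static String pauseLevel(long extraSleepMs) {
        if (extraSleepMs > WARN_THRESHOLD_MS) return "WARN";
        if (extraSleepMs > INFO_THRESHOLD_MS) return "INFO";
        return null;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            long start = System.nanoTime();
            try {
                Thread.sleep(SLEEP_MS);
            } catch (InterruptedException e) {
                return; // shut down with the process
            }
            long extraMs = (System.nanoTime() - start) / 1_000_000 - SLEEP_MS;
            String level = pauseLevel(extraMs);
            if (level != null) {
                // A real server would route this through its logging framework.
                System.out.println(level + ": detected pause of approximately "
                        + extraMs + "ms; likely GC or kernel stall");
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new PauseMonitorSketch(), "JvmPauseMonitor");
        t.setDaemon(true); // must never keep the JVM alive
        t.start();
        Thread.sleep(1200); // let it sample a couple of intervals
    }
}
```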



[jira] [Commented] (HBASE-9631) add murmur3 hash

2013-09-23 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776028#comment-13776028
 ] 

Elliott Clark commented on HBASE-9631:
--

What do we gain by just having that hash there if it's not used?  The hash is 
interesting; should we turn it on by default?




[jira] [Comment Edited] (HBASE-9643) HBase Table is in 'disabled' state, but the corresponding 'enable' is throwing 'TableNotDisabledException' exception

2013-09-23 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776026#comment-13776026
 ] 

rajeshbabu edited comment on HBASE-9643 at 9/24/13 6:15 AM:


Can you check whether the table is fully disabled with the is_disabled 't' 
command? It may be in a partially disabled state (DISABLING), or the disable 
may still be in progress; that is why TableNotDisabledException is thrown.
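The check behind this can be sketched as a tiny state model (a simplified, hypothetical model of the master's precondition, not the actual EnableTableHandler code): enable is only legal from the fully DISABLED state, so a table still in DISABLING fails the check and the master throws TableNotDisabledException.

```java
// Hypothetical simplified model of the enable-table precondition.
public class TableStateSketch {
    enum TableState { ENABLED, ENABLING, DISABLING, DISABLED }

    // enable is only legal once the disable has fully completed.
    static boolean canEnable(TableState state) {
        return state == TableState.DISABLED;
    }

    public static void main(String[] args) {
        for (TableState s : TableState.values()) {
            System.out.println(s + " -> canEnable=" + canEnable(s));
        }
    }
}
```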


  was (Author: rajesh23):
Can you check whether is table is disable by is_disabled 't'

  

[jira] [Commented] (HBASE-9643) HBase Table is in 'disabled' state, but the corresponding 'enable' is throwing 'TableNotDisabledException' exception

2013-09-23 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13776026#comment-13776026
 ] 

rajeshbabu commented on HBASE-9643:
---

Can you check whether the table is disabled with the is_disabled 't' command?





[jira] [Created] (HBASE-9643) HBase Table is in 'disabled' state, but the corresponding 'enable' is throwing 'TableNotDisabledException' exception

2013-09-23 Thread shankarlingayya (JIRA)
shankarlingayya created HBASE-9643:
--

 Summary: HBase Table is in 'disabled' state, but the corresponding 
'enable' is throwing 'TableNotDisabledException' exception 
 Key: HBASE-9643
 URL: https://issues.apache.org/jira/browse/HBASE-9643
 Project: HBase
  Issue Type: Bug
  Components: master, regionserver
Affects Versions: 0.94.11
 Environment: SuSE11
Reporter: shankarlingayya







[jira] [Updated] (HBASE-9642) AM ZK Workers stuck doing 100% CPU on HashMap.put

2013-09-23 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-9642:
---

Status: Patch Available  (was: Open)

> AM ZK Workers stuck doing 100% CPU on HashMap.put
> -
>
> Key: HBASE-9642
> URL: https://issues.apache.org/jira/browse/HBASE-9642
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.96.0
>Reporter: Jean-Daniel Cryans
>Assignee: Devaraj Das
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9642-1.txt
>
>
> I just noticed on my test cluster that my master is using all my CPUs even 
> though it's completely idle. 5 threads are doing this:
> {noformat}
> "AM.ZK.Worker-pool2-t34" daemon prio=10 tid=0x7f68ac176800 nid=0x5251 
> runnable [0x7f688cc83000]
>java.lang.Thread.State: RUNNABLE
>   at java.util.HashMap.put(HashMap.java:374)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.handleRegion(AssignmentManager.java:954)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager$6.run(AssignmentManager.java:1419)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager$3.run(AssignmentManager.java:1247)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
> {noformat}
> Looking at the code, I see HBASE-9095 introduced two HashMaps *for tests 
> only* but they end up being used concurrently in the AM _and_ are never 
> cleaned up. It seems to me that any master running since that patch was 
> committed has a time bomb in it.
> I'm marking this as a blocker. [~devaraj] and [~jxiang], you guys wanna take 
> a look at this?



[jira] [Updated] (HBASE-9642) AM ZK Workers stuck doing 100% CPU on HashMap.put

2013-09-23 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-9642:
---

Attachment: 9642-1.txt

Sorry about this one. I have fixed this so that the maps are null in non-test 
situations, and updates to the maps don't happen then (IMO this is good enough 
for now).
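The guard described can be sketched as follows; this is a minimal standalone 
illustration of the pattern (the field and method names here are hypothetical, 
not the actual AssignmentManager code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "null outside tests" guard pattern (illustrative names):
// the test-only map is never allocated in production, so the hot path
// cannot touch an unsynchronized HashMap from multiple threads.
class TestOnlyMapDemo {
    // Allocated only when test bookkeeping is explicitly enabled.
    private final Map<String, Long> regionTransitionsForTests;

    TestOnlyMapDemo(boolean testMode) {
        this.regionTransitionsForTests = testMode ? new HashMap<>() : null;
    }

    void handleRegion(String region) {
        // Production path: the null check skips the map update entirely.
        if (regionTransitionsForTests != null) {
            regionTransitionsForTests.put(region, System.currentTimeMillis());
        }
    }

    int recordedTransitions() {
        return regionTransitionsForTests == null ? 0 : regionTransitionsForTests.size();
    }

    public static void main(String[] args) {
        TestOnlyMapDemo prod = new TestOnlyMapDemo(false);
        prod.handleRegion("r1");                 // no-op in production mode
        TestOnlyMapDemo test = new TestOnlyMapDemo(true);
        test.handleRegion("r1");
        System.out.println(prod.recordedTransitions() + " " + test.recordedTransitions());
    }
}
```

A ConcurrentHashMap would also remove the thread-safety hazard, but nulling the 
maps additionally avoids the unbounded growth noted in the report.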

> AM ZK Workers stuck doing 100% CPU on HashMap.put
> -
>
> Key: HBASE-9642
> URL: https://issues.apache.org/jira/browse/HBASE-9642
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.96.0
>Reporter: Jean-Daniel Cryans
>Assignee: Devaraj Das
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9642-1.txt
>
>
> I just noticed on my test cluster that my master is using all my CPUs even 
> though it's completely idle. 5 threads are doing this:
> {noformat}
> "AM.ZK.Worker-pool2-t34" daemon prio=10 tid=0x7f68ac176800 nid=0x5251 
> runnable [0x7f688cc83000]
>java.lang.Thread.State: RUNNABLE
>   at java.util.HashMap.put(HashMap.java:374)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.handleRegion(AssignmentManager.java:954)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager$6.run(AssignmentManager.java:1419)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager$3.run(AssignmentManager.java:1247)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
> {noformat}
> Looking at the code, I see HBASE-9095 introduced two HashMaps *for tests 
> only* but they end up being used concurrently in the AM _and_ are never 
> cleaned up. It seems to me that any master running since that patch was 
> committed has a time bomb in it.
> I'm marking this as a blocker. [~devaraj] and [~jxiang], you guys wanna take 
> a look at this?



[jira] [Commented] (HBASE-9640) Increment of loadSequence in CoprocessorHost#loadInstance() is thread-unsafe

2013-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775995#comment-13775995
 ] 

Hadoop QA commented on HBASE-9640:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12604618/9640.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7348//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7348//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7348//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7348//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7348//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7348//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7348//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7348//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7348//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7348//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7348//console

This message is automatically generated.

> Increment of loadSequence in CoprocessorHost#loadInstance() is thread-unsafe 
> -
>
> Key: HBASE-9640
> URL: https://issues.apache.org/jira/browse/HBASE-9640
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9640.txt
>
>
> {code}
> E env = createEnvironment(implClass, impl, priority, ++loadSequence, 
> conf);
> {code}
> Increment of loadSequence doesn't have proper synchronization.
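One conventional way to make such an increment thread-safe is an AtomicLong. 
The sketch below is a standalone illustration of that pattern, not the actual 
CoprocessorHost patch:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: replacing a bare "++loadSequence" with AtomicLong.incrementAndGet()
// makes the increment atomic, so concurrent callers can never observe a
// duplicated sequence number.
class LoadSequenceDemo {
    private static final AtomicLong loadSequence = new AtomicLong();

    static long nextSequence() {
        return loadSequence.incrementAndGet(); // thread-safe ++loadSequence
    }

    public static void main(String[] args) throws InterruptedException {
        Set<Long> seen = ConcurrentHashMap.newKeySet();
        Thread[] threads = new Thread[8];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) {
                    seen.add(nextSequence());
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) {
            th.join();
        }
        // 8 threads x 1000 atomic increments -> 8000 distinct values.
        System.out.println(seen.size());
    }
}
```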



[jira] [Updated] (HBASE-9588) Expose checkAndPut/checkAndDelete with comparators to HTableInterface

2013-09-23 Thread Robert Roland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Roland updated HBASE-9588:
-

Attachment: checkAndPut_HBASE-9588_TRUNK-v2.patch

> Expose checkAndPut/checkAndDelete with comparators to HTableInterface
> -
>
> Key: HBASE-9588
> URL: https://issues.apache.org/jira/browse/HBASE-9588
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0
>Reporter: Robert Roland
> Attachments: checkAndPut_HBASE-9588_0.94.patch, 
> checkAndPut_HBASE-9588_TRUNK.patch, checkAndPut_HBASE-9588_TRUNK.patch.1, 
> checkAndPut_HBASE-9588_TRUNK-v2.patch
>
>
> HRegionInterface allows you to specify a comparator to checkAndPut and 
> checkAndDelete, but that isn't available to the standard HTableInterface.
> The attached patches expose these functions to the client. It adds two 
> methods to HTableInterface, which required implementing in several places.
> They are not implemented in RemoteHTable - I couldn't see an obvious way to 
> implement there. Following the pattern of increment, batch, etc, they are 
> "not supported."



[jira] [Updated] (HBASE-9588) Expose checkAndPut/checkAndDelete with comparators to HTableInterface

2013-09-23 Thread Robert Roland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Roland updated HBASE-9588:
-

Status: Patch Available  (was: Open)

> Expose checkAndPut/checkAndDelete with comparators to HTableInterface
> -
>
> Key: HBASE-9588
> URL: https://issues.apache.org/jira/browse/HBASE-9588
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0
>Reporter: Robert Roland
> Attachments: checkAndPut_HBASE-9588_0.94.patch, 
> checkAndPut_HBASE-9588_TRUNK.patch, checkAndPut_HBASE-9588_TRUNK.patch.1
>
>
> HRegionInterface allows you to specify a comparator to checkAndPut and 
> checkAndDelete, but that isn't available to the standard HTableInterface.
> The attached patches expose these functions to the client. It adds two 
> methods to HTableInterface, which required implementing in several places.
> They are not implemented in RemoteHTable - I couldn't see an obvious way to 
> implement there. Following the pattern of increment, batch, etc, they are 
> "not supported."



[jira] [Assigned] (HBASE-9642) AM ZK Workers stuck doing 100% CPU on HashMap.put

2013-09-23 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das reassigned HBASE-9642:
--

Assignee: Devaraj Das

> AM ZK Workers stuck doing 100% CPU on HashMap.put
> -
>
> Key: HBASE-9642
> URL: https://issues.apache.org/jira/browse/HBASE-9642
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.96.0
>Reporter: Jean-Daniel Cryans
>Assignee: Devaraj Das
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
>
> I just noticed on my test cluster that my master is using all my CPUs even 
> though it's completely idle. 5 threads are doing this:
> {noformat}
> "AM.ZK.Worker-pool2-t34" daemon prio=10 tid=0x7f68ac176800 nid=0x5251 
> runnable [0x7f688cc83000]
>java.lang.Thread.State: RUNNABLE
>   at java.util.HashMap.put(HashMap.java:374)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.handleRegion(AssignmentManager.java:954)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager$6.run(AssignmentManager.java:1419)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager$3.run(AssignmentManager.java:1247)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
> {noformat}
> Looking at the code, I see HBASE-9095 introduced two HashMaps *for tests 
> only* but they end up being used concurrently in the AM _and_ are never 
> cleaned up. It seems to me that any master running since that patch was 
> committed has a time bomb in it.
> I'm marking this as a blocker. [~devaraj] and [~jxiang], you guys wanna take 
> a look at this?



[jira] [Commented] (HBASE-9629) SnapshotReferenceUtil#snapshot should catch RemoteWithExtrasException

2013-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775991#comment-13775991
 ] 

Hadoop QA commented on HBASE-9629:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12604470/9629.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7347//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7347//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7347//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7347//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7347//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7347//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7347//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7347//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7347//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7347//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7347//console

This message is automatically generated.

> SnapshotReferenceUtil#snapshot should catch RemoteWithExtrasException
> -
>
> Key: HBASE-9629
> URL: https://issues.apache.org/jira/browse/HBASE-9629
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9629.txt
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/7329//testReport/org.apache.hadoop.hbase.snapshot/TestFlushSnapshotFromClient/testTakeSnapshotAfterMerge/
>  :
> {code}
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=snapshotAfterMerge table=test type=FLUSH } had an error.  Procedure 
> snapshotAfterMerge { waiting=[] done=[] }
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:208)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:219)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:123)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:94)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseA

[jira] [Updated] (HBASE-9588) Expose checkAndPut/checkAndDelete with comparators to HTableInterface

2013-09-23 Thread Robert Roland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Roland updated HBASE-9588:
-

Status: Open  (was: Patch Available)

> Expose checkAndPut/checkAndDelete with comparators to HTableInterface
> -
>
> Key: HBASE-9588
> URL: https://issues.apache.org/jira/browse/HBASE-9588
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0
>Reporter: Robert Roland
> Attachments: checkAndPut_HBASE-9588_0.94.patch, 
> checkAndPut_HBASE-9588_TRUNK.patch, checkAndPut_HBASE-9588_TRUNK.patch.1
>
>
> HRegionInterface allows you to specify a comparator to checkAndPut and 
> checkAndDelete, but that isn't available to the standard HTableInterface.
> The attached patches expose these functions to the client. It adds two 
> methods to HTableInterface, which required implementing in several places.
> They are not implemented in RemoteHTable - I couldn't see an obvious way to 
> implement there. Following the pattern of increment, batch, etc, they are 
> "not supported."



[jira] [Commented] (HBASE-9084) HBase admin flush has a data loss risk even after HBASE-7671

2013-09-23 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775978#comment-13775978
 ] 

Liang Xie commented on HBASE-9084:
--

Any more comments? Or can this go into trunk?

> HBase admin flush has a data loss risk even after HBASE-7671
> 
>
> Key: HBASE-9084
> URL: https://issues.apache.org/jira/browse/HBASE-9084
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.95.1, 0.94.10
>Reporter: Liang Xie
>Assignee: Liang Xie
>Priority: Critical
> Attachments: HBASE-9084-0.94.txt, HBASE-9084.txt, hbase-9084v2.patch
>
>
> see 
> https://issues.apache.org/jira/browse/HBASE-7671?focusedCommentId=13722148&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13722148
> will attach a simple patch soon



[jira] [Commented] (HBASE-9502) HStore.seekToScanner should handle magic value

2013-09-23 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775976#comment-13775976
 ] 

Liang Xie commented on HBASE-9502:
--

Any more comments?

> HStore.seekToScanner should handle magic value
> --
>
> Key: HBASE-9502
> URL: https://issues.apache.org/jira/browse/HBASE-9502
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Scanners
>Affects Versions: 0.98.0, 0.95.2, 0.96.1
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HBASE-9502.txt, HBASE-9502-v2.txt
>
>
> Due to a faked key, seekTo can return -2 (the magic value), and 
> HStore.seekToScanner should handle this corner case.



[jira] [Commented] (HBASE-9519) fix NPE in EncodedScannerV2.getFirstKeyInBlock()

2013-09-23 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775975#comment-13775975
 ] 

Liang Xie commented on HBASE-9519:
--

Any more comments? Or can this go into trunk?

> fix NPE in EncodedScannerV2.getFirstKeyInBlock()
> 
>
> Key: HBASE-9519
> URL: https://issues.apache.org/jira/browse/HBASE-9519
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 0.96.1
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HBASE-9519.txt, HBASE-9519-v2.txt
>
>
> We observed a reproducible NPE while scanning a particular table under 
> specific conditions in our integration-testing scenario; it is fixed by 
> applying the attached patch.
> org.apache.hadoop.hbase.client.ScannerCallable@67ee75a5, java.io.IOException: 
> java.io.IOException: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1186)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.convertThrowableToIOE(HRegionServer.java:1175)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2391)
> at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:456)
> at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1071)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:547)
> at 
> org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekBefore(HalfStoreFileReader.java:159)
> at 
> org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekBefore(HalfStoreFileReader.java:142)
> at 
> org.apache.hadoop.hbase.io.HalfStoreFileReader.getLastKey(HalfStoreFileReader.java:267)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.passesKeyRangeFilter(StoreFile.java:1543)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.shouldUseScanner(StoreFileScanner.java:375)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.selectScannersFrom(StoreScanner.java:298)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:262)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:149)
> at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:2122)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:3460)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1645)
> at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1635)
> at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1610)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2377)
> ... 5 more



[jira] [Commented] (HBASE-9631) add murmur3 hash

2013-09-23 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775974#comment-13775974
 ] 

Liang Xie commented on HBASE-9631:
--

Any comments?

> add murmur3 hash
> 
>
> Key: HBASE-9631
> URL: https://issues.apache.org/jira/browse/HBASE-9631
> Project: HBase
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 0.98.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HBase-9631.txt
>
>
> MurmurHash3 is the successor to MurmurHash2. It comes in 3 variants - a 
> 32-bit version that targets low latency for hash table use and two 128-bit 
> versions for generating unique identifiers for large blocks of data, one each 
> for x86 and x64 platforms.
> Several open source projects, such as Cassandra and Mahout, have already 
> added murmur3.
> This patch ports the murmur3 implementation from MAHOUT-862. For 
> compatibility, the default hash algorithm (murmur2) is left unchanged.
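For reference, the 32-bit (x86) MurmurHash3 variant can be sketched as below. 
This is a generic illustration of the algorithm, not the Mahout port attached 
to this issue:

```java
// Sketch of MurmurHash3 x86_32: 4-byte little-endian blocks, a short tail,
// and a finalization mix that forces avalanche of the remaining bits.
class Murmur3Demo {
    static int murmur3_32(byte[] data, int seed) {
        final int c1 = 0xcc9e2d51, c2 = 0x1b873593;
        int h = seed;
        int i = 0;
        // Body: process full 4-byte little-endian blocks.
        for (; i + 4 <= data.length; i += 4) {
            int k = (data[i] & 0xff) | ((data[i + 1] & 0xff) << 8)
                  | ((data[i + 2] & 0xff) << 16) | ((data[i + 3] & 0xff) << 24);
            k *= c1; k = Integer.rotateLeft(k, 15); k *= c2;
            h ^= k; h = Integer.rotateLeft(h, 13); h = h * 5 + 0xe6546b64;
        }
        // Tail: up to 3 trailing bytes.
        int k = 0;
        switch (data.length & 3) {
            case 3: k ^= (data[i + 2] & 0xff) << 16; // fall through
            case 2: k ^= (data[i + 1] & 0xff) << 8;  // fall through
            case 1: k ^= (data[i] & 0xff);
                    k *= c1; k = Integer.rotateLeft(k, 15); k *= c2; h ^= k;
        }
        // Finalization mix.
        h ^= data.length;
        h ^= h >>> 16; h *= 0x85ebca6b;
        h ^= h >>> 13; h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    public static void main(String[] args) {
        // Empty input with seed 0 hashes to 0 in murmur3 x86_32.
        System.out.println(murmur3_32(new byte[0], 0));
    }
}
```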



[jira] [Commented] (HBASE-8870) Store.needsCompaction() should include minFilesToCompact

2013-09-23 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775973#comment-13775973
 ] 

Liang Xie commented on HBASE-8870:
--

Thanks very much [~sershe]!

> Store.needsCompaction() should include minFilesToCompact
> 
>
> Key: HBASE-8870
> URL: https://issues.apache.org/jira/browse/HBASE-8870
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 0.95.1
>Reporter: Liang Xie
>Assignee: Liang Xie
>Priority: Minor
> Attachments: HBASE-8870.txt, HBASE-8870.txt, HBase-8870-v2.txt
>
>
> read here:
> {code}
>   public boolean needsCompaction() {
> return (storefiles.size() - filesCompacting.size()) > minFilesToCompact;
>   }
> {code}
> imho, it should be 
> {code}
>   public boolean needsCompaction() {
> return (storefiles.size() - filesCompacting.size()) >= minFilesToCompact;
>   }
> {code}
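The difference only shows at the boundary: with `>`, a store holding exactly 
minFilesToCompact eligible files reports that no compaction is needed. A 
standalone illustration (hypothetical helper methods, not the HBase Store 
class):

```java
// Demonstrates the off-by-one at the threshold between ">" and ">=".
class NeedsCompactionDemo {
    static boolean needsCompactionStrict(int storefiles, int compacting, int minFiles) {
        return (storefiles - compacting) > minFiles;   // current behavior
    }

    static boolean needsCompactionInclusive(int storefiles, int compacting, int minFiles) {
        return (storefiles - compacting) >= minFiles;  // proposed behavior
    }

    public static void main(String[] args) {
        // Exactly minFilesToCompact (3) eligible files:
        System.out.println(needsCompactionStrict(3, 0, 3));     // compaction skipped
        System.out.println(needsCompactionInclusive(3, 0, 3));  // compaction triggered
    }
}
```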



[jira] [Commented] (HBASE-9634) HBase Table few regions are not getting recovered from the 'Transition'/'OFFLINE state'

2013-09-23 Thread shankarlingayya (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775963#comment-13775963
 ] 

shankarlingayya commented on HBASE-9634:


How many ZooKeeper instances do you have? Do you kill it/them?
===> 1 ZooKeeper instance; it is not killed at all and is running normally

What kind of kill is it? Unplug, kill -9, kill -15?
===> used 'hbase-daemon.sh stop regionserver'

What's the replication factor, and do you kill the datanode(s)?
===> Replication factor is 3; no datanode is killed, and they are running normally

After step 5, do you flush the table?
===> No flush is done; the data is added to HBase successfully

What are the logs of the region server that is failing to open the region?
===> We added a huge number of records, but only the region below is in 
transition; all the other regions are fine.

2013-09-23 18:28:06,610 INFO 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Opening of 
region {NAME => 't1,row507465,1379937224590.2d9fad2aee78103f928d8c7fe16ba6cd.', 
STARTKEY => 'row507465', ENDKEY => 'row508987', ENCODED => 
2d9fad2aee78103f928d8c7fe16ba6cd,} failed, marking as FAILED_OPEN in ZK

2013-09-23 18:46:12,160 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: 
Instantiated t1,row507465,1379937224590.2d9fad2aee78103f928d8c7fe16ba6cd.
2013-09-23 18:46:12,160 ERROR 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open of 
region=t1,row507465,1379937224590.2d9fad2aee78103f928d8c7fe16ba6cd., starting 
to roll back the global memstore size.




> HBase Table few regions are not getting recovered from the 
> 'Transition'/'OFFLINE state' 
> 
>
> Key: HBASE-9634
> URL: https://issues.apache.org/jira/browse/HBASE-9634
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 0.94.11
> Environment: SuSE11
>Reporter: shankarlingayya
>
> {noformat}
> HBase Table few regions are not getting recovered from the 
> 'Transition'/'OFFLINE state'
> Test Procedure:
> 1. Setup Non HA Hadoop Cluster with two nodes (Node1-XX.XX.XX.XX,  
> Node2-YY.YY.YY.YY)
> 2. Install Zookeeper & HRegionServer in Node-1
> 3. Install HMaster & HRegionServer in Node-2
> 4. From Node2 create HBase Table ( table name 't1' with one column family 
> 'cf1' )
> 5. Perform addrecord 99649 rows 
> 6. Perform kill and restart of Node1 Region Server & Node2 Region Server in a 
> loop for 10-20 times
> 2013-09-23 18:28:06,610 INFO 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Opening of 
> region {NAME => 
> 't1,row507465,1379937224590.2d9fad2aee78103f928d8c7fe16ba6cd.', STARTKEY => 
> 'row507465', ENDKEY => 'row508987', ENCODED => 
> 2d9fad2aee78103f928d8c7fe16ba6cd,} failed, marking as FAILED_OPEN in ZK
> 2013-09-23 18:46:12,160 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: 
> Instantiated t1,row507465,1379937224590.2d9fad2aee78103f928d8c7fe16ba6cd.
> 2013-09-23 18:46:12,160 ERROR 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open 
> of region=t1,row507465,1379937224590.2d9fad2aee78103f928d8c7fe16ba6cd., 
> starting to roll back the global memstore size.
> {noformat}  



[jira] [Commented] (HBASE-9635) HBase Table regions are not getting re-assigned to the new region server when it comes up (when the existing region server not able to handle the load)

2013-09-23 Thread shankarlingayya (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775959#comment-13775959
 ] 

shankarlingayya commented on HBASE-9635:


RegionServer1's file descriptors got exhausted, but RegionServer2 has enough 
file descriptors. RegionServer1 should then communicate with the HMaster to 
reassign the failed-open regions to the new RegionServer2, which is not 
happening in the scenario above.

Only after about 30 minutes do the regions get reassigned to RegionServer2.

> HBase Table regions are not getting re-assigned to the new region server when 
> it comes up (when the existing region server not able to handle the load) 
> 
>
> Key: HBASE-9635
> URL: https://issues.apache.org/jira/browse/HBASE-9635
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 0.94.11
> Environment: SuSE11
>Reporter: shankarlingayya
>
> {noformat}
> HBase Table regions are not getting assigned to the new region server for a 
> period of 30 minutes (when the existing region server not able to handle the 
> load)
> Procedure:
> 1. Setup Non HA Hadoop Cluster with two nodes (Node1-XX.XX.XX.XX,  
> Node2-YY.YY.YY.YY)
> 2. Install Zookeeper & HRegionServer in Node-1
> 3. Install HMaster & HRegionServer in Node-2
> 4. From Node2 create HBase Table ( table name 't1' with one column family 
> 'cf1' )
> 5. Perform addrecord 99649 rows 
> 6. kill both the node Region Server and limit the Node1 Region Server FD to 
> 600
> 7. Start only the Node1 Region server ==> so that FD exhaust can happen in 
> Node1 Region Server
> 8. After some 5-10 minuites start the Node2 Region Server
> ===> Huge number of regions of table 't1' are in OPENING state, which are not 
> getting re assigned to the Node2 region server which is free. 
> ===> When the new region server comes up then the master should detect and 
> allocate the open failed regions to the region server (here it is staying the 
> OPENINING state for 30 minutes which will have huge impcat user app which 
> makes use of this table)
> 2013-09-23 18:46:12,160 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: 
> Instantiated t1,row507465,1379937224590.2d9fad2aee78103f928d8c7fe16ba6cd.
> 2013-09-23 18:46:12,160 ERROR 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open 
> of region=t1,row507465,1379937224590.2d9fad2aee78103f928d8c7fe16ba6cd., 
> starting to roll back the global memstore size.
> 2013-09-23 18:50:55,284 WARN org.apache.hadoop.hdfs.LeaseRenewer: Failed to 
> renew lease for 
> [DFSClient_hb_rs_HOST-XX.XX.XX.XX,61020,1379940823286_-641204614_48] for 309 
> seconds.  Will retry shortly ...
> java.io.IOException: Failed on local exception: java.net.SocketException: Too 
> many open files; Host Details : local host is: 
> "HOST-XX.XX.XX.XX/XX.XX.XX.XX"; destination host is: "HOST-XX.XX.XX.XX":8020;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
> at org.apache.hadoop.ipc.Client.call(Client.java:1351)
> at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> at $Proxy13.renewLease(Unknown Source)
> at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:188)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at $Proxy13.renewLease(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:522)
> at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:679)
> at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
> at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
> at 
> org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
> at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.net.SocketException: Too many open files
> at sun.nio.ch.Net.socket0(Native Method)
> at sun.nio.ch.Net.socket(Net.java:97)
> at sun.nio.ch.SocketChannelImpl.(SocketChannelImpl.java:84)
> at 
> sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>   

[jira] [Commented] (HBASE-9511) LZ4 codec retrieval executes redundant code

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775958#comment-13775958
 ] 

Hudson commented on HBASE-9511:
---

SUCCESS: Integrated in HBase-TRUNK #4553 (See 
[https://builds.apache.org/job/HBase-TRUNK/4553/])
HBASE-9511 LZ4 codec retrieval executes redundant code (ndimiduk: rev 1525717)
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java


> LZ4 codec retrieval executes redundant code
> ---
>
> Key: HBASE-9511
> URL: https://issues.apache.org/jira/browse/HBASE-9511
> Project: HBase
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBASE-9511.00.patch
>
>
> the LZ4 implementation of {{Compression.Algorithm#getCodec(Configuration)}} 
> makes an extra call to {{buildCodec}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7680) implement compaction policy for stripe compactions

2013-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775955#comment-13775955
 ] 

Hadoop QA commented on HBASE-7680:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12604678/HBASE-7680-latest-with-dependencies.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 16 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7344//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7344//console

This message is automatically generated.

> implement compaction policy for stripe compactions
> --
>
> Key: HBASE-7680
> URL: https://issues.apache.org/jira/browse/HBASE-7680
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 0.98.0
>
> Attachments: HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, HBASE-7680-v0.patch, 
> HBASE-7680-v10.patch, HBASE-7680-v10.patch, HBASE-7680-v11.patch, 
> HBASE-7680-v12.patch, HBASE-7680-v13.patch, HBASE-7680-v13.patch, 
> HBASE-7680-v14.patch, HBASE-7680-v15.patch, HBASE-7680-v16.patch, 
> HBASE-7680-v1.patch, HBASE-7680-v2.patch, HBASE-7680-v3.patch, 
> HBASE-7680-v4.patch, HBASE-7680-v5.patch, HBASE-7680-v6.patch, 
> HBASE-7680-v7.patch, HBASE-7680-v8.patch, HBASE-7680-v9.patch
>
>
> Bringing into 0.95.2 so gets some consideration

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4811) Support reverse Scan

2013-09-23 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775951#comment-13775951
 ] 

Liang Xie commented on HBASE-4811:
--

seems no progress ?

> Support reverse Scan
> 
>
> Key: HBASE-4811
> URL: https://issues.apache.org/jira/browse/HBASE-4811
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Affects Versions: 0.20.6, 0.94.7
>Reporter: John Carrino
>Assignee: chunhui shen
> Fix For: 0.98.0
>
> Attachments: 4811-0.94-v3.txt, 4811-trunk-v10.txt, 
> 4811-trunk-v5.patch, HBase-4811-0.94.3modified.txt, HBase-4811-0.94-v2.txt, 
> hbase-4811-trunkv11.patch, hbase-4811-trunkv12.patch, 
> hbase-4811-trunkv13.patch, hbase-4811-trunkv14.patch, 
> hbase-4811-trunkv15.patch, hbase-4811-trunkv16.patch, 
> hbase-4811-trunkv17.patch, hbase-4811-trunkv18.patch, 
> hbase-4811-trunkv19.patch, hbase-4811-trunkv1.patch, 
> hbase-4811-trunkv20.patch, hbase-4811-trunkv4.patch, 
> hbase-4811-trunkv6.patch, hbase-4811-trunkv7.patch, hbase-4811-trunkv8.patch, 
> hbase-4811-trunkv9.patch
>
>
> Reversed scan means scan the rows backward. 
> And StartRow bigger than StopRow in a reversed scan.
> For example, for the following rows:
> aaa/c1:q1/value1
> aaa/c1:q2/value2
> bbb/c1:q1/value1
> bbb/c1:q2/value2
> ccc/c1:q1/value1
> ccc/c1:q2/value2
> ddd/c1:q1/value1
> ddd/c1:q2/value2
> eee/c1:q1/value1
> eee/c1:q2/value2
> you could do a reversed scan from 'ddd' to 'bbb'(exclude) like this:
> Scan scan = new Scan();
> scan.setStartRow('ddd');
> scan.setStopRow('bbb');
> scan.setReversed(true);
> for(Result result:htable.getScanner(scan)){
>  System.out.println(result);
> }
> Aslo you could do the reversed scan with shell like this:
> hbase> scan 'table',{REVERSED => true,STARTROW=>'ddd', STOPROW=>'bbb'}
> And the output is:
> ddd/c1:q1/value1
> ddd/c1:q2/value2
> ccc/c1:q1/value1
> ccc/c1:q2/value2
> NOTE: when setting reversed as true for a client scan, you must set the start 
> row, else will throw exception. Through {@link 
> Scan#createBiggestByteArray(int)},you could get a big enough byte array as 
> the start row
> All the documentation I find about HBase says that if you want forward and 
> reverse scans you should just build 2 tables and one be ascending and one 
> descending.  Is there a fundamental reason that HBase only supports forward 
> Scan?  It seems like a lot of extra space overhead and coding overhead (to 
> keep them in sync) to support 2 tables.  
> I am assuming this has been discussed before, but I can't find the 
> discussions anywhere about it or why it would be infeasible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9630) Add thread which detects JVM pauses like HADOOP's

2013-09-23 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HBASE-9630:
-

Status: Patch Available  (was: Open)

> Add thread which detects JVM pauses like HADOOP's
> -
>
> Key: HBASE-9630
> URL: https://issues.apache.org/jira/browse/HBASE-9630
> Project: HBase
>  Issue Type: New Feature
>  Components: regionserver
>Affects Versions: 0.98.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HBase-9630.txt, HBase-9630-v2.txt
>
>
> Todd adds daemon threads for dn&nn to indicate the VM or kernel caused pause 
> in application log, it's pretty handy for diagnose, i thought it's great to 
> have similar ability in HBase.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9630) Add thread which detects JVM pauses like HADOOP's

2013-09-23 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775942#comment-13775942
 ] 

Liang Xie commented on HBASE-9630:
--

How about patch v2?

> Add thread which detects JVM pauses like HADOOP's
> -
>
> Key: HBASE-9630
> URL: https://issues.apache.org/jira/browse/HBASE-9630
> Project: HBase
>  Issue Type: New Feature
>  Components: regionserver
>Affects Versions: 0.98.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HBase-9630.txt, HBase-9630-v2.txt
>
>
> Todd adds daemon threads for dn&nn to indicate the VM or kernel caused pause 
> in application log, it's pretty handy for diagnose, i thought it's great to 
> have similar ability in HBase.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9630) Add thread which detects JVM pauses like HADOOP's

2013-09-23 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HBASE-9630:
-

Attachment: HBase-9630-v2.txt

> Add thread which detects JVM pauses like HADOOP's
> -
>
> Key: HBASE-9630
> URL: https://issues.apache.org/jira/browse/HBASE-9630
> Project: HBase
>  Issue Type: New Feature
>  Components: regionserver
>Affects Versions: 0.98.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HBase-9630.txt, HBase-9630-v2.txt
>
>
> Todd adds daemon threads for dn&nn to indicate the VM or kernel caused pause 
> in application log, it's pretty handy for diagnose, i thought it's great to 
> have similar ability in HBase.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9642) AM ZK Workers stuck doing 100% CPU on HashMap.put

2013-09-23 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775938#comment-13775938
 ] 

Jimmy Xiang commented on HBASE-9642:


I think we should remove the maps and fix the tests.

> AM ZK Workers stuck doing 100% CPU on HashMap.put
> -
>
> Key: HBASE-9642
> URL: https://issues.apache.org/jira/browse/HBASE-9642
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.96.0
>Reporter: Jean-Daniel Cryans
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
>
> I just noticed on my test cluster that my master is using all my CPUs even 
> though it's completely idle. 5 threads are doing this:
> {noformat}
> "AM.ZK.Worker-pool2-t34" daemon prio=10 tid=0x7f68ac176800 nid=0x5251 
> runnable [0x7f688cc83000]
>java.lang.Thread.State: RUNNABLE
>   at java.util.HashMap.put(HashMap.java:374)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.handleRegion(AssignmentManager.java:954)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager$6.run(AssignmentManager.java:1419)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager$3.run(AssignmentManager.java:1247)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
> {noformat}
> Looking at the code, I see HBASE-9095 introduced two HashMaps *for tests 
> only* but they end up being used concurrently in the AM _and_ are never 
> cleaned up. It seems to me that any master running since that patch was 
> committed has a time bomb in it.
> I'm marking this as a blocker. [~devaraj] and [~jxiang], you guys wanna take 
> a look at this?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9639) SecureBulkLoad dispatches file load requests to all Regions

2013-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775937#comment-13775937
 ] 

Hadoop QA commented on HBASE-9639:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12604661/HBASE-9639.00.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7345//console

This message is automatically generated.

> SecureBulkLoad dispatches file load requests to all Regions
> ---
>
> Key: HBASE-9639
> URL: https://issues.apache.org/jira/browse/HBASE-9639
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Coprocessors
>Affects Versions: 0.95.2
> Environment: Hadoop2, Kerberos 
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBASE-9639.00.patch
>
>
> When running a bulk load on a secure environment and loading data into the 
> first region of a table, the request to load the HFile set is dispatched to 
> all Regions for the table. This is reproduced consistently by running 
> IntegrationTestBulkLoad on a secure cluster. The load fails with an exception 
> that looks like:
> {noformat}
> 2013-08-30 07:37:22,993 INFO  [main] mapreduce.LoadIncrementalHFiles: Split 
> occured while grouping HFiles, retry attempt 1 with 3 files remaining to 
> group or split
> 2013-08-30 07:37:22,999 ERROR [main] mapreduce.LoadIncrementalHFiles: 
> IOException during splitting
> java.util.concurrent.ExecutionException: java.io.FileNotFoundException: File 
> does not exist: 
> /user/hbase/test-data/c45ddfe9-ee30-4d32-8042-928db12b1cee/IntegrationTestBulkLoad-0/L/bf41ea13997b4e228d05e67ba7b1b686
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:51)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1489)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1438)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1418)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1392)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:438)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:269)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59566)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplitPhase(LoadIncrementalHFiles.java:403)
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:284)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.runLinkedListMRJob(IntegrationTestBulkLoad.java:200)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.testBulkLoad(IntegrationTestBulkLoad.java:133)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7967) implement compactor for stripe compactions

2013-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775936#comment-13775936
 ] 

Hadoop QA commented on HBASE-7967:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12604685/HBASE-7967-latest-with-dependencies.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 18 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 7 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7343//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7343//console

This message is automatically generated.

> implement compactor for stripe compactions
> --
>
> Key: HBASE-7967
> URL: https://issues.apache.org/jira/browse/HBASE-7967
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-7967-latest-with-dependencies.patch, 
> HBASE-7967-latest-with-dependencies.patch, 
> HBASE-7967-latest-with-dependencies.patch, 
> HBASE-7967-latest-with-dependencies.patch, 
> HBASE-7967-latest-with-dependencies.patch, 
> HBASE-7967-latest-with-dependencies.patch, 
> HBASE-7967-latest-with-dependencies.patch, HBASE-7967-v0.patch, 
> HBASE-7967-v10.patch, HBASE-7967-v11.patch, HBASE-7967-v1.patch, 
> HBASE-7967-v2.patch, HBASE-7967-v3.patch, HBASE-7967-v4.patch, 
> HBASE-7967-v5.patch, HBASE-7967-v6.patch, HBASE-7967-v7.patch, 
> HBASE-7967-v7.patch, HBASE-7967-v7.patch, HBASE-7967-v8.patch, 
> HBASE-7967-v9.patch
>
>
> Compactor needs to be implemented. See details in parent and blocking jira.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9630) Add thread which detects JVM pauses like HADOOP's

2013-09-23 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775922#comment-13775922
 ] 

Liang Xie commented on HBASE-9630:
--

Thanks [~ndimiduk], now it links with the original hadoop jira:)
it seems HADOOP-9618 is in hadoop trunk only, so to me, it's not easy to use 
the hadoop's directly, probably need reflection...

will upload another patch according to [~liochon] and [~ndimiduk]'s comments.

> Add thread which detects JVM pauses like HADOOP's
> -
>
> Key: HBASE-9630
> URL: https://issues.apache.org/jira/browse/HBASE-9630
> Project: HBase
>  Issue Type: New Feature
>  Components: regionserver
>Affects Versions: 0.98.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Attachments: HBase-9630.txt
>
>
> Todd adds daemon threads for dn&nn to indicate the VM or kernel caused pause 
> in application log, it's pretty handy for diagnose, i thought it's great to 
> have similar ability in HBase.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9583) add document for getShortMidpointKey

2013-09-23 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775923#comment-13775923
 ] 

Liang Xie commented on HBASE-9583:
--

sound good to me, let me try it today:)

> add document for getShortMidpointKey
> 
>
> Key: HBASE-9583
> URL: https://issues.apache.org/jira/browse/HBASE-9583
> Project: HBase
>  Issue Type: Task
>  Components: HFile
>Affects Versions: 0.98.0
>Reporter: Liang Xie
>Assignee: Liang Xie
>
> add the faked key to documentation http://hbase.apache.org/book.html#hfilev2

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9511) LZ4 codec retrieval executes redundant code

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775914#comment-13775914
 ] 

Hudson commented on HBASE-9511:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #758 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/758/])
HBASE-9511 LZ4 codec retrieval executes redundant code (ndimiduk: rev 1525717)
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java


> LZ4 codec retrieval executes redundant code
> ---
>
> Key: HBASE-9511
> URL: https://issues.apache.org/jira/browse/HBASE-9511
> Project: HBase
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBASE-9511.00.patch
>
>
> the LZ4 implementation of {{Compression.Algorithm#getCodec(Configuration)}} 
> makes an extra call to {{buildCodec}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9420) Math.max() on syncedTillHere lacks synchronization

2013-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775911#comment-13775911
 ] 

Hadoop QA commented on HBASE-9420:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12604645/9420-v2.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7342//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7342//console

This message is automatically generated.

> Math.max() on syncedTillHere lacks synchronization
> --
>
> Key: HBASE-9420
> URL: https://issues.apache.org/jira/browse/HBASE-9420
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Trivial
> Fix For: 0.98.0
>
> Attachments: 9420-v1.txt, 9420-v2.txt
>
>
> In FSHlog#syncer(), around line 1080:
> {code}
>   this.syncedTillHere = Math.max(this.syncedTillHere, doneUpto);
> {code}
> Assignment to syncedTillHere after computing max value is not protected by 
> proper synchronization.
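One lock-free way to make the max-update atomic is a compare-and-set loop on an AtomicLong; a minimal sketch under the assumption that syncedTillHere only ever needs to move forward (field and method names here mirror the snippet above but the class itself is hypothetical, not the actual FSHLog code):

```java
import java.util.concurrent.atomic.AtomicLong;

public class SyncMark {
    // High-water mark of synced edits. AtomicLong so the
    // read-compute-write sequence below cannot lose a concurrent
    // update the way an unsynchronized "x = Math.max(x, v)" can.
    private final AtomicLong syncedTillHere = new AtomicLong(0);

    /** Atomically raise the mark to doneUpto if it is higher. */
    public long advanceTo(long doneUpto) {
        long cur = syncedTillHere.get();
        while (doneUpto > cur) {
            if (syncedTillHere.compareAndSet(cur, doneUpto)) {
                return doneUpto;
            }
            cur = syncedTillHere.get(); // lost the race; re-read and retry
        }
        return cur; // another thread already moved the mark past doneUpto
    }

    public long get() {
        return syncedTillHere.get();
    }
}
```

The CAS loop retries only when another thread interleaves, so the common uncontended path costs a single atomic read plus one compareAndSet.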

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9511) LZ4 codec retrieval executes redundant code

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775895#comment-13775895
 ] 

Hudson commented on HBASE-9511:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #55 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/55/])
HBASE-9511 LZ4 codec retrieval executes redundant code (ndimiduk: rev 1525718)
* 
/hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java


> LZ4 codec retrieval executes redundant code
> ---
>
> Key: HBASE-9511
> URL: https://issues.apache.org/jira/browse/HBASE-9511
> Project: HBase
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBASE-9511.00.patch
>
>
> the LZ4 implementation of {{Compression.Algorithm#getCodec(Configuration)}} 
> makes an extra call to {{buildCodec}}.
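The redundant construction can be avoided with guarded lazy initialization, so buildCodec() runs at most once; a minimal self-contained sketch (class and field names hypothetical, not the actual Compression.Algorithm code):

```java
public class LazyCodecHolder {
    private volatile Object codec; // built at most once, published safely

    /** Stand-in for the expensive codec construction. */
    private Object buildCodec() {
        return new Object();
    }

    /** Double-checked lazy init: buildCodec() only runs on first use. */
    public Object getCodec() {
        Object c = codec;
        if (c == null) {
            synchronized (this) {
                c = codec;
                if (c == null) {
                    codec = c = buildCodec();
                }
            }
        }
        return c;
    }
}
```

The volatile field is what makes the double-checked pattern correct on the JVM; without it a second thread could observe a partially constructed codec.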

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9511) LZ4 codec retrieval executes redundant code

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775894#comment-13775894
 ] 

Hudson commented on HBASE-9511:
---

FAILURE: Integrated in hbase-0.96 #88 (See 
[https://builds.apache.org/job/hbase-0.96/88/])
HBASE-9511 LZ4 codec retrieval executes redundant code (ndimiduk: rev 1525718)
* 
/hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java


> LZ4 codec retrieval executes redundant code
> ---
>
> Key: HBASE-9511
> URL: https://issues.apache.org/jira/browse/HBASE-9511
> Project: HBase
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBASE-9511.00.patch
>
>
> the LZ4 implementation of {{Compression.Algorithm#getCodec(Configuration)}} 
> makes an extra call to {{buildCodec}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9642) AM ZK Workers stuck doing 100% CPU on HashMap.put

2013-09-23 Thread Jean-Daniel Cryans (JIRA)
Jean-Daniel Cryans created HBASE-9642:
-

 Summary: AM ZK Workers stuck doing 100% CPU on HashMap.put
 Key: HBASE-9642
 URL: https://issues.apache.org/jira/browse/HBASE-9642
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Jean-Daniel Cryans
Priority: Blocker
 Fix For: 0.98.0, 0.96.0


I just noticed on my test cluster that my master is using all my CPUs even 
though it's completely idle. 5 threads are doing this:

{noformat}
"AM.ZK.Worker-pool2-t34" daemon prio=10 tid=0x7f68ac176800 nid=0x5251 
runnable [0x7f688cc83000]
   java.lang.Thread.State: RUNNABLE
at java.util.HashMap.put(HashMap.java:374)
at 
org.apache.hadoop.hbase.master.AssignmentManager.handleRegion(AssignmentManager.java:954)
at 
org.apache.hadoop.hbase.master.AssignmentManager$6.run(AssignmentManager.java:1419)
at 
org.apache.hadoop.hbase.master.AssignmentManager$3.run(AssignmentManager.java:1247)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
{noformat}

Looking at the code, I see HBASE-9095 introduced two HashMaps *for tests only*, 
but they end up being used concurrently in the AM _and_ are never cleaned up. 
It seems to me that any master running since that patch was committed has a 
time bomb in it.
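A plain HashMap mutated from several threads can corrupt its internal bucket table, which on these JDKs shows up as exactly this symptom: threads spinning forever inside HashMap.put. The usual remedy is a concurrent map (plus removing entries when they are no longer needed, since the maps above also leak); a self-contained sketch with hypothetical names, not the actual AssignmentManager code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SharedRegionState {
    // ConcurrentHashMap is safe for concurrent put(); a plain HashMap
    // here could leave AM.ZK.Worker threads looping at 100% CPU.
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    public void record(String region, long ts) {
        lastSeen.put(region, ts);
    }

    public void forget(String region) {
        lastSeen.remove(region); // avoid the never-cleaned-up leak
    }

    public int size() {
        return lastSeen.size();
    }

    public static void main(String[] args) throws InterruptedException {
        SharedRegionState state = new SharedRegionState();
        Thread[] workers = new Thread[5];
        for (int i = 0; i < workers.length; i++) {
            final int id = i;
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    state.record("region-" + id + "-" + j, j);
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        System.out.println(state.size()); // 5000 distinct keys
    }
}
```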

I'm marking this as a blocker. [~devaraj] and [~jxiang], you guys wanna take a 
look at this?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8598) enhance multithreadedaction/reader/writer to test better

2013-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775878#comment-13775878
 ] 

Hadoop QA commented on HBASE-8598:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12604696/HBASE-8598-v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7341//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7341//console

This message is automatically generated.

> enhance multithreadedaction/reader/writer to test better
> 
>
> Key: HBASE-8598
> URL: https://issues.apache.org/jira/browse/HBASE-8598
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-8598-v0.patch, HBASE-8598-v1.patch, 
> HBASE-8598-v2.patch, HBASE-8598-v3.patch
>
>
> To be able to test more varied scenarios, I am adding delete and overwrite 
> threads to the writer, a random read thread to the reader, and some 
> machine-oriented (csv) metric collection (QPS, histogram of all requests 
> during the tests), because grepping logs for such stuff is a PITA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9390) coprocessors observers are not called during a recovery with the new log replay algorithm

2013-09-23 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775839#comment-13775839
 ] 

Jeffrey Zhong commented on HBASE-9390:
--

The QA run on the v2 patch is clean. I'll commit the v2 patch tomorrow evening if 
there are no objections. Thanks.

> coprocessors observers are not called during a recovery with the new log 
> replay algorithm
> -
>
> Key: HBASE-9390
> URL: https://issues.apache.org/jira/browse/HBASE-9390
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, MTTR
>Affects Versions: 0.95.2
>Reporter: Nicolas Liochon
>Assignee: Jeffrey Zhong
> Attachments: copro.patch, hbase-9390-part2.patch, 
> hbase-9390-part2-v2.patch, hbase-9390.patch, hbase-9390-v2.patch
>
>
> See the patch to reproduce the issue: If we activate log replay we don't have 
> the events on WAL restore.
> Pinging [~jeffreyz], we discussed this offline.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9390) coprocessors observers are not called during a recovery with the new log replay algorithm

2013-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775835#comment-13775835
 ] 

Hadoop QA commented on HBASE-9390:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12604666/hbase-9390-part2-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 18 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7340//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7340//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7340//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7340//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7340//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7340//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7340//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7340//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7340//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7340//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7340//console

This message is automatically generated.

> coprocessors observers are not called during a recovery with the new log 
> replay algorithm
> -
>
> Key: HBASE-9390
> URL: https://issues.apache.org/jira/browse/HBASE-9390
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, MTTR
>Affects Versions: 0.95.2
>Reporter: Nicolas Liochon
>Assignee: Jeffrey Zhong
> Attachments: copro.patch, hbase-9390-part2.patch, 
> hbase-9390-part2-v2.patch, hbase-9390.patch, hbase-9390-v2.patch
>
>
> See the patch to reproduce the issue: If we activate log replay we don't have 
> the events on WAL restore.
> Pinging [~jeffreyz], we discussed this offline.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9610) TestThriftServer.testAll failing

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775828#comment-13775828
 ] 

Hudson commented on HBASE-9610:
---

FAILURE: Integrated in HBase-TRUNK #4552 (See 
[https://builds.apache.org/job/HBase-TRUNK/4552/])
HBASE-9610 TestThriftServer.testAll failing (stack: rev 1525679)
* 
/hbase/trunk/hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java


> TestThriftServer.testAll failing
> 
>
> Key: HBASE-9610
> URL: https://issues.apache.org/jira/browse/HBASE-9610
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Elliott Clark
> Attachments: 9610-debugging.txt, more_debugging.txt
>
>
> http://jenkins-public.iridiant.net/job/hbase-0.96-hadoop2/140/org.apache.hbase$hbase-thrift/testReport/junit/org.apache.hadoop.hbase.thrift/TestThriftServer/testAll/
> {code}
> java.lang.AssertionError: Metrics Counters should be equal expected:<2> but 
> was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
> {code}
> Here too:
> http://jenkins-public.iridiant.net/job/hbase-0.96/134/
> http://jenkins-public.iridiant.net/job/hbase-0.96-hadoop2/140/
> Mind taking a looksee [~eclark]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9533) List of dependency jars for MR jobs is hard-coded and does not include netty, breaking MRv1 jobs

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775795#comment-13775795
 ] 

Hudson commented on HBASE-9533:
---

FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #757 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/757/])
HBASE-9633  Partial reverse of HBASE-9533 (nkeywal: rev 1525648)
* /hbase/trunk/hbase-client/pom.xml
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* /hbase/trunk/pom.xml


> List of dependency jars for MR jobs is hard-coded and does not include netty, 
> breaking MRv1 jobs
> 
>
> Key: HBASE-9533
> URL: https://issues.apache.org/jira/browse/HBASE-9533
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 0.95.2, 0.96.1
>Reporter: Aleksandr Shulman
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9533.txt, 9533v3.txt, 
> failed_mrv1_rowcounter_tt_taskoutput.out
>
>
> Observed behavior:
> Against trunk, using MRv1 with hadoop 1.0.4, r1393290, I am able to run MRv1 
> jobs (e.g. pi 2 4).
> However, when I use it to run MR over HBase jobs, they fail with the stack 
> trace below.
> From the trace, the issue seems to be that it cannot find a class that the 
> netty jar contains. This would make sense, given that the dependency jars 
> that we use for the MapReduce job are hard-coded, and that the netty jar is 
> not one of them.
> https://github.com/apache/hbase/blob/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java#L519
> Strangely, this is only an issue in trunk, not 0.95, even though the code 
> hasn't changed.
> Command:
> {code}/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter 
> sampletable{code}
> TT logs (attached)
> Output from console running job:
> {code}13/09/13 16:02:58 INFO mapred.JobClient: Task Id : 
> attempt_201309131601_0002_m_00_2, Status : FAILED
> java.io.IOException: Cannot create a record reader because of a previous 
> error. Please look at the previous logs lines from the task's full log for 
> more details.
>   at 
> org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.createRecordReader(TableInputFormatBase.java:119)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:489)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:731)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
>   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>   at org.apache.hadoop.mapred.Child.main(Child.java:249)
> 13/09/13 16:03:09 INFO mapred.JobClient: Job complete: job_201309131601_0002
> 13/09/13 16:03:09 INFO mapred.JobClient: Counters: 7
> 13/09/13 16:03:09 INFO mapred.JobClient:   Job Counters 
> 13/09/13 16:03:09 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=29913
> 13/09/13 16:03:09 INFO mapred.JobClient: Total time spent by all reduces 
> waiting after reserving slots (ms)=0
> 13/09/13 16:03:09 INFO mapred.JobClient: Total time spent by all maps 
> waiting after reserving slots (ms)=0
> 13/09/13 16:03:09 INFO mapred.JobClient: Launched map tasks=4
> 13/09/13 16:03:09 INFO mapred.JobClient: Data-local map tasks=4
> 13/09/13 16:03:09 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
> 13/09/13 16:03:09 INFO mapred.JobClient: Failed map tasks=1{code}
> Expected behavior:
> As a stopgap, the netty jar should be included in that list. More generally, 
> there should be a more elegant way to include the jars that are needed.
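One such "more elegant way" is to derive jar paths from marker classes at runtime instead of maintaining a hard-coded list: ask the classloader where each dependency class was actually loaded from. A generic sketch of the technique (hypothetical class, not the actual TableMapReduceUtil code):

```java
public class JarFinder {
    /** Resource path of a class, e.g. java.lang.String -> "java/lang/String.class". */
    static String classResourcePath(Class<?> clazz) {
        return clazz.getName().replace('.', '/') + ".class";
    }

    /**
     * Locate the jar that provides a class by asking its classloader
     * for the .class resource. Returns the jar path, or null when the
     * class did not come from a jar (e.g. a classpath directory).
     */
    static String findContainingJar(Class<?> clazz) {
        String path = classResourcePath(clazz);
        ClassLoader cl = clazz.getClassLoader();
        java.net.URL url = (cl != null)
                ? cl.getResource(path)
                : ClassLoader.getSystemResource(path);
        if (url != null && "jar".equals(url.getProtocol())) {
            String p = url.getPath();
            return p.substring(0, p.indexOf('!')); // strip "!/pkg/Cls.class"
        }
        return null;
    }
}
```

With this, shipping the netty jar to the tasktrackers is a matter of passing one netty class as a marker, and a newly added dependency cannot be silently missing from a static list.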

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9633) Partial reverse of HBASE-9533

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775793#comment-13775793
 ] 

Hudson commented on HBASE-9633:
---

FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #757 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/757/])
HBASE-9633  Partial reverse of HBASE-9533 (nkeywal: rev 1525648)
* /hbase/trunk/hbase-client/pom.xml
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* /hbase/trunk/pom.xml


> Partial reverse of HBASE-9533
> -
>
> Key: HBASE-9633
> URL: https://issues.apache.org/jira/browse/HBASE-9633
> Project: HBase
>  Issue Type: Bug
>  Components: build, Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9633.v1.patch
>
>
> I don't understand the solution in HBASE-9533.
> In netty 3.3, they changed the group id (if I understand well, for legal 
> reasons; see https://github.com/netty/netty/issues/103). But they have not 
> changed the Java package name: it's still org.jboss.netty. So we should not 
> have to remove our dependency on netty 3.6. 
> So:
> - this comment is wrong imho: the explicit load is not related to the package 
> name but to how mapreduce loading works.
> {code}
> +  // This is ugly.  Our zk3.4.5 depends on the org.jboss.netty, not 
> hadoops io.netty
> +  // so need to load it up explicitly while on 3.4.5 zk
> {code}
> - We do use Netty (for the multicast message), so now we have a missing 
> dependency, as maven says:
> {code}
>  [INFO] — maven-dependency-plugin:2.1:analyze (default-cli) @ hbase-client —
> [WARNING] Used undeclared dependencies found:
> [WARNING] org.jboss.netty:netty:jar:3.2.2.Final:compile
> {code}
> So I propose a partial reverse. [~saint@gmail.com], [@Aleksandr Shulman] 
> would it work for you?
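The used-undeclared warning above goes away once the dependency is declared explicitly in the client pom; a sketch using the coordinates maven itself reported (the exact version to pin is a judgment call for the revert):

```xml
<dependency>
  <groupId>org.jboss.netty</groupId>
  <artifactId>netty</artifactId>
  <version>3.2.2.Final</version>
</dependency>
```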

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9609) AsyncProcess doesn't increase all the counters when trying to limit the per region flow.

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775791#comment-13775791
 ] 

Hudson commented on HBASE-9609:
---

FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #757 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/757/])
HBASE-9609  AsyncProcess doesn't increase all the counters when trying to limit 
the per region flow. (nkeywal: rev 1525643)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java


> AsyncProcess doesn't increase all the counters when trying to limit the per 
> region flow.
> 
>
> Key: HBASE-9609
> URL: https://issues.apache.org/jira/browse/HBASE-9609
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9609.v1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9632) Put the shell in a maven sub module (hbase-shell) instead of hbase-server

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775794#comment-13775794
 ] 

Hudson commented on HBASE-9632:
---

FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #757 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/757/])
HBASE-9632 Put the shell in a maven sub module (hbase-shell) instead of 
hbase-server (nkeywal: rev 1525641)
* /hbase/trunk/bin/hbase
* /hbase/trunk/bin/hbase.cmd
* /hbase/trunk/hbase-assembly/src/main/assembly/components.xml
* /hbase/trunk/hbase-it/pom.xml
* /hbase/trunk/hbase-server/pom.xml
* /hbase/trunk/hbase-server/src/main/ruby/hbase.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/hbase.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/replication_admin.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/security.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/table.rb
* /hbase/trunk/hbase-server/src/main/ruby/irb/hirb.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/add_peer.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/alter.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/alter_async.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/alter_namespace.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/alter_status.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/assign.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/balance_switch.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/balancer.rb
* 
/hbase/trunk/hbase-server/src/main/ruby/shell/commands/catalogjanitor_enabled.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/catalogjanitor_run.rb
* 
/hbase/trunk/hbase-server/src/main/ruby/shell/commands/catalogjanitor_switch.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/clone_snapshot.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/close_region.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/compact.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/count.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/create.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/create_namespace.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/delete.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/delete_snapshot.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/deleteall.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/describe.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/describe_namespace.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/disable.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/disable_all.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/disable_peer.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/drop.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/drop_all.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/drop_namespace.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/enable.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/enable_all.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/enable_peer.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/exists.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/flush.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/get.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/get_counter.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/get_table.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/grant.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/hlog_roll.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/incr.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/is_disabled.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/is_enabled.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/list.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/list_namespace.rb
* 
/hbase/trunk/hbase-server/src/main/ruby/shell/commands/list_namespace_tables.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/list_peers.rb
* 
/hbase/trunk/hbase-server/src/main/ruby/shell/commands/list_replicated_tables.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/list_snapshots.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/major_compact.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/merge_region.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/move.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/put.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/remove_peer.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/rename_snapshot.rb
* /hbase/trunk/hbase-server/s

[jira] [Commented] (HBASE-9610) TestThriftServer.testAll failing

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775796#comment-13775796
 ] 

Hudson commented on HBASE-9610:
---

FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #757 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/757/])
HBASE-9610 TestThriftServer.testAll failing (stack: rev 1525679)
* 
/hbase/trunk/hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java


> TestThriftServer.testAll failing
> 
>
> Key: HBASE-9610
> URL: https://issues.apache.org/jira/browse/HBASE-9610
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Elliott Clark
> Attachments: 9610-debugging.txt, more_debugging.txt
>
>
> http://jenkins-public.iridiant.net/job/hbase-0.96-hadoop2/140/org.apache.hbase$hbase-thrift/testReport/junit/org.apache.hadoop.hbase.thrift/TestThriftServer/testAll/
> {code}
> java.lang.AssertionError: Metrics Counters should be equal expected:<2> but 
> was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
> {code}
> Here too:
> http://jenkins-public.iridiant.net/job/hbase-0.96/134/
> http://jenkins-public.iridiant.net/job/hbase-0.96-hadoop2/140/
> Mind taking a looksee [~eclark]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9590) TableEventHandler#reOpenAllRegions() should close the HTable instance

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775790#comment-13775790
 ] 

Hudson commented on HBASE-9590:
---

FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #757 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/757/])
HBASE-9590 TableEventHandler#reOpenAllRegions() should close the HTable 
instance (tedyu: rev 1525640)
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableEventHandler.java


> TableEventHandler#reOpenAllRegions() should close the HTable instance
> -
>
> Key: HBASE-9590
> URL: https://issues.apache.org/jira/browse/HBASE-9590
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Trivial
> Fix For: 0.98.0
>
> Attachments: 9590-v1.txt
>
>
> {code}
> HTable table = new HTable(masterServices.getConfiguration(), tableName);
> TreeMap<ServerName, List<HRegionInfo>> serverToRegions = Maps.newTreeMap();
> NavigableMap<HRegionInfo, ServerName> hriHserverMapping = table.getRegionLocations();
> {code}
> table should be closed.
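The leak above is the classic "open a Closeable, never close it" pattern. A minimal, self-contained sketch of the fix idea (the FakeTable class and its method are stand-ins invented for illustration, not HBase code) using try-with-resources, which guarantees close() even when the body throws:

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseDemo {
    static int openCount = 0;

    // Stand-in for HTable: a Closeable resource with a pretend RPC call.
    static class FakeTable implements Closeable {
        FakeTable() { openCount++; }
        void getRegionLocations() { /* pretend RPC */ }
        @Override public void close() { openCount--; }
    }

    public static void main(String[] args) throws IOException {
        try (FakeTable table = new FakeTable()) {
            table.getRegionLocations();
        } // close() runs here, even if the body had thrown
        System.out.println(openCount); // 0: no leaked handle
    }
}
```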

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9606) Apply small scan to meta scan where rowLimit is low

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775792#comment-13775792
 ] 

Hudson commented on HBASE-9606:
---

FAILURE: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #757 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/757/])
HBASE-9606 Apply small scan to meta scan where rowLimit is low (tedyu: rev 
1525634)
* /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java
* /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java


> Apply small scan to meta scan where rowLimit is low
> ---
>
> Key: HBASE-9606
> URL: https://issues.apache.org/jira/browse/HBASE-9606
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 9606-v2.txt, small-v3.txt
>
>
> HBASE-9488 added the feature for small scan where RPC calls are reduced.
> We can apply small scan to MetaScanner#metaScan() where rowLimit is low.
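The decision the patch describes is a simple threshold check: switch a meta scan into small-scan mode (one round trip instead of the open/next/close RPC sequence) when the caller only wants a few rows. A hedged sketch of that heuristic in isolation; the cutoff constant and method name below are invented for illustration, not taken from the patch:

```java
public class SmallScanHeuristic {
    // Hypothetical cutoff: below this row limit, a single-RPC small
    // scan is cheaper than a regular scanner.
    static final int SMALL_SCAN_ROW_LIMIT = 10;

    static boolean useSmallScan(int rowLimit) {
        return rowLimit > 0 && rowLimit <= SMALL_SCAN_ROW_LIMIT;
    }

    public static void main(String[] args) {
        System.out.println(useSmallScan(5));    // few rows: one round trip
        System.out.println(useSmallScan(1000)); // many rows: regular scanner
    }
}
```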



[jira] [Updated] (HBASE-8598) enhance multithreadedaction/reader/writer to test better

2013-09-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-8598:


Attachment: HBASE-8598-v3.patch

rebase... I will probably make HBASE-8000 independent of this to commit it 
sooner

> enhance multithreadedaction/reader/writer to test better
> 
>
> Key: HBASE-8598
> URL: https://issues.apache.org/jira/browse/HBASE-8598
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-8598-v0.patch, HBASE-8598-v1.patch, 
> HBASE-8598-v2.patch, HBASE-8598-v3.patch
>
>
> To be able to test more varied scenarios, I am adding delete and overwrite 
> threads to writer; adding random read thread to reader; and adding some 
> machine-oriented (csv) metric collection (QPS, histogram of all requests 
> during the tests) because grepping logs for such stuff is PITA.



[jira] [Commented] (HBASE-8870) Store.needsCompaction() should include minFilesToCompact

2013-09-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775745#comment-13775745
 ] 

Sergey Shelukhin commented on HBASE-8870:
-

Fell through the cracks somehow. Let me rebase if necessary and commit this week.

> Store.needsCompaction() should include minFilesToCompact
> 
>
> Key: HBASE-8870
> URL: https://issues.apache.org/jira/browse/HBASE-8870
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 0.95.1
>Reporter: Liang Xie
>Assignee: Liang Xie
>Priority: Minor
> Attachments: HBASE-8870.txt, HBASE-8870.txt, HBase-8870-v2.txt
>
>
> read here:
> {code}
>   public boolean needsCompaction() {
> return (storefiles.size() - filesCompacting.size()) > minFilesToCompact;
>   }
> {code}
> imho, it should be 
> {code}
>   public boolean needsCompaction() {
> return (storefiles.size() - filesCompacting.size()) >= minFilesToCompact;
>   }
> {code}
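The one-character difference above can be exercised in isolation. A minimal sketch (the method and parameter names merely echo the snippet; this is not HBase code): with minFilesToCompact = 3 and exactly 3 eligible files, the strict `>` reports no compaction needed while the proposed `>=` reports one.

```java
public class NeedsCompactionDemo {
    // Current behavior: strict inequality misses the boundary case.
    static boolean needsCompactionStrict(int storefiles, int compacting, int min) {
        return (storefiles - compacting) > min;
    }

    // Proposed fix: exactly 'min' eligible files already qualifies.
    static boolean needsCompactionInclusive(int storefiles, int compacting, int min) {
        return (storefiles - compacting) >= min;
    }

    public static void main(String[] args) {
        int min = 3;
        System.out.println(needsCompactionStrict(3, 0, min));    // false
        System.out.println(needsCompactionInclusive(3, 0, min)); // true
    }
}
```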



[jira] [Commented] (HBASE-9420) Math.max() on syncedTillHere lacks synchronization

2013-09-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775739#comment-13775739
 ] 

Enis Soztutar commented on HBASE-9420:
--

HBASE-8755 changes syncedTillHere as well, and it seems to synchronize on it 
before the update. I still have to spend some time to grok the whole patch, but 
if we can prove that that model is better, we might as well let that issue 
supersede this one. 

> Math.max() on syncedTillHere lacks synchronization
> --
>
> Key: HBASE-9420
> URL: https://issues.apache.org/jira/browse/HBASE-9420
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Trivial
> Fix For: 0.98.0
>
> Attachments: 9420-v1.txt, 9420-v2.txt
>
>
> In FSHlog#syncer(), around line 1080:
> {code}
>   this.syncedTillHere = Math.max(this.syncedTillHere, doneUpto);
> {code}
> Assignment to syncedTillHere after computing max value is not protected by 
> proper synchronization.
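The unsynchronized `this.syncedTillHere = Math.max(this.syncedTillHere, doneUpto)` is a read-modify-write, so a thread holding a stale value can overwrite a newer one. One common fix is a CAS loop on an AtomicLong that keeps the value monotonically non-decreasing without a lock; this is a sketch of that general technique, an assumption on my part, not necessarily what the attached patch does:

```java
import java.util.concurrent.atomic.AtomicLong;

public class MonotonicMax {
    static final AtomicLong syncedTillHere = new AtomicLong(0);

    // Advance the watermark only if doneUpto is ahead of it; retry on
    // contention, give up once another thread has moved it past us.
    static void advanceTo(long doneUpto) {
        long cur;
        while ((cur = syncedTillHere.get()) < doneUpto) {
            if (syncedTillHere.compareAndSet(cur, doneUpto)) break;
        }
    }

    public static void main(String[] args) {
        advanceTo(5);
        advanceTo(3); // a stale, smaller value must not win
        System.out.println(syncedTillHere.get()); // 5
    }
}
```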



[jira] [Commented] (HBASE-9610) TestThriftServer.testAll failing

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775738#comment-13775738
 ] 

Hudson commented on HBASE-9610:
---

SUCCESS: Integrated in hbase-0.96 #87 (See 
[https://builds.apache.org/job/hbase-0.96/87/])
HBASE-9610 TestThriftServer.testAll failing (stack: rev 1525680)
* 
/hbase/branches/0.96/hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java


> TestThriftServer.testAll failing
> 
>
> Key: HBASE-9610
> URL: https://issues.apache.org/jira/browse/HBASE-9610
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Elliott Clark
> Attachments: 9610-debugging.txt, more_debugging.txt
>
>
> http://jenkins-public.iridiant.net/job/hbase-0.96-hadoop2/140/org.apache.hbase$hbase-thrift/testReport/junit/org.apache.hadoop.hbase.thrift/TestThriftServer/testAll/
> {code}
> java.lang.AssertionError: Metrics Counters should be equal expected:<2> but 
> was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
> {code}
> Here too:
> http://jenkins-public.iridiant.net/job/hbase-0.96/134/
> http://jenkins-public.iridiant.net/job/hbase-0.96-hadoop2/140/
> Mind taking a looksee [~eclark]



[jira] [Updated] (HBASE-7967) implement compactor for stripe compactions

2013-09-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7967:


Attachment: HBASE-7967-latest-with-dependencies.patch
HBASE-7967-v11.patch

addressing latest CR feedback

> implement compactor for stripe compactions
> --
>
> Key: HBASE-7967
> URL: https://issues.apache.org/jira/browse/HBASE-7967
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HBASE-7967-latest-with-dependencies.patch, 
> HBASE-7967-latest-with-dependencies.patch, 
> HBASE-7967-latest-with-dependencies.patch, 
> HBASE-7967-latest-with-dependencies.patch, 
> HBASE-7967-latest-with-dependencies.patch, 
> HBASE-7967-latest-with-dependencies.patch, 
> HBASE-7967-latest-with-dependencies.patch, HBASE-7967-v0.patch, 
> HBASE-7967-v10.patch, HBASE-7967-v11.patch, HBASE-7967-v1.patch, 
> HBASE-7967-v2.patch, HBASE-7967-v3.patch, HBASE-7967-v4.patch, 
> HBASE-7967-v5.patch, HBASE-7967-v6.patch, HBASE-7967-v7.patch, 
> HBASE-7967-v7.patch, HBASE-7967-v7.patch, HBASE-7967-v8.patch, 
> HBASE-7967-v9.patch
>
>
> Compactor needs to be implemented. See details in parent and blocking jira.



[jira] [Updated] (HBASE-9511) LZ4 codec retrieval executes redundant code

2013-09-23 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9511:


   Resolution: Fixed
Fix Version/s: 0.96.1
   0.98.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to 0.96 and trunk. Thanks for the reviews.

> LZ4 codec retrieval executes redundant code
> ---
>
> Key: HBASE-9511
> URL: https://issues.apache.org/jira/browse/HBASE-9511
> Project: HBase
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBASE-9511.00.patch
>
>
> the LZ4 implementation of {{Compression.Algorithm#getCodec(Configuration)}} 
> makes an extra call to {{buildCodec}}.
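The shape of the bug is a lazy getter that populates its cache and then builds the value a second time anyway. A hypothetical sketch of that pattern and its fix; the class, field, and method names below are invented for illustration and do not mirror the actual Compression.Algorithm code:

```java
public class CodecCache {
    static int builds = 0;
    static Object codec;

    static Object buildCodec() { builds++; return new Object(); }

    // Redundant: fills the cache, then constructs again on every call.
    static Object getCodecRedundant() {
        if (codec == null) { codec = buildCodec(); }
        return buildCodec();
    }

    // Fixed: build once, return the cached instance afterwards.
    static Object getCodecFixed() {
        if (codec == null) { codec = buildCodec(); }
        return codec;
    }

    public static void main(String[] args) {
        getCodecRedundant();
        int afterRedundant = builds;        // 2 builds for a single call
        builds = 0; codec = null;
        getCodecFixed(); getCodecFixed();   // 1 build for two calls
        System.out.println(afterRedundant + " " + builds);
    }
}
```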



[jira] [Commented] (HBASE-9511) LZ4 codec retrieval executes redundant code

2013-09-23 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775722#comment-13775722
 ] 

Nick Dimiduk commented on HBASE-9511:
-

Deployed the patch on a small cluster, ran {{PerformanceEvaluation 
--compress=LZ4}}. Everything works as expected.

> LZ4 codec retrieval executes redundant code
> ---
>
> Key: HBASE-9511
> URL: https://issues.apache.org/jira/browse/HBASE-9511
> Project: HBase
>  Issue Type: Bug
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: HBASE-9511.00.patch
>
>
> the LZ4 implementation of {{Compression.Algorithm#getCodec(Configuration)}} 
> makes an extra call to {{buildCodec}}.



[jira] [Commented] (HBASE-9641) We should have a way to provide table level based ACL.

2013-09-23 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775707#comment-13775707
 ] 

Matteo Bertozzi commented on HBASE-9641:


AccessControlList already talks about groups, even if they are not really 
implemented.
Maybe that would be a more generic way to do this?

> We should have a way to provide table level based ACL.
> --
>
> Key: HBASE-9641
> URL: https://issues.apache.org/jira/browse/HBASE-9641
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Jean-Marc Spaggiari
>Priority: Minor
>
> Today we can grant rights to users based on the user / table / column family 
> / column qualifier. When there are thousands of users and you want to add a new 
> table, it takes a long time to add everyone back to the table.
> We should be able to provide a table based ACL. Something like "grant_table 
>   [  [  ]]" to give specific 
> rights to a table for ALL the users.



[jira] [Updated] (HBASE-7680) implement compaction policy for stripe compactions

2013-09-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7680:


Attachment: HBASE-7680-latest-with-dependencies.patch
HBASE-7680-v16.patch

Fix the comments, esp. in config, and merge the now-unnecessary separate base 
classes/tests with implementation classes/tests.

I think this should be the final patch, or very close

> implement compaction policy for stripe compactions
> --
>
> Key: HBASE-7680
> URL: https://issues.apache.org/jira/browse/HBASE-7680
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 0.98.0
>
> Attachments: HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, 
> HBASE-7680-latest-with-dependencies.patch, HBASE-7680-v0.patch, 
> HBASE-7680-v10.patch, HBASE-7680-v10.patch, HBASE-7680-v11.patch, 
> HBASE-7680-v12.patch, HBASE-7680-v13.patch, HBASE-7680-v13.patch, 
> HBASE-7680-v14.patch, HBASE-7680-v15.patch, HBASE-7680-v16.patch, 
> HBASE-7680-v1.patch, HBASE-7680-v2.patch, HBASE-7680-v3.patch, 
> HBASE-7680-v4.patch, HBASE-7680-v5.patch, HBASE-7680-v6.patch, 
> HBASE-7680-v7.patch, HBASE-7680-v8.patch, HBASE-7680-v9.patch
>
>
> Bringing into 0.95.2 so gets some consideration



[jira] [Commented] (HBASE-9639) SecureBulkLoad dispatches file load requests to all Regions

2013-09-23 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775702#comment-13775702
 ] 

Nick Dimiduk commented on HBASE-9639:
-

To answer a question asked by [~yuzhih...@gmail.com] offline, the log I pasted 
above "client.HTable: For key range ..." was from an additional log statement I 
added while debugging. In hindsight, I think it would be clearer to see the 
extraneous calls by enabling IPC trace logging.

> SecureBulkLoad dispatches file load requests to all Regions
> ---
>
> Key: HBASE-9639
> URL: https://issues.apache.org/jira/browse/HBASE-9639
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Coprocessors
>Affects Versions: 0.95.2
> Environment: Hadoop2, Kerberos 
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBASE-9639.00.patch
>
>
> When running a bulk load on a secure environment and loading data into the 
> first region of a table, the request to load the HFile set is dispatched to 
> all Regions for the table. This is reproduced consistently by running 
> IntegrationTestBulkLoad on a secure cluster. The load fails with an exception 
> that looks like:
> {noformat}
> 2013-08-30 07:37:22,993 INFO  [main] mapreduce.LoadIncrementalHFiles: Split 
> occured while grouping HFiles, retry attempt 1 with 3 files remaining to 
> group or split
> 2013-08-30 07:37:22,999 ERROR [main] mapreduce.LoadIncrementalHFiles: 
> IOException during splitting
> java.util.concurrent.ExecutionException: java.io.FileNotFoundException: File 
> does not exist: 
> /user/hbase/test-data/c45ddfe9-ee30-4d32-8042-928db12b1cee/IntegrationTestBulkLoad-0/L/bf41ea13997b4e228d05e67ba7b1b686
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:51)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1489)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1438)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1418)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1392)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:438)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:269)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59566)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplitPhase(LoadIncrementalHFiles.java:403)
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:284)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.runLinkedListMRJob(IntegrationTestBulkLoad.java:200)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.testBulkLoad(IntegrationTestBulkLoad.java:133)
> {noformat}



[jira] [Commented] (HBASE-9639) SecureBulkLoad dispatches file load requests to all Regions

2013-09-23 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775680#comment-13775680
 ] 

Nick Dimiduk commented on HBASE-9639:
-

Yes, I think the comparator is in fact correct. The problem appears to be that 
SecureBulkLoadClient#bulkLoadHFiles sends bulk load requests to all regions 
in the table rather than the single region it intended. This is because it 
passes in EMPTY_BYTE_ARRAY for both the start and end rows, which is also the 
invocation structure used to address the whole table.




[jira] [Commented] (HBASE-9639) SecureBulkLoad dispatches file load requests to all Regions

2013-09-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775675#comment-13775675
 ] 

Enis Soztutar commented on HBASE-9639:
--

bq. It looks like a comparator is broken for byte[0]. With a little extra 
logging, you see this come out of HTable#getStartKeysInRange
This looks correct to me. It should give out every start key for range empty to 
empty, right? 
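The convention the two comments are circling is that an empty byte[] start row means "before everything" and an empty end row means "after everything", so the range (empty, empty) selects every region, which is why passing EMPTY_BYTE_ARRAY twice fans the request out to the whole table. A self-contained sketch of that convention (simplified: Arrays.compare is signed where HBase compares bytes unsigned, which makes no difference for the ASCII keys used here):

```java
import java.util.Arrays;

public class EmptyRangeDemo {
    static final byte[] EMPTY = new byte[0];

    // Empty start/end act as open-ended bounds, as in HBase scans.
    static boolean inRange(byte[] key, byte[] start, byte[] end) {
        boolean afterStart = start.length == 0 || Arrays.compare(key, start) >= 0;
        boolean beforeEnd  = end.length == 0  || Arrays.compare(key, end) < 0;
        return afterStart && beforeEnd;
    }

    public static void main(String[] args) {
        byte[][] regionStarts = { EMPTY, "d".getBytes(), "m".getBytes() };
        int hits = 0;
        for (byte[] rs : regionStarts) {
            if (inRange(rs, EMPTY, EMPTY)) hits++; // empty..empty matches all
        }
        System.out.println(hits); // 3: every region is selected
    }
}
```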





[jira] [Commented] (HBASE-9609) AsyncProcess doesn't increase all the counters when trying to limit the per region flow.

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775671#comment-13775671
 ] 

Hudson commented on HBASE-9609:
---

SUCCESS: Integrated in HBase-TRUNK #4551 (See 
[https://builds.apache.org/job/HBase-TRUNK/4551/])
HBASE-9609  AsyncProcess doesn't increase all the counters when trying to limit 
the per region flow. (nkeywal: rev 1525643)
* /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* /hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java


> AsyncProcess doesn't increase all the counters when trying to limit the per 
> region flow.
> 
>
> Key: HBASE-9609
> URL: https://issues.apache.org/jira/browse/HBASE-9609
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9609.v1.patch
>
>




[jira] [Commented] (HBASE-9632) Put the shell in a maven sub module (hbase-shell) instead of hbase-server

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775673#comment-13775673
 ] 

Hudson commented on HBASE-9632:
---

SUCCESS: Integrated in HBase-TRUNK #4551 (See 
[https://builds.apache.org/job/HBase-TRUNK/4551/])
HBASE-9632 Put the shell in a maven sub module (hbase-shell) instead of 
hbase-server (nkeywal: rev 1525641)
* /hbase/trunk/bin/hbase
* /hbase/trunk/bin/hbase.cmd
* /hbase/trunk/hbase-assembly/src/main/assembly/components.xml
* /hbase/trunk/hbase-it/pom.xml
* /hbase/trunk/hbase-server/pom.xml
* /hbase/trunk/hbase-server/src/main/ruby/hbase.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/hbase.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/replication_admin.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/security.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/table.rb
* /hbase/trunk/hbase-server/src/main/ruby/irb/hirb.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/add_peer.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/alter.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/alter_async.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/alter_namespace.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/alter_status.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/assign.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/balance_switch.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/balancer.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/catalogjanitor_enabled.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/catalogjanitor_run.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/catalogjanitor_switch.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/clone_snapshot.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/close_region.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/compact.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/count.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/create.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/create_namespace.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/delete.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/delete_snapshot.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/deleteall.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/describe.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/describe_namespace.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/disable.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/disable_all.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/disable_peer.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/drop.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/drop_all.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/drop_namespace.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/enable.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/enable_all.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/enable_peer.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/exists.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/flush.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/get.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/get_counter.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/get_table.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/grant.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/hlog_roll.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/incr.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/is_disabled.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/is_enabled.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/list.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/list_namespace.rb
* 
/hbase/trunk/hbase-server/src/main/ruby/shell/commands/list_namespace_tables.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/list_peers.rb
* 
/hbase/trunk/hbase-server/src/main/ruby/shell/commands/list_replicated_tables.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/list_snapshots.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/major_compact.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/merge_region.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/move.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/put.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/remove_peer.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/rename_snapshot.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/re

[jira] [Commented] (HBASE-9590) TableEventHandler#reOpenAllRegions() should close the HTable instance

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775670#comment-13775670
 ] 

Hudson commented on HBASE-9590:
---

SUCCESS: Integrated in HBase-TRUNK #4551 (See 
[https://builds.apache.org/job/HBase-TRUNK/4551/])
HBASE-9590 TableEventHandler#reOpenAllRegions() should close the HTable 
instance (tedyu: rev 1525640)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableEventHandler.java


> TableEventHandler#reOpenAllRegions() should close the HTable instance
> -
>
> Key: HBASE-9590
> URL: https://issues.apache.org/jira/browse/HBASE-9590
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Trivial
> Fix For: 0.98.0
>
> Attachments: 9590-v1.txt
>
>
> {code}
> HTable table = new HTable(masterServices.getConfiguration(), tableName);
> TreeMap> serverToRegions = Maps
> .newTreeMap();
> NavigableMap hriHserverMapping = 
> table.getRegionLocations();
> {code}
> table should be closed.
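A minimal, self-contained sketch of the resource-handling pattern the fix applies: close the table handle in a finally block so an exception during the region lookup cannot leak it. The table is stubbed here as a plain object, since the real HTable needs a running cluster; only the try/finally shape matters.

```java
public class CloseTableSketch {
    // Stand-in for HTable: the real object holds connection resources.
    static class StubTable {
        boolean closed = false;
        void getRegionLocations() { /* would talk to the cluster */ }
        void close() { closed = true; }
    }

    // The shape of the fix: close() runs even if getRegionLocations() throws.
    static StubTable reOpenAllRegions() {
        StubTable table = new StubTable();
        try {
            table.getRegionLocations();
        } finally {
            table.close();
        }
        return table;
    }

    public static void main(String[] args) {
        System.out.println(reOpenAllRegions().closed); // prints "true"
    }
}
```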

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9533) List of dependency jars for MR jobs is hard-coded and does not include netty, breaking MRv1 jobs

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775674#comment-13775674
 ] 

Hudson commented on HBASE-9533:
---

SUCCESS: Integrated in HBase-TRUNK #4551 (See 
[https://builds.apache.org/job/HBase-TRUNK/4551/])
HBASE-9633  Partial reverse of HBASE-9533 (nkeywal: rev 1525648)
* /hbase/trunk/hbase-client/pom.xml
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* /hbase/trunk/pom.xml


> List of dependency jars for MR jobs is hard-coded and does not include netty, 
> breaking MRv1 jobs
> 
>
> Key: HBASE-9533
> URL: https://issues.apache.org/jira/browse/HBASE-9533
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 0.95.2, 0.96.1
>Reporter: Aleksandr Shulman
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9533.txt, 9533v3.txt, 
> failed_mrv1_rowcounter_tt_taskoutput.out
>
>
> Observed behavior:
> Against trunk, using MRv1 with hadoop 1.0.4, r1393290, I am able to run MRv1 
> jobs (e.g. pi 2 4).
> However, when I use it to run MR over HBase jobs, they fail with the stack 
> trace below.
> From the trace, the issue seems to be that it cannot find a class that the 
> netty jar contains. This would make sense, given that the dependency jars 
> that we use for the MapReduce job are hard-coded, and that the netty jar is 
> not one of them.
> https://github.com/apache/hbase/blob/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java#L519
> Strangely, this is only an issue in trunk, not 0.95, even though the code 
> hasn't changed.
> Command:
> {code}/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter 
> sampletable{code}
> TT logs (attached)
> Output from console running job:
> {code}13/09/13 16:02:58 INFO mapred.JobClient: Task Id : 
> attempt_201309131601_0002_m_00_2, Status : FAILED
> java.io.IOException: Cannot create a record reader because of a previous 
> error. Please look at the previous logs lines from the task's full log for 
> more details.
>   at 
> org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.createRecordReader(TableInputFormatBase.java:119)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.(MapTask.java:489)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:731)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
>   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>   at org.apache.hadoop.mapred.Child.main(Child.java:249)
> 13/09/13 16:03:09 INFO mapred.JobClient: Job complete: job_201309131601_0002
> 13/09/13 16:03:09 INFO mapred.JobClient: Counters: 7
> 13/09/13 16:03:09 INFO mapred.JobClient:   Job Counters 
> 13/09/13 16:03:09 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=29913
> 13/09/13 16:03:09 INFO mapred.JobClient: Total time spent by all reduces 
> waiting after reserving slots (ms)=0
> 13/09/13 16:03:09 INFO mapred.JobClient: Total time spent by all maps 
> waiting after reserving slots (ms)=0
> 13/09/13 16:03:09 INFO mapred.JobClient: Launched map tasks=4
> 13/09/13 16:03:09 INFO mapred.JobClient: Data-local map tasks=4
> 13/09/13 16:03:09 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
> 13/09/13 16:03:09 INFO mapred.JobClient: Failed map tasks=1{code}
> Expected behavior:
> As a stopgap, the netty jar should be included in that list. More generally, 
> there should be a more elegant way to include the jars that are needed.
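The "more elegant way" mentioned above usually means resolving each dependency jar from a representative class instead of hard-coding a list. A self-contained sketch of that lookup (an assumption about the general technique, not the actual TableMapReduceUtil code):

```java
import java.security.CodeSource;

public class JarLocator {
    // Return the jar or classes directory that loaded clazz, or null for
    // bootstrap classes (e.g. java.lang.String), which have no CodeSource.
    static String locationOf(Class<?> clazz) {
        CodeSource src = clazz.getProtectionDomain().getCodeSource();
        return src == null ? null : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // An application class always reports where it was loaded from,
        // so a job-setup helper could add that location to the job classpath.
        System.out.println(locationOf(JarLocator.class));
    }
}
```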



[jira] [Commented] (HBASE-9633) Partial reverse of HBASE-9533

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775672#comment-13775672
 ] 

Hudson commented on HBASE-9633:
---

SUCCESS: Integrated in HBase-TRUNK #4551 (See 
[https://builds.apache.org/job/HBase-TRUNK/4551/])
HBASE-9633  Partial reverse of HBASE-9533 (nkeywal: rev 1525648)
* /hbase/trunk/hbase-client/pom.xml
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* /hbase/trunk/pom.xml


> Partial reverse of HBASE-9533
> -
>
> Key: HBASE-9633
> URL: https://issues.apache.org/jira/browse/HBASE-9633
> Project: HBase
>  Issue Type: Bug
>  Components: build, Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9633.v1.patch
>
>
> I don't understand the solution in HBASE-9533
> In netty 3.3, they changed the group id (if I understand well, for legal 
> reasons; see https://github.com/netty/netty/issues/103). But they have not 
> changed the java package name: it's still org.jboss.netty. So we should not 
> have to remove our dependency on netty 3.6. 
> So:
> - this comment is wrong imho: the explicit load is not related to the package 
> name but to how the mapreduce load works.
> {code}
> +  // This is ugly.  Our zk3.4.5 depends on the org.jboss.netty, not 
> hadoops io.netty
> +  // so need to load it up explicitly while on 3.4.5 zk
> {code}
> - We do use Netty (for the multicast message), so now we have a missing 
> dependency, as maven says:
> {code}
>  [INFO] --- maven-dependency-plugin:2.1:analyze (default-cli) @ hbase-client ---
> [WARNING] Used undeclared dependencies found:
> [WARNING] org.jboss.netty:netty:jar:3.2.2.Final:compile
> {code}
> So I propose a partial reverse. [~saint@gmail.com], [@Aleksandr Shulman] 
> would it work for you?



[jira] [Commented] (HBASE-9634) HBase Table few regions are not getting recovered from the 'Transition'/'OFFLINE state'

2013-09-23 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774656#comment-13774656
 ] 

rajeshbabu commented on HBASE-9634:
---

[~nkeywal]
If the region server goes down during a split, some regions stay in 
transition (OFFLINE). I am going through the logs and will give the exact reason.

> HBase Table few regions are not getting recovered from the 
> 'Transition'/'OFFLINE state' 
> 
>
> Key: HBASE-9634
> URL: https://issues.apache.org/jira/browse/HBASE-9634
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 0.94.11
> Environment: SuSE11
>Reporter: shankarlingayya
>
> {noformat}
> HBase Table few regions are not getting recovered from the 
> 'Transition'/'OFFLINE state'
> Test Procedure:
> 1. Setup Non HA Hadoop Cluster with two nodes (Node1-XX.XX.XX.XX,  
> Node2-YY.YY.YY.YY)
> 2. Install Zookeeper & HRegionServer in Node-1
> 3. Install HMaster & HRegionServer in Node-2
> 4. From Node2 create HBase Table ( table name 't1' with one column family 
> 'cf1' )
> 5. Perform addrecord 99649 rows 
> 6. Perform kill and restart of Node1 Region Server & Node2 Region Server in a 
> loop for 10-20 times
> 2013-09-23 18:28:06,610 INFO 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Opening of 
> region {NAME => 
> 't1,row507465,1379937224590.2d9fad2aee78103f928d8c7fe16ba6cd.', STARTKEY => 
> 'row507465', ENDKEY => 'row508987', ENCODED => 
> 2d9fad2aee78103f928d8c7fe16ba6cd,} failed, marking as FAILED_OPEN in ZK
> 2013-09-23 18:46:12,160 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: 
> Instantiated t1,row507465,1379937224590.2d9fad2aee78103f928d8c7fe16ba6cd.
> 2013-09-23 18:46:12,160 ERROR 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open 
> of region=t1,row507465,1379937224590.2d9fad2aee78103f928d8c7fe16ba6cd., 
> starting to roll back the global memstore size.
> {noformat}  



[jira] [Assigned] (HBASE-9638) HBase Table some regions are not getting recovered from PENDING_OPEN state

2013-09-23 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu reassigned HBASE-9638:
-

Assignee: rajeshbabu

> HBase Table some regions are not getting recovered from PENDING_OPEN state
> --
>
> Key: HBASE-9638
> URL: https://issues.apache.org/jira/browse/HBASE-9638
> Project: HBase
>  Issue Type: Bug
>  Components: master, regionserver
>Affects Versions: 0.94.11
> Environment: SuSE11
>Reporter: shankarlingayya
>Assignee: rajeshbabu
>
> {noformat}
> Regions in Transition, not getting recovered from PENDING_OPEN state:
> Region: 0034cb85f86dc84401c7ea0d3937f361
> State: t1,row337020,1379937213853.0034cb85f86dc84401c7ea0d3937f361. 
> state=PENDING_OPEN, ts=Mon Sep 23 20:41:45 IST 2013 (1287s ago), 
> server=HOST-XX.XX.XX.XX,61020,1379949057135
> 2013-09-23 20:41:33,377 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
> regionserver:61020-0x14149fdb0af01a7 Successfully transitioned node 
> be960a4e829834202c642fe5f9bd2ec8 from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_OPENING
> 2013-09-23 20:41:33,377 WARN org.apache.hadoop.hbase.zookeeper.ZKAssign: 
> regionserver:61020-0x14149fdb0af01a7 Attempt to transition the unassigned 
> node for 0034cb85f86dc84401c7ea0d3937f361 from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN failed, the node existed but was in the state 
> M_ZK_REGION_OFFLINE set by the server HOST-XX.XX.XX.XX,61020,1379949057135
> 2013-09-23 20:41:33,377 WARN 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Unable to 
> mark region {NAME => 
> 't1,row337020,1379937213853.0034cb85f86dc84401c7ea0d3937f361.', STARTKEY => 
> 'row337020', ENDKEY => 'row338542', ENCODED => 
> 0034cb85f86dc84401c7ea0d3937f361,} as FAILED_OPEN. It's likely that the 
> master already timed out this open attempt, and thus another RS already has 
> the region.
> 2013-09-23 20:41:33,377 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Post open deploy tasks 
> for region=t1,row121299,1379934070472.be960a4e829834202c642fe5f9bd2ec8., 
> daughter=false
> {noformat}



[jira] [Commented] (HBASE-9629) SnapshotReferenceUtil#snapshot should catch RemoteWithExtrasException

2013-09-23 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774717#comment-13774717
 ] 

Matteo Bertozzi commented on HBASE-9629:


ok, what is the rationale behind throwing the RemoteExceptions to the user 
instead of unwrapping them and throwing the proper exception? The user will 
probably have to do the unwrap anyway if they want to handle a particular 
exception.

Like in the patch attached, you have a catch for CorruptedSnapshotException and 
a catch for RemoteException with an if for the CorruptedSnapshotException 
case... so what is different between the two? Why do I have to write two code 
paths for the same exception?
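The single-code-path idea in the comment above can be sketched with a generic unwrap helper (hypothetical names; in the real client, unwrapping is what RemoteException#unwrapRemoteException does):

```java
public class UnwrapSketch {
    // Stand-in for a transport-level wrapper like RemoteWithExtrasException.
    static class RemoteWrapper extends RuntimeException {
        RemoteWrapper(Throwable cause) { super(cause); }
    }

    // Unwrap once so callers can catch the specific cause directly,
    // instead of writing one catch for the plain type and another
    // catch-plus-instanceof for the wrapped type.
    static Throwable unwrap(Throwable t) {
        return (t instanceof RemoteWrapper && t.getCause() != null)
                ? t.getCause()
                : t;
    }

    public static void main(String[] args) {
        Throwable remote = new RemoteWrapper(new IllegalStateException("corrupted snapshot"));
        System.out.println(unwrap(remote) instanceof IllegalStateException); // prints "true"
    }
}
```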

> SnapshotReferenceUtil#snapshot should catch RemoteWithExtrasException
> -
>
> Key: HBASE-9629
> URL: https://issues.apache.org/jira/browse/HBASE-9629
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9629.txt
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/7329//testReport/org.apache.hadoop.hbase.snapshot/TestFlushSnapshotFromClient/testTakeSnapshotAfterMerge/
>  :
> {code}
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=snapshotAfterMerge table=test type=FLUSH } had an error.  Procedure 
> snapshotAfterMerge { waiting=[] done=[] }
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:208)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:219)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:123)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:94)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3156)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2705)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2638)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.snapshot(HBaseAdmin.java:2645)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.snapshot(SnapshotTestingUtils.java:260)
>   at 
> org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient.testTakeSnapshotAfterMerge(TestFlushSnapshotFromClient.java:318)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=snapshotAfterMerge table=test type=FLUSH } had an error.  Procedure 
> snapshotAfterMerge { waiting=[] done=[] }
>   at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:365)
>   at 
> org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2878)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$2.callBlockingMethod(MasterAdminProtos.java:32890)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:1979)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:90)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at java.util.concurrent.F

[jira] [Updated] (HBASE-5335) Dynamic Schema Configurations

2013-09-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5335:
-

Fix Version/s: 0.94.7

> Dynamic Schema Configurations
> -
>
> Key: HBASE-5335
> URL: https://issues.apache.org/jira/browse/HBASE-5335
> Project: HBase
>  Issue Type: New Feature
>Reporter: Nicolas Spiegelberg
>Assignee: Nicolas Spiegelberg
>  Labels: configuration, schema
> Fix For: 0.94.7, 0.95.0
>
> Attachments: ASF.LICENSE.NOT.GRANTED--D2247.1.patch, 
> ASF.LICENSE.NOT.GRANTED--D2247.2.patch, 
> ASF.LICENSE.NOT.GRANTED--D2247.3.patch, 
> ASF.LICENSE.NOT.GRANTED--D2247.4.patch, 
> ASF.LICENSE.NOT.GRANTED--D2247.5.patch, 
> ASF.LICENSE.NOT.GRANTED--D2247.6.patch, 
> ASF.LICENSE.NOT.GRANTED--D2247.7.patch, 
> ASF.LICENSE.NOT.GRANTED--D2247.8.patch, HBASE-5335-trunk-2.patch, 
> HBASE-5335-trunk-3.patch, HBASE-5335-trunk-3.patch, HBASE-5335-trunk-4.patch, 
> HBASE-5335-trunk.patch
>
>
> Currently, the ability for a core developer to add per-table & per-CF 
> configuration settings is very heavyweight.  You need to add a reserved 
> keyword all the way up the stack & you have to support this variable 
> long-term if you're going to expose it explicitly to the user.  This has 
> ended up with using Configuration.get() a lot because it is lightweight and 
> you can tweak settings while you're trying to understand system behavior 
> [since there are many config params that may never need to be tuned].  We 
> need to add the ability to put & read arbitrary KV settings in the HBase 
> schema.  Combined with online schema change, this will allow us to safely 
> iterate on configuration settings.
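The proposal reduces to a settings overlay: arbitrary per-table KV pairs that shadow cluster-wide defaults. A hypothetical sketch of that shape (illustrative names only, not the actual HTableDescriptor API):

```java
import java.util.HashMap;
import java.util.Map;

public class SchemaSettings {
    private final Map<String, String> clusterDefaults = new HashMap<>();
    private final Map<String, String> tableOverrides = new HashMap<>();

    void setDefault(String key, String value) { clusterDefaults.put(key, value); }
    void setTableValue(String key, String value) { tableOverrides.put(key, value); }

    // A table-level setting wins; otherwise fall back to the cluster default.
    String get(String key) {
        return tableOverrides.getOrDefault(key, clusterDefaults.get(key));
    }

    public static void main(String[] args) {
        SchemaSettings s = new SchemaSettings();
        s.setDefault("hbase.hstore.blockingStoreFiles", "7");
        s.setTableValue("hbase.hstore.blockingStoreFiles", "20");
        System.out.println(s.get("hbase.hstore.blockingStoreFiles")); // prints "20"
    }
}
```

Combined with online schema change, an override stored this way could be edited without redeploying the cluster configuration.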



[jira] [Commented] (HBASE-9609) AsyncProcess doesn't increase all the counters when trying to limit the per region flow.

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775658#comment-13775658
 ] 

Hudson commented on HBASE-9609:
---

FAILURE: Integrated in hbase-0.96-hadoop2 #54 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/54/])
HBASE-9609  AsyncProcess doesn't increase all the counters when trying to limit 
the per region flow. (nkeywal: rev 1525646)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/branches/0.96/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java


> AsyncProcess doesn't increase all the counters when trying to limit the per 
> region flow.
> 
>
> Key: HBASE-9609
> URL: https://issues.apache.org/jira/browse/HBASE-9609
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9609.v1.patch
>
>




[jira] [Commented] (HBASE-9533) List of dependency jars for MR jobs is hard-coded and does not include netty, breaking MRv1 jobs

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775661#comment-13775661
 ] 

Hudson commented on HBASE-9533:
---

FAILURE: Integrated in hbase-0.96-hadoop2 #54 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/54/])
HBASE-9633  Partial reverse of HBASE-9533 (nkeywal: rev 1525649)
* /hbase/branches/0.96/hbase-client/pom.xml
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* /hbase/branches/0.96/pom.xml


> List of dependency jars for MR jobs is hard-coded and does not include netty, 
> breaking MRv1 jobs
> 
>
> Key: HBASE-9533
> URL: https://issues.apache.org/jira/browse/HBASE-9533
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 0.95.2, 0.96.1
>Reporter: Aleksandr Shulman
>Assignee: Matteo Bertozzi
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9533.txt, 9533v3.txt, 
> failed_mrv1_rowcounter_tt_taskoutput.out
>
>
> Observed behavior:
> Against trunk, using MRv1 with hadoop 1.0.4, r1393290, I am able to run MRv1 
> jobs (e.g. pi 2 4).
> However, when I use it to run MR over HBase jobs, they fail with the stack 
> trace below.
> From the trace, the issue seems to be that it cannot find a class that the 
> netty jar contains. This would make sense, given that the dependency jars 
> that we use for the MapReduce job are hard-coded, and that the netty jar is 
> not one of them.
> https://github.com/apache/hbase/blob/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java#L519
> Strangely, this is only an issue in trunk, not 0.95, even though the code 
> hasn't changed.
> Command:
> {code}/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter 
> sampletable{code}
> TT logs (attached)
> Output from console running job:
> {code}13/09/13 16:02:58 INFO mapred.JobClient: Task Id : 
> attempt_201309131601_0002_m_00_2, Status : FAILED
> java.io.IOException: Cannot create a record reader because of a previous 
> error. Please look at the previous logs lines from the task's full log for 
> more details.
>   at 
> org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.createRecordReader(TableInputFormatBase.java:119)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.(MapTask.java:489)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:731)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
>   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>   at org.apache.hadoop.mapred.Child.main(Child.java:249)
> 13/09/13 16:03:09 INFO mapred.JobClient: Job complete: job_201309131601_0002
> 13/09/13 16:03:09 INFO mapred.JobClient: Counters: 7
> 13/09/13 16:03:09 INFO mapred.JobClient:   Job Counters 
> 13/09/13 16:03:09 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=29913
> 13/09/13 16:03:09 INFO mapred.JobClient: Total time spent by all reduces 
> waiting after reserving slots (ms)=0
> 13/09/13 16:03:09 INFO mapred.JobClient: Total time spent by all maps 
> waiting after reserving slots (ms)=0
> 13/09/13 16:03:09 INFO mapred.JobClient: Launched map tasks=4
> 13/09/13 16:03:09 INFO mapred.JobClient: Data-local map tasks=4
> 13/09/13 16:03:09 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
> 13/09/13 16:03:09 INFO mapred.JobClient: Failed map tasks=1{code}
> Expected behavior:
> As a stopgap, the netty jar should be included in that list. More generally, 
> there should be a more elegant way to include the jars that are needed.



[jira] [Commented] (HBASE-9633) Partial reverse of HBASE-9533

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775659#comment-13775659
 ] 

Hudson commented on HBASE-9633:
---

FAILURE: Integrated in hbase-0.96-hadoop2 #54 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/54/])
HBASE-9633  Partial reverse of HBASE-9533 (nkeywal: rev 1525649)
* /hbase/branches/0.96/hbase-client/pom.xml
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* /hbase/branches/0.96/pom.xml


> Partial reverse of HBASE-9533
> -
>
> Key: HBASE-9633
> URL: https://issues.apache.org/jira/browse/HBASE-9633
> Project: HBase
>  Issue Type: Bug
>  Components: build, Client
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9633.v1.patch
>
>
> I don't understand the solution in HBASE-9533
> In netty 3.3, they changed the group id (if I understand well, for legal 
> reasons; see https://github.com/netty/netty/issues/103). But they have not 
> changed the java package name: it's still org.jboss.netty. So we should not 
> have to remove our dependency on netty 3.6. 
> So:
> - this comment is wrong imho: the explicit load is not related to the package 
> name but to how the mapreduce load works.
> {code}
> +  // This is ugly.  Our zk3.4.5 depends on the org.jboss.netty, not 
> hadoops io.netty
> +  // so need to load it up explicitly while on 3.4.5 zk
> {code}
> - We do use Netty (for the multicast message), so now we have a missing 
> dependency, as maven says:
> {code}
>  [INFO] --- maven-dependency-plugin:2.1:analyze (default-cli) @ hbase-client ---
> [WARNING] Used undeclared dependencies found:
> [WARNING] org.jboss.netty:netty:jar:3.2.2.Final:compile
> {code}
> So I propose a partial reverse. [~saint@gmail.com], [@Aleksandr Shulman] 
> would it work for you?



[jira] [Commented] (HBASE-9632) Put the shell in a maven sub module (hbase-shell) instead of hbase-server

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775660#comment-13775660
 ] 

Hudson commented on HBASE-9632:
---

FAILURE: Integrated in hbase-0.96-hadoop2 #54 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/54/])
HBASE-9632 Put the shell in a maven sub module (hbase-shell) instead of 
hbase-server (nkeywal: rev 1525642)
* /hbase/branches/0.96/bin/hbase
* /hbase/branches/0.96/bin/hbase.cmd
* /hbase/branches/0.96/hbase-assembly/src/main/assembly/components.xml
* /hbase/branches/0.96/hbase-it/pom.xml
* /hbase/branches/0.96/hbase-server/pom.xml
* /hbase/branches/0.96/hbase-server/src/main/ruby/hbase.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/hbase/admin.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/hbase/hbase.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/hbase/replication_admin.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/hbase/security.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/hbase/table.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/irb/hirb.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/add_peer.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/alter.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/alter_async.rb
* 
/hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/alter_namespace.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/alter_status.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/assign.rb
* 
/hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/balance_switch.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/balancer.rb
* 
/hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/catalogjanitor_enabled.rb
* 
/hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/catalogjanitor_run.rb
* 
/hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/catalogjanitor_switch.rb
* 
/hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/clone_snapshot.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/close_region.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/compact.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/count.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/create.rb
* 
/hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/create_namespace.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/delete.rb
* 
/hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/delete_snapshot.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/deleteall.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/describe.rb
* 
/hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/describe_namespace.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/disable.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/disable_all.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/disable_peer.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/drop.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/drop_all.rb
* 
/hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/drop_namespace.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/enable.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/enable_all.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/enable_peer.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/exists.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/flush.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/get.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/get_counter.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/get_table.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/grant.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/hlog_roll.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/incr.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/is_disabled.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/is_enabled.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/list.rb
* 
/hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/list_namespace.rb
* 
/hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/list_namespace_tables.rb
* /hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/list_peers.rb
* 
/hbase/branches/0.96/hbase-server/src/main/ruby/shell/commands/list_replicated_tables.rb
* 
/hbase/branches/0.96/hbase-server/src/main/ruby/shell/co

[jira] [Commented] (HBASE-9610) TestThriftServer.testAll failing

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775663#comment-13775663
 ] 

Hudson commented on HBASE-9610:
---

FAILURE: Integrated in hbase-0.96-hadoop2 #54 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/54/])
HBASE-9610 TestThriftServer.testAll failing (stack: rev 1525680)
* 
/hbase/branches/0.96/hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java


> TestThriftServer.testAll failing
> 
>
> Key: HBASE-9610
> URL: https://issues.apache.org/jira/browse/HBASE-9610
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Elliott Clark
> Attachments: 9610-debugging.txt, more_debugging.txt
>
>
> http://jenkins-public.iridiant.net/job/hbase-0.96-hadoop2/140/org.apache.hbase$hbase-thrift/testReport/junit/org.apache.hadoop.hbase.thrift/TestThriftServer/testAll/
> {code}
> java.lang.AssertionError: Metrics Counters should be equal expected:<2> but 
> was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
> {code}
> Here too:
> http://jenkins-public.iridiant.net/job/hbase-0.96/134/
> http://jenkins-public.iridiant.net/job/hbase-0.96-hadoop2/140/
> Mind taking a looksee [~eclark]



[jira] [Commented] (HBASE-9430) Memstore heapSize calculation - DEEP_OVERHEAD is incorrect

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775662#comment-13775662
 ] 

Hudson commented on HBASE-9430:
---

FAILURE: Integrated in hbase-0.96-hadoop2 #54 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/54/])
HBASE-9430 Memstore heapSize calculation - DEEP_OVERHEAD is incorrect 
(anoopsamjohn: rev 1525571)
* 
/hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueSkipListSet.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java


> Memstore heapSize calculation - DEEP_OVERHEAD is incorrect
> --
>
> Key: HBASE-9430
> URL: https://issues.apache.org/jira/browse/HBASE-9430
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.98.0, 0.94.13, 0.96.1
>
> Attachments: HBASE-9430_94.patch, HBASE-9430.patch
>
>
> {code}
> expected += ClassSize.estimateBase(ConcurrentSkipListMap.class, false);
> expected += ClassSize.estimateBase(ConcurrentSkipListMap.class, false);
> expected += ClassSize.estimateBase(CopyOnWriteArraySet.class, false);
> expected += ClassSize.estimateBase(CopyOnWriteArrayList.class, false);
> {code}
> We need to consider the heap requirement for KeyValueSkipListSet.
> Where do CopyOnWriteArraySet & CopyOnWriteArrayList come into the picture? 
>  I am not able to follow.
> We also need to consider the heap for TimeRangeTracker.
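Estimators in the style of ClassSize typically round each object's raw footprint up to the JVM's 8-byte object alignment before summing the per-class overheads into a DEEP_OVERHEAD-style constant. A minimal standalone sketch of that accounting (the raw sizes below are illustrative only, not HBase's actual constants):

```java
public class HeapSizeSketch {
    // Round a raw byte count up to the JVM's 8-byte object alignment,
    // the same idea ClassSize-style align() helpers apply to each estimate.
    static long align(long size) {
        return ((size + 7) / 8) * 8;
    }

    // Sum aligned per-object overheads, as a DEEP_OVERHEAD-style constant would.
    static long deepOverhead(long... rawSizes) {
        long total = 0;
        for (long s : rawSizes) {
            total += align(s);
        }
        return total;
    }

    public static void main(String[] args) {
        // Illustrative raw sizes only: a map header plus a wrapper set header.
        System.out.println(deepOverhead(48, 20)); // 48 stays 48; 20 aligns to 24
    }
}
```

Missing a wrapper such as KeyValueSkipListSet from the sum understates the estimate by one aligned object header plus its fields, which is exactly the kind of drift this issue describes.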



[jira] [Commented] (HBASE-9632) Put the shell in a maven sub module (hbase-shell) instead of hbase-server

2013-09-23 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774724#comment-13774724
 ] 

Elliott Clark commented on HBASE-9632:
--

+1

> Put the shell in a maven sub module (hbase-shell) instead of hbase-server
> -
>
> Key: HBASE-9632
> URL: https://issues.apache.org/jira/browse/HBASE-9632
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9632.v1.patch
>
>
> This will remove the dependency from hbase-server to jruby. jruby is huge and 
> contains many sub dependencies.



[jira] [Commented] (HBASE-9514) Prevent region from assigning before log splitting is done

2013-09-23 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775654#comment-13775654
 ] 

Jimmy Xiang commented on HBASE-9514:


Almost there. Without schema change/master restart actions, I got it green.

> Prevent region from assigning before log splitting is done
> --
>
> Key: HBASE-9514
> URL: https://issues.apache.org/jira/browse/HBASE-9514
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Blocker
> Attachments: trunk-9514_v1.patch, trunk-9514_v2.patch, 
> trunk-9514_v3.patch
>
>
> If a region is assigned before log splitting is done by the server shutdown 
> handler, the edits belonging to this region in the hlogs of the dead server 
> will be lost.
> Generally this is not an issue if users don't assign/unassign a region from 
> hbase shell or via hbase admin. These commands are marked for experts only in 
> the hbase shell help too.  However, chaos monkey doesn't care.
> If we can prevent from assigning such regions in a bad time, it would make 
> things a little safer.



[jira] [Updated] (HBASE-9390) coprocessors observers are not called during a recovery with the new log replay algorithm

2013-09-23 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-9390:
-

Attachment: hbase-9390-part2-v2.patch

Thanks [~nkeywal] and [~te...@apache.org] for the reviews! I added a test case and 
incorporated the feedback from Ted.


> coprocessors observers are not called during a recovery with the new log 
> replay algorithm
> -
>
> Key: HBASE-9390
> URL: https://issues.apache.org/jira/browse/HBASE-9390
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, MTTR
>Affects Versions: 0.95.2
>Reporter: Nicolas Liochon
>Assignee: Jeffrey Zhong
> Attachments: copro.patch, hbase-9390-part2.patch, 
> hbase-9390-part2-v2.patch, hbase-9390.patch, hbase-9390-v2.patch
>
>
> See the patch to reproduce the issue: If we activate log replay we don't have 
> the events on WAL restore.
> Pinging [~jeffreyz], we discussed this offline.



[jira] [Commented] (HBASE-9420) Math.max() on syncedTillHere lacks synchronization

2013-09-23 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775650#comment-13775650
 ] 

Himanshu Vashishtha commented on HBASE-9420:


Well, our semantics are basically correct even without this synchronized block. 
Going by what Enis said, we don't need this at all IMHO. Thanks Enis.

> Math.max() on syncedTillHere lacks synchronization
> --
>
> Key: HBASE-9420
> URL: https://issues.apache.org/jira/browse/HBASE-9420
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Trivial
> Fix For: 0.98.0
>
> Attachments: 9420-v1.txt, 9420-v2.txt
>
>
> In FSHlog#syncer(), around line 1080:
> {code}
>   this.syncedTillHere = Math.max(this.syncedTillHere, doneUpto);
> {code}
> Assignment to syncedTillHere after computing max value is not protected by 
> proper synchronization.
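The usual lock-free alternative to wrapping this assignment in a synchronized block is a compare-and-set loop over an AtomicLong, so the read-max-write becomes atomic and the watermark can never move backwards. A sketch of that monotonic-max pattern (an illustration of the pattern, not the actual FSHLog code, which keeps syncedTillHere as a plain long):

```java
import java.util.concurrent.atomic.AtomicLong;

public class MonotonicMax {
    private final AtomicLong syncedTillHere = new AtomicLong(0);

    // Advance the watermark to doneUpto unless another thread already
    // published a higher value; the CAS loop makes the max atomic.
    long advance(long doneUpto) {
        long current;
        do {
            current = syncedTillHere.get();
            if (doneUpto <= current) {
                return current; // another thread got further; nothing to do
            }
        } while (!syncedTillHere.compareAndSet(current, doneUpto));
        return doneUpto;
    }

    long get() {
        return syncedTillHere.get();
    }
}
```

Without this, two threads can both read the old value, both compute their max, and the lower result can overwrite the higher one.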



[jira] [Commented] (HBASE-8755) A new write thread model for HLog to improve the overall HBase write throughput

2013-09-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775651#comment-13775651
 ] 

stack commented on HBASE-8755:
--

bq. What was implication of not shutting this?   Were tests failing or is this 
just make-work?

I did not save avg ops/s. It was a set amount of work.

bq. 2. is the write load against a single node, or five nodes? – to confirm the 
throughput is per-node or per-cluster(with five nodes)

I had a client writing to a WAL hosted in hdfs on a 5-node HDFS cluster.

Thanks [~fenghh]

> A new write thread model for HLog to improve the overall HBase write 
> throughput
> ---
>
> Key: HBASE-8755
> URL: https://issues.apache.org/jira/browse/HBASE-8755
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, wal
>Reporter: Feng Honghua
>Assignee: stack
>Priority: Critical
> Fix For: 0.96.1
>
> Attachments: 8755trunkV2.txt, HBASE-8755-0.94-V0.patch, 
> HBASE-8755-0.94-V1.patch, HBASE-8755-trunk-V0.patch, HBASE-8755-trunk-V1.patch
>
>
> In the current write model, each write handler thread (executing put()) will 
> individually go through a full 'append (hlog local buffer) => HLog writer 
> append (write to hdfs) => HLog writer sync (sync hdfs)' cycle for each write, 
> which incurs heavy contention on updateLock and flushLock.
> The only existing optimization, checking whether the current syncTillHere > txid 
> in the hope that another thread has already written/synced its own txid to hdfs 
> so the write/sync can be skipped, actually helps much less than expected.
> Three of my colleagues (Ye Hangjun / Wu Zesheng / Zhang Peng) at Xiaomi 
> proposed a new write thread model for writing hdfs sequence file and the 
> prototype implementation shows a 4X improvement for throughput (from 17000 to 
> 7+). 
> I apply this new write thread model in HLog and the performance test in our 
> test cluster shows about 3X throughput improvement (from 12150 to 31520 for 1 
> RS, from 22000 to 7 for 5 RS), the 1 RS write throughput (1K row-size) 
> even beats that of BigTable (the Percolator paper published in 2011 puts Bigtable's 
> write throughput then at 31002). I can provide the detailed performance test 
> results if anyone is interested.
> The change for the new write thread model is as below:
>  1> All put handler threads append their edits to HLog's local pending buffer 
> (notifying the AsyncWriter thread that there are new edits in the local buffer);
>  2> All put handler threads wait in HLog.syncer() for the underlying 
> threads to finish the sync that contains their txid;
>  3> A single AsyncWriter thread is responsible for retrieving all the buffered 
> edits from HLog's local pending buffer and writing them to hdfs 
> (hlog.writer.append), notifying the AsyncFlusher thread that there are new 
> writes to hdfs that need a sync;
>  4> A single AsyncFlusher thread is responsible for issuing a sync to hdfs 
> to persist the writes by AsyncWriter, notifying the AsyncNotifier thread 
> that the sync watermark has increased;
>  5> A single AsyncNotifier thread is responsible for notifying all pending 
> put handler threads waiting in HLog.syncer();
>  6> No LogSyncer thread any more (the AsyncWriter/AsyncFlusher threads now 
> do the same job it did).
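The staged hand-off above can be sketched in reduced form: handlers enqueue edits and wait on a sync watermark, while a single background stage drains the buffer, syncs once per batch, and wakes the covered waiters. This collapses AsyncWriter/AsyncFlusher/AsyncNotifier into one drain-then-sync step for brevity; all names are illustrative, not the patch's actual classes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

public class WalPipelineSketch {
    private final BlockingQueue<Long> pending = new LinkedBlockingQueue<>();
    private final AtomicLong syncedTillHere = new AtomicLong(0);
    private final Object syncedLock = new Object();

    // Handler side: enqueue the edit's txid, then block until it is synced.
    public void appendAndSync(long txid) throws InterruptedException {
        pending.put(txid);
        synchronized (syncedLock) {
            while (syncedTillHere.get() < txid) {
                syncedLock.wait();
            }
        }
    }

    // Background side (assumed single-threaded here): drain whatever is
    // buffered, sync once for the whole batch, then wake every handler
    // whose txid the batch covered.
    public void runWriterOnce() {
        List<Long> batch = new ArrayList<>();
        pending.drainTo(batch);
        if (batch.isEmpty()) {
            return;
        }
        long max = 0;
        for (long t : batch) {
            max = Math.max(max, t); // one sync covers the whole batch
        }
        syncedTillHere.set(max);
        synchronized (syncedLock) {
            syncedLock.notifyAll();
        }
    }

    public long synced() {
        return syncedTillHere.get();
    }
}
```

The throughput win comes from amortization: many handler txids ride on one hdfs append/sync instead of each handler racing through its own cycle.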



[jira] [Commented] (HBASE-9583) add document for getShortMidpointKey

2013-09-23 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775648#comment-13775648
 ] 

Jonathan Hsieh commented on HBASE-9583:
---

[~xieliang007] Write a draft with the content, and we'll do a pass to make it 
read more naturally?

> add document for getShortMidpointKey
> 
>
> Key: HBASE-9583
> URL: https://issues.apache.org/jira/browse/HBASE-9583
> Project: HBase
>  Issue Type: Task
>  Components: HFile
>Affects Versions: 0.98.0
>Reporter: Liang Xie
>Assignee: Liang Xie
>
> add the faked key to documentation http://hbase.apache.org/book.html#hfilev2



[jira] [Updated] (HBASE-9590) TableEventHandler#reOpenAllRegions() should close the HTable instance

2013-09-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9590:
-

Priority: Trivial  (was: Minor)

What was implication of not shutting this?   Were tests failing or is this just 
make-work?

Making trivial.

> TableEventHandler#reOpenAllRegions() should close the HTable instance
> -
>
> Key: HBASE-9590
> URL: https://issues.apache.org/jira/browse/HBASE-9590
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Trivial
> Fix For: 0.98.0
>
> Attachments: 9590-v1.txt
>
>
> {code}
> HTable table = new HTable(masterServices.getConfiguration(), tableName);
> TreeMap<ServerName, List<HRegionInfo>> serverToRegions = Maps.newTreeMap();
> NavigableMap<HRegionInfo, ServerName> hriHserverMapping = 
> table.getRegionLocations();
> {code}
> table should be closed.
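Since HTable implements Closeable, the usual fix pattern is try-with-resources (or try/finally on pre-Java-7 code), which guarantees the close even when the body throws. A standalone sketch with a stand-in resource (the Table class and its methods here are hypothetical stand-ins, not HBase's API):

```java
import java.io.Closeable;

public class CloseSketch {
    // Stand-in for HTable: any Closeable that must be released when done.
    static class Table implements Closeable {
        boolean closed = false;

        int regionCount() {
            return 3; // stand-in for work done against the table
        }

        @Override
        public void close() {
            closed = true;
        }
    }

    // try-with-resources guarantees close() runs even if the body throws,
    // which is the guarantee reOpenAllRegions() was missing for its HTable.
    static int countRegions() {
        try (Table table = new Table()) {
            return table.regionCount();
        }
    }

    public static void main(String[] args) {
        System.out.println(countRegions()); // → 3
    }
}
```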



[jira] [Updated] (HBASE-9420) Math.max() on syncedTillHere lacks synchronization

2013-09-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9420:
-

Priority: Trivial  (was: Major)

What are implications of not having this sync'd?

Setting to trivial.

> Math.max() on syncedTillHere lacks synchronization
> --
>
> Key: HBASE-9420
> URL: https://issues.apache.org/jira/browse/HBASE-9420
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Trivial
> Fix For: 0.98.0
>
> Attachments: 9420-v1.txt, 9420-v2.txt
>
>
> In FSHlog#syncer(), around line 1080:
> {code}
>   this.syncedTillHere = Math.max(this.syncedTillHere, doneUpto);
> {code}
> Assignment to syncedTillHere after computing max value is not protected by 
> proper synchronization.



[jira] [Commented] (HBASE-9639) SecureBulkLoad dispatches file load requests to all Regions

2013-09-23 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775637#comment-13775637
 ] 

Nick Dimiduk commented on HBASE-9639:
-

This patch explicitly limits the number of calls made by 
SecureBulkLoadClient#bulkLoadHFiles to one region per HFile batch.

> SecureBulkLoad dispatches file load requests to all Regions
> ---
>
> Key: HBASE-9639
> URL: https://issues.apache.org/jira/browse/HBASE-9639
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Coprocessors
>Affects Versions: 0.95.2
> Environment: Hadoop2, Kerberos 
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBASE-9639.00.patch
>
>
> When running a bulk load on a secure environment and loading data into the 
> first region of a table, the request to load the HFile set is dispatched to 
> all Regions for the table. This is reproduced consistently by running 
> IntegrationTestBulkLoad on a secure cluster. The load fails with an exception 
> that looks like:
> {noformat}
> 2013-08-30 07:37:22,993 INFO  [main] mapreduce.LoadIncrementalHFiles: Split 
> occured while grouping HFiles, retry attempt 1 with 3 files remaining to 
> group or split
> 2013-08-30 07:37:22,999 ERROR [main] mapreduce.LoadIncrementalHFiles: 
> IOException during splitting
> java.util.concurrent.ExecutionException: java.io.FileNotFoundException: File 
> does not exist: 
> /user/hbase/test-data/c45ddfe9-ee30-4d32-8042-928db12b1cee/IntegrationTestBulkLoad-0/L/bf41ea13997b4e228d05e67ba7b1b686
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:51)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1489)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1438)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1418)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1392)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:438)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:269)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59566)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplitPhase(LoadIncrementalHFiles.java:403)
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:284)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.runLinkedListMRJob(IntegrationTestBulkLoad.java:200)
> at 
> org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.testBulkLoad(IntegrationTestBulkLoad.java:133)
> {noformat}



[jira] [Updated] (HBASE-9632) Put the shell in a maven sub module (hbase-shell) instead of hbase-server

2013-09-23 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9632:
---

   Resolution: Fixed
Fix Version/s: 0.96.1
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

> Put the shell in a maven sub module (hbase-shell) instead of hbase-server
> -
>
> Key: HBASE-9632
> URL: https://issues.apache.org/jira/browse/HBASE-9632
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.1
>
> Attachments: 9632.v1.patch
>
>
> This will remove the dependency from hbase-server to jruby. jruby is huge and 
> contains many sub dependencies.



[jira] [Commented] (HBASE-9514) Prevent region from assigning before log splitting is done

2013-09-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775634#comment-13775634
 ] 

Enis Soztutar commented on HBASE-9514:
--

Any update on this [~jxiang]? 

> Prevent region from assigning before log splitting is done
> --
>
> Key: HBASE-9514
> URL: https://issues.apache.org/jira/browse/HBASE-9514
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Blocker
> Attachments: trunk-9514_v1.patch, trunk-9514_v2.patch, 
> trunk-9514_v3.patch
>
>
> If a region is assigned before log splitting is done by the server shutdown 
> handler, the edits belonging to this region in the hlogs of the dead server 
> will be lost.
> Generally this is not an issue if users don't assign/unassign a region from 
> hbase shell or via hbase admin. These commands are marked for experts only in 
> the hbase shell help too.  However, chaos monkey doesn't care.
> If we can prevent from assigning such regions in a bad time, it would make 
> things a little safer.



[jira] [Commented] (HBASE-9420) Math.max() on syncedTillHere lacks synchronization

2013-09-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775631#comment-13775631
 ] 

Enis Soztutar commented on HBASE-9420:
--

From my understanding, without this synchronization syncedTillHere might go 
down; but does it affect semantics? At worst it will cause an extra sync, 
no? 

> Math.max() on syncedTillHere lacks synchronization
> --
>
> Key: HBASE-9420
> URL: https://issues.apache.org/jira/browse/HBASE-9420
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 9420-v1.txt, 9420-v2.txt
>
>
> In FSHlog#syncer(), around line 1080:
> {code}
>   this.syncedTillHere = Math.max(this.syncedTillHere, doneUpto);
> {code}
> Assignment to syncedTillHere after computing max value is not protected by 
> proper synchronization.



[jira] [Updated] (HBASE-9602) Cluster can't start when log splitting at startup time and the master's web UI is refreshed a few times

2013-09-23 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-9602:
--

Summary: Cluster can't start when log splitting at startup time and the 
master's web UI is refreshed a few times  (was: Cannot show the master's web UI 
when splitting logs at start time)

Changing the title to reflect that it's not just a web UI problem. Any sizeable 
amount of splitting coupled with refreshing the web page will prevent the 
master from handling log splitting because all the handlers are full waiting on 
the Namespace region.

> Cluster can't start when log splitting at startup time and the master's web 
> UI is refreshed a few times
> ---
>
> Key: HBASE-9602
> URL: https://issues.apache.org/jira/browse/HBASE-9602
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.96.0
>Reporter: Jean-Daniel Cryans
>Assignee: stack
>Priority: Critical
> Fix For: 0.98.0, 0.96.0
>
> Attachments: HBASE-9602.jstack.rtf
>
>
> It looks like we cannot show the master's web ui at start time when there are 
> logs to split because we can't reach the namespace regions.
> So it means that you can't see how things are progressing without tailing the 
> log while waiting on your cluster to boot up. This wasn't the case in 0.94.
> See this jstack:
> {noformat}
> "606214580@qtp-2001431298-3" prio=10 tid=0x7f6ac804 nid=0x7b1 in 
> Object.wait() [0x7f6aa82bf000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0xbc0c1460> (a 
> org.apache.hadoop.hbase.ipc.RpcClient$Call)
>   at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1416)
>   - locked <0xbc0c1460> (a 
> org.apache.hadoop.hbase.ipc.RpcClient$Call)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1634)
>   at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1691)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterAdminProtos$MasterAdminService$BlockingStub.listTableDescriptorsByNamespace(MasterAdminProtos.java:35031)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$5.listTableDescriptorsByNamespace(HConnectionManager.java:2181)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin$22.call(HBaseAdmin.java:2265)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin$22.call(HBaseAdmin.java:2262)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:116)
>   - locked <0xc09baf20> (a 
> org.apache.hadoop.hbase.client.RpcRetryingCaller)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:94)
>   - locked <0xc09baf20> (a 
> org.apache.hadoop.hbase.client.RpcRetryingCaller)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3155)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.listTableDescriptorsByNamespace(HBaseAdmin.java:2261)
>   at 
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmplImpl.__jamon_innerUnit__catalogTables(MasterStatusTmplImpl.java:461)
>   at 
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmplImpl.renderNoFlush(MasterStatusTmplImpl.java:270)
>   at 
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.renderNoFlush(MasterStatusTmpl.java:382)
>   at 
> org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.render(MasterStatusTmpl.java:372)
>   at 
> org.apache.hadoop.hbase.master.MasterStatusServlet.doGet(MasterStatusServlet.java:95)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:850)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handl

[jira] [Resolved] (HBASE-6515) Setting request size with protobuf

2013-09-23 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha resolved HBASE-6515.


Resolution: Duplicate

> Setting request size with protobuf
> --
>
> Key: HBASE-6515
> URL: https://issues.apache.org/jira/browse/HBASE-6515
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC, Replication
>Affects Versions: 0.95.2
>Reporter: Himanshu Vashishtha
>Priority: Critical
>
> While running replication on upstream code, I am hitting  the size-limit 
> exception while sending WALEdits to a different cluster.
> {code}
> com.google.protobuf.InvalidProtocolBufferException: IPC server unable to read 
> call parameters: Protocol message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> {code}
> Do we have a property to set some max size or something?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9639) SecureBulkLoad dispatches file load requests to all Regions

2013-09-23 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9639:


Status: Patch Available  (was: Open)

> SecureBulkLoad dispatches file load requests to all Regions
> ---
>
> Key: HBASE-9639
> URL: https://issues.apache.org/jira/browse/HBASE-9639
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Coprocessors
>Affects Versions: 0.95.2
> Environment: Hadoop2, Kerberos 
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBASE-9639.00.patch
>
>
> When running a bulk load on a secure environment and loading data into the 
> first region of a table, the request to load the HFile set is dispatched to 
> all Regions for the table. This is reproduced consistently by running 
> IntegrationTestBulkLoad on a secure cluster. The load fails with an exception 
> that looks like:
> {noformat}
> 2013-08-30 07:37:22,993 INFO  [main] mapreduce.LoadIncrementalHFiles: Split 
> occured while grouping HFiles, retry attempt 1 with 3 files remaining to 
> group or split
> 2013-08-30 07:37:22,999 ERROR [main] mapreduce.LoadIncrementalHFiles: 
> IOException during splitting
> java.util.concurrent.ExecutionException: java.io.FileNotFoundException: File 
> does not exist: 
> /user/hbase/test-data/c45ddfe9-ee30-4d32-8042-928db12b1cee/IntegrationTestBulkLoad-0/L/bf41ea13997b4e228d05e67ba7b1b686
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
> at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:51)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1489)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1438)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1418)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1392)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:438)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:269)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59566)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplitPhase(LoadIncrementalHFiles.java:403)
> at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:284)
> at org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.runLinkedListMRJob(IntegrationTestBulkLoad.java:200)
> at org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.testBulkLoad(IntegrationTestBulkLoad.java:133)
> {noformat}



[jira] [Created] (HBASE-9641) We should have a way to provide table level based ACL.

2013-09-23 Thread Jean-Marc Spaggiari (JIRA)
Jean-Marc Spaggiari created HBASE-9641:
--

 Summary: We should have a way to provide table level based ACL.
 Key: HBASE-9641
 URL: https://issues.apache.org/jira/browse/HBASE-9641
 Project: HBase
  Issue Type: Improvement
  Components: security
Reporter: Jean-Marc Spaggiari
Priority: Minor


Today we can grant rights to users based on the user / table / column family / 
column qualifier. When there are thousands of users and you want to add a new 
table, it takes a long time to add everyone back to the table.

We should be able to provide a table-based ACL. Something like "grant_table 
  [  [  ]]" to give specific rights 
to a table for ALL the users.



[jira] [Updated] (HBASE-9639) SecureBulkLoad dispatches file load requests to all Regions

2013-09-23 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9639:


Attachment: HBASE-9639.00.patch

> SecureBulkLoad dispatches file load requests to all Regions
> ---
>
> Key: HBASE-9639
> URL: https://issues.apache.org/jira/browse/HBASE-9639
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Coprocessors
>Affects Versions: 0.95.2
> Environment: Hadoop2, Kerberos 
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.98.0, 0.96.1
>
> Attachments: HBASE-9639.00.patch
>
>



[jira] [Updated] (HBASE-9420) Math.max() on syncedTillHere lacks synchronization

2013-09-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9420:
--

Attachment: 9420-v2.txt

Patch v2 uses flushLock for synchronization.

> Math.max() on syncedTillHere lacks synchronization
> --
>
> Key: HBASE-9420
> URL: https://issues.apache.org/jira/browse/HBASE-9420
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 9420-v1.txt, 9420-v2.txt
>
>
> In FSHlog#syncer(), around line 1080:
> {code}
>   this.syncedTillHere = Math.max(this.syncedTillHere, doneUpto);
> {code}
> Assignment to syncedTillHere after computing max value is not protected by 
> proper synchronization.
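The lost-update race described in the quoted report can be sketched in plain JDK code. This is only an illustration, not the actual HBase patch (which, per the comment above, uses flushLock): the unsynchronized `x = Math.max(x, v)` is a read-modify-write, so two threads can interleave and the larger value can be overwritten by a stale one. An `AtomicLong` with `accumulateAndGet` makes the max update atomic:

```java
import java.util.concurrent.atomic.AtomicLong;

public class MaxUpdateSketch {
    // Watermark analogous to syncedTillHere; updated atomically.
    private final AtomicLong syncedTillHere = new AtomicLong(0);

    // Atomically advance the watermark; it never moves backwards,
    // even under concurrent calls from multiple syncer threads.
    long advanceTo(long doneUpto) {
        return syncedTillHere.accumulateAndGet(doneUpto, Math::max);
    }

    public static void main(String[] args) {
        MaxUpdateSketch s = new MaxUpdateSketch();
        s.advanceTo(10);
        s.advanceTo(5); // a lower value must not regress the watermark
        System.out.println(s.syncedTillHere.get()); // prints 10
    }
}
```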



[jira] [Commented] (HBASE-6201) HBase integration/system tests

2013-09-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774754#comment-13774754
 ] 

stack commented on HBASE-6201:
--

Can this be closed?  One subtask is open but it seems that this issue took on 
what the subtask was all about.

> HBase integration/system tests
> --
>
> Key: HBASE-6201
> URL: https://issues.apache.org/jira/browse/HBASE-6201
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.95.2
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>
> Integration and general system tests have been discussed previously, and the 
> conclusion is that we need to unify how we do "release candidate" testing 
> (HBASE-6091).
> In this issue, I would like to discuss and agree on a general plan, and open 
> subtickets for execution so that we can carry out most of the tests in 
> HBASE-6091 automatically. 
> Initially, here is what I have in mind: 
> 1. Create hbase-it (or hbase-tests) containing forward port of HBASE-4454 
> (without any tests). This will allow integration test to be run with
>  {code}
>   mvn verify
>  {code}
> 2. Add ability to run all integration/system tests on a given cluster. Something 
> like: 
>  {code}
>   mvn verify -Dconf=/etc/hbase/conf/
>  {code}
> should run the test suite on the given cluster. (Right now we can launch some 
> of the tests (TestAcidGuarantees) from command line). Most of the system 
> tests will be client side, and interface with the cluster through public 
> APIs. We need a tool on top of MiniHBaseCluster or improve 
> HBaseTestingUtility, so that tests can interface with the mini cluster or the 
> actual cluster uniformly.
> 3. Port candidate unit tests to the integration tests module. Some of the 
> candidates are: 
>  - TestAcidGuarantees / TestAtomicOperation
>  - TestRegionBalancing (HBASE-6053)
>  - TestFullLogReconstruction
>  - TestMasterFailover
>  - TestImportExport
>  - TestMultiVersions / TestKeepDeletes
>  - TestFromClientSide
>  - TestShell and src/test/ruby
>  - TestRollingRestart
>  - Test**OnCluster
>  - Balancer tests
> These tests should continue to be run as unit tests w/o any change in 
> semantics. However, given an actual cluster, they should use that, instead of 
> spinning a mini cluster.  
> 4. Add more tests, especially, long running ingestion tests (goraci, BigTop's 
> TestLoadAndVerify, LoadTestTool), and chaos monkey style fault tests. 
> All suggestions welcome. 



[jira] [Commented] (HBASE-9606) Apply small scan to meta scan where rowLimit is low

2013-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774825#comment-13774825
 ] 

Hudson commented on HBASE-9606:
---

FAILURE: Integrated in HBase-TRUNK #4550 (See 
[https://builds.apache.org/job/HBase-TRUNK/4550/])
HBASE-9606 Apply small scan to meta scan where rowLimit is low (tedyu: rev 
1525634)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java


> Apply small scan to meta scan where rowLimit is low
> ---
>
> Key: HBASE-9606
> URL: https://issues.apache.org/jira/browse/HBASE-9606
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 9606-v2.txt, small-v3.txt
>
>
> HBASE-9488 added the feature for small scan where RPC calls are reduced.
> We can apply small scan to MetaScanner#metaScan() where rowLimit is low.



[jira] [Commented] (HBASE-9593) Region server left in online servers list forever if it went down after registering to master and before creating ephemeral node

2013-09-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774819#comment-13774819
 ] 

Ted Yu commented on HBASE-9593:
---

{code}
+@Category(LargeTests.class)
+public class TestRSKilledWhenInitializing {
{code}
Add brief description for the new test.
{code}
+  Thread.sleep(100);
+} catch (InterruptedException e1) {
+  // TODO Auto-generated catch block
+  e1.printStackTrace();
{code}
If you use Threads.sleep(), you don't need to deal with InterruptedException 
yourself.
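The pattern behind a utility like Threads.sleep can be sketched with a hypothetical helper (SleepUtil here is illustrative, not the HBase class): catch InterruptedException once, restore the thread's interrupt status so callers can still observe it, and spare every call site the try/catch boilerplate:

```java
public class SleepUtil {
    // Sleep without forcing callers to handle InterruptedException.
    // If interrupted, re-assert the interrupt flag and return early.
    static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            // Restore the interrupt so the caller can still detect it.
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt();       // pending interrupt
        sleepQuietly(10);                         // returns immediately, flag restored
        System.out.println(Thread.interrupted()); // prints true
    }
}
```

Swallowing the exception without restoring the flag would silently hide interruption from the rest of the call stack, which is why the re-interrupt matters.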

> Region server left in online servers list forever if it went down after 
> registering to master and before creating ephemeral node
> 
>
> Key: HBASE-9593
> URL: https://issues.apache.org/jira/browse/HBASE-9593
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.94.11
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.0, 0.94.13, 0.96.1
>
> Attachments: HBASE-9593.patch, HBASE-9593_v2.patch
>
>
> In some of our tests we found that a regionserver is always shown as online 
> in the master UI even though it is actually dead.
> If the region server goes down between the following two steps, it remains 
> in the master's online servers list forever:
> 1) register to master
> 2) create ephemeral znode
> Since there is no notification from ZooKeeper, the master never removes the 
> expired server from the online servers list.
> Assignments will fail if the RS is selected as the destination server.
> In some cases ROOT or META also won't be assigned if the RS is repeatedly 
> selected, and we need to wait for the timeout each time.
> Here are the logs:
> 1) HOST-10-18-40-153 is registered to master
> {code}
> 2013-09-19 19:47:41,123 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> STARTUP: Server HOST-10-18-40-153,61020,1379600260255 came back up, removed 
> it from the dead servers list
> 2013-09-19 19:47:41,123 INFO org.apache.hadoop.hbase.master.ServerManager: 
> Registering server=HOST-10-18-40-153,61020,1379600260255
> {code}
> {code}
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to master at 
> HOST-10-18-40-153/10.18.40.153:61000
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at 
> HOST-10-18-40-153,61000,1379600055284 that we are up with port=61020, 
> startcode=1379600260255
> {code}
> 2) Terminated before creating ephemeral node.
> {code}
> Thu Sep 19 19:47:41 IST 2013 Terminating regionserver
> {code}
> 3) The RS can be selected for assignment and they will fail.
> {code}
> 2013-09-19 19:47:54,049 WARN 
> org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of 
> -ROOT-,,0.70236052 to HOST-10-18-40-153,61020,1379600260255, trying to assign 
> elsewhere instead; retry=0
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:390)
>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:436)
>   at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1127)
>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
>   at $Proxy15.openRegion(Unknown Source)
>   at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:533)
>   at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1734)
>   at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1431)
>   at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1406)
>   at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1401)
>   at org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:2374)
>   at org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRoot(MetaServerShutdownHandler.java:136)
>   at org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRootWithRetries(MetaServerShutdownHandler.java:160)
>   at org.apache.hadoop.hbase.master.handler.MetaServerSh
