[jira] [Created] (HBASE-10909) Abstract out ZooKeeper usage in HBase

2014-04-04 Thread Mikhail Antonov (JIRA)
Mikhail Antonov created HBASE-10909:
---

 Summary: Abstract out ZooKeeper usage in HBase
 Key: HBASE-10909
 URL: https://issues.apache.org/jira/browse/HBASE-10909
 Project: HBase
  Issue Type: Umbrella
  Components: Zookeeper
Reporter: Mikhail Antonov
Assignee: Mikhail Antonov






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10866) Decouple HLogSplitterHandler from ZooKeeper

2014-04-04 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10866:


Issue Type: Sub-task  (was: Improvement)
Parent: HBASE-10909

> Decouple HLogSplitterHandler from ZooKeeper
> ---
>
> Key: HBASE-10866
> URL: https://issues.apache.org/jira/browse/HBASE-10866
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Zookeeper
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Attachments: HBASE-10866.patch, HBASE-10866.patch, HBASE-10866.patch, 
> HBASE-10866.patch, HBaseConsensus.pdf
>
>
> As some sort of follow-up or initial step towards HBASE-10296...
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZKUtil, ZooKeeperWatcher and listeners).
> I'd like to propose a series of patches to help better abstract out ZooKeeper 
> (and then help develop consensus APIs). 
> Here is the first version of the patch for initial review (I'm then planning to 
> work on other handlers in the regionserver, and perhaps to start abstracting 
> the listeners).
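> To make the intent concrete, here is a minimal sketch of the kind of seam meant 
> here; the interface name and methods below are hypothetical illustrations, not 
> the API actually proposed in the patch:
> {code}
> // Hypothetical seam between the log-split handler and the coordination
> // engine: the handler talks only to this interface, and a ZooKeeper-backed
> // implementation (wrapping ZKUtil/ZooKeeperWatcher) lives behind it.
> public interface SplitLogCoordination {
>   /** Atomically claim the given split task for this worker, if still unowned. */
>   boolean tryOwnTask(String taskName, String workerName);
>   /** Report progress so the coordination engine does not reassign the task. */
>   void heartbeat(String taskName, String workerName);
>   /** Mark the task done (success or failure) so the master can act on it. */
>   void completeTask(String taskName, boolean success);
> }
> {code}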



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10909) Abstract out ZooKeeper usage in HBase

2014-04-04 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10909:


Description: 
(Original discussion started in the comments for HBASE-10866)

As some sort of follow-up or initial step towards HBASE-10296.
Whatever consensus algorithm/library may be chosen, perhaps one of the first 
practical steps towards this goal would be to better abstract the ZK-related API 
and details, which are now spread throughout the codebase (mostly leaked through 
ZKUtil, ZooKeeperWatcher and listeners).

This JIRA is an umbrella for the relevant subtasks.

> Abstract out ZooKeeper usage in HBase
> -
>
> Key: HBASE-10909
> URL: https://issues.apache.org/jira/browse/HBASE-10909
> Project: HBase
>  Issue Type: Umbrella
>  Components: Zookeeper
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>
> (Original discussion started in the comments for HBASE-10866)
> As some sort of follow-up or initial step towards HBASE-10296.
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZKUtil, ZooKeeperWatcher and listeners).
> This JIRA is an umbrella for the relevant subtasks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10118) Major compact keeps deletes with future timestamps

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959720#comment-13959720
 ] 

Hudson commented on HBASE-10118:


FAILURE: Integrated in HBase-0.94-security #456 (See 
[https://builds.apache.org/job/HBase-0.94-security/456/])
HBASE-10118 Major compact keeps deletes with future timestamps. (Liu Shaohui) 
(larsh: rev 1584514)
* /hbase/branches/0.94/src/docbkx/book.xml
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> Major compact keeps deletes with future timestamps
> --
>
> Key: HBASE-10118
> URL: https://issues.apache.org/jira/browse/HBASE-10118
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Deletes, regionserver
>Reporter: Max Lapan
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3
>
> Attachments: HBASE-10118-0.94-v1.diff, HBASE-10118-trunk-v1.diff, 
> HBASE-10118-trunk-v2.diff, HBASE-10118-trunk-v3.diff
>
>
> Hello!
> During migration from HBase 0.90.6 to 0.94.6 we found a change in behaviour in 
> how major compaction handles delete markers with timestamps in the future. 
> Before HBASE-4721, major compaction purged deletes regardless of their timestamp. 
> Newer versions keep them in the HFile until the timestamp is reached.
> I guess this happened due to the new check in ScanQueryMatcher: 
> {{(EnvironmentEdgeManager.currentTimeMillis() - timestamp) <= 
> timeToPurgeDeletes}}.
> This can be worked around by specifying a large negative value in the 
> {{hbase.hstore.time.to.purge.deletes}} option, but, unfortunately, negative 
> values are pulled up to zero by a Math.max in HStore.java.
> Maybe we are trying to do something weird by specifying a delete timestamp in 
> the future, but HBASE-4721 definitely breaks old behaviour we rely on.
> Steps to reproduce this:
> {code}
> put 'test', 'delmeRow', 'delme:something', 'hello'
> flush 'test'
> delete 'test', 'delmeRow', 'delme:something', 1394161431061
> flush 'test'
> major_compact 'test'
> {code}
> Before major_compact we have two hfiles with the following:
> {code}
> first:
> K: delmeRow/delme:something/1384161431061/Put/vlen=5/ts=0
> second:
> K: delmeRow/delme:something/1394161431061/DeleteColumn/vlen=0/ts=0
> {code}
> After major compact we get the following:
> {code}
> K: delmeRow/delme:something/1394161431061/DeleteColumn/vlen=0/ts=0
> {code}
> In our installation, we resolved this by removing the Math.max and setting 
> hbase.hstore.time.to.purge.deletes to Integer.MIN_VALUE, which purges the delete 
> markers, and it looks like a solution. But maybe there is a better approach.
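> For reference, the clamping described above amounts to something like the 
> following sketch (illustrative, not the exact HStore source):
> {code}
> // Negative values configured for hbase.hstore.time.to.purge.deletes are
> // clamped to 0 here, so they cannot be used to force-purge future deletes.
> long configured = conf.getLong("hbase.hstore.time.to.purge.deletes", 0);
> this.timeToPurgeDeletes = Math.max(configured, 0);
> {code}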



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10118) Major compact keeps deletes with future timestamps

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959725#comment-13959725
 ] 

Hudson commented on HBASE-10118:


FAILURE: Integrated in HBase-0.94-JDK7 #103 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/103/])
HBASE-10118 Major compact keeps deletes with future timestamps. (Liu Shaohui) 
(larsh: rev 1584514)
* /hbase/branches/0.94/src/docbkx/book.xml
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> Major compact keeps deletes with future timestamps
> --
>
> Key: HBASE-10118
> URL: https://issues.apache.org/jira/browse/HBASE-10118
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Deletes, regionserver
>Reporter: Max Lapan
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3
>
> Attachments: HBASE-10118-0.94-v1.diff, HBASE-10118-trunk-v1.diff, 
> HBASE-10118-trunk-v2.diff, HBASE-10118-trunk-v3.diff
>
>
> Hello!
> During migration from HBase 0.90.6 to 0.94.6 we found a change in behaviour in 
> how major compaction handles delete markers with timestamps in the future. 
> Before HBASE-4721, major compaction purged deletes regardless of their timestamp. 
> Newer versions keep them in the HFile until the timestamp is reached.
> I guess this happened due to the new check in ScanQueryMatcher: 
> {{(EnvironmentEdgeManager.currentTimeMillis() - timestamp) <= 
> timeToPurgeDeletes}}.
> This can be worked around by specifying a large negative value in the 
> {{hbase.hstore.time.to.purge.deletes}} option, but, unfortunately, negative 
> values are pulled up to zero by a Math.max in HStore.java.
> Maybe we are trying to do something weird by specifying a delete timestamp in 
> the future, but HBASE-4721 definitely breaks old behaviour we rely on.
> Steps to reproduce this:
> {code}
> put 'test', 'delmeRow', 'delme:something', 'hello'
> flush 'test'
> delete 'test', 'delmeRow', 'delme:something', 1394161431061
> flush 'test'
> major_compact 'test'
> {code}
> Before major_compact we have two hfiles with the following:
> {code}
> first:
> K: delmeRow/delme:something/1384161431061/Put/vlen=5/ts=0
> second:
> K: delmeRow/delme:something/1394161431061/DeleteColumn/vlen=0/ts=0
> {code}
> After major compact we get the following:
> {code}
> K: delmeRow/delme:something/1394161431061/DeleteColumn/vlen=0/ts=0
> {code}
> In our installation, we resolved this by removing the Math.max and setting 
> hbase.hstore.time.to.purge.deletes to Integer.MIN_VALUE, which purges the delete 
> markers, and it looks like a solution. But maybe there is a better approach.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10903) HBASE-10740 regression; cannot pass commands for zk to run

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959728#comment-13959728
 ] 

Hudson commented on HBASE-10903:


SUCCESS: Integrated in hbase-0.96 #377 (See 
[https://builds.apache.org/job/hbase-0.96/377/])
HBASE-10903 HBASE-10740 regression; cannot pass commands for zk to run (stack: 
rev 1584424)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServer.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperMainServer.java


> HBASE-10740 regression; cannot pass commands for zk to run
> --
>
> Key: HBASE-10903
> URL: https://issues.apache.org/jira/browse/HBASE-10903
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Affects Versions: 0.98.1, 0.99.0
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0, 0.98.2, 0.96.3
>
> Attachments: 10903.txt, 10903v2.txt
>
>
> We can't do this:
> {code}
> ./bin/hbase zkcli deleteall 
> /hbase/rs/c2022.halxg.cloudera.com,16020,1396502726715
> {code}
> after upgrading to ZooKeeper 3.4.6.  It works if I put back 3.4.5.
> See below, where the only difference is the zk jar:
> {code}
> [stack@c2022 hbase-0.99.0-SNAPSHOT]$ ~/bin/java/bin/java -cp 
> ~/hbase-0.96.1.1-hadoop2/lib/zookeeper-3.4.5.jar:lib/slf4j-log4j12-1.6.4.jar:lib/slf4j-api-1.6.4.jar:lib/log4j-1.2.17.jar
>   org.apache.zookeeper.ZooKeeperMain -server c2020:2181 ls  "/hbase/rs"
> Connecting to c2020:2181
> log4j:WARN No appenders could be found for logger 
> (org.apache.zookeeper.ZooKeeper).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> WATCHER::
> WatchedEvent state:SyncConnected type:None path:null
> [c2020.halxg.cloudera.com,16020,1396482186194, 
> c2021.halxg.cloudera.com,16020,1396499398203, 
> c2023.halxg.cloudera.com,16020,1396498834473, 
> c2025.halxg.cloudera.com,16020,1396482188110, 
> c2022.halxg.cloudera.com,16020,1396502726715, 
> c2024.halxg.cloudera.com,16020,1396482188280]
> [stack@c2022 hbase-0.99.0-SNAPSHOT]$ ~/bin/java/bin/java -cp 
> lib/zookeeper-3.4.6.jar:lib/slf4j-log4j12-1.6.4.jar:lib/slf4j-api-1.6.4.jar:lib/log4j-1.2.17.jar
>   org.apache.zookeeper.ZooKeeperMain -server c2020:2181 ls  "/hbase/rs"
> Connecting to c2020:2181
> log4j:WARN No appenders could be found for logger 
> (org.apache.zookeeper.ZooKeeper).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> {code}
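> For context, passing a command through to ZooKeeperMain looks roughly like the 
> sketch below; the parseServerFromConfig helper is hypothetical and this is not 
> the actual ZooKeeperMainServer code:
> {code}
> public static void main(String[] args) throws Exception {
>   String server = parseServerFromConfig();  // hypothetical: quorum from hbase-site.xml
>   String[] zkArgs = new String[args.length + 2];
>   zkArgs[0] = "-server";
>   zkArgs[1] = server;
>   // Everything after the options is the command to run, e.g. "deleteall /hbase/rs/...".
>   System.arraycopy(args, 0, zkArgs, 2, args.length);
>   org.apache.zookeeper.ZooKeeperMain.main(zkArgs);
> }
> {code}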



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10740) Upgrade zookeeper to 3.4.6 release

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959727#comment-13959727
 ] 

Hudson commented on HBASE-10740:


SUCCESS: Integrated in hbase-0.96 #377 (See 
[https://builds.apache.org/job/hbase-0.96/377/])
HBASE-10903 HBASE-10740 regression; cannot pass commands for zk to run (stack: 
rev 1584424)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServer.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperMainServer.java


> Upgrade zookeeper to 3.4.6 release
> --
>
> Key: HBASE-10740
> URL: https://issues.apache.org/jira/browse/HBASE-10740
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.98.1, 0.99.0
>
> Attachments: 10740-v1.txt
>
>
> ZooKeeper 3.4.6 has been released.
> This JIRA upgrades the ZooKeeper dependency to 3.4.6.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10909) Abstract out ZooKeeper usage in HBase

2014-04-04 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10909:


Attachment: HBaseConsensus.pdf

> Abstract out ZooKeeper usage in HBase
> -
>
> Key: HBASE-10909
> URL: https://issues.apache.org/jira/browse/HBASE-10909
> Project: HBase
>  Issue Type: Umbrella
>  Components: Zookeeper
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Attachments: HBaseConsensus.pdf
>
>
> (Original discussion started in the comments for HBASE-10866)
> As some sort of follow-up or initial step towards HBASE-10296.
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZKUtil, ZooKeeperWatcher and listeners).
> This JIRA is an umbrella for the relevant subtasks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10909) Abstract out ZooKeeper usage in HBase

2014-04-04 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10909:


Attachment: (was: HBaseConsensus.pdf)

> Abstract out ZooKeeper usage in HBase
> -
>
> Key: HBASE-10909
> URL: https://issues.apache.org/jira/browse/HBASE-10909
> Project: HBase
>  Issue Type: Umbrella
>  Components: Zookeeper
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>
> (Original discussion started in the comments for HBASE-10866)
> As some sort of follow-up or initial step towards HBASE-10296.
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZKUtil, ZooKeeperWatcher and listeners).
> This JIRA is an umbrella for the relevant subtasks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10909) Abstract out ZooKeeper usage in HBase

2014-04-04 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-10909:


Attachment: HBaseConsensus.pdf

> Abstract out ZooKeeper usage in HBase
> -
>
> Key: HBASE-10909
> URL: https://issues.apache.org/jira/browse/HBASE-10909
> Project: HBase
>  Issue Type: Umbrella
>  Components: Zookeeper
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Attachments: HBaseConsensus.pdf
>
>
> (Original discussion started in the comments for HBASE-10866)
> As some sort of follow-up or initial step towards HBASE-10296.
> Whatever consensus algorithm/library may be chosen, perhaps one of the first 
> practical steps towards this goal would be to better abstract the ZK-related API 
> and details, which are now spread throughout the codebase (mostly leaked through 
> ZKUtil, ZooKeeperWatcher and listeners).
> This JIRA is an umbrella for the relevant subtasks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10118) Major compact keeps deletes with future timestamps

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959761#comment-13959761
 ] 

Hudson commented on HBASE-10118:


FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #64 (See 
[https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/64/])
HBASE-10118 Major compact keeps deletes with future timestamps. (Liu Shaohui) 
(larsh: rev 1584514)
* /hbase/branches/0.94/src/docbkx/book.xml
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> Major compact keeps deletes with future timestamps
> --
>
> Key: HBASE-10118
> URL: https://issues.apache.org/jira/browse/HBASE-10118
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Deletes, regionserver
>Reporter: Max Lapan
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3
>
> Attachments: HBASE-10118-0.94-v1.diff, HBASE-10118-trunk-v1.diff, 
> HBASE-10118-trunk-v2.diff, HBASE-10118-trunk-v3.diff
>
>
> Hello!
> During migration from HBase 0.90.6 to 0.94.6 we found a change in behaviour in 
> how major compaction handles delete markers with timestamps in the future. 
> Before HBASE-4721, major compaction purged deletes regardless of their timestamp. 
> Newer versions keep them in the HFile until the timestamp is reached.
> I guess this happened due to the new check in ScanQueryMatcher: 
> {{(EnvironmentEdgeManager.currentTimeMillis() - timestamp) <= 
> timeToPurgeDeletes}}.
> This can be worked around by specifying a large negative value in the 
> {{hbase.hstore.time.to.purge.deletes}} option, but, unfortunately, negative 
> values are pulled up to zero by a Math.max in HStore.java.
> Maybe we are trying to do something weird by specifying a delete timestamp in 
> the future, but HBASE-4721 definitely breaks old behaviour we rely on.
> Steps to reproduce this:
> {code}
> put 'test', 'delmeRow', 'delme:something', 'hello'
> flush 'test'
> delete 'test', 'delmeRow', 'delme:something', 1394161431061
> flush 'test'
> major_compact 'test'
> {code}
> Before major_compact we have two hfiles with the following:
> {code}
> first:
> K: delmeRow/delme:something/1384161431061/Put/vlen=5/ts=0
> second:
> K: delmeRow/delme:something/1394161431061/DeleteColumn/vlen=0/ts=0
> {code}
> After major compact we get the following:
> {code}
> K: delmeRow/delme:something/1394161431061/DeleteColumn/vlen=0/ts=0
> {code}
> In our installation, we resolved this by removing the Math.max and setting 
> hbase.hstore.time.to.purge.deletes to Integer.MIN_VALUE, which purges the delete 
> markers, and it looks like a solution. But maybe there is a better approach.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10118) Major compact keeps deletes with future timestamps

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959776#comment-13959776
 ] 

Hudson commented on HBASE-10118:


FAILURE: Integrated in HBase-0.94 #1337 (See 
[https://builds.apache.org/job/HBase-0.94/1337/])
HBASE-10118 Major compact keeps deletes with future timestamps. (Liu Shaohui) 
(larsh: rev 1584514)
* /hbase/branches/0.94/src/docbkx/book.xml
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> Major compact keeps deletes with future timestamps
> --
>
> Key: HBASE-10118
> URL: https://issues.apache.org/jira/browse/HBASE-10118
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Deletes, regionserver
>Reporter: Max Lapan
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3
>
> Attachments: HBASE-10118-0.94-v1.diff, HBASE-10118-trunk-v1.diff, 
> HBASE-10118-trunk-v2.diff, HBASE-10118-trunk-v3.diff
>
>
> Hello!
> During migration from HBase 0.90.6 to 0.94.6 we found a change in behaviour in 
> how major compaction handles delete markers with timestamps in the future. 
> Before HBASE-4721, major compaction purged deletes regardless of their timestamp. 
> Newer versions keep them in the HFile until the timestamp is reached.
> I guess this happened due to the new check in ScanQueryMatcher: 
> {{(EnvironmentEdgeManager.currentTimeMillis() - timestamp) <= 
> timeToPurgeDeletes}}.
> This can be worked around by specifying a large negative value in the 
> {{hbase.hstore.time.to.purge.deletes}} option, but, unfortunately, negative 
> values are pulled up to zero by a Math.max in HStore.java.
> Maybe we are trying to do something weird by specifying a delete timestamp in 
> the future, but HBASE-4721 definitely breaks old behaviour we rely on.
> Steps to reproduce this:
> {code}
> put 'test', 'delmeRow', 'delme:something', 'hello'
> flush 'test'
> delete 'test', 'delmeRow', 'delme:something', 1394161431061
> flush 'test'
> major_compact 'test'
> {code}
> Before major_compact we have two hfiles with the following:
> {code}
> first:
> K: delmeRow/delme:something/1384161431061/Put/vlen=5/ts=0
> second:
> K: delmeRow/delme:something/1394161431061/DeleteColumn/vlen=0/ts=0
> {code}
> After major compact we get the following:
> {code}
> K: delmeRow/delme:something/1394161431061/DeleteColumn/vlen=0/ts=0
> {code}
> In our installation, we resolved this by removing the Math.max and setting 
> hbase.hstore.time.to.purge.deletes to Integer.MIN_VALUE, which purges the delete 
> markers, and it looks like a solution. But maybe there is a better approach.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10908) On CompactionRequest, reader can be null on recalculateSize

2014-04-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959788#comment-13959788
 ] 

Hadoop QA commented on HBASE-10908:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638633/HBASE-10908.1.patch
  against trunk revision .
  ATTACHMENT ID: 12638633

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9200//console

This message is automatically generated.

> On CompactionRequest, reader can be null on recalculateSize
> ---
>
> Key: HBASE-10908
> URL: https://issues.apache.org/jira/browse/HBASE-10908
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.96.0
>Reporter: Rekha Joshi
>Assignee: Rekha Joshi
>Priority: Minor
> Attachments: HBASE-10908.1.patch
>
>
> Comment from Honghua Feng 
> {code}
> private void recalculateSize() {
>   long sz = 0;
>   for (StoreFile sf : this.filesToCompact) {
>     // sf.getReader() can return null here (e.g. a closed or not-yet-opened
>     // reader), which makes this line throw a NullPointerException.
>     sz += sf.getReader().length();
>   }
>   this.totalSize = sz;
> }
> {code}
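> A null-guarded variant might look like the sketch below; this only illustrates 
> the guard being asked for, not the committed fix:
> {code}
> private void recalculateSize() {
>   long sz = 0;
>   for (StoreFile sf : this.filesToCompact) {
>     StoreFile.Reader r = sf.getReader();
>     // Skip files whose reader is not open instead of throwing an NPE.
>     sz += (r == null) ? 0 : r.length();
>   }
>   this.totalSize = sz;
> }
> {code}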



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10899) Multiple versions of ACLs in memstore/before compaction needs to handled

2014-04-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959842#comment-13959842
 ] 

Hadoop QA commented on HBASE-10899:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638635/HBASE-10899_1.patch
  against trunk revision .
  ATTACHMENT ID: 12638635

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestImportExport.testImport94Table(TestImportExport.java:230)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9199//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9199//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9199//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9199//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9199//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9199//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9199//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9199//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9199//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9199//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9199//console

This message is automatically generated.

> Multiple versions of ACLs in memstore/before compaction needs to handled
> 
>
> Key: HBASE-10899
> URL: https://issues.apache.org/jira/browse/HBASE-10899
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.98.2
>
> Attachments: HBASE-10899_1.patch
>
>
> Similar to HBASE-10854, we need to handle the different versions while they 
> are in the memstore and/or before compaction happens, when the max 
> versions for the CF == 1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10908) On CompactionRequest, reader can be null on recalculateSize

2014-04-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959904#comment-13959904
 ] 

Ted Yu commented on HBASE-10908:


This has been covered in patch v2 of that JIRA. 

> On CompactionRequest, reader can be null on recalculateSize
> ---
>
> Key: HBASE-10908
> URL: https://issues.apache.org/jira/browse/HBASE-10908
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.96.0
>Reporter: Rekha Joshi
>Assignee: Rekha Joshi
>Priority: Minor
> Attachments: HBASE-10908.1.patch
>
>
> Comment from Honghua Feng 
> {code}
> private void recalculateSize() {
>   long sz = 0;
>   for (StoreFile sf : this.filesToCompact) {
>     sz += sf.getReader().length();
>   }
>   this.totalSize = sz;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10881) Support reverse scan in thrift2

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959909#comment-13959909
 ] 

Hudson commented on HBASE-10881:


ABORTED: Integrated in HBase-TRUNK #5063 (See 
[https://builds.apache.org/job/HBase-TRUNK/5063/])
HBASE-10881 Support reverse scan in thrift2 (Liu Shaohui) (liangxie: rev 
1584509)
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TScan.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftUtilities.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TScan.java
* 
/hbase/trunk/hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift
* 
/hbase/trunk/hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift
* 
/hbase/trunk/hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java
* 
/hbase/trunk/hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandler.java


> Support reverse scan in thrift2
> ---
>
> Key: HBASE-10881
> URL: https://issues.apache.org/jira/browse/HBASE-10881
> Project: HBase
>  Issue Type: New Feature
>  Components: Thrift
>Affects Versions: 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10881-trunk-v1.diff, HBASE-10881-trunk-v2.diff
>
>
> Support reverse scan in thrift2.
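> A minimal client-side usage sketch, assuming the patch adds an optional 
> reversed field to the generated TScan (as the touched TScan.java files suggest):
> {code}
> TScan scan = new TScan();
> scan.setReversed(true);  // assumed new optional field: iterate rows in descending order
> // ...then pass the TScan to the thrift2 THBaseService scanner calls as usual.
> {code}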



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10883) Restrict the universe of labels and authorizations

2014-04-04 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959911#comment-13959911
 ] 

ramkrishna.s.vasudevan commented on HBASE-10883:


Ping to commit?

> Restrict the universe of labels and authorizations
> --
>
> Key: HBASE-10883
> URL: https://issues.apache.org/jira/browse/HBASE-10883
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.98.1
>Reporter: Andrew Purtell
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.99.0, 0.98.2
>
> Attachments: HBASE-10883.patch, HBASE-10883_1.patch, 
> HBASE-10883_2.patch, HBASE-10883_3.patch, HBASE-10883_4.patch, 
> HBASE-10883_5.patch
>
>
> Currently we allow any string as a visibility label or request authorization. 
> However, as seen in HBASE-10878, we accept authorization strings that 
> would not work if provided as labels in visibility expressions. We should 
> throw an exception at least in cases where someone tries to define or use a 
> label or authorization including the visibility expression operators '&', '|', 
> '!', '(', ')'.
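> A minimal sketch of the kind of validation implied, assuming a check at label 
> definition time (the method and constant names are illustrative, not the 
> patch's actual code):
> {code}
> private static final String RESERVED_OPERATORS = "&|!()";
>
> static void validateLabel(String label) {
>   for (char c : label.toCharArray()) {
>     if (RESERVED_OPERATORS.indexOf(c) >= 0) {
>       throw new IllegalArgumentException("Label '" + label
>           + "' contains reserved visibility operator '" + c + "'");
>     }
>   }
> }
> {code}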



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10899) Multiple versions of ACLs in memstore/before compaction needs to handled

2014-04-04 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959910#comment-13959910
 ] 

ramkrishna.s.vasudevan commented on HBASE-10899:


Ping to commit?

> Multiple versions of ACLs in memstore/before compaction needs to handled
> 
>
> Key: HBASE-10899
> URL: https://issues.apache.org/jira/browse/HBASE-10899
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.98.2
>
> Attachments: HBASE-10899_1.patch
>
>
> Similar to HBASE-10854, we need to handle the different versions while they 
> are in the memstore and/or before compaction happens, when the max 
> versions for the CF == 1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10801) Ensure DBE interfaces can work with Cell

2014-04-04 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10801:
---

Attachment: HBASE-10801.patch

The patch seems to work. Changing to Cells further down the read path would 
make this patch even more worthwhile.
I actually wanted to change the write path as well, but I think Anoop is working 
on that; if not, I will take it up later.
If this looks fine, I can take up the next task of changing to Cells throughout 
the read path.

> Ensure DBE interfaces can work with Cell
> 
>
> Key: HBASE-10801
> URL: https://issues.apache.org/jira/browse/HBASE-10801
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
> Fix For: 0.99.0
>
> Attachments: HBASE-10801.patch
>
>
> Some changes to the interfaces may be needed for DBEs, or the way they 
> currently work may need to be modified in order to make DBEs work with 
> Cells. Suggestions and ideas welcome.
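> A hypothetical sketch of the direction this implies; the interface and method 
> below are illustrative only, not the DBE API itself:
> {code}
> import java.io.DataOutputStream;
> import java.io.IOException;
> import org.apache.hadoop.hbase.Cell;
>
> // Instead of consuming KeyValue-backed ByteBuffers, an encoder could accept
> // Cells directly, so read and write paths can stay Cell-based end to end.
> public interface CellBlockEncoder {
>   void encode(Cell cell, DataOutputStream out) throws IOException;
> }
> {code}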



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10740) Upgrade zookeeper to 3.4.6 release

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959922#comment-13959922
 ] 

Hudson commented on HBASE-10740:


SUCCESS: Integrated in hbase-0.96-hadoop2 #259 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/259/])
HBASE-10903 HBASE-10740 regression; cannot pass commands for zk to run (stack: 
rev 1584424)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServer.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperMainServer.java


> Upgrade zookeeper to 3.4.6 release
> --
>
> Key: HBASE-10740
> URL: https://issues.apache.org/jira/browse/HBASE-10740
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.98.1, 0.99.0
>
> Attachments: 10740-v1.txt
>
>
> ZooKeeper 3.4.6 has been released.
> This JIRA upgrades the ZooKeeper dependency to 3.4.6.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10118) Major compact keeps deletes with future timestamps

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959921#comment-13959921
 ] 

Hudson commented on HBASE-10118:


SUCCESS: Integrated in hbase-0.96-hadoop2 #259 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/259/])
HBASE-10118 Major compact keeps deletes with future timestamps (Liu Shaohui) 
(sershe: rev 1584392)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* /hbase/branches/0.96/src/main/docbkx/book.xml


> Major compact keeps deletes with future timestamps
> --
>
> Key: HBASE-10118
> URL: https://issues.apache.org/jira/browse/HBASE-10118
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, Deletes, regionserver
>Reporter: Max Lapan
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3
>
> Attachments: HBASE-10118-0.94-v1.diff, HBASE-10118-trunk-v1.diff, 
> HBASE-10118-trunk-v2.diff, HBASE-10118-trunk-v3.diff
>
>
> Hello!
> During migration from HBase 0.90.6 to 0.94.6 we found a change in behaviour in 
> how major compaction handles delete markers with timestamps in the future. 
> Before HBASE-4721, major compaction purged deletes regardless of their timestamp. 
> Newer versions keep them in the HFile until the timestamp is reached.
> I guess this happened due to the new check in ScanQueryMatcher: 
> {{(EnvironmentEdgeManager.currentTimeMillis() - timestamp) <= 
> timeToPurgeDeletes}}.
> This can be worked around by specifying a large negative value in the 
> {{hbase.hstore.time.to.purge.deletes}} option, but, unfortunately, negative 
> values are pulled up to zero by a Math.max in HStore.java.
> Maybe we are trying to do something weird by specifying a delete timestamp in 
> the future, but HBASE-4721 definitely breaks old behaviour we rely on.
> Steps to reproduce this:
> {code}
> put 'test', 'delmeRow', 'delme:something', 'hello'
> flush 'test'
> delete 'test', 'delmeRow', 'delme:something', 1394161431061
> flush 'test'
> major_compact 'test'
> {code}
> Before major_compact we have two hfiles with the following:
> {code}
> first:
> K: delmeRow/delme:something/1384161431061/Put/vlen=5/ts=0
> second:
> K: delmeRow/delme:something/1394161431061/DeleteColumn/vlen=0/ts=0
> {code}
> After major compact we get the following:
> {code}
> K: delmeRow/delme:something/1394161431061/DeleteColumn/vlen=0/ts=0
> {code}
> In our installation, we resolved this by removing the Math.max and setting 
> hbase.hstore.time.to.purge.deletes to Integer.MIN_VALUE, which purges the delete 
> markers, and it looks like a solution. But maybe there is a better approach.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10903) HBASE-10740 regression; cannot pass commands for zk to run

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959924#comment-13959924
 ] 

Hudson commented on HBASE-10903:


SUCCESS: Integrated in hbase-0.96-hadoop2 #259 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/259/])
HBASE-10903 HBASE-10740 regression; cannot pass commands for zk to run (stack: 
rev 1584424)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServer.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperMainServer.java


> HBASE-10740 regression; cannot pass commands for zk to run
> --
>
> Key: HBASE-10903
> URL: https://issues.apache.org/jira/browse/HBASE-10903
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Affects Versions: 0.98.1, 0.99.0
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0, 0.98.2, 0.96.3
>
> Attachments: 10903.txt, 10903v2.txt
>
>
> We can't do this:
> {code}
> ./bin/hbase zkcli deleteall 
> /hbase/rs/c2022.halxg.cloudera.com,16020,1396502726715
> {code}
> after upgrading to ZooKeeper 3.4.6.  It works if I put back 3.4.5.
> See below, where the only difference is the zk jar:
> {code}
> [stack@c2022 hbase-0.99.0-SNAPSHOT]$ ~/bin/java/bin/java -cp 
> ~/hbase-0.96.1.1-hadoop2/lib/zookeeper-3.4.5.jar:lib/slf4j-log4j12-1.6.4.jar:lib/slf4j-api-1.6.4.jar:lib/log4j-1.2.17.jar
>   org.apache.zookeeper.ZooKeeperMain -server c2020:2181 ls  "/hbase/rs"
> Connecting to c2020:2181
> log4j:WARN No appenders could be found for logger 
> (org.apache.zookeeper.ZooKeeper).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> WATCHER::
> WatchedEvent state:SyncConnected type:None path:null
> [c2020.halxg.cloudera.com,16020,1396482186194, 
> c2021.halxg.cloudera.com,16020,1396499398203, 
> c2023.halxg.cloudera.com,16020,1396498834473, 
> c2025.halxg.cloudera.com,16020,1396482188110, 
> c2022.halxg.cloudera.com,16020,1396502726715, 
> c2024.halxg.cloudera.com,16020,1396482188280]
> [stack@c2022 hbase-0.99.0-SNAPSHOT]$ ~/bin/java/bin/java -cp 
> lib/zookeeper-3.4.6.jar:lib/slf4j-log4j12-1.6.4.jar:lib/slf4j-api-1.6.4.jar:lib/log4j-1.2.17.jar
>   org.apache.zookeeper.ZooKeeperMain -server c2020:2181 ls  "/hbase/rs"
> Connecting to c2020:2181
> log4j:WARN No appenders could be found for logger 
> (org.apache.zookeeper.ZooKeeper).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10850) essential column family optimization is broken

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959923#comment-13959923
 ] 

Hudson commented on HBASE-10850:


SUCCESS: Integrated in hbase-0.96-hadoop2 #259 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/259/])
HBASE-10850 essential column family optimization is broken (tedyu: rev 1584357)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterWrapper.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSCVFWithMiniCluster.java


> essential column family optimization is broken
> --
>
> Key: HBASE-10850
> URL: https://issues.apache.org/jira/browse/HBASE-10850
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, Filters, Performance
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 0.99.0, 0.98.2, 0.96.3
>
> Attachments: 10850-hasFilterRow-v1.txt, 10850-hasFilterRow-v2.txt, 
> 10850-hasFilterRow-v3.txt, 10850-v4.txt, 10850-v5.txt, 10850-v6.txt, 
> 10850-v7.txt, HBASE-10850-96.patch, HBASE-10850.patch, HBASE-10850_V2.patch, 
> HBaseSingleColumnValueFilterTest.java, TestWithMiniCluster.java
>
>
> When using the filter SingleColumnValueFilter, and depending on the columns 
> specified in the scan (the filtering column always specified), the results can be 
> different.
> Here is an example.
> Suppose the following table:
> ||key||a:foo||a:bar||b:foo||b:bar||
> |1|false|_flag_|_flag_|_flag_|
> |2|true|_flag_|_flag_|_flag_|
> |3| |_flag_|_flag_|_flag_|
> With this filter:
> {code}
> SingleColumnValueFilter filter = new 
> SingleColumnValueFilter(Bytes.toBytes("a"), Bytes.toBytes("foo"), 
> CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("false")));
> filter.setFilterIfMissing(true);
> {code}
> Depending on how I specify the list of columns to add in the scan, the result 
> is different. Yet, all examples below should always return only the first row 
> (key '1'):
> OK:
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addFamily(Bytes.toBytes("a"));
> scan.addFamily(Bytes.toBytes("b"));
> {code}
> KO (2 results returned, row '3' without 'a:foo' qualifier is returned):
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("foo"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("b"), Bytes.toBytes("bar"));
> {code}
> OK:
> {code}
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("foo"));
> scan.addColumn(Bytes.toBytes("a"), Bytes.toBytes("bar"));
> {code}
> This is a regression, as it was working properly on HBase 0.92.
> You will find in the attachments the unit tests reproducing the issue.
> +The analysis of this issue led us to 2 critical bugs introduced in the 96 and 
> above versions+
> 1. The essential column family optimization is broken in some cases. When there 
> is a condition on some families, we first read those KVs and apply the condition 
> on them; when the condition says to filter out a row, we should not go ahead and 
> fetch data from the remaining non-essential CFs. But now, in most cases, we do 
> this unwanted data read, which goes fully against this optimization (see the 
> usage sketch after this list).
> 2. We have a CP hook postFilterRow() which will be called when a row is getting 
> filtered out by the Filter. This gives the CP a chance to reseek to the next 
> known row which it thinks can evaluate the condition to true. But currently, in 
> 96+ code, this hook is not getting called.
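> For readers unfamiliar with the optimization: it is driven per scan, roughly as 
> in the usage sketch below (standard client API of this era; the family names 
> are the ones from the example above):
> {code}
> Scan scan = new Scan();
> scan.addFamily(Bytes.toBytes("a"));
> scan.addFamily(Bytes.toBytes("b"));
> scan.setFilter(filter);
> // Read only the essential family ("a", which the filter needs) first, and
> // fetch the non-essential family ("b") only for rows that pass the filter.
> scan.setLoadColumnFamiliesOnDemand(true);
> {code}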



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10848) Filter SingleColumnValueFilter combined with NullComparator does not work

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959925#comment-13959925
 ] 

Hudson commented on HBASE-10848:


SUCCESS: Integrated in hbase-0.96-hadoop2 #259 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/259/])
HBASE-10848 Filter SingleColumnValueFilter combined with NullComparator does 
not work (Fabien) (tedyu: rev 1584370)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/NullComparator.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestNullComparator.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java


> Filter SingleColumnValueFilter combined with NullComparator does not work
> -
>
> Key: HBASE-10848
> URL: https://issues.apache.org/jira/browse/HBASE-10848
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.96.1.1
>Reporter: Fabien Le Gallo
>Assignee: Fabien Le Gallo
> Fix For: 0.99.0, 0.94.19, 0.98.2, 0.96.3
>
> Attachments: HBASE-10848.patch, HBASE_10848-v2.patch, 
> HBASE_10848-v3.patch, HBASE_10848-v4-94.patch, HBASE_10848-v4.patch, 
> HBaseRegression.java, TestScanWithNullComparable.java
>
>
> I want to filter out from the scan the rows that do not have a specific 
> column qualifier. For this purpose I use the filter SingleColumnValueFilter 
> combined with the NullComparator, roughly as sketched below.
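> For concreteness, a sketch of that combination (the family/qualifier names and 
> compare op are placeholders, not the reporter's exact code):
> {code}
> SingleColumnValueFilter filter = new SingleColumnValueFilter(
>     Bytes.toBytes("cf"), Bytes.toBytes("qual"),
>     CompareOp.NOT_EQUAL, new NullComparator());
> filter.setFilterIfMissing(true);  // drop rows where the column is missing
> scan.setFilter(filter);
> {code}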
> But every time I use this in a scan, I get the following exception:
> {code}
> java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException: 
> Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:47)
> at 
> com.xxx.xxx.test.HBaseRegression.nullComparator(HBaseRegression.java:92)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
> at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry 
> of OutOfOrderScannerNextException: was there a rpc timeout?
> at 
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:391)
> at 
> org.apache.hadoop.hbase.client.AbstractClientScanner$1.hasNext(AbstractClientScanner.java:44)
> ... 25 more
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 7998309028985532303 number_of_rows: 100 close_scanner: false next_call_seq: 0
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3011)
>   

[jira] [Commented] (HBASE-10899) Multiple versions of ACLs in memstore/before compaction needs to handled

2014-04-04 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960041#comment-13960041
 ] 

Matteo Bertozzi commented on HBASE-10899:
-

+1 looks good to me

> Multiple versions of ACLs in memstore/before compaction needs to handled
> 
>
> Key: HBASE-10899
> URL: https://issues.apache.org/jira/browse/HBASE-10899
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.98.2
>
> Attachments: HBASE-10899_1.patch
>
>
> Similar to HBASE-10854, we need to handle the different versions while they 
> are in the memstore and/or before compaction happens, when the max 
> versions for the CF == 1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10899) Multiple versions of ACLs in memstore/before compaction needs to handled

2014-04-04 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10899:
---

Status: Open  (was: Patch Available)

> Multiple versions of ACLs in memstore/before compaction needs to handled
> 
>
> Key: HBASE-10899
> URL: https://issues.apache.org/jira/browse/HBASE-10899
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.98.2
>
> Attachments: HBASE-10899_1.patch
>
>
> Similar to HBASE-10854, we need to handle the different versions while they 
> are in the memstore and/or before compaction happens, when the max 
> versions for the CF == 1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10899) [AccessController] Apply MAX_VERSIONS from schema or request when scanning

2014-04-04 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10899:
---

Summary: [AccessController] Apply MAX_VERSIONS from schema or request when 
scanning  (was: Multiple versions of ACLs in memstore/before compaction needs 
to handled)

> [AccessController] Apply MAX_VERSIONS from schema or request when scanning
> --
>
> Key: HBASE-10899
> URL: https://issues.apache.org/jira/browse/HBASE-10899
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.98.2
>
> Attachments: HBASE-10899_1.patch, HBASE-10899_2.patch
>
>
> Similar to HBASE-10854, we need to handle the different versions while they 
> are in the memstore and/or before compaction happens, when the max 
> versions for the CF == 1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10899) Multiple versions of ACLs in memstore/before compaction needs to handled

2014-04-04 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10899:
---

Attachment: HBASE-10899_2.patch

As per Anoop's suggestion, I changed the comparison logic to match the one in 
HBASE-10854.
[~mbm87]
Thanks for the review.

> Multiple versions of ACLs in memstore/before compaction needs to handled
> 
>
> Key: HBASE-10899
> URL: https://issues.apache.org/jira/browse/HBASE-10899
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.98.2
>
> Attachments: HBASE-10899_1.patch, HBASE-10899_2.patch
>
>
> Similar to HBASE-10854, we need to handle the different versions while they 
> are in the memstore and/or before compaction happens, when the max 
> versions for the CF == 1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10899) [AccessController] Apply MAX_VERSIONS from schema or request when scanning

2014-04-04 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10899:
---

Status: Patch Available  (was: Open)

> [AccessController] Apply MAX_VERSIONS from schema or request when scanning
> --
>
> Key: HBASE-10899
> URL: https://issues.apache.org/jira/browse/HBASE-10899
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.98.2
>
> Attachments: HBASE-10899_1.patch, HBASE-10899_2.patch
>
>
> Similar to HBASE-10854, we need to handle the different versions while they 
> are in the memstore and/or before compaction happens, when the max 
> versions for the CF == 1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10899) [AccessController] Apply MAX_VERSIONS from schema or request when scanning

2014-04-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960080#comment-13960080
 ] 

Ted Yu commented on HBASE-10899:


+1 on v2, if tests pass.

> [AccessController] Apply MAX_VERSIONS from schema or request when scanning
> --
>
> Key: HBASE-10899
> URL: https://issues.apache.org/jira/browse/HBASE-10899
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.98.2
>
> Attachments: HBASE-10899_1.patch, HBASE-10899_2.patch
>
>
> Similar to HBASE-10854, we need to handle the different versions while they 
> are in the memstore and/or before compaction happens, when the max 
> versions for the CF == 1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10899) [AccessController] Apply MAX_VERSIONS from schema or request when scanning

2014-04-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960145#comment-13960145
 ] 

Hadoop QA commented on HBASE-10899:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638693/HBASE-10899_2.patch
  against trunk revision .
  ATTACHMENT ID: 12638693

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.wal.TestLogRolling

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9201//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9201//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9201//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9201//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9201//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9201//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9201//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9201//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9201//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9201//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9201//console

This message is automatically generated.

> [AccessController] Apply MAX_VERSIONS from schema or request when scanning
> --
>
> Key: HBASE-10899
> URL: https://issues.apache.org/jira/browse/HBASE-10899
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.98.2
>
> Attachments: HBASE-10899_1.patch, HBASE-10899_2.patch
>
>
> Similar to HBASE-10854, we need to handle multiple versions while the 
> versions are still in the memstore and/or before compaction happens, when the 
> max versions for the CF == 1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10906) Change error log for NamingException in TableInputFormatBase to WARN level

2014-04-04 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960163#comment-13960163
 ] 

Nick Dimiduk commented on HBASE-10906:
--

+1. Thanks [~yuzhih...@gmail.com].

> Change error log for NamingException in TableInputFormatBase to WARN level
> --
>
> Key: HBASE-10906
> URL: https://issues.apache.org/jira/browse/HBASE-10906
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 10906-v1.txt
>
>
> Over in this thread:
> http://search-hadoop.com/m/DHED4Qp3ho/HBase+resolveDns+error+in+log&subj=HBase+resolveDns+error+in+log
> Amit mentioned that, despite the error log, the mapreduce job executed 
> successfully.
> The log level should be lowered to WARN.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10910) [89-fb] Change get(List) to batchGet(List)

2014-04-04 Thread Adela Maznikar (JIRA)
Adela Maznikar created HBASE-10910:
--

 Summary: [89-fb] Change get(List) to batchGet(List)
 Key: HBASE-10910
 URL: https://issues.apache.org/jira/browse/HBASE-10910
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.89-fb
Reporter: Adela Maznikar
 Fix For: 0.89-fb


batchGet(List) is more performant since it splits the list of Gets at the 
regionserver level, while get(List) does that at the region level. 
If we have a list of gets for regions on the same regionserver, get(List) 
will do #regions RPC calls while batchGet(List) will do just one RPC call. 

Changing HTable.get(List) to internally call HTable.batchGet(List).
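
A self-contained toy illustration of the RPC-count difference (the grouping keys stand in for real region/regionserver lookups; this is not the 0.89-fb HTable code):

{code}
// Toy illustration, not the 0.89-fb HTable implementation: get(List) pays one
// RPC per distinct region, batchGet(List) pays one RPC per distinct server.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class BatchGetSketch {
  // Assume each Get is tagged with its hosting region and regionserver.
  static class Get {
    final String region;
    final String server;
    Get(String region, String server) { this.region = region; this.server = server; }
  }

  static int rpcCountForGetList(List<Get> gets) {   // get(List)-style
    Set<String> regions = new HashSet<String>();
    for (Get g : gets) regions.add(g.region);
    return regions.size();                          // one RPC per region
  }

  static int rpcCountForBatchGet(List<Get> gets) {  // batchGet(List)-style
    Set<String> servers = new HashSet<String>();
    for (Get g : gets) servers.add(g.server);
    return servers.size();                          // one RPC per regionserver
  }
}
{code}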



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-10910) [89-fb] Change get(List) to batchGet(List)

2014-04-04 Thread Adela Maznikar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adela Maznikar reassigned HBASE-10910:
--

Assignee: Adela Maznikar

> [89-fb] Change get(List) to batchGet(List)
> 
>
> Key: HBASE-10910
> URL: https://issues.apache.org/jira/browse/HBASE-10910
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 0.89-fb
>Reporter: Adela Maznikar
>Assignee: Adela Maznikar
> Fix For: 0.89-fb
>
>
> batchGet(List) is more performant since it splits the list of Gets at the 
> regionserver level, while get(List) does that at the region level. 
> If we have a list of gets for regions on the same regionserver, get(List) 
> will do #regions RPC calls while batchGet(List) will do just one RPC call. 
> Changing HTable.get(List) to internally call HTable.batchGet(List).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10911) ServerShutdownHandler#toString shows meaningless message

2014-04-04 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-10911:
---

 Summary: ServerShutdownHandler#toString shows meaningless message
 Key: HBASE-10911
 URL: https://issues.apache.org/jira/browse/HBASE-10911
 Project: HBase
  Issue Type: Improvement
  Components: master
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor


SSH#toString returns the master's server name, which is not very informative. It's 
better to show the dead server's name instead.
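
A plausible shape of the change (field names here are assumptions, not the committed code):

{code}
// Illustrative only; field names are assumed, not taken from the patch.
import org.apache.hadoop.hbase.ServerName;

public class ShutdownHandlerToStringSketch {
  private final ServerName serverName; // the dead regionserver being processed

  ShutdownHandlerToStringSketch(ServerName serverName) {
    this.serverName = serverName;
  }

  @Override
  public String toString() {
    String name = serverName == null ? "UnknownServerName" : serverName.getServerName();
    return getClass().getSimpleName() + "-" + name; // the dead RS, not the master
  }
}
{code}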



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10906) Change error log for NamingException in TableInputFormatBase to WARN level

2014-04-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10906:
---

Fix Version/s: 0.98.2
   0.99.0
 Hadoop Flags: Reviewed

> Change error log for NamingException in TableInputFormatBase to WARN level
> --
>
> Key: HBASE-10906
> URL: https://issues.apache.org/jira/browse/HBASE-10906
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0, 0.98.2
>
> Attachments: 10906-v1.txt
>
>
> Over in this thread:
> http://search-hadoop.com/m/DHED4Qp3ho/HBase+resolveDns+error+in+log&subj=HBase+resolveDns+error+in+log
> Amit mentioned that, despite the error log, the mapreduce job executed 
> successfully.
> The log level should be lowered to WARN.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10906) Change error log for NamingException in TableInputFormatBase to WARN level

2014-04-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10906:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the reviews, Chunhui and Nick.

> Change error log for NamingException in TableInputFormatBase to WARN level
> --
>
> Key: HBASE-10906
> URL: https://issues.apache.org/jira/browse/HBASE-10906
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0, 0.98.2
>
> Attachments: 10906-v1.txt
>
>
> Over in this thread:
> http://search-hadoop.com/m/DHED4Qp3ho/HBase+resolveDns+error+in+log&subj=HBase+resolveDns+error+in+log
> Amit mentioned that, despite the error log, the mapreduce job executed 
> successfully.
> The log level should be lowered to WARN.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10289) Avoid random port usage by default JMX Server. Create Custome JMX server

2014-04-04 Thread Demai Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960426#comment-13960426
 ] 

Demai Ni commented on HBASE-10289:
--

[~saint@gmail.com], the patch looks good to me.
I configured hbase-site.xml by adding:
{code}
<property>
  <name>hbase.master.rmi.registry.port</name>
  <value>2356</value>
</property>
<property>
  <name>hbase.regionserver.rmi.registry.port</name>
  <value>2357</value>
</property>
{code}
and both ports are set on my single-node cluster.

The patch is out of date; I will upload a new one shortly.
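
For readers unfamiliar with the mechanics, this is roughly what pinning JMX to a fixed port looks like with the plain JDK javax.management API (a standalone sketch; the JMXServer class from the patch is not shown here):

{code}
// Standalone sketch using only the JDK; not the JMXServer class from the patch.
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import java.util.HashMap;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class FixedPortJmx {
  public static void main(String[] args) throws Exception {
    int registryPort = 2356; // e.g. the hbase.master.rmi.registry.port value above
    LocateRegistry.createRegistry(registryPort); // fixed registry port, not a random one
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    JMXServiceURL url = new JMXServiceURL(
        "service:jmx:rmi:///jndi/rmi://localhost:" + registryPort + "/jmxrmi");
    JMXConnectorServer server = JMXConnectorServerFactory.newJMXConnectorServer(
        url, new HashMap<String, Object>(), mbs);
    server.start(); // clients now attach on the configured port
  }
}
{code}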

> Avoid random port usage by default JMX Server. Create Custome JMX server
> 
>
> Key: HBASE-10289
> URL: https://issues.apache.org/jira/browse/HBASE-10289
> Project: HBase
>  Issue Type: Improvement
>Reporter: nijel
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10289.patch, HBASE-10289_1.patch, 
> HBASE-10289_2.patch, HBASE-10289_3.patch
>
>
> If we enable the JMX MBean server for HMaster or Region server through VM 
> arguments, the process will use one random port which we cannot configure.
> This can be a problem if that random port is configured for some other 
> service.
> This issue can be avoided by supporting a custom JMX Server.
> The ports can be configured. If no ports are configured, it will 
> continue the same way as now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10289) Avoid random port usage by default JMX Server. Create Custome JMX server

2014-04-04 Thread Demai Ni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Demai Ni updated HBASE-10289:
-

Attachment: HBASE-10289-v4.patch

In HMasterCommandLine I have to repeat 
{code}
jmxServer = new JMXServer(conf, "hbase.master");
jmxServer.start();
{code}
twice, so that in the else branch jmxServer is started before 
master.start() and can register. Otherwise, its port won't be set for HMaster.

> Avoid random port usage by default JMX Server. Create Custome JMX server
> 
>
> Key: HBASE-10289
> URL: https://issues.apache.org/jira/browse/HBASE-10289
> Project: HBase
>  Issue Type: Improvement
>Reporter: nijel
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10289-v4.patch, HBASE-10289.patch, 
> HBASE-10289_1.patch, HBASE-10289_2.patch, HBASE-10289_3.patch
>
>
> If we enable the JMX MBean server for HMaster or Region server through VM 
> arguments, the process will use one random port which we cannot configure.
> This can be a problem if that random port is configured for some other 
> service.
> This issue can be avoided by supporting a custom JMX Server.
> The ports can be configured. If no ports are configured, it will 
> continue the same way as now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-7912) HBase Backup/Restore Based on HBase Snapshot

2014-04-04 Thread Demai Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960473#comment-13960473
 ] 

Demai Ni commented on HBASE-7912:
-

[~mbertozzi], 

many thanks for the comments. Very good point about 'full backup', which is pretty 
much a wrapper around snapshot and exportsnapshot. Since Snapshot is already a very 
good feature, 'full backup' alone doesn't provide much additional benefit. 

The valuable part is the incremental backup, which sits on top of the full 
backup, so that a user only needs to take the 'full backup' once at the initial 
phase. A concept of a 'backup image' is introduced, which is identified by a 
backupID. The manifest file is stored inside the 'backup image' together with the 
data HFiles, so that a 'backup image' is self-describing, and can be moved around 
and restored independently.  

Demai



> HBase Backup/Restore Based on HBase Snapshot
> 
>
> Key: HBASE-7912
> URL: https://issues.apache.org/jira/browse/HBASE-7912
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Richard Ding
>Assignee: Richard Ding
> Attachments: HBaseBackupRestore-Jira-7912-DesignDoc-v1.pdf, 
> HBase_BackupRestore-Jira-7912-CLI-v1.pdf
>
>
> Finally, we completed the implementation of our backup/restore solution, and 
> would like to share it with the community through this jira. 
> We are leveraging the existing hbase snapshot feature, and provide a general 
> solution to common users. Our full backup uses snapshot to capture 
> metadata locally and exportsnapshot to move data to another cluster; 
> the incremental backup uses an offline WALPlayer to back up HLogs; we also 
> leverage globally distributed log roll and flush to improve performance, plus 
> added values such as convert, merge, progress report, and CLI commands, so 
> that a common user can back up hbase data without in-depth knowledge of hbase. 
> Our solution also contains some usability features for enterprise users. 
> The detail design document and CLI command will be attached in this jira. We 
> plan to use 10~12 subtasks to share each of the following features, and 
> document the detail implement in the subtasks: 
> * *Full Backup* : provide local and remote back/restore for a list of tables
> * *offline-WALPlayer* to convert HLog to HFiles offline (for incremental 
> backup)
> * *distributed* Logroll and distributed flush 
> * Backup *Manifest* and history
> * *Incremental* backup: to build on top of full backup as daily/weekly backup 
> * *Convert*  incremental backup WAL files into hfiles
> * *Merge* several backup images into one(like merge weekly into monthly)
> * *add and remove* table to and from Backup image
> * *Cancel* a backup process
> * backup progress *status*
> * full backup based on *existing snapshot*
> *-*
> *Below is the original description, to keep here as the history for the 
> design and discussion back in 2013*
> There have been attempts in the past to come up with a viable HBase 
> backup/restore solution (e.g., HBASE-4618).  Recently, there have been many 
> advancements and new features in HBase, for example, FileLink, Snapshot, and 
> Distributed Barrier Procedure. This is a proposal for a backup/restore 
> solution that utilizes these new features to achieve better performance and 
> consistency. 
>  
> A common practice of backup and restore in databases is to first take a full 
> baseline backup, and then periodically take incremental backups that capture 
> the changes since the full baseline backup. An HBase cluster can store a 
> massive amount of data.  The combination of full backups with incremental 
> backups has tremendous benefit for HBase as well.  The following is a 
> typical scenario for full and incremental backup.
> # The user takes a full backup of a table or a set of tables in HBase. 
> # The user schedules periodical incremental backups to capture the changes 
> from the full backup, or from last incremental backup.
> # The user needs to restore table data to a past point of time.
> # The full backup is restored to the table(s) or to different table name(s).  
> Then the incremental backups that are up to the desired point in time are 
> applied on top of the full backup. 
> We would support the following key features and capabilities.
> * Full backup uses HBase snapshot to capture HFiles.
> * Use HBase WALs to capture incremental changes, but we use bulk load of 
> HFiles for fast incremental restore.
> * Support single table or a set of tables, and column family level backup and 
> restore.
> * Restore to different table names.
> * Support adding additional tables or CF to backup set without interruption 
> of incremental backup schedule.
> * Support rollup/combining of incremental backups into lo

[jira] [Created] (HBASE-10912) setUp / tearDown in TestSCVFWithMiniCluster should be done once per run

2014-04-04 Thread Ted Yu (JIRA)
Ted Yu created HBASE-10912:
--

 Summary: setUp / tearDown in TestSCVFWithMiniCluster should be 
done once per run
 Key: HBASE-10912
 URL: https://issues.apache.org/jira/browse/HBASE-10912
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor


setUp / tearDown should be annotated with @BeforeClass and @AfterClass, 
respectively.

On my Mac, the runtime for this test went from 19 seconds to 9 seconds:
{code}
Running org.apache.hadoop.hbase.regionserver.TestSCVFWithMiniCluster
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.302 sec
{code}
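
A sketch of the change in JUnit 4 terms (the mini-cluster setup shown is illustrative, not copied from TestSCVFWithMiniCluster):

{code}
// Class-level lifecycle: the mini cluster starts once for the whole test class
// instead of once per test method. Names here are illustrative.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class TestWithSharedMiniCluster {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass  // was @Before: ran before every test method
  public static void setUp() throws Exception {
    TEST_UTIL.startMiniCluster();
  }

  @AfterClass   // was @After: ran after every test method
  public static void tearDown() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }
}
{code}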



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10912) setUp / tearDown in TestSCVFWithMiniCluster should be done once per run

2014-04-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10912:
---

Status: Patch Available  (was: Open)

> setUp / tearDown in TestSCVFWithMiniCluster should be done once per run
> ---
>
> Key: HBASE-10912
> URL: https://issues.apache.org/jira/browse/HBASE-10912
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 10912-v1.txt
>
>
> setUp / tearDown should be annotated with @BeforeClass and @AfterClass, 
> respectively.
> On my Mac, the runtime for this test went from 19 seconds to 9 seconds:
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestSCVFWithMiniCluster
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.302 sec
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10912) setUp / tearDown in TestSCVFWithMiniCluster should be done once per run

2014-04-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10912:
---

Attachment: 10912-v1.txt

> setUp / tearDown in TestSCVFWithMiniCluster should be done once per run
> ---
>
> Key: HBASE-10912
> URL: https://issues.apache.org/jira/browse/HBASE-10912
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 10912-v1.txt
>
>
> setUp / tearDown should be annotated with @BeforeClass and @AfterClass, 
> respectively.
> On my Mac, the runtime for this test went from 19 seconds to 9 seconds:
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestSCVFWithMiniCluster
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.302 sec
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10906) Change error log for NamingException in TableInputFormatBase to WARN level

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960539#comment-13960539
 ] 

Hudson commented on HBASE-10906:


SUCCESS: Integrated in HBase-0.98 #265 (See 
[https://builds.apache.org/job/HBase-0.98/265/])
HBASE-10906 Change error log for NamingException in TableInputFormatBase to 
WARN level (tedyu: rev 1584889)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java


> Change error log for NamingException in TableInputFormatBase to WARN level
> --
>
> Key: HBASE-10906
> URL: https://issues.apache.org/jira/browse/HBASE-10906
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0, 0.98.2
>
> Attachments: 10906-v1.txt
>
>
> Over in this thread:
> http://search-hadoop.com/m/DHED4Qp3ho/HBase+resolveDns+error+in+log&subj=HBase+resolveDns+error+in+log
> Amit mentioned that, despite the error log, the mapreduce job executed 
> successfully.
> The log level should be lowered to WARN.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10886) add htrace-zipkin to the runtime dependencies again

2014-04-04 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960543#comment-13960543
 ] 

Elliott Clark commented on HBASE-10886:
---

Sorry just got to this. I don't think this is a good idea.  The client just got 
a whole lot more dependencies, while < 10% of the users will need it.

> add htrace-zipkin to the runtime dependencies again
> ---
>
> Key: HBASE-10886
> URL: https://issues.apache.org/jira/browse/HBASE-10886
> Project: HBase
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10886-0.patch, HBASE-10886-1.patch
>
>
> htrace-zipkin was once removed from the dependencies in HBASE-9700. Because all 
> of the dependencies of htrace-zipkin are bundled with HBase now, it is good to 
> add it back for ease of use.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10886) add htrace-zipkin to the runtime dependencies again

2014-04-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960551#comment-13960551
 ] 

stack commented on HBASE-10886:
---

Yes.  Good point.  Every client, MR, etc., now needs htrace whether they use it 
or not ... or they need htrace-core regardless and now they need htrace-zipkin?

> add htrace-zipkin to the runtime dependencies again
> ---
>
> Key: HBASE-10886
> URL: https://issues.apache.org/jira/browse/HBASE-10886
> Project: HBase
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10886-0.patch, HBASE-10886-1.patch
>
>
> htrace-zipkin was once removed from the dependencies in HBASE-9700. Because all 
> of the dependencies of htrace-zipkin are bundled with HBase now, it is good to 
> add it back for ease of use.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10289) Avoid random port usage by default JMX Server. Create Custome JMX server

2014-04-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960552#comment-13960552
 ] 

Hadoop QA commented on HBASE-10289:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638768/HBASE-10289-v4.patch
  against trunk revision .
  ATTACHMENT ID: 12638768

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9202//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9202//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9202//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9202//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9202//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9202//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9202//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9202//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9202//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9202//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9202//console

This message is automatically generated.

> Avoid random port usage by default JMX Server. Create Custome JMX server
> 
>
> Key: HBASE-10289
> URL: https://issues.apache.org/jira/browse/HBASE-10289
> Project: HBase
>  Issue Type: Improvement
>Reporter: nijel
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10289-v4.patch, HBASE-10289.patch, 
> HBASE-10289_1.patch, HBASE-10289_2.patch, HBASE-10289_3.patch
>
>
> If we enable the JMX MBean server for HMaster or Region server through VM 
> arguments, the process will use one random port which we cannot configure.
> This can be a problem if that random port is configured for some other 
> service.
> This issue can be avoided by supporting a custom JMX Server.
> The ports can be configured. If no ports are configured, it will 
> continue the same way as now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10886) add htrace-zipkin to the runtime dependencies again

2014-04-04 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960556#comment-13960556
 ] 

Elliott Clark commented on HBASE-10886:
---

They need core as that has the hooks in it.  But they don't need to include the 
thrift (and other) dependencies that htrace-zipkin pulls in.

> add htrace-zipkin to the runtime dependencies again
> ---
>
> Key: HBASE-10886
> URL: https://issues.apache.org/jira/browse/HBASE-10886
> Project: HBase
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10886-0.patch, HBASE-10886-1.patch
>
>
> htrace-zipkin was once removed from the dependencies in HBASE-9700. Because all 
> of the dependencies of htrace-zipkin are bundled with HBase now, it is good to 
> add it back for ease of use.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10289) Avoid random port usage by default JMX Server. Create Custome JMX server

2014-04-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960641#comment-13960641
 ] 

stack commented on HBASE-10289:
---

Patch is great.  Thanks for testing [~nidmhbase].  How does this patch relate 
to the stuff that is in hbase-env.sh?

{code}
# Uncomment and adjust to enable JMX exporting
# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to
# configure remote password access.
# More details at:
# http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
#
# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
...
{code}

Should we remove the hbase-env.sh stuff?

> Avoid random port usage by default JMX Server. Create Custome JMX server
> 
>
> Key: HBASE-10289
> URL: https://issues.apache.org/jira/browse/HBASE-10289
> Project: HBase
>  Issue Type: Improvement
>Reporter: nijel
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10289-v4.patch, HBASE-10289.patch, 
> HBASE-10289_1.patch, HBASE-10289_2.patch, HBASE-10289_3.patch
>
>
> If we enable the JMX MBean server for HMaster or Region server through VM 
> arguments, the process will use one random port which we cannot configure.
> This can be a problem if that random port is configured for some other 
> service.
> This issue can be avoided by supporting a custom JMX Server.
> The ports can be configured. If no ports are configured, it will 
> continue the same way as now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10886) add htrace-zipkin to the runtime dependencies again

2014-04-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960772#comment-13960772
 ] 

stack commented on HBASE-10886:
---

Let me revert while we discuss.

> add htrace-zipkin to the runtime dependencies again
> ---
>
> Key: HBASE-10886
> URL: https://issues.apache.org/jira/browse/HBASE-10886
> Project: HBase
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10886-0.patch, HBASE-10886-1.patch
>
>
> htrace-zipkin was once removed from the dependencies in HBASE-9700. Because all 
> of the dependencies of htrace-zipkin are bundled with HBase now, it is good to 
> add it back for ease of use.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (HBASE-10886) add htrace-zipkin to the runtime dependencies again

2014-04-04 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reopened HBASE-10886:
---


> add htrace-zipkin to the runtime dependencies again
> ---
>
> Key: HBASE-10886
> URL: https://issues.apache.org/jira/browse/HBASE-10886
> Project: HBase
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10886-0.patch, HBASE-10886-1.patch
>
>
> htrace-zipkin was once removed from the dependencies in HBASE-9700. Because all 
> of the dependencies of htrace-zipkin are bundled with HBase now, it is good to 
> add it back for ease of use.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10289) Avoid random port usage by default JMX Server. Create Custome JMX server

2014-04-04 Thread Demai Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960800#comment-13960800
 ] 

Demai Ni commented on HBASE-10289:
--

[~stack], thanks for reviewing the patch. 

I am not familiar with the JMX settings. From my cluster:
{code}
export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.password.file=/opt/ibm/biginsights/conf/jmx/hbase/jmxremote.password -Dcom.sun.management.jmxremote.access.file=/opt/ibm/biginsights/conf/jmx/hbase/jmxremote.access"
{code}
The value of $HBASE_JMX_BASE is used to start others, like HMaster and TaskTracker. 
It probably won't hurt to leave it there... Again, I am not familiar with that (smile)


> Avoid random port usage by default JMX Server. Create Custome JMX server
> 
>
> Key: HBASE-10289
> URL: https://issues.apache.org/jira/browse/HBASE-10289
> Project: HBase
>  Issue Type: Improvement
>Reporter: nijel
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10289-v4.patch, HBASE-10289.patch, 
> HBASE-10289_1.patch, HBASE-10289_2.patch, HBASE-10289_3.patch
>
>
> If we enable the JMX MBean server for HMaster or Region server through VM 
> arguments, the process will use one random port which we cannot configure.
> This can be a problem if that random port is configured for some other 
> service.
> This issue can be avoided by supporting a custom JMX Server.
> The ports can be configured. If no ports are configured, it will 
> continue the same way as now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10912) setUp / tearDown in TestSCVFWithMiniCluster should be done once per run

2014-04-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960833#comment-13960833
 ] 

Hadoop QA commented on HBASE-10912:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638783/10912-v1.txt
  against trunk revision .
  ATTACHMENT ID: 12638783

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9203//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9203//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9203//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9203//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9203//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9203//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9203//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9203//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9203//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9203//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9203//console

This message is automatically generated.

> setUp / tearDown in TestSCVFWithMiniCluster should be done once per run
> ---
>
> Key: HBASE-10912
> URL: https://issues.apache.org/jira/browse/HBASE-10912
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 10912-v1.txt
>
>
> setUp / tearDown should be annotated with @BeforeClass and @AfterClass, 
> respectively.
> On my Mac, the runtime for this test went from 19 seconds to 9 seconds:
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestSCVFWithMiniCluster
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.302 sec
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10902) Make Secure Bulk Load work across remote secure clusters

2014-04-04 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-10902:
-

Summary: Make Secure Bulk Load work across remote secure clusters  (was: 
Secure Bulk Load does not work across secure clusters)

> Make Secure Bulk Load work across remote secure clusters
> 
>
> Key: HBASE-10902
> URL: https://issues.apache.org/jira/browse/HBASE-10902
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.96.1
>Reporter: Jerry He
>Assignee: Jerry He
>
> Two secure clusters, both with kerberos enabled.
> Run bulk load on one cluster to load files from another cluster.
> biadmin@hdtest249:~> hbase 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles 
> hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c
>  TestTable_rr
> Bulk load failed.  In the region server log:
> {code}
> 2014-04-02 20:04:56,361 ERROR 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint: Failed to 
> complete bulk load
> java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c/info/6b44ca48aebf48d98cb3491f512c41a7,
>  expected: hdfs://hdtest249.svl.ibm.com:9000
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:651)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:181)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1248)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1244)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1244)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:233)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:223)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:300)
> at javax.security.auth.Subject.doAs(Subject.java:494)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1482)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.secureBulkLoadHFiles(SecureBulkLoadEndpoint.java:223)
> at 
> org.apache.hadoop.hbase.protobuf.generated.SecureBulkLoadProtos$SecureBulkLoadService.callMethod(SecureBulkLoadProtos.java:4631)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5088)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3219)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26933)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1854)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10902) Secure Bulk Load does not work across secure clusters

2014-04-04 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-10902:
-

Issue Type: Improvement  (was: Bug)

> Secure Bulk Load does not work across secure clusters
> -
>
> Key: HBASE-10902
> URL: https://issues.apache.org/jira/browse/HBASE-10902
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.96.1
>Reporter: Jerry He
>Assignee: Jerry He
>
> Two secure clusters, both with kerberos enabled.
> Run bulk load on one cluster to load files from another cluster.
> biadmin@hdtest249:~> hbase 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles 
> hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c
>  TestTable_rr
> Bulk load failed.  In the region server log:
> {code}
> 2014-04-02 20:04:56,361 ERROR 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint: Failed to 
> complete bulk load
> java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c/info/6b44ca48aebf48d98cb3491f512c41a7,
>  expected: hdfs://hdtest249.svl.ibm.com:9000
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:651)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:181)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1248)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1244)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1244)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:233)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:223)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:300)
> at javax.security.auth.Subject.doAs(Subject.java:494)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1482)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.secureBulkLoadHFiles(SecureBulkLoadEndpoint.java:223)
> at 
> org.apache.hadoop.hbase.protobuf.generated.SecureBulkLoadProtos$SecureBulkLoadService.callMethod(SecureBulkLoadProtos.java:4631)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5088)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3219)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26933)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1854)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10902) Make Secure Bulk Load work across remote secure clusters

2014-04-04 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-10902:
-

Attachment: HBASE-10902-v0-0.96.patch

> Make Secure Bulk Load work across remote secure clusters
> 
>
> Key: HBASE-10902
> URL: https://issues.apache.org/jira/browse/HBASE-10902
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.96.1
>Reporter: Jerry He
>Assignee: Jerry He
> Attachments: HBASE-10902-v0-0.96.patch
>
>
> Two secure clusters, both with kerberos enabled.
> Run bulk load on one cluster to load files from another cluster.
> biadmin@hdtest249:~> hbase 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles 
> hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c
>  TestTable_rr
> Bulk load failed.  In the region server log:
> {code}
> 2014-04-02 20:04:56,361 ERROR 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint: Failed to 
> complete bulk load
> java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c/info/6b44ca48aebf48d98cb3491f512c41a7,
>  expected: hdfs://hdtest249.svl.ibm.com:9000
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:651)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:181)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1248)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1244)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1244)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:233)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:223)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:300)
> at javax.security.auth.Subject.doAs(Subject.java:494)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1482)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.secureBulkLoadHFiles(SecureBulkLoadEndpoint.java:223)
> at 
> org.apache.hadoop.hbase.protobuf.generated.SecureBulkLoadProtos$SecureBulkLoadService.callMethod(SecureBulkLoadProtos.java:4631)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5088)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3219)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26933)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1854)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10902) Make Secure Bulk Load work across remote secure clusters

2014-04-04 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960879#comment-13960879
 ] 

Jerry He commented on HBASE-10902:
--

Attached a patch for 0.96.
Tested to work both in a local cluster and across secure clusters.
Please review.
I will do more cleanup if necessary and add a patch for trunk.
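
For context on the "Wrong FS" failure above: the endpoint operates on a FileSystem handle bound to the local cluster while the staged files live on the remote one. A minimal sketch of the usual remedy, resolving the FileSystem from the path itself (illustrative, not the attached patch):

{code}
// Illustrative only, not the attached patch.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class WrongFsSketch {
  static void setPermission(Configuration conf, Path src) throws Exception {
    // FileSystem.get(conf) is bound to the default (local) cluster and throws
    // IllegalArgumentException("Wrong FS: ...") for paths on a remote cluster.
    // Resolving the FS from the path honors the path's own scheme and authority:
    FileSystem srcFs = src.getFileSystem(conf);
    srcFs.setPermission(src, new FsPermission((short) 0644));
  }
}
{code}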

> Make Secure Bulk Load work across remote secure clusters
> 
>
> Key: HBASE-10902
> URL: https://issues.apache.org/jira/browse/HBASE-10902
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.96.1
>Reporter: Jerry He
>Assignee: Jerry He
> Attachments: HBASE-10902-v0-0.96.patch
>
>
> Two secure clusters, both with kerberos enabled.
> Run bulk load on one cluster to load files from another cluster.
> biadmin@hdtest249:~> hbase 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles 
> hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c
>  TestTable_rr
> Bulk load failed.  In the region server log:
> {code}
> 2014-04-02 20:04:56,361 ERROR 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint: Failed to 
> complete bulk load
> java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://bdvm197.svl.ibm.com:9000/user/biadmin/mybackups/TestTable/0709e79bb131af13ed088bf1afd5649c/info/6b44ca48aebf48d98cb3491f512c41a7,
>  expected: hdfs://hdtest249.svl.ibm.com:9000
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:651)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:181)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1248)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1244)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1244)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:233)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint$1.run(SecureBulkLoadEndpoint.java:223)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:300)
> at javax.security.auth.Subject.doAs(Subject.java:494)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1482)
> at 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.secureBulkLoadHFiles(SecureBulkLoadEndpoint.java:223)
> at 
> org.apache.hadoop.hbase.protobuf.generated.SecureBulkLoadProtos$SecureBulkLoadService.callMethod(SecureBulkLoadProtos.java:4631)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5088)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3219)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26933)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1854)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10895) unassign a region fails due to the hosting region server is in FailedServerList

2014-04-04 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-10895:
--

   Resolution: Fixed
Fix Version/s: 0.96.3
   0.98.2
   0.99.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks [~jxiang], [~tedyu] for the reviews! I've integrated the patch into 
the 0.96, 0.98 and trunk branches.

> unassign a region fails due to the hosting region server is in 
> FailedServerList
> ---
>
> Key: HBASE-10895
> URL: https://issues.apache.org/jira/browse/HBASE-10895
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 0.96.1, 0.98.1, 0.99.0
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.99.0, 0.98.2, 0.96.3
>
> Attachments: hbase-10895-trunk.patch, hbase-10895.patch
>
>
> This issue is similar to HBASE-10833, which dealt with the sendRegionOpen RPC, 
> while this JIRA issue happens with sendRegionClose.
> Once a RS is in the failed server list due to a network hiccup, AM quickly 
> exhausts all retries and later fails the whole region assignment. Below is 
> a sample stack trace:
> {noformat}
> 2014-03-31 13:39:10,056 INFO  [AM.-pool1-t8] master.AssignmentManager: Server 
> hor16n09.gq1.ygridcore.net,60020,1396270942046 returned 
> org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is 
> in the failed servers list: hor16n09.gq1.ygridcore.net/68.142.246.220:60020 
> for loadtest_d1,5994,1396261861562.fcef8d691632e99948fbf876d24f907e., 
> try=20 of 20
> org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is 
> in the failed servers list: hor16n09.gq1.ygridcore.net/68.142.246.220:60020
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:880)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1065)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.tracedWriteRequest(RpcClient.java:1032)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1474)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.closeRegion(AdminProtos.java:20854)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.closeRegion(ProtobufUtil.java:1656)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionClose(ServerManager.java:693)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.unassign(AssignmentManager.java:1685)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.forceRegionStateToOffline(AssignmentManager.java:1786)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1436)
> at 
> org.apache.hadoop.hbase.master.AssignCallable.call(AssignCallable.java:45)
> 
> 2014-03-31 13:39:10,056 WARN  [AM.-pool1-t8] master.RegionStates: Failed to 
> open/close fcef8d691632e99948fbf876d24f907e on 
> hor16n09.gq1.ygridcore.net,60020,1396270942046, set to FAILED_CLOSE
> 2014-03-31 13:39:10,056 INFO  [AM.-pool1-t8] master.RegionStates: 
> Transitioned {fcef8d691632e99948fbf876d24f907e state=PENDING_OPEN, 
> ts=1396273149814, server=hor16n09.gq1.ygridcore.net,60020,1396270942046} to 
> {fcef8d691632e99948fbf876d24f907e state=FAILED_CLOSE, ts=1396273150056, 
> server=hor16n09.gq1.ygridcore.net,60020,1396270942046}
> 2014-03-31 13:39:10,056 INFO  [AM.-pool1-t8] master.AssignmentManager: Skip 
> assigning {ENCODED => fcef8d691632e99948fbf876d24f907e, NAME => 
> 'loadtest_d1,5994,1396261861562.fcef8d691632e99948fbf876d24f907e.', 
> STARTKEY => '5994', ENDKEY => '6660'}, we couldn't close it: 
> {fcef8d691632e99948fbf876d24f907e state=FAILED_CLOSE, ts=1396273150056, 
> server=hor16n09.gq1.ygridcore.net,60020,1396270942046}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10912) setUp / tearDown in TestSCVFWithMiniCluster should be done once per run

2014-04-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960903#comment-13960903
 ] 

Anoop Sam John commented on HBASE-10912:


+1
There are some unused imports as well. Can you remove those on commit?

> setUp / tearDown in TestSCVFWithMiniCluster should be done once per run
> ---
>
> Key: HBASE-10912
> URL: https://issues.apache.org/jira/browse/HBASE-10912
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 10912-v1.txt
>
>
> setUp / tearDown should be annotated with @BeforeClass and @AfterClass, 
> respectively.
> On my Mac, the runtime for this test went from 19 seconds to 9 seconds:
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestSCVFWithMiniCluster
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.302 sec
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10899) [AccessController] Apply MAX_VERSIONS from schema or request when scanning

2014-04-04 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960905#comment-13960905
 ] 

Anoop Sam John commented on HBASE-10899:


Don't think the test failure is related to this patch.
+1

> [AccessController] Apply MAX_VERSIONS from schema or request when scanning
> --
>
> Key: HBASE-10899
> URL: https://issues.apache.org/jira/browse/HBASE-10899
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.98.2
>
> Attachments: HBASE-10899_1.patch, HBASE-10899_2.patch
>
>
> Similar to HBASE-10854, we need to handle multiple versions while the 
> versions are still in the memstore and/or before compaction happens, when the 
> max versions for the CF == 1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10906) Change error log for NamingException in TableInputFormatBase to WARN level

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960912#comment-13960912
 ] 

Hudson commented on HBASE-10906:


ABORTED: Integrated in HBase-TRUNK #5065 (See 
[https://builds.apache.org/job/HBase-TRUNK/5065/])
HBASE-10906 Change error log for NamingException in TableInputFormatBase to 
WARN level (tedyu: rev 1584890)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java


> Change error log for NamingException in TableInputFormatBase to WARN level
> --
>
> Key: HBASE-10906
> URL: https://issues.apache.org/jira/browse/HBASE-10906
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0, 0.98.2
>
> Attachments: 10906-v1.txt
>
>
> Over in this thread:
> http://search-hadoop.com/m/DHED4Qp3ho/HBase+resolveDns+error+in+log&subj=HBase+resolveDns+error+in+log
> Amit mentioned that, despite the error log, the mapreduce job executed 
> successfully.
> The log level should be lowered to WARN.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6618) Implement FuzzyRowFilter with ranges support

2014-04-04 Thread Alex Baranau (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960916#comment-13960916
 ] 

Alex Baranau commented on HBASE-6618:
-

[~kuzmiigo] 

bq. I thought that the value in the fixed part is checked as whole, but the 
code actually checks its bytes in isolation, so the rule is actually 0(0 - 
9)(0 - 9)(1 - 9)

Not true: aa68 will satisfy the rule ??(53 - 97). I added a test specifically for 
that:

{code}
// Range
Assert.assertEquals(FuzzyRowFilter.SatisfiesCode.YES,
    FuzzyRowFilter.satisfies(
        new byte[]{1, 1, 6, 8},
        new Triple(
            new byte[]{0, 0, 1, 1},    // mask
            new byte[]{1, 1, 5, 6},    // lower bytes (range start)
            new byte[]{1, 1, 9, 7}))); // upper bytes (range end)
{code}
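
In other words, the masked positions are compared against the two bounds as 
one lexicographic value rather than byte by byte: in the test above, {1, 1, 6, 8} 
passes because 6,8 falls between 5,6 and 9,7 as a whole, even though 8 > 7 at 
the last position. A sketch of that semantics (my paraphrase, not the patch's 
actual code):

{code}
// Lexicographic range check over the masked (non-fixed) positions only.
static boolean inRange(byte[] row, byte[] mask, byte[] lower, byte[] upper) {
  int cmpLo = 0, cmpHi = 0;
  for (int i = 0; i < row.length; i++) {
    if (mask[i] == 0) continue;          // fixed position, checked elsewhere
    int b = row[i] & 0xff;
    if (cmpLo == 0) cmpLo = Integer.compare(b, lower[i] & 0xff);
    if (cmpHi == 0) cmpHi = Integer.compare(b, upper[i] & 0xff);
  }
  return cmpLo >= 0 && cmpHi <= 0;       // lower <= row <= upper
}
{code}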

> Implement FuzzyRowFilter with ranges support
> 
>
> Key: HBASE-6618
> URL: https://issues.apache.org/jira/browse/HBASE-6618
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Alex Baranau
>Assignee: Alex Baranau
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-6618-algo-desc-bits.png, HBASE-6618-algo.patch, 
> HBASE-6618.patch, HBASE-6618_2.path, HBASE-6618_3.path
>
>
> Apart from the current ability to specify a fuzzy row filter, e.g. for a 
> <userId>_<actionId> format as <userId>_0004 (where 0004 is the actionId), it 
> would be great to also have the ability to specify a "fuzzy range", e.g. 
> <userId>_0004, ..., <userId>_0099.
> See initial discussion here: http://search-hadoop.com/m/WVLJdX0Z65
> Note: currently it is possible to provide multiple fuzzy row rules to the 
> existing FuzzyRowFilter, but when the range is big (contains thousands of 
> values) it is not efficient.
> The filter should perform efficient fast-forwarding during the scan (this is 
> what distinguishes it from a regex row filter).
> While such functionality may seem like a proper fit for a custom filter (i.e. 
> not included in the standard filter set), it looks like the filter may be 
> very re-usable. We may judge based on the implementation that will hopefully 
> be added.
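
For context, the existing (non-range) filter is driven by (key, mask) pairs, 
so covering a range means one pair per value. A sketch of why that gets 
expensive, using the <userId>_<actionId> layout from the description (the 
loop bounds and key layout are illustrative):

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

// One (key, mask) pair per actionId in the range: 96 pairs here, but a range
// spanning thousands of values would need thousands of pairs.
List<Pair<byte[], byte[]>> fuzzyKeys = new ArrayList<Pair<byte[], byte[]>>();
for (int actionId = 4; actionId <= 99; actionId++) {
  byte[] key = Bytes.toBytes(String.format("????_%04d", actionId));
  // 1 = fuzzy position (any byte matches), 0 = fixed position
  byte[] mask = {1, 1, 1, 1, 0, 0, 0, 0, 0};
  fuzzyKeys.add(new Pair<byte[], byte[]>(key, mask));
}
Scan scan = new Scan();
scan.setFilter(new FuzzyRowFilter(fuzzyKeys));
{code}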



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10913) Print exception of why a copy failed during ExportSnapshot

2014-04-04 Thread Harsh J (JIRA)
Harsh J created HBASE-10913:
---

 Summary: Print exception of why a copy failed during ExportSnapshot
 Key: HBASE-10913
 URL: https://issues.apache.org/jira/browse/HBASE-10913
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.96.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor


Currently we print a vague "Failed to copy the snapshot directory from X to Y" 
whenever X pre-exists on Y. Users have to figure this out by themselves.
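
The ask is simply to attach the cause when logging the failure; something of 
this shape (variable names hypothetical, not the actual ExportSnapshot code):

{code}
try {
  FileUtil.copy(srcFs, snapshotDir, dstFs, dstSnapshotDir, false, conf);
} catch (IOException e) {
  // Pass the exception to the logger so the real reason (e.g. the target
  // directory already exists) is visible instead of a vague one-liner.
  LOG.error("Failed to copy the snapshot directory from " + snapshotDir
      + " to " + dstSnapshotDir, e);
  throw e;
}
{code}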



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10906) Change error log for NamingException in TableInputFormatBase to WARN level

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960921#comment-13960921
 ] 

Hudson commented on HBASE-10906:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #249 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/249/])
HBASE-10906 Change error log for NamingException in TableInputFormatBase to 
WARN level (tedyu: rev 1584889)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java


> Change error log for NamingException in TableInputFormatBase to WARN level
> --
>
> Key: HBASE-10906
> URL: https://issues.apache.org/jira/browse/HBASE-10906
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0, 0.98.2
>
> Attachments: 10906-v1.txt
>
>
> Over in this thread:
> http://search-hadoop.com/m/DHED4Qp3ho/HBase+resolveDns+error+in+log&subj=HBase+resolveDns+error+in+log
> Amit mentioned that despite the error log, the mapreduce job executed 
> successfully.
> The log level should be lowered to WARN.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-6618) Implement FuzzyRowFilter with ranges support

2014-04-04 Thread Alex Baranau (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Baranau updated HBASE-6618:


Attachment: HBASE-6618_4.patch

Updated the patch to fit the latest trunk.

> Implement FuzzyRowFilter with ranges support
> 
>
> Key: HBASE-6618
> URL: https://issues.apache.org/jira/browse/HBASE-6618
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Alex Baranau
>Assignee: Alex Baranau
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-6618-algo-desc-bits.png, HBASE-6618-algo.patch, 
> HBASE-6618.patch, HBASE-6618_2.path, HBASE-6618_3.path, HBASE-6618_4.patch
>
>
> Apart from the current ability to specify a fuzzy row filter, e.g. for a 
> <userId>_<actionId> format as <userId>_0004 (where 0004 is the actionId), it 
> would be great to also have the ability to specify a "fuzzy range", e.g. 
> <userId>_0004, ..., <userId>_0099.
> See initial discussion here: http://search-hadoop.com/m/WVLJdX0Z65
> Note: currently it is possible to provide multiple fuzzy row rules to the 
> existing FuzzyRowFilter, but when the range is big (contains thousands of 
> values) it is not efficient.
> The filter should perform efficient fast-forwarding during the scan (this is 
> what distinguishes it from a regex row filter).
> While such functionality may seem like a proper fit for a custom filter (i.e. 
> not included in the standard filter set), it looks like the filter may be 
> very re-usable. We may judge based on the implementation that will hopefully 
> be added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6618) Implement FuzzyRowFilter with ranges support

2014-04-04 Thread Alex Baranau (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960924#comment-13960924
 ] 

Alex Baranau commented on HBASE-6618:
-

Updated patch, also uploaded to the review board at 
https://reviews.apache.org/r/8786. It is a very small change to fit the latest 
trunk. [~yuzhih...@gmail.com], if you have time, I'd very much appreciate a 
review. This version is much better and more flexible than the current one. 
Thank you a lot in advance!

> Implement FuzzyRowFilter with ranges support
> 
>
> Key: HBASE-6618
> URL: https://issues.apache.org/jira/browse/HBASE-6618
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Alex Baranau
>Assignee: Alex Baranau
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-6618-algo-desc-bits.png, HBASE-6618-algo.patch, 
> HBASE-6618.patch, HBASE-6618_2.path, HBASE-6618_3.path, HBASE-6618_4.patch
>
>
> Apart from the current ability to specify a fuzzy row filter, e.g. for a 
> <userId>_<actionId> format as <userId>_0004 (where 0004 is the actionId), it 
> would be great to also have the ability to specify a "fuzzy range", e.g. 
> <userId>_0004, ..., <userId>_0099.
> See initial discussion here: http://search-hadoop.com/m/WVLJdX0Z65
> Note: currently it is possible to provide multiple fuzzy row rules to the 
> existing FuzzyRowFilter, but when the range is big (contains thousands of 
> values) it is not efficient.
> The filter should perform efficient fast-forwarding during the scan (this is 
> what distinguishes it from a regex row filter).
> While such functionality may seem like a proper fit for a custom filter (i.e. 
> not included in the standard filter set), it looks like the filter may be 
> very re-usable. We may judge based on the implementation that will hopefully 
> be added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6618) Implement FuzzyRowFilter with ranges support

2014-04-04 Thread Alex Baranau (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960927#comment-13960927
 ] 

Alex Baranau commented on HBASE-6618:
-

I mean, more flexible than the one currently available in HBase.

> Implement FuzzyRowFilter with ranges support
> 
>
> Key: HBASE-6618
> URL: https://issues.apache.org/jira/browse/HBASE-6618
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Alex Baranau
>Assignee: Alex Baranau
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-6618-algo-desc-bits.png, HBASE-6618-algo.patch, 
> HBASE-6618.patch, HBASE-6618_2.path, HBASE-6618_3.path, HBASE-6618_4.patch
>
>
> Apart from the current ability to specify a fuzzy row filter, e.g. for a 
> <userId>_<actionId> format as <userId>_0004 (where 0004 is the actionId), it 
> would be great to also have the ability to specify a "fuzzy range", e.g. 
> <userId>_0004, ..., <userId>_0099.
> See initial discussion here: http://search-hadoop.com/m/WVLJdX0Z65
> Note: currently it is possible to provide multiple fuzzy row rules to the 
> existing FuzzyRowFilter, but when the range is big (contains thousands of 
> values) it is not efficient.
> The filter should perform efficient fast-forwarding during the scan (this is 
> what distinguishes it from a regex row filter).
> While such functionality may seem like a proper fit for a custom filter (i.e. 
> not included in the standard filter set), it looks like the filter may be 
> very re-usable. We may judge based on the implementation that will hopefully 
> be added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10914) Improve the snapshot directory local copy during ExportSnapshot

2014-04-04 Thread Harsh J (JIRA)
Harsh J created HBASE-10914:
---

 Summary: Improve the snapshot directory local copy during 
ExportSnapshot
 Key: HBASE-10914
 URL: https://issues.apache.org/jira/browse/HBASE-10914
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.96.0
Reporter: Harsh J
Priority: Minor


For tables with a lot of regions, ExportSnapshot appears to "hang" without 
progress because it copies the .snapshot directory (which holds tiny 
reference/reference-like files, I assume) and does not report its state. It 
would be good if it dumped its state every, say, 50 files it copies.

This operation is also sequential, so it takes a lot of time; it could perhaps 
be improved to run 5-10 threads in parallel, since the actual writes are tiny 
(i.e. data transfer is no concern) and it's mostly just NN interaction that's 
needed here.
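
A rough sketch of that shape (the thread count, helper names and progress 
interval are illustrative, not the actual ExportSnapshot code):

{code}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

void copySnapshotFiles(final FileSystem srcFs, final FileSystem dstFs,
    List<Path> files, final Path dstDir) throws Exception {
  ExecutorService pool = Executors.newFixedThreadPool(8); // 5-10 copiers
  final AtomicInteger done = new AtomicInteger();
  for (final Path src : files) {
    pool.submit(new Runnable() {
      public void run() {
        try {
          FileUtil.copy(srcFs, src, dstFs, new Path(dstDir, src.getName()),
              false, srcFs.getConf());
          int n = done.incrementAndGet();
          if (n % 50 == 0) { // report progress every 50 files
            System.out.println("Copied " + n + " snapshot files so far");
          }
        } catch (Exception e) {
          throw new RuntimeException(e);
        }
      }
    });
  }
  pool.shutdown();
  pool.awaitTermination(1, TimeUnit.HOURS);
}
{code}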



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-6618) Implement FuzzyRowFilter with ranges support

2014-04-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-6618:
--

Status: Patch Available  (was: Open)

> Implement FuzzyRowFilter with ranges support
> 
>
> Key: HBASE-6618
> URL: https://issues.apache.org/jira/browse/HBASE-6618
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Alex Baranau
>Assignee: Alex Baranau
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-6618-algo-desc-bits.png, HBASE-6618-algo.patch, 
> HBASE-6618.patch, HBASE-6618_2.path, HBASE-6618_3.path, HBASE-6618_4.patch
>
>
> Apart from the current ability to specify a fuzzy row filter, e.g. for a 
> <userId>_<actionId> format as <userId>_0004 (where 0004 is the actionId), it 
> would be great to also have the ability to specify a "fuzzy range", e.g. 
> <userId>_0004, ..., <userId>_0099.
> See initial discussion here: http://search-hadoop.com/m/WVLJdX0Z65
> Note: currently it is possible to provide multiple fuzzy row rules to the 
> existing FuzzyRowFilter, but when the range is big (contains thousands of 
> values) it is not efficient.
> The filter should perform efficient fast-forwarding during the scan (this is 
> what distinguishes it from a regex row filter).
> While such functionality may seem like a proper fit for a custom filter (i.e. 
> not included in the standard filter set), it looks like the filter may be 
> very re-usable. We may judge based on the implementation that will hopefully 
> be added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10912) setUp / tearDown in TestSCVFWithMiniCluster should be done once per run

2014-04-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10912:
---

Attachment: 10912-v2.txt

Patch v2 removes unused import.

> setUp / tearDown in TestSCVFWithMiniCluster should be done once per run
> ---
>
> Key: HBASE-10912
> URL: https://issues.apache.org/jira/browse/HBASE-10912
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 10912-v1.txt, 10912-v2.txt
>
>
> setUp / tearDown should be annotated with @BeforeClass and @AfterClass, 
> respectively.
> On my Mac, the runtime for this test went from 19 seconds to 9 seconds:
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestSCVFWithMiniCluster
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.302 sec
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10912) setUp / tearDown in TestSCVFWithMiniCluster should be done once per run

2014-04-04 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10912:
---

   Resolution: Fixed
Fix Version/s: 0.98.2
               0.99.0
 Hadoop Flags: Reviewed
       Status: Resolved  (was: Patch Available)

> setUp / tearDown in TestSCVFWithMiniCluster should be done once per run
> ---
>
> Key: HBASE-10912
> URL: https://issues.apache.org/jira/browse/HBASE-10912
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0, 0.98.2
>
> Attachments: 10912-v1.txt, 10912-v2.txt
>
>
> setUp / tearDown should be annotated with @BeforeClass and @AfterClass, 
> respectively.
> On my Mac, the runtime for this test went from 19 seconds to 9 seconds:
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestSCVFWithMiniCluster
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.302 sec
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10895) unassign a region fails due to the hosting region server is in FailedServerList

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960935#comment-13960935
 ] 

Hudson commented on HBASE-10895:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #250 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/250/])
HBASE-10895: unassign a region fails due to the hosting region server is in 
FailedServerList - part2 (jeffreyz: rev 1584950)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java
HBASE-10895: unassign a region fails due to the hosting region server is in 
FailedServerList (jeffreyz: rev 1584948)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java


> unassign a region fails due to the hosting region server is in 
> FailedServerList
> ---
>
> Key: HBASE-10895
> URL: https://issues.apache.org/jira/browse/HBASE-10895
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 0.96.1, 0.98.1, 0.99.0
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.99.0, 0.98.2, 0.96.3
>
> Attachments: hbase-10895-trunk.patch, hbase-10895.patch
>
>
> This issue is similar to HBASE-10833, which deals with the sendRegionOpen 
> RPC, while this issue happens with sendRegionClose.
> Once a RS is in the failed server list due to a network hiccup, AM quickly 
> exhausts all retries and later fails the whole region assignment. Below is 
> a sample stack trace:
> {noformat}
> 2014-03-31 13:39:10,056 INFO  [AM.-pool1-t8] master.AssignmentManager: Server 
> hor16n09.gq1.ygridcore.net,60020,1396270942046 returned 
> org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is 
> in the failed servers list: hor16n09.gq1.ygridcore.net/68.142.246.220:60020 
> for loadtest_d1,5994,1396261861562.fcef8d691632e99948fbf876d24f907e., 
> try=20 of 20
> org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is 
> in the failed servers list: hor16n09.gq1.ygridcore.net/68.142.246.220:60020
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:880)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1065)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.tracedWriteRequest(RpcClient.java:1032)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1474)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.closeRegion(AdminProtos.java:20854)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.closeRegion(ProtobufUtil.java:1656)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionClose(ServerManager.java:693)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.unassign(AssignmentManager.java:1685)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.forceRegionStateToOffline(AssignmentManager.java:1786)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1436)
> at 
> org.apache.hadoop.hbase.master.AssignCallable.call(AssignCallable.java:45)
> 
> 2014-03-31 13:39:10,056 WARN  [AM.-pool1-t8] master.RegionStates: Failed to 
> open/close fcef8d691632e99948fbf876d24f907e on 
> hor16n09.gq1.ygridcore.net,60020,1396270942046, set to FAILED_CLOSE
> 2014-03-31 13:39:10,056 INFO  [AM.-pool1-t8] master.RegionStates: 
> Transitioned {fcef8d691632e99948fbf876d24f907e state=PENDING_OPEN, 
> ts=1396273149814, server=hor16n09.gq1.ygridcore.net,60020,1396270942046} to 
> {fcef8d691632e99948fbf876d24f907e state=FAILED_CLOSE, ts=1396273150056, 
> server=hor16n09.gq1.ygridcore.net,60020,1396270942046}
> 2014-03-31 13:39:10,056 INFO  [AM.-pool1-t8] master.AssignmentManager: Skip 
> assigning {ENCODED => fcef8d691632e99948fbf876d24f907e, NAME => 
> 'loadtest_d1,5994,1396261861562.fcef8d691632e99948fbf876d24f907e.', 
> STARTKEY => '5994', ENDKEY => '6660'}, we couldn't close it: 
> {fcef8d691632e99948fbf876d24f907e state=FAILED_CLOSE, ts=1396273150056, 
> server=hor16n09.gq1.ygridcore.net,60020,1396270942046}
> {noformat}
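
The direction of a fix, roughly: treat FailedServerException as transient and 
wait out the short failed-server window before the next attempt, instead of 
burning through all retries in a tight loop (a sketch with hypothetical names; 
the committed patch in AssignmentManager differs in detail):

{code}
// Sketch: sleep past the failed-server window (2 s by default, governed by
// "hbase.ipc.client.failed.servers.expiry", IIRC) before retrying the close.
for (int attempt = 1; attempt <= maxAttempts; attempt++) {
  try {
    serverManager.sendRegionClose(server, region, versionOfClosingNode);
    break;
  } catch (FailedServerException fse) {
    Thread.sleep(2000 + 10);
  }
}
{code}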



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-10913) Print exception of why a copy failed during ExportSnapshot

2014-04-04 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HBASE-10913.
-

Resolution: Duplicate

A day too late… :-)

Resolving as dupe of HBASE-10622

> Print exception of why a copy failed during ExportSnapshot
> --
>
> Key: HBASE-10913
> URL: https://issues.apache.org/jira/browse/HBASE-10913
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 0.96.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
>
> Currently we print a vague "Failed to copy the snapshot directory from X to 
> Y" whenever X pre-exists on Y. Users have to figure this out by themselves.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-7847) Use zookeeper multi to clear znodes

2014-04-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960950#comment-13960950
 ] 

stack commented on HBASE-7847:
--

[~rakeshr] So, we should require a zk 3.4.x for hbase?  It was released 
November 2011 so this seems like an OK requirement to add to our list.  The 
hang will happen on any zk before 3.4.6?  When was multi added?  Do you know?

Regardless of the zk version, is it true that you have to set the 'multi' 
configuration property for us to even do a multi op in the first place?  If 
so, and the zk version does not support multi ops and hbase hangs, then we 
have a sort of defense and we can commit this.

Is it possible to ask zk what version it is?  It was not possible in the past 
but may be fixed in 3.4.6?  If so, that'd be cool.  Then we could ask and then 
do multi going forward (though, if I remember correctly, the issue here is 
that only one member of the ensemble, the one we are talking to, could report 
itself 3.4.6 while all others could be at an earlier version).

Thanks [~rakeshr]



> Use zookeeper multi to clear znodes
> ---
>
> Key: HBASE-7847
> URL: https://issues.apache.org/jira/browse/HBASE-7847
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Rakesh R
> Attachments: 7847-v1.txt, 7847_v6.patch, 7847_v6.patch, 
> HBASE-7847.patch, HBASE-7847.patch, HBASE-7847.patch, HBASE-7847_v4.patch, 
> HBASE-7847_v5.patch, HBASE-7847_v6.patch
>
>
> In ZKProcedureUtil, clearChildZNodes() and clearZNodes(String procedureName) 
> should utilize zookeeper multi so that they're atomic
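
For reference, multi was added in ZooKeeper 3.4.0 (ZOOKEEPER-965). A minimal 
sketch of clearing a procedure's znodes atomically with it (the paths and 
method name are illustrative, not ZKProcedureUtil's actual code):

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.zookeeper.Op;
import org.apache.zookeeper.ZooKeeper;

// Delete a parent znode and all its children in one atomic batch: either
// every delete is applied or none of them is.
static void clearZNodes(ZooKeeper zk, String parent) throws Exception {
  List<Op> ops = new ArrayList<Op>();
  for (String child : zk.getChildren(parent, false)) {
    ops.add(Op.delete(parent + "/" + child, -1)); // -1 matches any version
  }
  ops.add(Op.delete(parent, -1));
  zk.multi(ops); // fails as a unit on ensembles that don't support multi
}
{code}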



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10895) unassign a region fails due to the hosting region server is in FailedServerList

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960953#comment-13960953
 ] 

Hudson commented on HBASE-10895:


SUCCESS: Integrated in HBase-0.98 #266 (See 
[https://builds.apache.org/job/HBase-0.98/266/])
HBASE-10895: unassign a region fails due to the hosting region server is in 
FailedServerList - part2 (jeffreyz: rev 1584950)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java
HBASE-10895: unassign a region fails due to the hosting region server is in 
FailedServerList (jeffreyz: rev 1584948)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java


> unassign a region fails due to the hosting region server is in 
> FailedServerList
> ---
>
> Key: HBASE-10895
> URL: https://issues.apache.org/jira/browse/HBASE-10895
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 0.96.1, 0.98.1, 0.99.0
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.99.0, 0.98.2, 0.96.3
>
> Attachments: hbase-10895-trunk.patch, hbase-10895.patch
>
>
> This issue is similar to HBASE-10833, which deals with the sendRegionOpen 
> RPC, while this issue happens with sendRegionClose.
> Once a RS is in the failed server list due to a network hiccup, AM quickly 
> exhausts all retries and later fails the whole region assignment. Below is 
> a sample stack trace:
> {noformat}
> 2014-03-31 13:39:10,056 INFO  [AM.-pool1-t8] master.AssignmentManager: Server 
> hor16n09.gq1.ygridcore.net,60020,1396270942046 returned 
> org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is 
> in the failed servers list: hor16n09.gq1.ygridcore.net/68.142.246.220:60020 
> for loadtest_d1,5994,1396261861562.fcef8d691632e99948fbf876d24f907e., 
> try=20 of 20
> org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is 
> in the failed servers list: hor16n09.gq1.ygridcore.net/68.142.246.220:60020
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:880)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1065)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.tracedWriteRequest(RpcClient.java:1032)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1474)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.closeRegion(AdminProtos.java:20854)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.closeRegion(ProtobufUtil.java:1656)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionClose(ServerManager.java:693)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.unassign(AssignmentManager.java:1685)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.forceRegionStateToOffline(AssignmentManager.java:1786)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1436)
> at 
> org.apache.hadoop.hbase.master.AssignCallable.call(AssignCallable.java:45)
> 
> 2014-03-31 13:39:10,056 WARN  [AM.-pool1-t8] master.RegionStates: Failed to 
> open/close fcef8d691632e99948fbf876d24f907e on 
> hor16n09.gq1.ygridcore.net,60020,1396270942046, set to FAILED_CLOSE
> 2014-03-31 13:39:10,056 INFO  [AM.-pool1-t8] master.RegionStates: 
> Transitioned {fcef8d691632e99948fbf876d24f907e state=PENDING_OPEN, 
> ts=1396273149814, server=hor16n09.gq1.ygridcore.net,60020,1396270942046} to 
> {fcef8d691632e99948fbf876d24f907e state=FAILED_CLOSE, ts=1396273150056, 
> server=hor16n09.gq1.ygridcore.net,60020,1396270942046}
> 2014-03-31 13:39:10,056 INFO  [AM.-pool1-t8] master.AssignmentManager: Skip 
> assigning {ENCODED => fcef8d691632e99948fbf876d24f907e, NAME => 
> 'loadtest_d1,5994,1396261861562.fcef8d691632e99948fbf876d24f907e.', 
> STARTKEY => '5994', ENDKEY => '6660'}, we couldn't close it: 
> {fcef8d691632e99948fbf876d24f907e state=FAILED_CLOSE, ts=1396273150056, 
> server=hor16n09.gq1.ygridcore.net,60020,1396270942046}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6618) Implement FuzzyRowFilter with ranges support

2014-04-04 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960955#comment-13960955
 ] 

Ted Yu commented on HBASE-6618:
---

If a client compiled with FuzzyRowFilter before this change uses 
FuzzyRowFilter in a Scan, would the server side be able to handle it?

Please add more tests for the new rules. This would make the code more robust 
and help detect regressions.

> Implement FuzzyRowFilter with ranges support
> 
>
> Key: HBASE-6618
> URL: https://issues.apache.org/jira/browse/HBASE-6618
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Alex Baranau
>Assignee: Alex Baranau
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-6618-algo-desc-bits.png, HBASE-6618-algo.patch, 
> HBASE-6618.patch, HBASE-6618_2.path, HBASE-6618_3.path, HBASE-6618_4.patch
>
>
> Apart from the current ability to specify a fuzzy row filter, e.g. for a 
> <userId>_<actionId> format as <userId>_0004 (where 0004 is the actionId), it 
> would be great to also have the ability to specify a "fuzzy range", e.g. 
> <userId>_0004, ..., <userId>_0099.
> See initial discussion here: http://search-hadoop.com/m/WVLJdX0Z65
> Note: currently it is possible to provide multiple fuzzy row rules to the 
> existing FuzzyRowFilter, but when the range is big (contains thousands of 
> values) it is not efficient.
> The filter should perform efficient fast-forwarding during the scan (this is 
> what distinguishes it from a regex row filter).
> While such functionality may seem like a proper fit for a custom filter (i.e. 
> not included in the standard filter set), it looks like the filter may be 
> very re-usable. We may judge based on the implementation that will hopefully 
> be added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6618) Implement FuzzyRowFilter with ranges support

2014-04-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960963#comment-13960963
 ] 

Hadoop QA commented on HBASE-6618:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638828/HBASE-6618_4.patch
  against trunk revision .
  ATTACHMENT ID: 12638828

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9204//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9204//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9204//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9204//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9204//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9204//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9204//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9204//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9204//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9204//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9204//console

This message is automatically generated.

> Implement FuzzyRowFilter with ranges support
> 
>
> Key: HBASE-6618
> URL: https://issues.apache.org/jira/browse/HBASE-6618
> Project: HBase
>  Issue Type: New Feature
>  Components: Filters
>Reporter: Alex Baranau
>Assignee: Alex Baranau
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-6618-algo-desc-bits.png, HBASE-6618-algo.patch, 
> HBASE-6618.patch, HBASE-6618_2.path, HBASE-6618_3.path, HBASE-6618_4.patch
>
>
> Apart from the current ability to specify a fuzzy row filter, e.g. for a 
> <userId>_<actionId> format as <userId>_0004 (where 0004 is the actionId), it 
> would be great to also have the ability to specify a "fuzzy range", e.g. 
> <userId>_0004, ..., <userId>_0099.
> See initial discussion here: http://search-hadoop.com/m/WVLJdX0Z65
> Note: currently it is possible to provide multiple fuzzy row rules to the 
> existing FuzzyRowFilter, but when the range is big (contains thousands of 
> values) it is not efficient.
> The filter should perform efficient fast-forwarding during the scan (this is 
> what distinguishes it from a regex row filter).
> While such functionality may seem like a proper fit for a custom filter (i.e. 
> not included in the standard filter set), it looks like the filter may be 
> very re-usable. We may judge based on the implementation that will hopefully 
> be added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10895) unassign a region fails due to the hosting region server is in FailedServerList

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960968#comment-13960968
 ] 

Hudson commented on HBASE-10895:


SUCCESS: Integrated in hbase-0.96 #378 (See 
[https://builds.apache.org/job/hbase-0.96/378/])
HBASE-10895: unassign a region fails due to the hosting region server is in 
FailedServerList (jeffreyz: rev 1584949)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java


> unassign a region fails due to the hosting region server is in 
> FailedServerList
> ---
>
> Key: HBASE-10895
> URL: https://issues.apache.org/jira/browse/HBASE-10895
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 0.96.1, 0.98.1, 0.99.0
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.99.0, 0.98.2, 0.96.3
>
> Attachments: hbase-10895-trunk.patch, hbase-10895.patch
>
>
> This issue is similar to HBASE-10833, which deals with the sendRegionOpen 
> RPC, while this issue happens with sendRegionClose.
> Once a RS is in the failed server list due to a network hiccup, AM quickly 
> exhausts all retries and later fails the whole region assignment. Below is 
> a sample stack trace:
> {noformat}
> 2014-03-31 13:39:10,056 INFO  [AM.-pool1-t8] master.AssignmentManager: Server 
> hor16n09.gq1.ygridcore.net,60020,1396270942046 returned 
> org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is 
> in the failed servers list: hor16n09.gq1.ygridcore.net/68.142.246.220:60020 
> for loadtest_d1,5994,1396261861562.fcef8d691632e99948fbf876d24f907e., 
> try=20 of 20
> org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is 
> in the failed servers list: hor16n09.gq1.ygridcore.net/68.142.246.220:60020
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:880)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1065)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.tracedWriteRequest(RpcClient.java:1032)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1474)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.closeRegion(AdminProtos.java:20854)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.closeRegion(ProtobufUtil.java:1656)
> at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionClose(ServerManager.java:693)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.unassign(AssignmentManager.java:1685)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.forceRegionStateToOffline(AssignmentManager.java:1786)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1436)
> at 
> org.apache.hadoop.hbase.master.AssignCallable.call(AssignCallable.java:45)
> 
> 2014-03-31 13:39:10,056 WARN  [AM.-pool1-t8] master.RegionStates: Failed to 
> open/close fcef8d691632e99948fbf876d24f907e on 
> hor16n09.gq1.ygridcore.net,60020,1396270942046, set to FAILED_CLOSE
> 2014-03-31 13:39:10,056 INFO  [AM.-pool1-t8] master.RegionStates: 
> Transitioned {fcef8d691632e99948fbf876d24f907e state=PENDING_OPEN, 
> ts=1396273149814, server=hor16n09.gq1.ygridcore.net,60020,1396270942046} to 
> {fcef8d691632e99948fbf876d24f907e state=FAILED_CLOSE, ts=1396273150056, 
> server=hor16n09.gq1.ygridcore.net,60020,1396270942046}
> 2014-03-31 13:39:10,056 INFO  [AM.-pool1-t8] master.AssignmentManager: Skip 
> assigning {ENCODED => fcef8d691632e99948fbf876d24f907e, NAME => 
> 'loadtest_d1,5994,1396261861562.fcef8d691632e99948fbf876d24f907e.', 
> STARTKEY => '5994', ENDKEY => '6660'}, we couldn't close it: 
> {fcef8d691632e99948fbf876d24f907e state=FAILED_CLOSE, ts=1396273150056, 
> server=hor16n09.gq1.ygridcore.net,60020,1396270942046}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10912) setUp / tearDown in TestSCVFWithMiniCluster should be done once per run

2014-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960991#comment-13960991
 ] 

Hudson commented on HBASE-10912:


SUCCESS: Integrated in HBase-0.98 #267 (See 
[https://builds.apache.org/job/HBase-0.98/267/])
HBASE-10912 setUp / tearDown in TestSCVFWithMiniCluster should be done once per 
run (tedyu: rev 1584951)
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSCVFWithMiniCluster.java


> setUp / tearDown in TestSCVFWithMiniCluster should be done once per run
> ---
>
> Key: HBASE-10912
> URL: https://issues.apache.org/jira/browse/HBASE-10912
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0, 0.98.2
>
> Attachments: 10912-v1.txt, 10912-v2.txt
>
>
> setUp / tearDown should be annotated with @BeforeClass and @AfterClass, 
> respectively.
> On my Mac, the runtime for this test went from 19 seconds to 9 seconds:
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestSCVFWithMiniCluster
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.302 sec
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)