[jira] [Commented] (HBASE-12329) Table create with duplicate column family names quietly succeeds

2014-10-30 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14189726#comment-14189726
 ] 

Jingcheng Du commented on HBASE-12329:
--

Thanks [~busbey] for the review, will submit another patch soon.

> Table create with duplicate column family names quietly succeeds
> 
>
> Key: HBASE-12329
> URL: https://issues.apache.org/jira/browse/HBASE-12329
> Project: HBase
>  Issue Type: Bug
>  Components: Client, shell
>Reporter: Sean Busbey
>Assignee: Jingcheng Du
>Priority: Minor
> Attachments: HBASE-12329.diff
>
>
> From the mailing list
> {quote}
> I was expecting this to be forbidden, **but** this call does not throw any
> exception
> {code}
> String[] families = {"cf", "cf"};
> HTableDescriptor desc = new HTableDescriptor(name);
> for (String cf : families) {
>   HColumnDescriptor coldef = new HColumnDescriptor(cf);
>   desc.addFamily(coldef);
> }
> try {
>   admin.createTable(desc);
> } catch (TableExistsException e) {
>   throw new IOException("table \'" + name + "\' already exists");
> }
> {code}
> {quote}
> And Ted's follow-up replicates it in the shell:
> {code}
> hbase(main):001:0> create 't2', {NAME => 'f1'}, {NAME => 'f1'}
> The table got created - with 1 column family:
> hbase(main):002:0> describe 't2'
> DESCRIPTION                                                        ENABLED
>  't2', {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER   true
>  => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION
>  => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647',
>  KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY
>  => 'false', BLOCKCACHE => 'true'}
> 1 row(s) in 0.1000 seconds
> {code}
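
A possible client-side guard, shown only as a minimal sketch (the class and method names below are made up for illustration and this is not the committed fix): fail fast on duplicate family names before calling createTable, since, as the shell output above shows, a second family with the same name just collapses into one.
{code}
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

// Sketch only, not the committed fix: reject duplicate family names instead of
// letting the second addFamily() silently replace the first.
public class DuplicateFamilyGuard {
  public static HTableDescriptor build(String table, String... families) throws IOException {
    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf(table));
    Set<String> seen = new HashSet<String>();
    for (String cf : families) {
      if (!seen.add(cf)) {
        throw new IOException("duplicate column family '" + cf + "' for table '" + table + "'");
      }
      desc.addFamily(new HColumnDescriptor(cf));
    }
    return desc;
  }
}
{code}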



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12336) RegionServer failed to shutdown for NodeFailoverWorker thread

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14189730#comment-14189730
 ] 

Hudson commented on HBASE-12336:


SUCCESS: Integrated in HBase-1.0 #387 (See 
[https://builds.apache.org/job/HBase-1.0/387/])
HBASE-12336 RegionServer failed to shutdown for NodeFailoverWorker thread (Liu 
Shaohui) (stack: rev 9e0ca7843906690afea1962fa4cf22b2e91cb224)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java


> RegionServer failed to shutdown for NodeFailoverWorker thread
> -
>
> Key: HBASE-12336
> URL: https://issues.apache.org/jira/browse/HBASE-12336
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.11
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0, 0.94.26, 0.98.9, 0.99.2
>
> Attachments: HBASE-12336-trunk-v1.diff, stack
>
>
> After enabling hbase.zookeeper.useMulti in an HBase cluster, we found that the 
> regionserver failed to shut down. All other threads had exited except a 
> NodeFailoverWorker thread.
> {code}
> "ReplicationExecutor-0" prio=10 tid=0x7f0d40195ad0 nid=0x73a in 
> Object.wait() [0x7f0dc8fe6000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:485)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1309)
> - locked <0x0005a16df080> (a 
> org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:930)
> at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:912)
> at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.multi(RecoverableZooKeeper.java:531)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.multiOrSequential(ZKUtil.java:1518)
> at 
> org.apache.hadoop.hbase.replication.ReplicationZookeeper.copyQueuesFromRSUsingMulti(ReplicationZookeeper.java:804)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager$NodeFailoverWorker.run(ReplicationSourceManager.java:612)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> The shutdown method of the executor is certainly called in 
> ReplicationSourceManager#join.
>  
> I am looking into the root cause; suggestions are welcome. Thanks
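
For context, a generic way to avoid this kind of hang is to bound the executor shutdown and interrupt workers that do not finish in time. This is only a sketch under the assumption that the worker can safely be interrupted; it is not necessarily what the eventual patch does.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public final class BoundedShutdown {
  // Sketch: shutdown() alone does not interrupt a task blocked inside a
  // ZooKeeper call; shutdownNow() interrupts it so the JVM can exit.
  public static void shutdownAndWait(ExecutorService executor) {
    executor.shutdown();
    try {
      if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
        executor.shutdownNow();  // interrupt the blocked NodeFailoverWorker
      }
    } catch (InterruptedException e) {
      executor.shutdownNow();
      Thread.currentThread().interrupt();
    }
  }
}
{code}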



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12238) A few ugly exceptions on startup

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14189729#comment-14189729
 ] 

Hudson commented on HBASE-12238:


SUCCESS: Integrated in HBase-1.0 #387 (See 
[https://builds.apache.org/job/HBase-1.0/387/])
HBASE-12238 A few ugly exceptions on startup (stack: rev 
b069c10af4aae0ecc94196f241124652cd0b1998)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/MetaTableLocator.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


> A few ugly exceptions on startup
> 
>
> Key: HBASE-12238
> URL: https://issues.apache.org/jira/browse/HBASE-12238
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.1
>Reporter: stack
>Assignee: stack
>Priority: Minor
> Fix For: 2.0.0, 0.99.2
>
> Attachments: 12238.txt, 12238.txt
>
>
> Let me fix a few innocuous exceptions that show on startup (seen while testing 
> 0.99.1) even when everything is regular -- they will throw people off.
> Here is one:
> {code}
> 2014-10-12 19:07:15,251 INFO  [c2020:16020.activeMasterManager] 
> zookeeper.MetaTableLocator: Failed verification of hbase:meta,,1 at 
> address=c2021.halxg.cloudera.com,16020,1413165899611, 
> exception=org.apache.hadoop.hbase.NotServingRegionException: 
> org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is 
> not online on c2021.halxg.cloudera.com,16020,1413166029547
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2677)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:838)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionInfo(RSRpcServices.java:1110)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:20158)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2016)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:110)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:90)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> More to follow...
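
For illustration of the kind of cleanup this implies, a sketch only (the class and the checkMetaOnline helper below are made up; this is not the committed patch): the meta verification could treat NotServingRegionException as an expected condition and log a single line instead of the full stack trace.
{code}
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.NotServingRegionException;
import org.apache.hadoop.hbase.ServerName;

public class QuietMetaVerifier {
  private static final Log LOG = LogFactory.getLog(QuietMetaVerifier.class);

  // Sketch: NotServingRegionException here just means the region moved or is
  // not open yet, so log it quietly and let the caller retry.
  public boolean verifyMeta(ServerName sn) {
    try {
      return checkMetaOnline(sn);  // hypothetical helper doing the getRegionInfo RPC
    } catch (NotServingRegionException e) {
      LOG.info("hbase:meta,,1 not online on " + sn + "; will retry: " + e.getMessage());
      return false;
    } catch (IOException e) {
      LOG.warn("Failed verification of hbase:meta,,1 at " + sn, e);
      return false;
    }
  }

  private boolean checkMetaOnline(ServerName sn) throws IOException {
    throw new UnsupportedOperationException("placeholder for the actual RPC");
  }
}
{code}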



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-30 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-12375:
--
Attachment: HBASE-12375-v2.patch

While testing I found that LoadIncrementalHFiles fails to create a table having a 
CF name starting with '_'.
The v2 patch fixes that.
Please review.

> LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
> --
>
> Key: HBASE-12375
> URL: https://issues.apache.org/jira/browse/HBASE-12375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.5
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Attachments: HBASE-12375-v2.patch, HBASE-12375.patch
>
>
> We do not restrict users from creating a table with a column family name 
> starting with '_'.
> When a user creates such a table, LoadIncrementalHFiles skips that family's 
> data instead of loading it into the table:
> {code}
> // Skip _logs, etc
> if (familyDir.getName().startsWith("_")) continue;
> {code}
> I think we should remove that check, as I do not see any _logs directory being 
> created by the bulkload tool in the output directory.
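
If a prefix check does turn out to be needed for other output (an assumption; the helper below is illustrative only, not the patch), an alternative would be to skip only directories that are not declared column families of the target table:
{code}
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;

public final class FamilyDirFilter {
  // Sketch: treat a directory as loadable only if its name matches one of the
  // table's column families, so families such as "_cf" are not skipped.
  public static boolean isFamilyDir(FileStatus dir, HTableDescriptor htd) {
    Set<String> families = new HashSet<String>();
    for (HColumnDescriptor hcd : htd.getColumnFamilies()) {
      families.add(hcd.getNameAsString());
    }
    return families.contains(dir.getPath().getName());
  }
}
{code}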



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-30 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-12375:
--
Attachment: HBASE-12375-0.98.patch

Patch for 0.98 version

> LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
> --
>
> Key: HBASE-12375
> URL: https://issues.apache.org/jira/browse/HBASE-12375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.5
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Attachments: HBASE-12375-0.98.patch, HBASE-12375-v2.patch, 
> HBASE-12375.patch
>
>
> We do not restrict users from creating a table with a column family name 
> starting with '_'.
> When a user creates such a table, LoadIncrementalHFiles skips that family's 
> data instead of loading it into the table:
> {code}
> // Skip _logs, etc
> if (familyDir.getName().startsWith("_")) continue;
> {code}
> I think we should remove that check, as I do not see any _logs directory being 
> created by the bulkload tool in the output directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14189751#comment-14189751
 ] 

Hadoop QA commented on HBASE-12375:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12678135/HBASE-12375-0.98.patch
  against trunk revision .
  ATTACHMENT ID: 12678135

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail.

Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java:[448,24]
 no suitable method found for createTable(java.lang.String,java.lang.String)
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.2:testCompile 
(default-testCompile) on project hbase-server: Compilation failure
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java:[448,24]
 no suitable method found for createTable(java.lang.String,java.lang.String)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(byte[],byte[][],byte[][])
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(org.apache.hadoop.hbase.TableName,byte[],byte[][])
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(byte[],byte[],byte[][]) 
is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(org.apache.hadoop.hbase.TableName,byte[][],int[])
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(byte[],byte[][],int[]) 
is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(org.apache.hadoop.hbase.TableName,byte[][],int,int)
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(byte[],byte[][],int,int)
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(org.apache.hadoop.hbase.TableName,byte[][],int)
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(byte[],byte[][],int) is 
not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(org.apache.hadoop.hbase.TableName,byte[],int)
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(byte[],byte[],int) is 
not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(byte[],byte[][],org.apache.hadoop.conf.Configuration,int)
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(org.apache.hadoop.hbase.TableName,byte[][],org.apache.hadoop.conf.Configuration,int)
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(byte[],byte[][],org.apache.hadoop.conf.Configuration)
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(org.apache.hadoop.hbase.TableName,byte[][],org.apache.hadoop.conf.Configuration)
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(org.apache.hadoop.hbase.HTableDescriptor,byte[][])
 is not applicable
[ERROR] (actual argument java.lang.String cannot be converted to 
org.apache.hadoop.hbase.HTableDescriptor by method invocation conversion)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(org.apache.hadoop.hbase.HTableDescriptor,byte[][],org.apache.hadoop.conf.Configuration)
 is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method 
org.apache.hadoop.hbase.HBaseTestingUtility.createTable(org.apache.h

[jira] [Commented] (HBASE-12336) RegionServer failed to shutdown for NodeFailoverWorker thread

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14189765#comment-14189765
 ] 

Hudson commented on HBASE-12336:


FAILURE: Integrated in HBase-0.98 #640 (See 
[https://builds.apache.org/job/HBase-0.98/640/])
HBASE-12336 RegionServer failed to shutdown for NodeFailoverWorker thread (Liu 
Shaohui) (stack: rev 954eb428f2e23b4bc85b4073c845d475bd8c0c2e)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java


> RegionServer failed to shutdown for NodeFailoverWorker thread
> -
>
> Key: HBASE-12336
> URL: https://issues.apache.org/jira/browse/HBASE-12336
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.11
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0, 0.94.26, 0.98.9, 0.99.2
>
> Attachments: HBASE-12336-trunk-v1.diff, stack
>
>
> After enabling hbase.zookeeper.useMulti in an HBase cluster, we found that the 
> regionserver failed to shut down. All other threads had exited except a 
> NodeFailoverWorker thread.
> {code}
> "ReplicationExecutor-0" prio=10 tid=0x7f0d40195ad0 nid=0x73a in 
> Object.wait() [0x7f0dc8fe6000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:485)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1309)
> - locked <0x0005a16df080> (a 
> org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:930)
> at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:912)
> at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.multi(RecoverableZooKeeper.java:531)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.multiOrSequential(ZKUtil.java:1518)
> at 
> org.apache.hadoop.hbase.replication.ReplicationZookeeper.copyQueuesFromRSUsingMulti(ReplicationZookeeper.java:804)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager$NodeFailoverWorker.run(ReplicationSourceManager.java:612)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> The shutdown method of the executor is certainly called in 
> ReplicationSourceManager#join.
>  
> I am looking into the root cause; suggestions are welcome. Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9527) Review all old api that takes a table name as a byte array and ensure none can pass ns + tablename

2014-10-30 Thread Talat UYARER (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14189807#comment-14189807
 ] 

Talat UYARER commented on HBASE-9527:
-

Hi [~busbey], if you are not working on this, could I assign it to myself?

> Review all old api that takes a table name as a byte array and ensure none 
> can pass ns + tablename
> --
>
> Key: HBASE-9527
> URL: https://issues.apache.org/jira/browse/HBASE-9527
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 0.99.2
>
>
> Go over all old APIs that take a table name and ensure that it is not 
> possible to pass in a byte array that is a namespace + tablename; instead 
> throw an exception.
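
One way such a guard could look, shown only as a sketch (the class name is illustrative and the actual approach is up to the assignee): reject any legacy byte[] table name that contains the ':' namespace delimiter.
{code}
import java.io.IOException;

public final class LegacyTableNameCheck {
  private static final byte NAMESPACE_DELIMITER = (byte) ':';

  // Sketch: an old-style byte[] table name must not smuggle in a namespace,
  // i.e. it must not contain the ':' delimiter; callers should use TableName.
  public static void assertNoNamespace(byte[] tableName) throws IOException {
    for (byte b : tableName) {
      if (b == NAMESPACE_DELIMITER) {
        throw new IOException(
            "byte[] table name must not include a namespace; use TableName instead");
      }
    }
  }
}
{code}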



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12344) Split up TestAdmin

2014-10-30 Thread Talat UYARER (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14189809#comment-14189809
 ] 

Talat UYARER commented on HBASE-12344:
--

Hi [~apurtell], if you are not working on this, could I assign it to myself?

> Split up TestAdmin
> --
>
> Key: HBASE-12344
> URL: https://issues.apache.org/jira/browse/HBASE-12344
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
>
> Running time for TestAdmin on a dev box is about 400 seconds before 
> HBASE-12142, 500 seconds after.  Split it up. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14189827#comment-14189827
 ] 

Hadoop QA commented on HBASE-12375:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12678134/HBASE-12375-v2.patch
  against trunk revision .
  ATTACHMENT ID: 12678134

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11516//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11516//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11516//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11516//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11516//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11516//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11516//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11516//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11516//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11516//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11516//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11516//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11516//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11516//console

This message is automatically generated.

> LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
> --
>
> Key: HBASE-12375
> URL: https://issues.apache.org/jira/browse/HBASE-12375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.5
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Attachments: HBASE-12375-0.98.patch, HBASE-12375-v2.patch, 
> HBASE-12375.patch
>
>
> We do not restrict users from creating a table with a column family name 
> starting with '_'.
> When a user creates such a table, LoadIncrementalHFiles skips that family's 
> data instead of loading it into the table:
> {code}
> // Skip _logs, etc
> if (familyDir.getName().startsWith("_")) continue;
> {code}
> I think we should remove that check, as I do not see any _logs directory being 
> created by the bulkload tool in the output directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12336) RegionServer failed to shutdown for NodeFailoverWorker thread

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14189828#comment-14189828
 ] 

Hudson commented on HBASE-12336:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #609 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/609/])
HBASE-12336 RegionServer failed to shutdown for NodeFailoverWorker thread (Liu 
Shaohui) (stack: rev 954eb428f2e23b4bc85b4073c845d475bd8c0c2e)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java


> RegionServer failed to shutdown for NodeFailoverWorker thread
> -
>
> Key: HBASE-12336
> URL: https://issues.apache.org/jira/browse/HBASE-12336
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.11
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0, 0.94.26, 0.98.9, 0.99.2
>
> Attachments: HBASE-12336-trunk-v1.diff, stack
>
>
> After enabling hbase.zookeeper.useMulti in an HBase cluster, we found that the 
> regionserver failed to shut down. All other threads had exited except a 
> NodeFailoverWorker thread.
> {code}
> "ReplicationExecutor-0" prio=10 tid=0x7f0d40195ad0 nid=0x73a in 
> Object.wait() [0x7f0dc8fe6000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:485)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1309)
> - locked <0x0005a16df080> (a 
> org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:930)
> at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:912)
> at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.multi(RecoverableZooKeeper.java:531)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.multiOrSequential(ZKUtil.java:1518)
> at 
> org.apache.hadoop.hbase.replication.ReplicationZookeeper.copyQueuesFromRSUsingMulti(ReplicationZookeeper.java:804)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager$NodeFailoverWorker.run(ReplicationSourceManager.java:612)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> The shutdown method of the executor is certainly called in 
> ReplicationSourceManager#join.
>  
> I am looking into the root cause; suggestions are welcome. Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11683) Metrics for MOB

2014-10-30 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14189922#comment-14189922
 ] 

Jingcheng Du commented on HBASE-11683:
--

Uploaded patch V8 according to Jon's comments.

> Metrics for MOB
> ---
>
> Key: HBASE-11683
> URL: https://issues.apache.org/jira/browse/HBASE-11683
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Jingcheng Du
> Attachments: HBASE-11683-V2.diff, HBASE-11683-V3.diff, 
> HBASE-11683-V4.diff, HBASE-11683-V5.diff, HBASE-11683-V6.diff, 
> HBASE-11683-V7.diff, HBASE-11683-V8.diff, HBASE-11683.diff
>
>
> We need to make sure to capture metrics about mobs.
> Some basic ones include:
> # of mob writes
> # of mob reads
> # avg size of mob (?)
> # mob files
> # of mob compactions / sweeps



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11683) Metrics for MOB

2014-10-30 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-11683:
-
Attachment: HBASE-11683-V8.diff

> Metrics for MOB
> ---
>
> Key: HBASE-11683
> URL: https://issues.apache.org/jira/browse/HBASE-11683
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Jingcheng Du
> Attachments: HBASE-11683-V2.diff, HBASE-11683-V3.diff, 
> HBASE-11683-V4.diff, HBASE-11683-V5.diff, HBASE-11683-V6.diff, 
> HBASE-11683-V7.diff, HBASE-11683-V8.diff, HBASE-11683.diff
>
>
> We need to make sure to capture metrics about mobs.
> Some basic ones include:
> # of mob writes
> # of mob reads
> # avg size of mob (?)
> # mob files
> # of mob compactions / sweeps



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11683) Metrics for MOB

2014-10-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14189926#comment-14189926
 ] 

Hadoop QA commented on HBASE-11683:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12678161/HBASE-11683-V8.diff
  against trunk revision .
  ATTACHMENT ID: 12678161

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11518//console

This message is automatically generated.

> Metrics for MOB
> ---
>
> Key: HBASE-11683
> URL: https://issues.apache.org/jira/browse/HBASE-11683
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Jingcheng Du
> Attachments: HBASE-11683-V2.diff, HBASE-11683-V3.diff, 
> HBASE-11683-V4.diff, HBASE-11683-V5.diff, HBASE-11683-V6.diff, 
> HBASE-11683-V7.diff, HBASE-11683-V8.diff, HBASE-11683.diff
>
>
> We need to make sure to capture metrics about mobs.
> Some basic ones include:
> # of mob writes
> # of mob reads
> # avg size of mob (?)
> # mob files
> # of mob compactions / sweeps



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12378) Add a test to verify that the read-replica is able to read after a compaction

2014-10-30 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-12378:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add a test to verify that the read-replica is able to read after a compaction
> -
>
> Key: HBASE-12378
> URL: https://issues.apache.org/jira/browse/HBASE-12378
> Project: HBase
>  Issue Type: Test
>  Components: regionserver, Replication
>Affects Versions: 2.0.0, 0.99.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 0.99.2
>
> Attachments: HBASE-12378-v0.patch, HBASE-12378-v0.patch
>
>
> Add a unit test that verifies that the secondary read-replica is still able to 
> read all the data even when the files on the primary are archived and the 
> store file refresh is not executed.
> Basically, the point is to have a test that verifies that the file-link logic 
> is not removed.
> (There are a couple of tests that probably want to do that, but since they 
> operate on small data they will never trigger the file-link reopen.)
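
The rough shape of such a test might be as follows. This is a sketch only; the harness, the waiting for compaction and archiving to finish, and the method names are assumptions rather than the committed TestRegionReplicas change.
{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Consistency;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;

public class ReplicaReadAfterCompaction {
  public static void verify(Admin admin, Table table, byte[] row) throws Exception {
    TableName name = table.getName();
    admin.flush(name);          // persist the data on the primary
    admin.majorCompact(name);   // compaction archives the original files on the primary
    // ... a real test would wait here until compaction and archiving have finished ...
    Get get = new Get(row);
    get.setConsistency(Consistency.TIMELINE);  // let a secondary replica serve the read
    Result result = table.get(get);
    if (result.isEmpty()) {
      throw new AssertionError("secondary replica could not read the row after compaction");
    }
  }
}
{code}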



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11819) Unit test for CoprocessorHConnection

2014-10-30 Thread Talat UYARER (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Talat UYARER updated HBASE-11819:
-
Attachment: HBASE-11819v2.patch

Updated per Stack's suggestions.

> Unit test for CoprocessorHConnection 
> -
>
> Key: HBASE-11819
> URL: https://issues.apache.org/jira/browse/HBASE-11819
> Project: HBase
>  Issue Type: Test
>Reporter: Andrew Purtell
>Assignee: Talat UYARER
>Priority: Minor
>  Labels: newbie++
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
> Attachments: HBASE-11819.patch, HBASE-11819v2.patch
>
>
> Add a unit test to hbase-server that exercises CoprocessorHConnection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-12344) Split up TestAdmin

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-12344:
---

Assignee: Talat UYARER  (was: Andrew Purtell)

All yours!

> Split up TestAdmin
> --
>
> Key: HBASE-12344
> URL: https://issues.apache.org/jira/browse/HBASE-12344
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>Assignee: Talat UYARER
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
>
> Running time for TestAdmin on a dev box is about 400 seconds before 
> HBASE-12142, 500 seconds after.  Split it up. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-12344) Split up TestAdmin

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-12344:
---

Assignee: Andrew Purtell  (was: Talat UYARER)

whoops, wrong issue. Sorry Talat and Andrew!

> Split up TestAdmin
> --
>
> Key: HBASE-12344
> URL: https://issues.apache.org/jira/browse/HBASE-12344
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
>
> Running time for TestAdmin on a dev box is about 400 seconds before 
> HBASE-12142, 500 seconds after.  Split it up. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-9527) Review all old api that takes a table name as a byte array and ensure none can pass ns + tablename

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-9527:
--

Assignee: Talat UYARER  (was: Sean Busbey)

All yours!

> Review all old api that takes a table name as a byte array and ensure none 
> can pass ns + tablename
> --
>
> Key: HBASE-9527
> URL: https://issues.apache.org/jira/browse/HBASE-9527
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Talat UYARER
>Priority: Critical
> Fix For: 0.99.2
>
>
> Go over all old APIs that take a table name and ensure that it is not 
> possible to pass in a byte array that is a namespace + tablename; instead 
> throw an exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-7126) Update website with info on how to report security bugs

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-7126:
---
Priority: Critical  (was: Minor)

> Update website with info on how to report security bugs 
> 
>
> Key: HBASE-7126
> URL: https://issues.apache.org/jira/browse/HBASE-7126
> Project: HBase
>  Issue Type: Task
>Reporter: Eli Collins
>Priority: Critical
>
> The HBase website should be updated with information on how to report 
> potential security vulnerabilities. In Hadoop land we have a private security 
> list that anyone can post to, which we point to on our list page; Hadoop 
> example: http://hadoop.apache.org/general_lists.html#Security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-7126) Update website with info on how to report security bugs

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-7126:
---
Labels: website  (was: )

> Update website with info on how to report security bugs 
> 
>
> Key: HBASE-7126
> URL: https://issues.apache.org/jira/browse/HBASE-7126
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Eli Collins
>Priority: Critical
>  Labels: website
>
> The HBase website should be updated with information on how to report 
> potential security vulnerabilities. In Hadoop land we have a private security 
> list that anyone can post to, which we point to on our list page; Hadoop 
> example: http://hadoop.apache.org/general_lists.html#Security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-7126) Update website with info on how to report security bugs

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-7126:
---
Component/s: documentation

> Update website with info on how to report security bugs 
> 
>
> Key: HBASE-7126
> URL: https://issues.apache.org/jira/browse/HBASE-7126
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Eli Collins
>Priority: Critical
>  Labels: website
>
> The HBase website should be updated with information on how to report 
> potential security vulnerabilities. In Hadoop land we have a private security 
> list that anyone can post to, which we point to on our list page; Hadoop 
> example: http://hadoop.apache.org/general_lists.html#Security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-7126) Update website with info on how to report security bugs

2014-10-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190049#comment-14190049
 ] 

Sean Busbey commented on HBASE-7126:


More info in the [ASF ref|http://www.apache.org/security/committers.html].

secur...@apache.org will default to private@hbase for forwarding reports to 
them. That's fine if we want to stick to using that for all reports, but we 
should document the preference.

> Update website with info on how to report security bugs 
> 
>
> Key: HBASE-7126
> URL: https://issues.apache.org/jira/browse/HBASE-7126
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Eli Collins
>Priority: Critical
>  Labels: website
>
> The HBase website should be updated with information on how to report 
> potential security vulnerabilities. In Hadoop land we have a private security 
> list that anyone can post to, which we point to on our list page; Hadoop 
> example: http://hadoop.apache.org/general_lists.html#Security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12378) Add a test to verify that the read-replica is able to read after a compaction

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190070#comment-14190070
 ] 

Hudson commented on HBASE-12378:


SUCCESS: Integrated in HBase-1.0 #388 (See 
[https://builds.apache.org/job/HBase-1.0/388/])
HBASE-12378 Add a test to verify that the read-replica is able to read after a 
compaction (matteo.bertozzi: rev c466c619760c212167308c10e9768540cf3b2bab)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionReplicas.java


> Add a test to verify that the read-replica is able to read after a compaction
> -
>
> Key: HBASE-12378
> URL: https://issues.apache.org/jira/browse/HBASE-12378
> Project: HBase
>  Issue Type: Test
>  Components: regionserver, Replication
>Affects Versions: 2.0.0, 0.99.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 0.99.2
>
> Attachments: HBASE-12378-v0.patch, HBASE-12378-v0.patch
>
>
> Add a unit test that verifies that the secondary read-replica is still able to 
> read all the data even when the files on the primary are archived and the 
> store file refresh is not executed.
> Basically, the point is to have a test that verifies that the file-link logic 
> is not removed.
> (There are a couple of tests that probably want to do that, but since they 
> operate on small data they will never trigger the file-link reopen.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12378) Add a test to verify that the read-replica is able to read after a compaction

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190101#comment-14190101
 ] 

Hudson commented on HBASE-12378:


FAILURE: Integrated in HBase-TRUNK #5721 (See 
[https://builds.apache.org/job/HBase-TRUNK/5721/])
HBASE-12378 Add a test to verify that the read-replica is able to read after a 
compaction (matteo.bertozzi: rev 8b84840d5a0b2cbd03f05bc044574a5eeb61756d)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionReplicas.java


> Add a test to verify that the read-replica is able to read after a compaction
> -
>
> Key: HBASE-12378
> URL: https://issues.apache.org/jira/browse/HBASE-12378
> Project: HBase
>  Issue Type: Test
>  Components: regionserver, Replication
>Affects Versions: 2.0.0, 0.99.1
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 0.99.2
>
> Attachments: HBASE-12378-v0.patch, HBASE-12378-v0.patch
>
>
> Add a unit test that verifies that the secondary read-replica is still able to 
> read all the data even when the files on the primary are archived and the 
> store file refresh is not executed.
> Basically, the point is to have a test that verifies that the file-link logic 
> is not removed.
> (There are a couple of tests that probably want to do that, but since they 
> operate on small data they will never trigger the file-link reopen.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11683) Metrics for MOB

2014-10-30 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-11683:
---
  Resolution: Fixed
Release Note: 
Adds new mob related metrics:

mobCompcatedIntoMobCellsCount
mobCompcatedIntoMobCellsSize
mobCompcatedFromMobCellsCount
mobCompcatedFromMobCellsSize
mobFlushCount
mobFlushedCellsCount
mobFlushedCellsSize
mobScanCellsCount
mobScanCellsSize
mobFileCacheAccessCount
mobFileCacheMissCount
mobFileCacheHitPercent
mobFileCacheEvictedCount
mobFileCacheCount

Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the updates, Jingcheng. Committed to hbase-11339.

> Metrics for MOB
> ---
>
> Key: HBASE-11683
> URL: https://issues.apache.org/jira/browse/HBASE-11683
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Jingcheng Du
> Attachments: HBASE-11683-V2.diff, HBASE-11683-V3.diff, 
> HBASE-11683-V4.diff, HBASE-11683-V5.diff, HBASE-11683-V6.diff, 
> HBASE-11683-V7.diff, HBASE-11683-V8.diff, HBASE-11683.diff
>
>
> We need to make sure to capture metrics about mobs.
> Some basic ones include:
> # of mob writes
> # of mob reads
> # avg size of mob (?)
> # mob files
> # of mob compactions / sweeps



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11683) Metrics for MOB

2014-10-30 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-11683:
---
Fix Version/s: hbase-11339

> Metrics for MOB
> ---
>
> Key: HBASE-11683
> URL: https://issues.apache.org/jira/browse/HBASE-11683
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Jingcheng Du
> Fix For: hbase-11339
>
> Attachments: HBASE-11683-V2.diff, HBASE-11683-V3.diff, 
> HBASE-11683-V4.diff, HBASE-11683-V5.diff, HBASE-11683-V6.diff, 
> HBASE-11683-V7.diff, HBASE-11683-V8.diff, HBASE-11683.diff
>
>
> We need to make sure to capture metrics about mobs.
> Some basic ones include:
> # of mob writes
> # of mob reads
> # avg size of mob (?)
> # mob files
> # of mob compactions / sweeps



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11683) Metrics for MOB

2014-10-30 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-11683:
---
Release Note: 
Adds new mob related metrics:

mobCompactedIntoMobCellsCount
mobCompactedIntoMobCellsSize
mobCompactedFromMobCellsCount
mobCompactedFromMobCellsSize
mobFlushCount
mobFlushedCellsCount
mobFlushedCellsSize
mobScanCellsCount
mobScanCellsSize
mobFileCacheAccessCount
mobFileCacheMissCount
mobFileCacheHitPercent
mobFileCacheEvictedCount
mobFileCacheCount


  was:
Adds new mob related metrics:

mobCompcatedIntoMobCellsCount
mobCompcatedIntoMobCellsSize
mobCompcatedFromMobCellsCount
mobCompcatedFromMobCellsSize
mobFlushCount
mobFlushedCellsCount
mobFlushedCellsSize
mobScanCellsCount
mobScanCellsSize
mobFileCacheAccessCount
mobFileCacheMissCount
mobFileCacheHitPercent
mobFileCacheEvictedCount
mobFileCacheCount



> Metrics for MOB
> ---
>
> Key: HBASE-11683
> URL: https://issues.apache.org/jira/browse/HBASE-11683
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Jingcheng Du
> Fix For: hbase-11339
>
> Attachments: HBASE-11683-V2.diff, HBASE-11683-V3.diff, 
> HBASE-11683-V4.diff, HBASE-11683-V5.diff, HBASE-11683-V6.diff, 
> HBASE-11683-V7.diff, HBASE-11683-V8.diff, HBASE-11683.diff
>
>
> We need to make sure to capture metrics about mobs.
> Some basic ones include:
> # of mob writes
> # of mob reads
> # avg size of mob (?)
> # mob files
> # of mob compactions / sweeps



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HBASE-10378) Divide HLog interface into User and Implementor specific interfaces

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-10378 started by Sean Busbey.
---
> Divide HLog interface into User and Implementor specific interfaces
> ---
>
> Key: HBASE-10378
> URL: https://issues.apache.org/jira/browse/HBASE-10378
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Himanshu Vashishtha
>Assignee: Sean Busbey
> Fix For: 2.0.0, 0.99.2
>
> Attachments: 10378-1.patch, 10378-2.patch
>
>
> HBASE-5937 introduces the HLog interface as a first step to support multiple 
> WAL implementations. This interface is a good start, but has some 
> limitations/drawbacks in its current state, such as:
> 1) There is no clear distinction b/w User and Implementor APIs, and it 
> provides APIs both for WAL users (append, sync, etc) and also WAL 
> implementors (Reader/Writer interfaces, etc). There are APIs which are very 
> much implementation specific (getFileNum, etc) and a user such as a 
> RegionServer shouldn't know about it.
> 2) There are about 14 methods in FSHLog which are not present in HLog 
> interface but are used at several places in the unit test code. These tests 
> typecast HLog to FSHLog, which makes it very difficult to test multiple WAL 
> implementations without doing some ugly checks.
> I'd like to propose some changes in HLog interface that would ease the multi 
> WAL story:
> 1) Have two interfaces WAL and WALService. WAL provides APIs for 
> implementors. WALService provides APIs for users (such as RegionServer).
> 2) A skeleton implementation of the above two interface as the base class for 
> other WAL implementations (AbstractWAL). It provides required fields for all 
> subclasses (fs, conf, log dir, etc). Make a minimal set of test only methods 
> and add this set in AbstractWAL.
> 3) HLogFactory returns a WALService reference when creating a WAL instance; 
> if a user needs to access impl-specific APIs (there are unit tests which get 
> the WAL from an HRegionServer and then call impl-specific APIs), use AbstractWAL 
> type casting.
> 4) Make TestHLog abstract and let all implementors provide their respective 
> test class which extends TestHLog (TestFSHLog, for example).
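
A rough sketch of the proposed WAL/WALService split; interface and method names here are illustrative only, not the final API.
{code}
import java.io.IOException;

// Illustrative sketch of the split proposed above; not the final API.

// User-facing API: what a RegionServer needs from a write-ahead log.
interface WALService {
  long append(byte[] encodedRegionName, byte[] walEdit) throws IOException;
  void sync() throws IOException;
  void close() throws IOException;
}

// Implementor-facing API: lifecycle and file-level details users should not see.
interface WAL extends WALService {
  void rollWriter() throws IOException;
  long getFilenum();
}

// Skeleton base class providing the fields shared by all implementations.
abstract class AbstractWAL implements WAL {
  protected org.apache.hadoop.fs.FileSystem fs;
  protected org.apache.hadoop.conf.Configuration conf;
  protected org.apache.hadoop.fs.Path logDir;
}
{code}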



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-8009) Fix and reenable the hbase-example unit tests.

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-8009:
---
Assignee: (was: Sean Busbey)

> Fix and reenable the hbase-example unit tests.
> --
>
> Key: HBASE-8009
> URL: https://issues.apache.org/jira/browse/HBASE-8009
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: stack
>Priority: Critical
>
> The unit tests pass locally for me repeatedly but fail from time to time up 
> on jenkins.  HBASE-7994 disabled them.  This issue is about spending the time 
> to make sure they pass up on jenkins again.  They have been disabled because 
> unit tests have been failing way more often than they have been passing over 
> the last few months and we want to establish passing tests as the precedent 
> again.  Once that is in place, we can work on bringing back examples.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-4413) Clear stale bin/*rb scripts

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-4413:
---
Labels:   (was: be)

> Clear stale bin/*rb scripts
> ---
>
> Key: HBASE-4413
> URL: https://issues.apache.org/jira/browse/HBASE-4413
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Critical
>  Labels: beginner
>
> Clear for 0.92.  For example, add_table.rb in trunk is some hacked-up thing... 
> not the original, and the original doesn't do the right thing for 0.90... it's a 
> 0.20.x-era hbase script.
> I'm sure there are others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-4413) Clear stale bin/*rb scripts

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-4413:
---
Assignee: (was: Sean Busbey)
  Labels: beginner  (was: )

> Clear stale bin/*rb scripts
> ---
>
> Key: HBASE-4413
> URL: https://issues.apache.org/jira/browse/HBASE-4413
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Priority: Critical
>  Labels: beginner
>
> Clear for 0.92.  For example, add_table.rb in trunk is some hacked-up thing... 
> not the original, and the original doesn't do the right thing for 0.90... it's a 
> 0.20.x-era hbase script.
> I'm sure there are others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-4413) Clear stale bin/*rb scripts

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-4413:
---
Labels: be  (was: )

> Clear stale bin/*rb scripts
> ---
>
> Key: HBASE-4413
> URL: https://issues.apache.org/jira/browse/HBASE-4413
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: Sean Busbey
>Priority: Critical
>  Labels: beginner
>
> Clear for 0.92.  For example, add_table.rb in trunk is some hacked-up thing... 
> not the original, and the original doesn't do the right thing for 0.90... it's a 
> 0.20.x-era hbase script.
> I'm sure there are others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-2739) Master should fail to start if it cannot successfully split logs

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-2739:
---
Assignee: (was: Sean Busbey)

> Master should fail to start if it cannot successfully split logs
> 
>
> Key: HBASE-2739
> URL: https://issues.apache.org/jira/browse/HBASE-2739
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.20.4, 0.90.0
>Reporter: Todd Lipcon
>Priority: Critical
>
> In trunk, in splitLogAfterStartup(), we log the error splitting, but don't 
> shut down. Depending on configuration, we should probably shut down here 
> rather than continue with data loss.
> In 0.20, we print the stack trace to stdout in verifyClusterState, but 
> continue through and often fail to start up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-6935) Rename HLog interface to WAL

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-6935.

Resolution: Duplicate

> Rename HLog interface to WAL
> 
>
> Key: HBASE-6935
> URL: https://issues.apache.org/jira/browse/HBASE-6935
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Flavio Junqueira
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-6617) ReplicationSourceManager should be able to track multiple WAL paths

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-6617:
--

Assignee: Sean Busbey

> ReplicationSourceManager should be able to track multiple WAL paths
> ---
>
> Key: HBASE-6617
> URL: https://issues.apache.org/jira/browse/HBASE-6617
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Ted Yu
>Assignee: Sean Busbey
>
> Currently ReplicationSourceManager uses logRolled() to receive notification 
> about a new HLog and remembers it in latestPath.
> When the region server has multiple-WAL support, we need to keep track of 
> multiple Paths in ReplicationSourceManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12381) Add maven enforcer rule for maven version

2014-10-30 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-12381:
---

 Summary: Add maven enforcer rule for maven version
 Key: HBASE-12381
 URL: https://issues.apache.org/jira/browse/HBASE-12381
 Project: HBase
  Issue Type: Task
  Components: build
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor


Our ref guide says that you need Maven 3 to build. Add an enforcer rule so that 
people find out early that they have the wrong Maven version, rather than 
finding out from however things fall over when someone tries to build with Maven 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12272) Generate Thrift code through maven

2014-10-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190213#comment-14190213
 ] 

Sean Busbey commented on HBASE-12272:
-

Sorry for the late feedback, but something just occurred to me. We know that 
there are issues changing thrift versions and that we have made releases with 
particular versions in place. As things currently stand, the version can be 
changed at compile time because thrift.version is a property. It's probably a 
good idea to add a rule for the maven enforcer plugin that fixes the property 
to 0.9.0 (or whatever version a given release line is based on) with a note 
that setting it to something else needs to be reviewed for wire and behavior 
compatibility.

> Generate Thrift code through maven
> --
>
> Key: HBASE-12272
> URL: https://issues.apache.org/jira/browse/HBASE-12272
> Project: HBase
>  Issue Type: Improvement
>  Components: build, documentation, Thrift
>Reporter: Niels Basjes
> Fix For: 2.0.0, 0.98.8, 0.94.25, 0.99.2
>
> Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
> HBASE-12272-2014-10-16-v2.patch, HBASE-12272-2014-10-16-v3.patch, 
> HBASE-12272-2014-10-16-v4.patch
>
>
> The generated thrift code is currently under source control, but the 
> instructions on rebuilding it are buried in package javadocs.
> We should have a simple maven command to rebuild them, similar to what we 
> have for protobufs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12358) Create ByteBuffer backed Cell

2014-10-30 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-12358:
---
Attachment: HBASE-12358_1.patch

An updated patch that adds hasArray to Cell.  It also adds some util methods to 
ByteBufferUtils; part of that is copied from HBASE-12345 since it is not yet 
committed. I will add some test cases once we are sure this approach is fine.
I created a bigger patch that changes the whole read path, but it keeps growing 
and has some challenges of its own, so doing those changes as individual 
subtasks may not be directly feasible for now.  Anyway, we can finalise that 
once we are OK with this JIRA, as it forms the basis for the further work.
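
To make the shape of the proposal concrete, here is a minimal sketch of what 
such an interface could look like; the method names are illustrative assumptions 
and the real API is whatever is in the attached patch:

{code}
// Illustrative sketch only, not the attached patch: a Cell extension whose key
// components can be read as ByteBuffers, plus hasArray() to signal whether the
// backing storage is an on-heap array.
import java.nio.ByteBuffer;
import org.apache.hadoop.hbase.Cell;

public interface ByteBufferBackedCell extends Cell {
  /** True if the backing buffers are array-backed (on-heap). */
  boolean hasArray();

  // ByteBuffer views of the key components; offsets, lengths, timestamp and
  // type byte still come from the plain Cell getters.
  ByteBuffer getRowByteBuffer();
  ByteBuffer getFamilyByteBuffer();
  ByteBuffer getQualifierByteBuffer();
  ByteBuffer getValueByteBuffer();
}
{code}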

> Create ByteBuffer backed Cell
> -
>
> Key: HBASE-12358
> URL: https://issues.apache.org/jira/browse/HBASE-12358
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-12358.patch, HBASE-12358_1.patch
>
>
> As part of HBASE-12224 and HBASE-12282 we wanted a Cell that is backed by BB. 
>  Changing the core Cell impl would not be needed as it is used in server 
> only.  So we will create a BB backed Cell and use it in the Server side read 
> path. This JIRA just creates an interface that extends Cell and adds the 
> needed API.
> The getTimeStamp and getTypebyte() can still refer to the original Cell API 
> only.  The getXXxOffset() and getXXXLength() can also refer to the original 
> Cell only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-12379) Try surefire 2.18-SNAPSHOT

2014-10-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-12379.
---
   Resolution: Fixed
Fix Version/s: 0.99.2
   2.0.0

Resolving.  branch-1 has passed the last three builds.  Will reopen if we 
start to see the weird stream issue again.

Was going to put this in 0.98, but there we have an explicit reference to the 
garyh-hosted 2.12 surefire.  Looking at recent failures in 0.98, they seem to 
be legit test failures; I don't see the stream error.  Will leave well enough 
alone unless you think differently, [~apurtell]

> Try surefire 2.18-SNAPSHOT
> --
>
> Key: HBASE-12379
> URL: https://issues.apache.org/jira/browse/HBASE-12379
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 0.99.2
>
> Attachments: 12379.txt
>
>
> Hopefully has a fix for:
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.17:test 
> (secondPartTestsExecution) on project hbase-server: ExecutionException: 
> java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: 
> Stream Closed -> [Help 1]
> [~eclark] says its been working for him and crew.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11819) Unit test for CoprocessorHConnection

2014-10-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190305#comment-14190305
 ] 

stack commented on HBASE-11819:
---

See the style used in the rest of the code base, [~talat].  See where we put 
spaces or, better, install the eclipse style plugin.  See '18.3.1.1. Code 
Formatting' in the refguide.  For example, see how we do the spacing around 
'}finally{' elsewhere in the code base: there is a space before and after the 
keyword, i.e. '} finally {'.

This is good:

+try{
+  // Create a table with 3 region
+  admin.createTable(htd, new byte[][] { rowSeperator1, rowSeperator2 });
+  util.waitUntilAllRegionsAssigned(testTable);
+}finally{
+  admin.close();
+}

... except, look further up in the method and you'll see that admin is also used 
earlier for the table-exists and disable calls, etc.  What if admin throws an 
exception in one of those methods?

Almost there.
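
For concreteness, a sketch of the structure being suggested, reusing the 
variable names from the quoted snippet (the earlier table-exists/disable/delete 
calls are assumed, and this is not the actual patch): widen the try so every use 
of admin is covered, and close it in finally with the refguide spacing.

{code}
// Sketch of the suggested shape, not the committed test: all admin usage sits
// inside the try, and the close always runs in finally.
HBaseAdmin admin = util.getHBaseAdmin();
try {
  if (admin.tableExists(testTable)) {   // assumed earlier usage of admin
    admin.disableTable(testTable);
    admin.deleteTable(testTable);
  }
  // Create a table with 3 regions
  admin.createTable(htd, new byte[][] { rowSeperator1, rowSeperator2 });
  util.waitUntilAllRegionsAssigned(testTable);
} finally {
  admin.close();
}
{code}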

> Unit test for CoprocessorHConnection 
> -
>
> Key: HBASE-11819
> URL: https://issues.apache.org/jira/browse/HBASE-11819
> Project: HBase
>  Issue Type: Test
>Reporter: Andrew Purtell
>Assignee: Talat UYARER
>Priority: Minor
>  Labels: newbie++
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
> Attachments: HBASE-11819.patch, HBASE-11819v2.patch
>
>
> Add a unit test to hbase-server that exercises CoprocessorHConnection . 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12382) Restore incremental compilation

2014-10-30 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-12382:
--

 Summary: Restore incremental compilation
 Key: HBASE-12382
 URL: https://issues.apache.org/jira/browse/HBASE-12382
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell


The build changes in HBASE-11912 required an upgrade of the Maven compiler 
plugin from 2.5.1 to something >= 3.0. We're now using 3.2. We also switched 
from whatever Maven does by default with an embedded tools.jar to invoking 
javac directly. We are no longer getting incremental builds due to Maven bugs 
hit by these changes. http://jira.codehaus.org/browse/MCOMPILER-209 suggests 
that, paradoxically, setting useIncrementalCompilation to 'false' will restore 
incremental compilation behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12375:
--
   Resolution: Fixed
Fix Version/s: 0.99.2
   0.98.9
   2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to 0.98+.  Nice test.  Thanks [~ashish singhi]

> LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
> --
>
> Key: HBASE-12375
> URL: https://issues.apache.org/jira/browse/HBASE-12375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.5
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
> Attachments: HBASE-12375-0.98.patch, HBASE-12375-v2.patch, 
> HBASE-12375.patch
>
>
> We do not restrict users from creating a table with a column family name 
> starting with '_'.
> So when a user creates such a table, LoadIncrementalHFiles will skip loading 
> that family's data into the table.
> {code}
> // Skip _logs, etc
> if (familyDir.getName().startsWith("_")) continue;
> {code}
> I think we should remove that check as I do not see any _logs directory being 
> created by the bulkload tool in the output directory.
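
A hedged sketch of one possible replacement for that check (the committed fix 
may differ; the table handle and LOG below are assumed): skip a directory only 
when it is not actually a column family of the target table.

{code}
// Sketch only, not the committed change: build the set of real family names and
// skip directories that do not match, instead of skipping every "_" directory.
Set<String> familyNames = new HashSet<String>();
for (HColumnDescriptor family : table.getTableDescriptor().getColumnFamilies()) {
  familyNames.add(family.getNameAsString());
}
if (!familyNames.contains(familyDir.getName())) {
  LOG.warn("Skipping non-family directory " + familyDir + " during bulk load");
  continue;
}
{code}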



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12379) Try surefire 2.18-SNAPSHOT

2014-10-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190317#comment-14190317
 ] 

Andrew Purtell commented on HBASE-12379:


Not sure the recent 0.98 build failures are legit in all senses of that word 
:-) - they are all flappers and only happen on ASF Jenkins, not on the various 
other build and test hosts I have at my disposal - but yeah, they are not 
Surefire problems. We can try this in 0.98, but maybe after the next release? 
File a follow-up issue for that?

> Try surefire 2.18-SNAPSHOT
> --
>
> Key: HBASE-12379
> URL: https://issues.apache.org/jira/browse/HBASE-12379
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 0.99.2
>
> Attachments: 12379.txt
>
>
> Hopefully has a fix for:
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.17:test 
> (secondPartTestsExecution) on project hbase-server: ExecutionException: 
> java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: 
> Stream Closed -> [Help 1]
> [~eclark] says its been working for him and crew.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12383) Move 0.98 build to surefire 2.18(-SNAPSHOT)

2014-10-30 Thread stack (JIRA)
stack created HBASE-12383:
-

 Summary: Move 0.98 build to surefire 2.18(-SNAPSHOT)
 Key: HBASE-12383
 URL: https://issues.apache.org/jira/browse/HBASE-12383
 Project: HBase
  Issue Type: Task
  Components: build
Reporter: stack
Priority: Minor
 Fix For: 0.98.9


Move the 0.98 build off the garyh-hosted surefire and up onto the 2.18 surefire 
(it may have to be a 2.18-SNAPSHOT like the master branch -- see HBASE-12379).

It does not look like 0.98 is suffering from the master and branch-1 issues that 
the 2.18-SNAPSHOT seems to fix, but it can't hurt to upgrade.

Filing this issue at [~apurtell]'s suggestion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12381) Add maven enforcer rules for build assumptions

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12381:

Summary: Add maven enforcer rules for build assumptions  (was: Add maven 
enforcer rule for maven version)

Updating the title to broaden it a bit. We can also enforce Java 7 on branch-1+.

> Add maven enforcer rules for build assumptions
> --
>
> Key: HBASE-12381
> URL: https://issues.apache.org/jira/browse/HBASE-12381
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> Our ref guide says that you need Maven 3 to build. Add an enforcer rule so 
> that people find out early that they have the wrong Maven version, rather 
> than through whatever failure surfaces if someone tries to build with Maven 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12381) Add maven enforcer rules for build assumptions

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12381:

Attachment: HBASE-12381.1.patch.txt

Patch for master. It requires maven >= 3.0.3 (based on the oldest version we 
have building on jenkins) and java >= the source compilation target variable, 
which is 1.7 on master (based on the java compat doc).

The patch picks back to branch-1 cleanly.  It picks back to 0.98 and 0.94 with 
straightforward conflicts but works correctly (e.g. enforcing java 1.6+) after 
fixing them.

Manually tested by building on a system that meets the above requirements, and 
by building on the same system with the minimums raised to be newer than what I 
have. Also tested that enforcement happens both at the top level and when only 
a single module is built.

> Add maven enforcer rules for build assumptions
> --
>
> Key: HBASE-12381
> URL: https://issues.apache.org/jira/browse/HBASE-12381
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HBASE-12381.1.patch.txt
>
>
> Our ref guide says that you need Maven 3 to build. Add an enforcer rule so 
> that people find out early that they have the wrong Maven version, rather 
> than through whatever failure surfaces if someone tries to build with Maven 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12381) Add maven enforcer rules for build assumptions

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12381:

Status: Patch Available  (was: Open)

> Add maven enforcer rules for build assumptions
> --
>
> Key: HBASE-12381
> URL: https://issues.apache.org/jira/browse/HBASE-12381
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Attachments: HBASE-12381.1.patch.txt
>
>
> Our ref guide says that you need Maven 3 to build. Add an enforcer rule so 
> that people find out early that they have the wrong Maven version, rather 
> than through whatever failure surfaces if someone tries to build with Maven 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12380) Too many attempts to open a region can crash the RegionServer

2014-10-30 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190360#comment-14190360
 ] 

Jimmy Xiang commented on HBASE-12380:
-

I have discussed it with Esteban. We agree that it is better not to abort. We 
can log a warning/error message instead and let it go.

The reason for aborting is that this scenario should never happen naturally: 
the master has a state machine and won't send the open call again if the region 
is already opened.
My concern with not aborting is that we may hide a serious bug in the master if 
it does happen.

This is an old test. My suggestion is to remove it.
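
A hedged sketch of that direction, with placeholder names throughout (this is 
not the committed change):

{code}
// Placeholder names only -- a sketch of "warn and let it go" instead of abort.
// When an OPEN arrives for a region that is already online, log loudly and
// ignore the request rather than aborting the RegionServer.
if (isRegionOnline(encodedRegionName)) {   // placeholder check
  LOG.warn("Received OPEN for region " + encodedRegionName
      + " which is already online - ignoring the request."
      + " This should not happen and may indicate a master state-machine bug.");
  return;
  // previously: abort("Received OPEN for the region ... which is already online")
}
{code}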

> Too many attempts to open a region can crash the RegionServer
> -
>
> Key: HBASE-12380
> URL: https://issues.apache.org/jira/browse/HBASE-12380
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Esteban Gutierrez
>Priority: Critical
>
> Noticed this while trying to fix faulty test while working on a fix for 
> HBASE-12219:
> {code}
> Tests in error:
>   TestRegionServerNoMaster.testMultipleOpen:237 » Service 
> java.io.IOException: R...
>   TestRegionServerNoMaster.testCloseByRegionServer:211->closeRegionNoZK:201 » 
> Service
> {code}
> Initially I thought the problem was on my patch for HBASE-12219 but I noticed 
> that the issue was occurring on the 7th attempt to open the region. However I 
> was able to reproduce the same problem in the master branch after increasing 
> the number of requests in testMultipleOpen():
> {code}
> 2014-10-29 15:03:45,043 INFO  [Thread-216] regionserver.RSRpcServices(1334): 
> Receiving OPEN for the 
> region:TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.,
>  which we are already trying to OPEN - ignoring this new request for this 
> region.
> Submitting openRegion attempt: 16 <
> 2014-10-29 15:03:45,044 INFO  [Thread-216] regionserver.RSRpcServices(1311): 
> Open TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.
> 2014-10-29 15:03:45,044 INFO  
> [PostOpenDeployTasks:025198143197ea68803e49819eae27ca] 
> hbase.MetaTableAccessor(1307): Updated row 
> TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca. 
> with server=192.168.1.105,63082,1414620220789
> Submitting openRegion attempt: 17 <
> 2014-10-29 15:03:45,046 ERROR [RS_OPEN_REGION-192.168.1.105:63082-2] 
> handler.OpenRegionHandler(88): Region 025198143197ea68803e49819eae27ca was 
> already online when we started processing the opening. Marking this new 
> attempt as failed
> 2014-10-29 15:03:45,047 FATAL [Thread-216] regionserver.HRegionServer(1931): 
> ABORTING region server 192.168.1.105,63082,1414620220789: Received OPEN for 
> the 
> region:TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.,
>  which is already online
> 2014-10-29 15:03:45,047 FATAL [Thread-216] regionserver.HRegionServer(1937): 
> RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
> 2014-10-29 15:03:45,054 WARN  [Thread-216] regionserver.HRegionServer(1955): 
> Unable to report fatal error to master
> com.google.protobuf.ServiceException: java.io.IOException: Call to 
> /192.168.1.105:63079 failed on local exception: java.io.IOException: 
> Connection to /192.168.1.105:63079 is closing. Call id=4, waitTime=2
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1707)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1757)
> at 
> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.reportRSFatalError(RegionServerStatusProtos.java:8301)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1952)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.abortRegionServer(MiniHBaseCluster.java:174)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$100(MiniHBaseCluster.java:108)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$2.run(MiniHBaseCluster.java:167)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:356)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:277)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.abort(MiniHBaseCluster.java:165)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1964)
> at 
> org.apache

[jira] [Created] (HBASE-12384) TestTags can hang on fast test hosts

2014-10-30 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-12384:
--

 Summary: TestTags can hang on fast test hosts
 Key: HBASE-12384
 URL: https://issues.apache.org/jira/browse/HBASE-12384
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell


Waiting indefinitely expecting flushed files to reach a certain count after 
triggering a flush but compaction has happened between the flush and check for 
number of store files. 
{code}
admin.flush(tableName);
regions = TEST_UTIL.getHBaseCluster().getRegions(tableName);
for (HRegion region : regions) {
  Store store = region.getStore(fam);
- Compaction has happened before here --->
  while (!(store.getStorefilesCount() > 2)) {
- Hung forever in here ---> 
Thread.sleep(10);
  }
}
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12384) TestTags can hang on fast test hosts

2014-10-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12384:
---
Description: 
Waiting indefinitely expecting flushed files to reach a certain count after 
triggering a flush but compaction has happened between the flush and check for 
number of store files. 
{code}
admin.flush(tableName);
regions = TEST_UTIL.getHBaseCluster().getRegions(tableName);
for (HRegion region : regions) {
  Store store = region.getStore(fam);
- Flush and compaction has happened before here --->
  while (!(store.getStorefilesCount() > 2)) {
- Hung forever in here ---> 
Thread.sleep(10);
  }
}
{code}

  was:
Waiting indefinitely expecting flushed files to reach a certain count after 
triggering a flush but compaction has happened between the flush and check for 
number of store files. 
{code}
admin.flush(tableName);
regions = TEST_UTIL.getHBaseCluster().getRegions(tableName);
for (HRegion region : regions) {
  Store store = region.getStore(fam);
- Compaction has happened before here --->
  while (!(store.getStorefilesCount() > 2)) {
- Hung forever in here ---> 
Thread.sleep(10);
  }
}
{code}


> TestTags can hang on fast test hosts
> 
>
> Key: HBASE-12384
> URL: https://issues.apache.org/jira/browse/HBASE-12384
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>
> Waiting indefinitely expecting flushed files to reach a certain count after 
> triggering a flush but compaction has happened between the flush and check 
> for number of store files. 
> {code}
> admin.flush(tableName);
> regions = TEST_UTIL.getHBaseCluster().getRegions(tableName);
> for (HRegion region : regions) {
>   Store store = region.getStore(fam);
> - Flush and compaction has happened before here --->
>   while (!(store.getStorefilesCount() > 2)) {
> - Hung forever in here ---> 
> Thread.sleep(10);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12384) TestTags can hang on fast test hosts

2014-10-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12384:
---
Attachment: HBASE-12384-0.98.patch

> TestTags can hang on fast test hosts
> 
>
> Key: HBASE-12384
> URL: https://issues.apache.org/jira/browse/HBASE-12384
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Attachments: HBASE-12384-0.98.patch
>
>
> Waiting indefinitely expecting flushed files to reach a certain count after 
> triggering a flush but compaction has happened between the flush and check 
> for number of store files. 
> {code}
> admin.flush(tableName);
> regions = TEST_UTIL.getHBaseCluster().getRegions(tableName);
> for (HRegion region : regions) {
>   Store store = region.getStore(fam);
> - Flush and compaction has happened before here --->
>   while (!(store.getStorefilesCount() > 2)) {
> - Hung forever in here ---> 
> Thread.sleep(10);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12384) TestTags can hang on fast test hosts

2014-10-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12384:
---
Attachment: HBASE-12384-master.patch

This patch makes some assumptions about compaction policy and in one case 
relies on winning a race between flush and subsequent count of store files and 
a compaction. Elsewhere for checking compaction progress the test uses an API 
available for that purpose. We are lacking an API for confirming flush request 
completion. Rather than count files, sleep for a short time. We lose the 
confirmation of flush completion, but the test won't hang now, nor will it hang 
if we change the default compaction policy in the future.
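
A minimal sketch of the replacement pattern described above (the exact wait 
time is an assumption; the real change is in the attached patches):

{code}
// Sketch only: trigger the flush, then pause briefly instead of spinning on a
// store-file count that a background compaction can change underneath us.
admin.flush(tableName);
// No API confirms flush completion, so sleep a short fixed time instead of
// waiting for getStorefilesCount() to cross a threshold.
Thread.sleep(1000);
{code}
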

> TestTags can hang on fast test hosts
> 
>
> Key: HBASE-12384
> URL: https://issues.apache.org/jira/browse/HBASE-12384
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
> Attachments: HBASE-12384-0.98.patch, HBASE-12384-master.patch
>
>
> Waiting indefinitely expecting flushed files to reach a certain count after 
> triggering a flush but compaction has happened between the flush and check 
> for number of store files. 
> {code}
> admin.flush(tableName);
> regions = TEST_UTIL.getHBaseCluster().getRegions(tableName);
> for (HRegion region : regions) {
>   Store store = region.getStore(fam);
> - Flush and compaction has happened before here --->
>   while (!(store.getStorefilesCount() > 2)) {
> - Hung forever in here ---> 
> Thread.sleep(10);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12384) TestTags can hang on fast test hosts

2014-10-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12384:
---
Fix Version/s: 0.99.2
   0.98.8
   2.0.0
   Status: Patch Available  (was: Open)

> TestTags can hang on fast test hosts
> 
>
> Key: HBASE-12384
> URL: https://issues.apache.org/jira/browse/HBASE-12384
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
> Attachments: HBASE-12384-0.98.patch, HBASE-12384-master.patch
>
>
> Waiting indefinitely expecting flushed files to reach a certain count after 
> triggering a flush but compaction has happened between the flush and check 
> for number of store files. 
> {code}
> admin.flush(tableName);
> regions = TEST_UTIL.getHBaseCluster().getRegions(tableName);
> for (HRegion region : regions) {
>   Store store = region.getStore(fam);
> - Flush and compaction has happened before here --->
>   while (!(store.getStorefilesCount() > 2)) {
> - Hung forever in here ---> 
> Thread.sleep(10);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-12384) TestTags can hang on fast test hosts

2014-10-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190378#comment-14190378
 ] 

Andrew Purtell edited comment on HBASE-12384 at 10/30/14 5:03 PM:
--

This test makes some assumptions about compaction policy and in one case relies 
on winning a race between flush and subsequent count of store files and a 
compaction. Elsewhere for checking compaction progress the test uses an API 
available for that purpose. We are lacking an API for confirming flush request 
completion. Rather than count files, sleep for a short time. We lose the 
confirmation of flush completion, but the test won't hang now, nor will it hang 
if we change the default compaction policy in the future.


was (Author: apurtell):
This patch makes some assumptions about compaction policy and in one case 
relies on winning a race between flush and subsequent count of store files and 
a compaction. Elsewhere for checking compaction progress the test uses an API 
available for that purpose. We are lacking an API for confirming flush request 
completion. Rather than count files, sleep for a short time. We lose the 
confirmation of flush completion, but the test won't hang now, nor will it hang 
if we change the default compaction policy in the future.

> TestTags can hang on fast test hosts
> 
>
> Key: HBASE-12384
> URL: https://issues.apache.org/jira/browse/HBASE-12384
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
> Attachments: HBASE-12384-0.98.patch, HBASE-12384-master.patch
>
>
> Waiting indefinitely expecting flushed files to reach a certain count after 
> triggering a flush but compaction has happened between the flush and check 
> for number of store files. 
> {code}
> admin.flush(tableName);
> regions = TEST_UTIL.getHBaseCluster().getRegions(tableName);
> for (HRegion region : regions) {
>   Store store = region.getStore(fam);
> - Flush and compaction has happened before here --->
>   while (!(store.getStorefilesCount() > 2)) {
> - Hung forever in here ---> 
> Thread.sleep(10);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12012) Improve cancellation for the scan RPCs

2014-10-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190385#comment-14190385
 ] 

stack commented on HBASE-12012:
---

I'm not really up on what is going on in here.  Patch looks good though.  What 
is the benefit?  HBASE-11564 published some numbers.  Any for this change?

Over in HBASE-11564 I asked

bq. Should we implement 
http://nick-lab.gs.washington.edu/java/jdk1.5b/api/java/util/concurrent/Cancellable.html
 ?

.. I don't think it got a response.  Seeing the new CancellableCallable 
interface with startCancel makes me ask it again.

Anything more on the concern raised at the tail of HBASE-11564 by [~enis]?
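
For readers not following the patch, a minimal sketch of the kind of interface 
being referred to (the one in the patch may well differ):

{code}
// Sketch of the general shape only; the interface in the attached patch may
// look different.
import java.util.concurrent.Callable;

public interface CancellableCallable<T> extends Callable<T> {
  /** Ask an in-flight call to stop as soon as practical. */
  void startCancel();
}
{code}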

> Improve cancellation for the scan RPCs
> --
>
> Key: HBASE-12012
> URL: https://issues.apache.org/jira/browse/HBASE-12012
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 2.0.0, 0.99.2
>
> Attachments: 12012-1.txt
>
>
> Similar to HBASE-11564 but for scans.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11835) Wrong management of unexpected calls in the client

2014-10-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11835:
--
Attachment: 11835.rebase.patch

You going to commit this [~nkeywal]? Here is a rerun of hadoopqa.

> Wrong management of unexpected calls in the client
> --
>
> Key: HBASE-11835
> URL: https://issues.apache.org/jira/browse/HBASE-11835
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Performance
>Affects Versions: 1.0.0, 2.0.0, 0.98.6
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 2.0.0, 0.99.2
>
> Attachments: 11835.rebase.patch, 11835.rebase.patch, 
> 11835.rebase.patch, rpcClient.patch
>
>
> If a call is purged or canceled we try to skip the reply from the server, but 
> we read the wrong number of bytes, so we corrupt the TCP channel. It's hidden 
> because it triggers retries and so on, but it's obviously bad for performance.
> It happens with cell blocks.
> [~ram_krish_86], [~saint@gmail.com], you know this part better than me; 
> do you agree with the analysis and the patch?
> The changes in rpcServer are not fully related: as the client closes the 
> connection in such situations, I observed both ClosedChannelException and 
> CancelledKeyException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11835) Wrong management of unexpected calls in the client

2014-10-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11835:
--
Component/s: Performance

> Wrong management of unexpected calls in the client
> --
>
> Key: HBASE-11835
> URL: https://issues.apache.org/jira/browse/HBASE-11835
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Performance
>Affects Versions: 1.0.0, 2.0.0, 0.98.6
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 2.0.0, 0.99.2
>
> Attachments: 11835.rebase.patch, 11835.rebase.patch, 
> 11835.rebase.patch, rpcClient.patch
>
>
> If a call is purged or canceled we try to skip the reply from the server, but 
> we read the wrong number of bytes, so we corrupt the TCP channel. It's hidden 
> because it triggers retries and so on, but it's obviously bad for performance.
> It happens with cell blocks.
> [~ram_krish_86], [~saint@gmail.com], you know this part better than me; 
> do you agree with the analysis and the patch?
> The changes in rpcServer are not fully related: as the client closes the 
> connection in such situations, I observed both ClosedChannelException and 
> CancelledKeyException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11764) Support per cell TTLs

2014-10-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190400#comment-14190400
 ] 

stack commented on HBASE-11764:
---

Still looking for +1s on this [~apurtell]?

> Support per cell TTLs
> -
>
> Key: HBASE-11764
> URL: https://issues.apache.org/jira/browse/HBASE-11764
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
> Attachments: HBASE-11764-0.98.patch, HBASE-11764-0.98.patch, 
> HBASE-11764-0.98.patch, HBASE-11764-0.98.patch, HBASE-11764-0.98.patch, 
> HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, 
> HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, 
> HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12385) Close out defunct versions in jira

2014-10-30 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-12385:
---

 Summary: Close out defunct versions in jira
 Key: HBASE-12385
 URL: https://issues.apache.org/jira/browse/HBASE-12385
 Project: HBase
  Issue Type: Task
Reporter: Sean Busbey
Priority: Minor


We have a bunch of versions that won't be released and shouldn't be used any more 
(0.90.x, 0.92.x, 0.96.x). We should either archive or delete them (I'd lean 
toward archive).

This work can be done by anyone with admin rights on jira from the [version 
maintenance 
page|https://issues.apache.org/jira/plugins/servlet/project-config/HBASE/versions].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-30 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190409#comment-14190409
 ] 

Ashish Singhi commented on HBASE-12375:
---

Thanks Matteo, Ted & Stack

> LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
> --
>
> Key: HBASE-12375
> URL: https://issues.apache.org/jira/browse/HBASE-12375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.5
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
> Attachments: HBASE-12375-0.98.patch, HBASE-12375-v2.patch, 
> HBASE-12375.patch
>
>
> We do not restrict users from creating a table with a column family name 
> starting with '_'.
> So when a user creates such a table, LoadIncrementalHFiles will skip loading 
> that family's data into the table.
> {code}
> // Skip _logs, etc
> if (familyDir.getName().startsWith("_")) continue;
> {code}
> I think we should remove that check as I do not see any _logs directory being 
> created by the bulkload tool in the output directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9003) TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar

2014-10-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190415#comment-14190415
 ] 

stack commented on HBASE-9003:
--

[~ndimiduk] How does the IT test relate?  OK I commit this?

> TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
> -
>
> Key: HBASE-9003
> URL: https://issues.apache.org/jira/browse/HBASE-9003
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.0.0, 0.99.2
>
> Attachments: HBASE-9003.v0.patch, HBASE-9003.v1.patch, 
> HBASE-9003.v2.patch
>
>
> This is the problem: {{TableMapReduceUtil#addDependencyJars}} relies on 
> {{org.apache.hadoop.util.JarFinder}} if available to call {{getJar()}}. 
> However {{getJar()}} uses File.createTempFile() to create a temporary file 
> under {{hadoop.tmp.dir}}{{/target/test-dir}}. Due to HADOOP-9737, the created jar 
> and its content are not purged after the JVM is destroyed. Since most 
> configurations point {{hadoop.tmp.dir}} under {{/tmp}} the generated jar 
> files get purged by {{tmpwatch}} or a similar tool, but boxes that have 
> {{hadoop.tmp.dir}} pointing to a different location not monitored by 
> {{tmpwatch}} will pile up a collection of jars, causing all kinds of issues. 
> Since {{JarFinder#getJar}} is not a public API from Hadoop (see [~tucu00] 
> comment on HADOOP-9737) we shouldn't use that as part of 
> {{TableMapReduceUtil}} in order to avoid this kind of issue.
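
One hedged illustration of the alternative direction implied above (not 
necessarily the approach taken in the attached patches): resolve the jar that 
already contains a class from its classloader resource URL, so nothing has to 
be written under hadoop.tmp.dir.

{code}
// Illustration only: find the jar a class was loaded from; returns null when
// the class is not packaged in a jar (e.g. a classes/ directory on the path).
static String findContainingJar(Class<?> clazz) {
  String resource = clazz.getName().replace('.', '/') + ".class";
  java.net.URL url = clazz.getClassLoader().getResource(resource);
  if (url != null && "jar".equals(url.getProtocol())) {
    // The path looks like file:/path/to/foo.jar!/com/example/Foo.class
    String path = url.getPath();
    return path.substring("file:".length(), path.indexOf('!'));
  }
  return null;
}
{code}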



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-9003) TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar

2014-10-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9003:
-
Attachment: HBASE-9003.v2.patch

Try to see if needs rebasing.

> TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
> -
>
> Key: HBASE-9003
> URL: https://issues.apache.org/jira/browse/HBASE-9003
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.0.0, 0.99.2
>
> Attachments: HBASE-9003.v0.patch, HBASE-9003.v1.patch, 
> HBASE-9003.v2.patch, HBASE-9003.v2.patch
>
>
> This is the problem: {{TableMapReduceUtil#addDependencyJars}} relies on 
> {{org.apache.hadoop.util.JarFinder}} if available to call {{getJar()}}. 
> However {{getJar()}} uses File.createTempFile() to create a temporary file 
> under {{hadoop.tmp.dir}}{{/target/test-dir}}. Due to HADOOP-9737, the created jar 
> and its content are not purged after the JVM is destroyed. Since most 
> configurations point {{hadoop.tmp.dir}} under {{/tmp}} the generated jar 
> files get purged by {{tmpwatch}} or a similar tool, but boxes that have 
> {{hadoop.tmp.dir}} pointing to a different location not monitored by 
> {{tmpwatch}} will pile up a collection of jars, causing all kinds of issues. 
> Since {{JarFinder#getJar}} is not a public API from Hadoop (see [~tucu00] 
> comment on HADOOP-9737) we shouldn't use that as part of 
> {{TableMapReduceUtil}} in order to avoid this kind of issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12385) Close out defunct versions in jira

2014-10-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190422#comment-14190422
 ] 

stack commented on HBASE-12385:
---

Good idea. I made you an administrator, [~busbey].  Go for it if you are up for 
it; else I can do it.

> Close out defunct versions in jira
> --
>
> Key: HBASE-12385
> URL: https://issues.apache.org/jira/browse/HBASE-12385
> Project: HBase
>  Issue Type: Task
>Reporter: Sean Busbey
>Priority: Minor
>
> We have a bunch of versions that won't be released and shouldn't be used any more 
> (0.90.x, 0.92.x, 0.96.x). We should either archive or delete them (I'd lean 
> toward archive).
> This work can be done by anyone with admin rights on jira from the [version 
> maintenance 
> page|https://issues.apache.org/jira/plugins/servlet/project-config/HBASE/versions].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11764) Support per cell TTLs

2014-10-30 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190424#comment-14190424
 ] 

Andrew Purtell commented on HBASE-11764:


Yessir, [~stack]

> Support per cell TTLs
> -
>
> Key: HBASE-11764
> URL: https://issues.apache.org/jira/browse/HBASE-11764
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
> Attachments: HBASE-11764-0.98.patch, HBASE-11764-0.98.patch, 
> HBASE-11764-0.98.patch, HBASE-11764-0.98.patch, HBASE-11764-0.98.patch, 
> HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, 
> HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, 
> HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12384) TestTags can hang on fast test hosts

2014-10-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190425#comment-14190425
 ] 

stack commented on HBASE-12384:
---

+1

On commit copy the comment above on to the first use of sleep to explain why 
the pattern.

> TestTags can hang on fast test hosts
> 
>
> Key: HBASE-12384
> URL: https://issues.apache.org/jira/browse/HBASE-12384
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
> Attachments: HBASE-12384-0.98.patch, HBASE-12384-master.patch
>
>
> Waiting indefinitely expecting flushed files to reach a certain count after 
> triggering a flush but compaction has happened between the flush and check 
> for number of store files. 
> {code}
> admin.flush(tableName);
> regions = TEST_UTIL.getHBaseCluster().getRegions(tableName);
> for (HRegion region : regions) {
>   Store store = region.getStore(fam);
> - Flush and compaction has happened before here --->
>   while (!(store.getStorefilesCount() > 2)) {
> - Hung forever in here ---> 
> Thread.sleep(10);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-12385) Close out defunct versions in jira

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-12385:
---

Assignee: Sean Busbey

> Close out defunct versions in jira
> --
>
> Key: HBASE-12385
> URL: https://issues.apache.org/jira/browse/HBASE-12385
> Project: HBase
>  Issue Type: Task
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> We have a bunch of versions that won't be released and shouldn't be used any more 
> (0.90.x, 0.92.x, 0.96.x). We should either archive or delete them (I'd lean 
> toward archive).
> This work can be done by anyone with admin rights on jira from the [version 
> maintenance 
> page|https://issues.apache.org/jira/plugins/servlet/project-config/HBASE/versions].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HBASE-12385) Close out defunct versions in jira

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-12385 started by Sean Busbey.
---
> Close out defunct versions in jira
> --
>
> Key: HBASE-12385
> URL: https://issues.apache.org/jira/browse/HBASE-12385
> Project: HBase
>  Issue Type: Task
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> We have a bunch of versions that won't be released and shouldn't be used any more 
> (0.90.x, 0.92.x, 0.96.x). We should either archive or delete them (I'd lean 
> toward archive).
> This work can be done by anyone with admin rights on jira from the [version 
> maintenance 
> page|https://issues.apache.org/jira/plugins/servlet/project-config/HBASE/versions].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9003) TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar

2014-10-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190437#comment-14190437
 ] 

Hadoop QA commented on HBASE-9003:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12678245/HBASE-9003.v2.patch
  against trunk revision .
  ATTACHMENT ID: 12678245

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 10 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11522//console

This message is automatically generated.

> TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
> -
>
> Key: HBASE-9003
> URL: https://issues.apache.org/jira/browse/HBASE-9003
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.0.0, 0.99.2
>
> Attachments: HBASE-9003.v0.patch, HBASE-9003.v1.patch, 
> HBASE-9003.v2.patch, HBASE-9003.v2.patch
>
>
> This is the problem: {{TableMapReduceUtil#addDependencyJars}} relies on 
> {{org.apache.hadoop.util.JarFinder}} if available to call {{getJar()}}. 
> However {{getJar()}} uses File.createTempFile() to create a temporary file 
> under {{hadoop.tmp.dir}}{{/target/test-dir}}. Due to HADOOP-9737, the created jar 
> and its content are not purged after the JVM is destroyed. Since most 
> configurations point {{hadoop.tmp.dir}} under {{/tmp}} the generated jar 
> files get purged by {{tmpwatch}} or a similar tool, but boxes that have 
> {{hadoop.tmp.dir}} pointing to a different location not monitored by 
> {{tmpwatch}} will pile up a collection of jars, causing all kinds of issues. 
> Since {{JarFinder#getJar}} is not a public API from Hadoop (see [~tucu00] 
> comment on HADOOP-9737) we shouldn't use that as part of 
> {{TableMapReduceUtil}} in order to avoid this kind of issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12380) TestRegionServerNoMaster#testMultipleOpen is flaky after HBASE-11760

2014-10-30 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-12380:
--
Component/s: test
   Priority: Major  (was: Critical)
Summary: TestRegionServerNoMaster#testMultipleOpen is flaky after 
HBASE-11760  (was: Too many attempts to open a region can crash the 
RegionServer)

> TestRegionServerNoMaster#testMultipleOpen is flaky after HBASE-11760
> 
>
> Key: HBASE-12380
> URL: https://issues.apache.org/jira/browse/HBASE-12380
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0
>Reporter: Esteban Gutierrez
>
> Noticed this while trying to fix faulty test while working on a fix for 
> HBASE-12219:
> {code}
> Tests in error:
>   TestRegionServerNoMaster.testMultipleOpen:237 » Service 
> java.io.IOException: R...
>   TestRegionServerNoMaster.testCloseByRegionServer:211->closeRegionNoZK:201 » 
> Service
> {code}
> Initially I thought the problem was on my patch for HBASE-12219 but I noticed 
> that the issue was occurring on the 7th attempt to open the region. However I 
> was able to reproduce the same problem in the master branch after increasing 
> the number of requests in testMultipleOpen():
> {code}
> 2014-10-29 15:03:45,043 INFO  [Thread-216] regionserver.RSRpcServices(1334): 
> Receiving OPEN for the 
> region:TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.,
>  which we are already trying to OPEN - ignoring this new request for this 
> region.
> Submitting openRegion attempt: 16 <
> 2014-10-29 15:03:45,044 INFO  [Thread-216] regionserver.RSRpcServices(1311): 
> Open TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.
> 2014-10-29 15:03:45,044 INFO  
> [PostOpenDeployTasks:025198143197ea68803e49819eae27ca] 
> hbase.MetaTableAccessor(1307): Updated row 
> TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca. 
> with server=192.168.1.105,63082,1414620220789
> Submitting openRegion attempt: 17 <
> 2014-10-29 15:03:45,046 ERROR [RS_OPEN_REGION-192.168.1.105:63082-2] 
> handler.OpenRegionHandler(88): Region 025198143197ea68803e49819eae27ca was 
> already online when we started processing the opening. Marking this new 
> attempt as failed
> 2014-10-29 15:03:45,047 FATAL [Thread-216] regionserver.HRegionServer(1931): 
> ABORTING region server 192.168.1.105,63082,1414620220789: Received OPEN for 
> the 
> region:TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.,
>  which is already online
> 2014-10-29 15:03:45,047 FATAL [Thread-216] regionserver.HRegionServer(1937): 
> RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
> 2014-10-29 15:03:45,054 WARN  [Thread-216] regionserver.HRegionServer(1955): 
> Unable to report fatal error to master
> com.google.protobuf.ServiceException: java.io.IOException: Call to 
> /192.168.1.105:63079 failed on local exception: java.io.IOException: 
> Connection to /192.168.1.105:63079 is closing. Call id=4, waitTime=2
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1707)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1757)
> at 
> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.reportRSFatalError(RegionServerStatusProtos.java:8301)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1952)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.abortRegionServer(MiniHBaseCluster.java:174)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$100(MiniHBaseCluster.java:108)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$2.run(MiniHBaseCluster.java:167)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:356)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:277)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.abort(MiniHBaseCluster.java:165)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1964)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1308)
> at 
> org.apache.hadoop.hbase.regionserver.TestRegionServerNoMaster.testMultipleOpen(TestRegionServerNoMaster.java:237)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 

[jira] [Resolved] (HBASE-12385) Close out defunct versions in jira

2014-10-30 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-12385.
-
Resolution: Fixed

Archived any unreleased versions from the 0.90, 0.92, and 0.96 lines.

Left the feature branch versions in place, because I didn't see an obvious 
mechanism to determine whether they were completed. We should probably update 
our creation process to include a point of contact when creating one.

> Close out defunct versions in jira
> --
>
> Key: HBASE-12385
> URL: https://issues.apache.org/jira/browse/HBASE-12385
> Project: HBase
>  Issue Type: Task
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> We have a bunch of versions that won't be released and shouldn't be used any more 
> (0.90.x, 0.92.x, 0.96.x). We should either archive or delete them (I'd lean 
> toward archive).
> This work can be done by anyone with admin rights on jira from the [version 
> maintenance 
> page|https://issues.apache.org/jira/plugins/servlet/project-config/HBASE/versions].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12380) TestRegionServerNoMaster#testMultipleOpen is flaky after HBASE-11760

2014-10-30 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-12380:
--
Attachment: HBASE-12380.v0.patch

> TestRegionServerNoMaster#testMultipleOpen is flaky after HBASE-11760
> 
>
> Key: HBASE-12380
> URL: https://issues.apache.org/jira/browse/HBASE-12380
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0
>Reporter: Esteban Gutierrez
> Attachments: HBASE-12380.v0.patch
>
>
> Noticed this while trying to fix faulty test while working on a fix for 
> HBASE-12219:
> {code}
> Tests in error:
>   TestRegionServerNoMaster.testMultipleOpen:237 » Service 
> java.io.IOException: R...
>   TestRegionServerNoMaster.testCloseByRegionServer:211->closeRegionNoZK:201 » 
> Service
> {code}
> Initially I thought the problem was on my patch for HBASE-12219 but I noticed 
> that the issue was occurring on the 7th attempt to open the region. However I 
> was able to reproduce the same problem in the master branch after increasing 
> the number of requests in testMultipleOpen():
> {code}
> 2014-10-29 15:03:45,043 INFO  [Thread-216] regionserver.RSRpcServices(1334): 
> Receiving OPEN for the 
> region:TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.,
>  which we are already trying to OPEN - ignoring this new request for this 
> region.
> Submitting openRegion attempt: 16 <
> 2014-10-29 15:03:45,044 INFO  [Thread-216] regionserver.RSRpcServices(1311): 
> Open TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.
> 2014-10-29 15:03:45,044 INFO  
> [PostOpenDeployTasks:025198143197ea68803e49819eae27ca] 
> hbase.MetaTableAccessor(1307): Updated row 
> TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca. 
> with server=192.168.1.105,63082,1414620220789
> Submitting openRegion attempt: 17 <
> 2014-10-29 15:03:45,046 ERROR [RS_OPEN_REGION-192.168.1.105:63082-2] 
> handler.OpenRegionHandler(88): Region 025198143197ea68803e49819eae27ca was 
> already online when we started processing the opening. Marking this new 
> attempt as failed
> 2014-10-29 15:03:45,047 FATAL [Thread-216] regionserver.HRegionServer(1931): 
> ABORTING region server 192.168.1.105,63082,1414620220789: Received OPEN for 
> the 
> region:TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.,
>  which is already online
> 2014-10-29 15:03:45,047 FATAL [Thread-216] regionserver.HRegionServer(1937): 
> RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
> 2014-10-29 15:03:45,054 WARN  [Thread-216] regionserver.HRegionServer(1955): 
> Unable to report fatal error to master
> com.google.protobuf.ServiceException: java.io.IOException: Call to 
> /192.168.1.105:63079 failed on local exception: java.io.IOException: 
> Connection to /192.168.1.105:63079 is closing. Call id=4, waitTime=2
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1707)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1757)
> at 
> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.reportRSFatalError(RegionServerStatusProtos.java:8301)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1952)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.abortRegionServer(MiniHBaseCluster.java:174)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$100(MiniHBaseCluster.java:108)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$2.run(MiniHBaseCluster.java:167)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:356)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:277)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.abort(MiniHBaseCluster.java:165)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1964)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1308)
> at 
> org.apache.hadoop.hbase.regionserver.TestRegionServerNoMaster.testMultipleOpen(TestRegionServerNoMaster.java:237)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodA

[jira] [Commented] (HBASE-9003) TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar

2014-10-30 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190464#comment-14190464
 ] 

Nick Dimiduk commented on HBASE-9003:
-

In HBASE-12008 I removed the test from IntegrationTestImportTsv. The HCat 
feature our users wanted supported has since been deprecated from HCat. The 
feature never really worked on that side anyway. The test was removed because 
it was failing with security-enabled installs due to flummoxed credential 
passing. Since the one user of the feature was gone, I pulled it out. I should 
have included this whole code path as well.

[~esteban] are you seeing folks using this feature outside of the HCat 
scenario? Or are the files building up in a test env?

> TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
> -
>
> Key: HBASE-9003
> URL: https://issues.apache.org/jira/browse/HBASE-9003
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.0.0, 0.99.2
>
> Attachments: HBASE-9003.v0.patch, HBASE-9003.v1.patch, 
> HBASE-9003.v2.patch, HBASE-9003.v2.patch
>
>
> This is the problem: {{TableMapReduceUtil#addDependencyJars}} relies on 
> {{org.apache.hadoop.util.JarFinder}} if available to call {{getJar()}}. 
> However {{getJar()}} uses File.createTempFile() to create a temporary file 
> under {{hadoop.tmp.dir}}{{/target/test-dir}}. Due to HADOOP-9737, the created jar 
> and its content are not purged after the JVM is destroyed. Since most 
> configurations point {{hadoop.tmp.dir}} under {{/tmp}} the generated jar 
> files get purged by {{tmpwatch}} or a similar tool, but boxes that have 
> {{hadoop.tmp.dir}} pointing to a different location not monitored by 
> {{tmpwatch}} will pile up a collection of jars causing all kind of issues. 
> Since {{JarFinder#getJar}} is not a public API from Hadoop (see [~tucu00] 
> comment on HADOOP-9737) we shouldn't use that as part of 
> {{TableMapReduceUtil}} in order to avoid this kind of issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12381) Add maven enforcer rules for build assumptions

2014-10-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12381:
--
   Resolution: Fixed
Fix Version/s: 0.99.2
   0.98.9
   0.94.26
   2.0.0
 Release Note: Enforces Maven >= 3.0.3 (based on the oldest version we have 
building on Jenkins) and Java >= the source compilation target, which is 1.7 on 
master and branch-1 (per the Java compatibility doc) and 1.6 on earlier branches.
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Nice patch, [~busbey]. Thanks. Committed to 0.94+.

> Add maven enforcer rules for build assumptions
> --
>
> Key: HBASE-12381
> URL: https://issues.apache.org/jira/browse/HBASE-12381
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 0.94.26, 0.98.9, 0.99.2
>
> Attachments: HBASE-12381.1.patch.txt
>
>
> Our ref guide says that you need Maven 3 to build. Add an enforcer rule so
> that people find out early that they have the wrong Maven version, rather than
> through whatever way things fall over when someone tries to build with Maven 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190474#comment-14190474
 ] 

Hudson commented on HBASE-12375:


FAILURE: Integrated in HBase-TRUNK #5722 (See 
[https://builds.apache.org/job/HBase-TRUNK/5722/])
HBASE-12375 LoadIncrementalHFiles fails to load data in table when CF name 
starts with '_' (stack: rev 87939889bb19817493027fb84ca2c2b76a4e384e)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java


> LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
> --
>
> Key: HBASE-12375
> URL: https://issues.apache.org/jira/browse/HBASE-12375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.5
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
> Attachments: HBASE-12375-0.98.patch, HBASE-12375-v2.patch, 
> HBASE-12375.patch
>
>
> We do not restrict users from creating a table with a column family name
> starting with '_'.
> When a user creates such a table, LoadIncrementalHFiles will skip loading that
> family's data into the table.
> {code}
> // Skip _logs, etc
> if (familyDir.getName().startsWith("_")) continue;
> {code}
> I think we should remove that check, as I do not see any _logs directory being
> created by the bulkload tool in the output directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12380) TestRegionServerNoMaster#testMultipleOpen is flaky after HBASE-11760

2014-10-30 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-12380:
--
Status: Patch Available  (was: Open)

Thanks [~jxiang]!

> TestRegionServerNoMaster#testMultipleOpen is flaky after HBASE-11760
> 
>
> Key: HBASE-12380
> URL: https://issues.apache.org/jira/browse/HBASE-12380
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0
>Reporter: Esteban Gutierrez
> Attachments: HBASE-12380.v0.patch
>
>
> Noticed this while trying to fix a faulty test while working on a fix for
> HBASE-12219:
> {code}
> Tests in error:
>   TestRegionServerNoMaster.testMultipleOpen:237 » Service 
> java.io.IOException: R...
>   TestRegionServerNoMaster.testCloseByRegionServer:211->closeRegionNoZK:201 » 
> Service
> {code}
> Initially I thought the problem was in my patch for HBASE-12219, but I noticed
> that the issue was occurring on the 7th attempt to open the region. However, I
> was able to reproduce the same problem in the master branch after increasing
> the number of requests in testMultipleOpen():
> {code}
> 2014-10-29 15:03:45,043 INFO  [Thread-216] regionserver.RSRpcServices(1334): 
> Receiving OPEN for the 
> region:TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.,
>  which we are already trying to OPEN - ignoring this new request for this 
> region.
> Submitting openRegion attempt: 16 <
> 2014-10-29 15:03:45,044 INFO  [Thread-216] regionserver.RSRpcServices(1311): 
> Open TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.
> 2014-10-29 15:03:45,044 INFO  
> [PostOpenDeployTasks:025198143197ea68803e49819eae27ca] 
> hbase.MetaTableAccessor(1307): Updated row 
> TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca. 
> with server=192.168.1.105,63082,1414620220789
> Submitting openRegion attempt: 17 <
> 2014-10-29 15:03:45,046 ERROR [RS_OPEN_REGION-192.168.1.105:63082-2] 
> handler.OpenRegionHandler(88): Region 025198143197ea68803e49819eae27ca was 
> already online when we started processing the opening. Marking this new 
> attempt as failed
> 2014-10-29 15:03:45,047 FATAL [Thread-216] regionserver.HRegionServer(1931): 
> ABORTING region server 192.168.1.105,63082,1414620220789: Received OPEN for 
> the 
> region:TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.,
>  which is already online
> 2014-10-29 15:03:45,047 FATAL [Thread-216] regionserver.HRegionServer(1937): 
> RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
> 2014-10-29 15:03:45,054 WARN  [Thread-216] regionserver.HRegionServer(1955): 
> Unable to report fatal error to master
> com.google.protobuf.ServiceException: java.io.IOException: Call to 
> /192.168.1.105:63079 failed on local exception: java.io.IOException: 
> Connection to /192.168.1.105:63079 is closing. Call id=4, waitTime=2
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1707)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1757)
> at 
> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.reportRSFatalError(RegionServerStatusProtos.java:8301)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1952)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.abortRegionServer(MiniHBaseCluster.java:174)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$100(MiniHBaseCluster.java:108)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$2.run(MiniHBaseCluster.java:167)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:356)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:277)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.abort(MiniHBaseCluster.java:165)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1964)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1308)
> at 
> org.apache.hadoop.hbase.regionserver.TestRegionServerNoMaster.testMultipleOpen(TestRegionServerNoMaster.java:237)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.re

[jira] [Commented] (HBASE-12380) TestRegionServerNoMaster#testMultipleOpen is flaky after HBASE-11760

2014-10-30 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190479#comment-14190479
 ] 

Jimmy Xiang commented on HBASE-12380:
-

+1

> TestRegionServerNoMaster#testMultipleOpen is flaky after HBASE-11760
> 
>
> Key: HBASE-12380
> URL: https://issues.apache.org/jira/browse/HBASE-12380
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0
>Reporter: Esteban Gutierrez
> Attachments: HBASE-12380.v0.patch
>
>
> Noticed this while trying to fix a faulty test while working on a fix for
> HBASE-12219:
> {code}
> Tests in error:
>   TestRegionServerNoMaster.testMultipleOpen:237 » Service 
> java.io.IOException: R...
>   TestRegionServerNoMaster.testCloseByRegionServer:211->closeRegionNoZK:201 » 
> Service
> {code}
> Initially I thought the problem was in my patch for HBASE-12219, but I noticed
> that the issue was occurring on the 7th attempt to open the region. However, I
> was able to reproduce the same problem in the master branch after increasing
> the number of requests in testMultipleOpen():
> {code}
> 2014-10-29 15:03:45,043 INFO  [Thread-216] regionserver.RSRpcServices(1334): 
> Receiving OPEN for the 
> region:TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.,
>  which we are already trying to OPEN - ignoring this new request for this 
> region.
> Submitting openRegion attempt: 16 <
> 2014-10-29 15:03:45,044 INFO  [Thread-216] regionserver.RSRpcServices(1311): 
> Open TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.
> 2014-10-29 15:03:45,044 INFO  
> [PostOpenDeployTasks:025198143197ea68803e49819eae27ca] 
> hbase.MetaTableAccessor(1307): Updated row 
> TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca. 
> with server=192.168.1.105,63082,1414620220789
> Submitting openRegion attempt: 17 <
> 2014-10-29 15:03:45,046 ERROR [RS_OPEN_REGION-192.168.1.105:63082-2] 
> handler.OpenRegionHandler(88): Region 025198143197ea68803e49819eae27ca was 
> already online when we started processing the opening. Marking this new 
> attempt as failed
> 2014-10-29 15:03:45,047 FATAL [Thread-216] regionserver.HRegionServer(1931): 
> ABORTING region server 192.168.1.105,63082,1414620220789: Received OPEN for 
> the 
> region:TestRegionServerNoMaster,,1414620223682.025198143197ea68803e49819eae27ca.,
>  which is already online
> 2014-10-29 15:03:45,047 FATAL [Thread-216] regionserver.HRegionServer(1937): 
> RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
> 2014-10-29 15:03:45,054 WARN  [Thread-216] regionserver.HRegionServer(1955): 
> Unable to report fatal error to master
> com.google.protobuf.ServiceException: java.io.IOException: Call to 
> /192.168.1.105:63079 failed on local exception: java.io.IOException: 
> Connection to /192.168.1.105:63079 is closing. Call id=4, waitTime=2
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1707)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1757)
> at 
> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.reportRSFatalError(RegionServerStatusProtos.java:8301)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1952)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.abortRegionServer(MiniHBaseCluster.java:174)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$100(MiniHBaseCluster.java:108)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$2.run(MiniHBaseCluster.java:167)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:356)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:277)
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.abort(MiniHBaseCluster.java:165)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1964)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1308)
> at 
> org.apache.hadoop.hbase.regionserver.TestRegionServerNoMaster.testMultipleOpen(TestRegionServerNoMaster.java:237)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.Deleg

[jira] [Commented] (HBASE-12368) Use FastLongHistogram to accelerate histogram based metric stats

2014-10-30 Thread Yi Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190496#comment-14190496
 ] 

Yi Deng commented on HBASE-12368:
-

[~tedyu] I need it in `hbase-hadoop2-compat`, which does not depend on 
hbase-common. I think hbase-server also depends on `hbase-hadoop2-compat`, right?

> Use FastLongHistogram to accelerate histogram based metric stats
> -
>
> Key: HBASE-12368
> URL: https://issues.apache.org/jira/browse/HBASE-12368
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Yi Deng
>Assignee: Yi Deng
>Priority: Minor
>  Labels: metrics
> Attachments: 
> 0001-Add-Percentiles-who-uses-FatLongHistogram-to-replace.patch
>
>
> Use FastLongHistogram to accelerate histogram-based metric stats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9003) TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar

2014-10-30 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190506#comment-14190506
 ] 

Esteban Gutierrez commented on HBASE-9003:
--

The files end up accumulating even in non-test environments when 
{{hadoop.tmp.dir}} points to a different location, and that should affect any 
user that uses initTableMapperJob or initTableReducerJob and submits jobs using 
the distributed cache.
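
For context, a rough sketch of the call path involved (the driver class, mapper, and table name below are made up for illustration; only the TableMapReduceUtil calls are the real API):

{code}
// Hypothetical driver: initTableMapperJob() calls addDependencyJars(job) by
// default, and that is where JarFinder#getJar() can leave a temporary jar
// under hadoop.tmp.dir on the submitting host. Imports from
// org.apache.hadoop.hbase.mapreduce, o.a.h.hbase.client and o.a.h.mapreduce
// are omitted for brevity.
public class ScanDriver {
  static class MyMapper extends TableMapper<ImmutableBytesWritable, Result> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context) {
      // no-op; a real job would emit something here
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "scan-job");
    TableMapReduceUtil.initTableMapperJob("my_table", new Scan(),
        MyMapper.class, ImmutableBytesWritable.class, Result.class, job);
    job.waitForCompletion(true);
  }
}
{code}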

> TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
> -
>
> Key: HBASE-9003
> URL: https://issues.apache.org/jira/browse/HBASE-9003
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.0.0, 0.99.2
>
> Attachments: HBASE-9003.v0.patch, HBASE-9003.v1.patch, 
> HBASE-9003.v2.patch, HBASE-9003.v2.patch
>
>
> This is the problem: {{TableMapReduceUtil#addDependencyJars}} relies on 
> {{org.apache.hadoop.util.JarFinder}}, if available, to call {{getJar()}}. 
> However, {{getJar()}} uses File.createTempFile() to create a temporary file 
> under {{hadoop.tmp.dir}}{{/target/test-dir}}. Due to HADOOP-9737, the created 
> jar and its contents are not purged after the JVM is destroyed. Since most 
> configurations point {{hadoop.tmp.dir}} under {{/tmp}}, the generated jar 
> files get purged by {{tmpwatch}} or a similar tool, but boxes that have 
> {{hadoop.tmp.dir}} pointing to a different location not monitored by 
> {{tmpwatch}} will pile up a collection of jars, causing all kinds of issues. 
> Since {{JarFinder#getJar}} is not a public API in Hadoop (see [~tucu00]'s 
> comment on HADOOP-9737), we shouldn't use it as part of 
> {{TableMapReduceUtil}}, in order to avoid this kind of issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190508#comment-14190508
 ] 

Hudson commented on HBASE-12375:


FAILURE: Integrated in HBase-0.98 #641 (See 
[https://builds.apache.org/job/HBase-0.98/641/])
HBASE-12375 LoadIncrementalHFiles fails to load data in table when CF name 
starts with '_' (stack: rev 68eb74b23e6eff60cf4410ff4af1a60b501a7c9c)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java


> LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
> --
>
> Key: HBASE-12375
> URL: https://issues.apache.org/jira/browse/HBASE-12375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.5
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
> Attachments: HBASE-12375-0.98.patch, HBASE-12375-v2.patch, 
> HBASE-12375.patch
>
>
> We do not restrict users from creating a table with a column family name
> starting with '_'.
> When a user creates such a table, LoadIncrementalHFiles will skip loading that
> family's data into the table.
> {code}
> // Skip _logs, etc
> if (familyDir.getName().startsWith("_")) continue;
> {code}
> I think we should remove that check, as I do not see any _logs directory being
> created by the bulkload tool in the output directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12368) Use FastLongHistogram to accelerate histogram based metric stats

2014-10-30 Thread Yi Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190521#comment-14190521
 ] 

Yi Deng commented on HBASE-12368:
-

This compile problem doesn't make sense to me:

[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/util/TestFastLongHistogram.java:[23,50]
 package org.apache.hadoop.hbase.testclassification does not exist

This package is defined in `hbase-annotation`, which is one of the dependencies 
of `hbase-hadoop2-compat`.

> Use FastLongHistogram to accelerate histogram based metric stats
> -
>
> Key: HBASE-12368
> URL: https://issues.apache.org/jira/browse/HBASE-12368
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Yi Deng
>Assignee: Yi Deng
>Priority: Minor
>  Labels: metrics
> Attachments: 
> 0001-Add-Percentiles-who-uses-FatLongHistogram-to-replace.patch
>
>
> Use FastLongHistogram to accelerate histogram-based metric stats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12384) TestTags can hang on fast test hosts

2014-10-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12384:
---
Priority: Minor  (was: Major)

> TestTags can hang on fast test hosts
> 
>
> Key: HBASE-12384
> URL: https://issues.apache.org/jira/browse/HBASE-12384
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
> Attachments: HBASE-12384-0.98.patch, HBASE-12384-master.patch
>
>
> The test waits indefinitely for the flushed files to reach a certain count
> after triggering a flush, but a compaction may happen between the flush and
> the check on the number of store files.
> {code}
> admin.flush(tableName);
> regions = TEST_UTIL.getHBaseCluster().getRegions(tableName);
> for (HRegion region : regions) {
>   Store store = region.getStore(fam);
> - Flush and compaction has happened before here --->
>   while (!(store.getStorefilesCount() > 2)) {
> - Hung forever in here ---> 
> Thread.sleep(10);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12384) TestTags can hang on fast test hosts

2014-10-30 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12384:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed with the comment added as requested: one comment on the first sleep 
after flush in each unit test. Thanks for the review, [~stack].
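
For anyone hitting a similar hang, one generic way to make that kind of wait robust is to bound it with a deadline. The sketch below is illustrative only (the 60-second limit is arbitrary, and `store`/`fail` come from the test fragment quoted below and JUnit); it is not necessarily what the committed patch does:

{code}
// Illustrative sketch, continuing the quoted test fragment: bound the wait so
// a compaction racing with the flush cannot hang the test forever.
long deadline = System.currentTimeMillis() + 60000L; // arbitrary 60s limit
while (store.getStorefilesCount() <= 2) {
  if (System.currentTimeMillis() > deadline) {
    fail("store file count never exceeded 2; a compaction likely merged the flushed files");
  }
  Thread.sleep(10);
}
{code}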

> TestTags can hang on fast test hosts
> 
>
> Key: HBASE-12384
> URL: https://issues.apache.org/jira/browse/HBASE-12384
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
> Attachments: HBASE-12384-0.98.patch, HBASE-12384-master.patch
>
>
> The test waits indefinitely for the flushed files to reach a certain count
> after triggering a flush, but a compaction may happen between the flush and
> the check on the number of store files.
> {code}
> admin.flush(tableName);
> regions = TEST_UTIL.getHBaseCluster().getRegions(tableName);
> for (HRegion region : regions) {
>   Store store = region.getStore(fam);
> - Flush and compaction has happened before here --->
>   while (!(store.getStorefilesCount() > 2)) {
> - Hung forever in here ---> 
> Thread.sleep(10);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9003) TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar

2014-10-30 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190530#comment-14190530
 ] 

Nick Dimiduk commented on HBASE-9003:
-

Makes sense. Let's get your fix in. What do you think about removing JarFinder 
altogether for 1.0?

> TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
> -
>
> Key: HBASE-9003
> URL: https://issues.apache.org/jira/browse/HBASE-9003
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.0.0, 0.99.2
>
> Attachments: HBASE-9003.v0.patch, HBASE-9003.v1.patch, 
> HBASE-9003.v2.patch, HBASE-9003.v2.patch
>
>
> This is the problem: {{TableMapReduceUtil#addDependencyJars}} relies on 
> {{org.apache.hadoop.util.JarFinder}}, if available, to call {{getJar()}}. 
> However, {{getJar()}} uses File.createTempFile() to create a temporary file 
> under {{hadoop.tmp.dir}}{{/target/test-dir}}. Due to HADOOP-9737, the created 
> jar and its contents are not purged after the JVM is destroyed. Since most 
> configurations point {{hadoop.tmp.dir}} under {{/tmp}}, the generated jar 
> files get purged by {{tmpwatch}} or a similar tool, but boxes that have 
> {{hadoop.tmp.dir}} pointing to a different location not monitored by 
> {{tmpwatch}} will pile up a collection of jars, causing all kinds of issues. 
> Since {{JarFinder#getJar}} is not a public API in Hadoop (see [~tucu00]'s 
> comment on HADOOP-9737), we shouldn't use it as part of 
> {{TableMapReduceUtil}}, in order to avoid this kind of issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190532#comment-14190532
 ] 

Hudson commented on HBASE-12375:


FAILURE: Integrated in HBase-1.0 #390 (See 
[https://builds.apache.org/job/HBase-1.0/390/])
HBASE-12375 LoadIncrementalHFiles fails to load data in table when CF name 
starts with '_' (stack: rev d8874fbc21525a5af2db3d8b9edd6e67fa1b5572)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java


> LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
> --
>
> Key: HBASE-12375
> URL: https://issues.apache.org/jira/browse/HBASE-12375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.5
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
> Attachments: HBASE-12375-0.98.patch, HBASE-12375-v2.patch, 
> HBASE-12375.patch
>
>
> We do not restrict users from creating a table with a column family name
> starting with '_'.
> When a user creates such a table, LoadIncrementalHFiles will skip loading that
> family's data into the table.
> {code}
> // Skip _logs, etc
> if (familyDir.getName().startsWith("_")) continue;
> {code}
> I think we should remove that check, as I do not see any _logs directory being
> created by the bulkload tool in the output directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9003) TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar

2014-10-30 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190543#comment-14190543
 ] 

Esteban Gutierrez commented on HBASE-9003:
--

I don't think we can get rid of JarFinder as long as we have an option to use 
the distributed cache in initTable*. I remember we used to ship the jars in a 
similar way back in 0.90.x, but we cleaned up the temporary jar. Here the only 
problem we have is that we don't clean up.
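
A minimal sketch of the kind of cleanup meant here (findOrCreateJar() is a hypothetical stand-in for however the jar gets produced, e.g. JarFinder#getJar, and klass is assumed to be in scope; this is not the actual patch):

{code}
// Illustrative only: whatever mechanism produced the temporary jar for the
// class, register the file for deletion so it does not pile up under
// hadoop.tmp.dir between job submissions.
String jarPath = findOrCreateJar(klass); // hypothetical helper
if (jarPath != null) {
  File jarFile = new File(jarPath);      // java.io.File
  if (jarFile.exists()) {
    jarFile.deleteOnExit();              // clean up when the submitting JVM exits
  }
}
{code}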

> TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
> -
>
> Key: HBASE-9003
> URL: https://issues.apache.org/jira/browse/HBASE-9003
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.0.0, 0.99.2
>
> Attachments: HBASE-9003.v0.patch, HBASE-9003.v1.patch, 
> HBASE-9003.v2.patch, HBASE-9003.v2.patch
>
>
> This is the problem: {{TableMapReduceUtil#addDependencyJars}} relies on 
> {{org.apache.hadoop.util.JarFinder}}, if available, to call {{getJar()}}. 
> However, {{getJar()}} uses File.createTempFile() to create a temporary file 
> under {{hadoop.tmp.dir}}{{/target/test-dir}}. Due to HADOOP-9737, the created 
> jar and its contents are not purged after the JVM is destroyed. Since most 
> configurations point {{hadoop.tmp.dir}} under {{/tmp}}, the generated jar 
> files get purged by {{tmpwatch}} or a similar tool, but boxes that have 
> {{hadoop.tmp.dir}} pointing to a different location not monitored by 
> {{tmpwatch}} will pile up a collection of jars, causing all kinds of issues. 
> Since {{JarFinder#getJar}} is not a public API in Hadoop (see [~tucu00]'s 
> comment on HADOOP-9737), we shouldn't use it as part of 
> {{TableMapReduceUtil}}, in order to avoid this kind of issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12072) We are doing 35 x 35 retries for master operations

2014-10-30 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190546#comment-14190546
 ] 

Enis Soztutar commented on HBASE-12072:
---

Thanks Stack for checking the patch. It turned out to be a bigger patch than I 
anticipated. 
The patch aims to unify how we call master RPCs. It adds retrying where we had 
none (for example HBaseAdmin.enableCatalogJanitor(), etc.), and removes the 
retrying in makeStub() in favor of the higher-level retry at the 
MasterCallable / retrying-caller level. Now most of the HBaseAdmin methods use 
MasterCallable properly. The exceptions are cleaned up a bit for the public 
Admin interface. 

bq. Whats thinking behind removing isMasterRunning Enis Soztutar I like not 
depending on master for ops.
I am not sure what the purpose of isMasterRunning() is or why a user would 
want it. It is removed from the Admin interface, which is new, but kept in 
deprecated form in HBaseAdmin. I can undo that if you think we need to keep 
it. I just did not see a use case where a user would call 
Admin.isMasterRunning() other than for internal stuff. 

bq. When you deprecate, want to point at what folks should use instead (or your 
thinking this is internal stuff and the heavies will just figure it out?)
I thought the deprecated stuff in HConnection was internal, but it is not 
clear. I think having those live in the Admin layer makes more sense. Let me 
add javadoc. 

bq. Not so mad about the flattening of exceptions into IOE exclusively.
I see your point. I think you mean these:
{code}
   void move(final byte[] encodedRegionName, final byte[] destServerName)
-  throws HBaseIOException, MasterNotRunningException, 
ZooKeeperConnectionException;
+  throws IOException;
{code}
With the patch, we now call it via the retrying rpc caller, which will throw a 
RetriesExhaustedException etc. that wraps the other exceptions. That is why 
they are not explicitly thrown anymore. 
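
Caller-side, that roughly looks like the following (a hedged sketch; admin, encodedRegionName, destServerName and LOG are assumed to be in scope, and exactly what wraps the underlying failure depends on the retrying caller):

{code}
// Illustrative handling after the change: the retrying caller surfaces a
// RetriesExhaustedException (an IOException) that wraps the underlying
// failure, instead of the specific exceptions being declared on move().
try {
  admin.move(encodedRegionName, destServerName);
} catch (RetriesExhaustedException ree) {
  // e.g. MasterNotRunningException may show up here as the cause
  LOG.error("move failed after retries", ree);
} catch (IOException ioe) {
  LOG.error("move failed", ioe);
}
{code}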

bq. What we supposed to use in place of all deprecated stuff in 
ConnectionManager? Just implement instead up in HBaseAdmin?
Yeah, let me add javadoc pointing to the Admin methods. 


> We are doing 35 x 35 retries for master operations
> --
>
> Key: HBASE-12072
> URL: https://issues.apache.org/jira/browse/HBASE-12072
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.6
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 0.99.2
>
> Attachments: 12072-v1.txt, 12072-v2.txt, hbase-12072_v1.patch
>
>
> For master requests, there are two retry mechanisms in effect. The first one 
> is from HBaseAdmin.executeCallable() 
> {code}
>   private  V executeCallable(MasterCallable callable) throws 
> IOException {
> RpcRetryingCaller caller = rpcCallerFactory.newCaller();
> try {
>   return caller.callWithRetries(callable);
> } finally {
>   callable.close();
> }
>   }
> {code}
> And inside, the other one is from StubMaker.makeStub():
> {code}
> /**
>* Create a stub against the master.  Retry if necessary.
>* @return A stub to do intf against the master
>* @throws MasterNotRunningException
>*/
>   @edu.umd.cs.findbugs.annotations.SuppressWarnings 
> (value="SWL_SLEEP_WITH_LOCK_HELD")
>   Object makeStub() throws MasterNotRunningException {
> {code}
> The tests will just hang for 10 min * 35 ~= 6 hours. 
> {code}
> 2014-09-23 16:19:05,151 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 1 of 35 
> failed; retrying after sleep of 100, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,253 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 2 of 35 
> failed; retrying after sleep of 200, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,456 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 3 of 35 
> failed; retrying after sleep of 300, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,759 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 4 of 35 
> failed; retrying after sleep of 500, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:06,262 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 5 of 35 
> failed; retrying after sleep of 1008, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:07,273 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 6 of 35 
> failed

[jira] [Commented] (HBASE-12381) Add maven enforcer rules for build assumptions

2014-10-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190563#comment-14190563
 ] 

Hadoop QA commented on HBASE-12381:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12678228/HBASE-12381.1.patch.txt
  against trunk revision .
  ATTACHMENT ID: 12678228

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.master.balancer.TestBaseLoadBalancer.testImmediateAssignment(TestBaseLoadBalancer.java:136)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11519//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11519//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11519//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11519//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11519//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11519//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11519//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11519//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11519//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11519//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11519//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11519//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11519//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11519//console

This message is automatically generated.

> Add maven enforcer rules for build assumptions
> --
>
> Key: HBASE-12381
> URL: https://issues.apache.org/jira/browse/HBASE-12381
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 0.94.26, 0.98.9, 0.99.2
>
> Attachments: HBASE-12381.1.patch.txt
>
>
> Our ref guide says that you need Maven 3 to build. Add an enforcer rule so
> that people find out early that they have the wrong Maven version, rather than
> through whatever way things fall over when someone tries to build with Maven 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9003) TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar

2014-10-30 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190564#comment-14190564
 ] 

Nick Dimiduk commented on HBASE-9003:
-

IIRC, I introduced JarFinder for the purpose of launching jobs from the output 
committer of a running job. In this context, the dependency jars have been 
unpacked, so to launch the job, JarFinder is used to re-pack the class files 
into a jar.

Which raises an interesting point: you're seeing this accumulation of files 
under hadoop.tmp.dir even for regular jobs? Nothing should be created unless 
the requested class is found on the classpath outside of a jar.
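
Roughly, the behavior being described is the following (a simplified sketch, not the actual JarFinder code; repackIntoTempJar() is a hypothetical helper):

{code}
// Simplified sketch: prefer a jar that already contains the class; only when
// the class was loaded from an unpacked directory (e.g. inside a running
// task) does a temporary jar get built.
static String jarFor(Class<?> klass) throws IOException {
  String resource = klass.getName().replace('.', '/') + ".class";
  URL url = klass.getClassLoader().getResource(resource);   // java.net.URL
  if (url != null && "jar".equals(url.getProtocol())) {
    String path = url.getPath();  // e.g. file:/path/to/foo.jar!/com/Foo.class
    return path.substring(0, path.indexOf('!'));  // keep only the jar location
  }
  // Class comes from a plain directory on the classpath: re-pack it into a
  // temporary jar. This is the path that can accumulate files on disk.
  return repackIntoTempJar(klass);                 // hypothetical helper
}
{code}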

I don't remember the details; let me look into the code when I have a few 
minutes.

Back to the point at hand: +1 for fixing the accumulation problem.

> TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
> -
>
> Key: HBASE-9003
> URL: https://issues.apache.org/jira/browse/HBASE-9003
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.0.0, 0.99.2
>
> Attachments: HBASE-9003.v0.patch, HBASE-9003.v1.patch, 
> HBASE-9003.v2.patch, HBASE-9003.v2.patch
>
>
> This is the problem: {{TableMapReduceUtil#addDependencyJars}} relies on 
> {{org.apache.hadoop.util.JarFinder}}, if available, to call {{getJar()}}. 
> However, {{getJar()}} uses File.createTempFile() to create a temporary file 
> under {{hadoop.tmp.dir}}{{/target/test-dir}}. Due to HADOOP-9737, the created 
> jar and its contents are not purged after the JVM is destroyed. Since most 
> configurations point {{hadoop.tmp.dir}} under {{/tmp}}, the generated jar 
> files get purged by {{tmpwatch}} or a similar tool, but boxes that have 
> {{hadoop.tmp.dir}} pointing to a different location not monitored by 
> {{tmpwatch}} will pile up a collection of jars, causing all kinds of issues. 
> Since {{JarFinder#getJar}} is not a public API in Hadoop (see [~tucu00]'s 
> comment on HADOOP-9737), we shouldn't use it as part of 
> {{TableMapReduceUtil}}, in order to avoid this kind of issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12375) LoadIncrementalHFiles fails to load data in table when CF name starts with '_'

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190575#comment-14190575
 ] 

Hudson commented on HBASE-12375:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #610 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/610/])
HBASE-12375 LoadIncrementalHFiles fails to load data in table when CF name 
starts with '_' (stack: rev 68eb74b23e6eff60cf4410ff4af1a60b501a7c9c)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java


> LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
> --
>
> Key: HBASE-12375
> URL: https://issues.apache.org/jira/browse/HBASE-12375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.5
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 0.98.9, 0.99.2
>
> Attachments: HBASE-12375-0.98.patch, HBASE-12375-v2.patch, 
> HBASE-12375.patch
>
>
> We do not restrict users from creating a table with a column family name
> starting with '_'.
> When a user creates such a table, LoadIncrementalHFiles will skip loading that
> family's data into the table.
> {code}
> // Skip _logs, etc
> if (familyDir.getName().startsWith("_")) continue;
> {code}
> I think we should remove that check, as I do not see any _logs directory being
> created by the bulkload tool in the output directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12384) TestTags can hang on fast test hosts

2014-10-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190604#comment-14190604
 ] 

Hadoop QA commented on HBASE-12384:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12678238/HBASE-12384-master.patch
  against trunk revision .
  ATTACHMENT ID: 12678238

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11520//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11520//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11520//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11520//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11520//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11520//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11520//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11520//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11520//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11520//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11520//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11520//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11520//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11520//console

This message is automatically generated.

> TestTags can hang on fast test hosts
> 
>
> Key: HBASE-12384
> URL: https://issues.apache.org/jira/browse/HBASE-12384
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
> Attachments: HBASE-12384-0.98.patch, HBASE-12384-master.patch
>
>
> The test waits indefinitely for the flushed files to reach a certain count
> after triggering a flush, but a compaction may happen between the flush and
> the check on the number of store files.
> {code}
> admin.flush(tableName);
> regions = TEST_UTIL.getHBaseCluster().getRegions(tableName);
> for (HRegion region : regions) {
>   Store store = region.getStore(fam);
> - Flush and compaction has happened before here --->
>   while (!(store.getStorefilesCount() > 2)) {
> - Hung forever in here ---> 
> Thread.sleep(10);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-9003) TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar

2014-10-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190613#comment-14190613
 ] 

stack commented on HBASE-9003:
--

bq. Back to the point at hand +1 for fixing the accumulation problem.

Is that a +1 on the patch, [~ndimiduk]?

> TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
> -
>
> Key: HBASE-9003
> URL: https://issues.apache.org/jira/browse/HBASE-9003
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Fix For: 2.0.0, 0.99.2
>
> Attachments: HBASE-9003.v0.patch, HBASE-9003.v1.patch, 
> HBASE-9003.v2.patch, HBASE-9003.v2.patch
>
>
> This is the problem: {{TableMapReduceUtil#addDependencyJars}} relies on 
> {{org.apache.hadoop.util.JarFinder}}, if available, to call {{getJar()}}. 
> However, {{getJar()}} uses File.createTempFile() to create a temporary file 
> under {{hadoop.tmp.dir}}{{/target/test-dir}}. Due to HADOOP-9737, the created 
> jar and its contents are not purged after the JVM is destroyed. Since most 
> configurations point {{hadoop.tmp.dir}} under {{/tmp}}, the generated jar 
> files get purged by {{tmpwatch}} or a similar tool, but boxes that have 
> {{hadoop.tmp.dir}} pointing to a different location not monitored by 
> {{tmpwatch}} will pile up a collection of jars, causing all kinds of issues. 
> Since {{JarFinder#getJar}} is not a public API in Hadoop (see [~tucu00]'s 
> comment on HADOOP-9737), we shouldn't use it as part of 
> {{TableMapReduceUtil}}, in order to avoid this kind of issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11835) Wrong management of unexpected calls in the client

2014-10-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190622#comment-14190622
 ] 

Hadoop QA commented on HBASE-11835:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12678243/11835.rebase.patch
  against trunk revision .
  ATTACHMENT ID: 12678243

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3785 checkstyle errors (more than the trunk's current 3784 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.ambari.server.upgrade.UpgradeCatalog150Test.testAddHistoryServer(UpgradeCatalog150Test.java:189)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11521//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11521//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11521//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11521//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11521//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11521//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11521//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11521//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11521//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11521//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11521//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11521//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11521//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11521//console

This message is automatically generated.

> Wrong management of unexpected calls in the client
> --
>
> Key: HBASE-11835
> URL: https://issues.apache.org/jira/browse/HBASE-11835
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Performance
>Affects Versions: 1.0.0, 2.0.0, 0.98.6
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 2.0.0, 0.99.2
>
> Attachments: 11835.rebase.patch, 11835.rebase.patch, 
> 11835.rebase.patch, rpcClient.patch
>
>
> If a call is purged or canceled we try to skip the reply from the server, but 
> we read the wrong number of bytes and so corrupt the TCP channel. It's hidden 
> because it triggers retries and so on, but it's obviously bad for performance.
> It happens with cell blocks.
> [~ram_krish_86], [~saint@gmail.com], you know this part better than me; 
> do you agree with the analysis and the patch?
> The changes in rpcServer are not fully related: as the

[jira] [Commented] (HBASE-11764) Support per cell TTLs

2014-10-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190648#comment-14190648
 ] 

Lars Hofhansl commented on HBASE-11764:
---

Scanned patch again. Looks good. +1
[~apurtell] you're confident enough that this won't destabilize 0.98?

> Support per cell TTLs
> -
>
> Key: HBASE-11764
> URL: https://issues.apache.org/jira/browse/HBASE-11764
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
> Attachments: HBASE-11764-0.98.patch, HBASE-11764-0.98.patch, 
> HBASE-11764-0.98.patch, HBASE-11764-0.98.patch, HBASE-11764-0.98.patch, 
> HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, 
> HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, 
> HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch, HBASE-11764.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12381) Add maven enforcer rules for build assumptions

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190647#comment-14190647
 ] 

Hudson commented on HBASE-12381:


FAILURE: Integrated in HBase-TRUNK #5723 (See 
[https://builds.apache.org/job/HBase-TRUNK/5723/])
HBASE-12381 use the Maven Enforcer Plugin to check maven and java versions. 
(stack: rev 075fd3032135c55a6874a6f0c091e558540609d0)
* pom.xml


> Add maven enforcer rules for build assumptions
> --
>
> Key: HBASE-12381
> URL: https://issues.apache.org/jira/browse/HBASE-12381
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 0.94.26, 0.98.9, 0.99.2
>
> Attachments: HBASE-12381.1.patch.txt
>
>
> Our ref guide says that you need Maven 3 to build. Add an enforcer rule so
> that people find out early that they have the wrong Maven version, rather than
> through whatever way things fall over when someone tries to build with Maven 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12381) Add maven enforcer rules for build assumptions

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190651#comment-14190651
 ] 

Hudson commented on HBASE-12381:


FAILURE: Integrated in HBase-0.94 #1437 (See 
[https://builds.apache.org/job/HBase-0.94/1437/])
HBASE-12381 use the Maven Enforcer Plugin to check maven and java versions. 
(stack: rev f0a8640f0ae3c5750da826e4ab5b847ad1b0ae34)
* pom.xml


> Add maven enforcer rules for build assumptions
> --
>
> Key: HBASE-12381
> URL: https://issues.apache.org/jira/browse/HBASE-12381
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 0.94.26, 0.98.9, 0.99.2
>
> Attachments: HBASE-12381.1.patch.txt
>
>
> Our ref guide says that you need Maven 3 to build. Add an enforcer rule so
> that people find out early that they have the wrong Maven version, rather than
> through whatever way things fall over when someone tries to build with Maven 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12381) Add maven enforcer rules for build assumptions

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190654#comment-14190654
 ] 

Hudson commented on HBASE-12381:


FAILURE: Integrated in HBase-0.94-security #551 (See 
[https://builds.apache.org/job/HBase-0.94-security/551/])
HBASE-12381 use the Maven Enforcer Plugin to check maven and java versions. 
(stack: rev f0a8640f0ae3c5750da826e4ab5b847ad1b0ae34)
* pom.xml


> Add maven enforcer rules for build assumptions
> --
>
> Key: HBASE-12381
> URL: https://issues.apache.org/jira/browse/HBASE-12381
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 0.94.26, 0.98.9, 0.99.2
>
> Attachments: HBASE-12381.1.patch.txt
>
>
> Our ref guide says that you need Maven 3 to build. Add an enforcer rule so
> that people find out early that they have the wrong Maven version, rather than
> through whatever way things fall over when someone tries to build with Maven 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12381) Add maven enforcer rules for build assumptions

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190659#comment-14190659
 ] 

Hudson commented on HBASE-12381:


SUCCESS: Integrated in HBase-0.94-JDK7 #206 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/206/])
HBASE-12381 use the Maven Enforcer Plugin to check maven and java versions. 
(stack: rev f0a8640f0ae3c5750da826e4ab5b847ad1b0ae34)
* pom.xml


> Add maven enforcer rules for build assumptions
> --
>
> Key: HBASE-12381
> URL: https://issues.apache.org/jira/browse/HBASE-12381
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 0.94.26, 0.98.9, 0.99.2
>
> Attachments: HBASE-12381.1.patch.txt
>
>
> our ref guide says that you need maven 3 to build. add an enforcer rule so
> that people find out early that they have the wrong maven version, rather
> than through whatever failure they would otherwise hit trying to build with maven 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12384) TestTags can hang on fast test hosts

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190677#comment-14190677
 ] 

Hudson commented on HBASE-12384:


SUCCESS: Integrated in HBase-1.0 #391 (See 
[https://builds.apache.org/job/HBase-1.0/391/])
HBASE-12384 TestTags can hang on fast test hosts (apurtell: rev 
f0091a90313f4c92e465df086407266a6ba18486)
* hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java


> TestTags can hang on fast test hosts
> 
>
> Key: HBASE-12384
> URL: https://issues.apache.org/jira/browse/HBASE-12384
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 0.98.8, 0.99.2
>
> Attachments: HBASE-12384-0.98.patch, HBASE-12384-master.patch
>
>
> The test waits indefinitely for the store file count to exceed a threshold
> after triggering a flush, but a compaction can run between the flush and the
> check, so the count never gets there and the test hangs.
> {code}
> admin.flush(tableName);
> regions = TEST_UTIL.getHBaseCluster().getRegions(tableName);
> for (HRegion region : regions) {
>   Store store = region.getStore(fam);
> - Flush and compaction has happened before here --->
>   while (!(store.getStorefilesCount() > 2)) {
> - Hung forever in here ---> 
> Thread.sleep(10);
>   }
> }
> {code}
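One way to avoid the hang is to bound the wait with a deadline so the test fails instead of spinning forever. This is only a sketch against the test-local variables shown above (regions, fam), with an assumed 60 second budget; it is not necessarily the approach taken in the attached patches:

{code}
// Sketch: bound the wait instead of looping forever. If a compaction
// collapses the store files before the check, the test times out and
// fails rather than hanging the build. fail() is JUnit's Assert.fail.
long deadline = System.currentTimeMillis() + 60000L; // assumed 60s budget
for (HRegion region : regions) {
  Store store = region.getStore(fam);
  while (!(store.getStorefilesCount() > 2)) {
    if (System.currentTimeMillis() > deadline) {
      fail("Timed out waiting for more than 2 store files");
    }
    Thread.sleep(10);
  }
}
{code}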



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12381) Add maven enforcer rules for build assumptions

2014-10-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190678#comment-14190678
 ] 

Hudson commented on HBASE-12381:


SUCCESS: Integrated in HBase-1.0 #391 (See 
[https://builds.apache.org/job/HBase-1.0/391/])
HBASE-12381 use the Maven Enforcer Plugin to check maven and java versions. 
(stack: rev 158e009f4c554b792e0b868a8ec77ce19a401d7b)
* pom.xml


> Add maven enforcer rules for build assumptions
> --
>
> Key: HBASE-12381
> URL: https://issues.apache.org/jira/browse/HBASE-12381
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.0.0, 0.94.26, 0.98.9, 0.99.2
>
> Attachments: HBASE-12381.1.patch.txt
>
>
> our ref guide says that you need maven 3 to build. add an enforcer rule so
> that people find out early that they have the wrong maven version, rather
> than through whatever failure they would otherwise hit trying to build with maven 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

