[jira] [Commented] (HBASE-10419) Add multiget support to PerformanceEvaluation

2014-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902329#comment-13902329
 ] 

Hadoop QA commented on HBASE-10419:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12629179/HBASE-10419-v3-trunk.patch
  against trunk revision .
  ATTACHMENT ID: 12629179

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.master.TestDistributedLogSplitting

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8716//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8716//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8716//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8716//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8716//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8716//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8716//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8716//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8716//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8716//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8716//console

This message is automatically generated.

> Add multiget support to PerformanceEvaluation
> -
>
> Key: HBASE-10419
> URL: https://issues.apache.org/jira/browse/HBASE-10419
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: HBASE-10419-v2-trunk.patch, HBASE-10419-v3-trunk.patch, 
> HBASE-10419.0.patch, HBASE-10419.1.patch
>
>
> Folks planning to use multiget may find this useful.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10548) PerfEval work around wrong runtime dependency version

2014-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902324#comment-13902324
 ] 

Hadoop QA commented on HBASE-10548:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12629186/HBASE-10548.00.patch
  against trunk revision .
  ATTACHMENT ID: 12629186

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8715//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8715//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8715//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8715//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8715//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8715//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8715//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8715//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8715//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8715//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8715//console

This message is automatically generated.

> PerfEval work around wrong runtime dependency version
> -
>
> Key: HBASE-10548
> URL: https://issues.apache.org/jira/browse/HBASE-10548
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.96.2, 0.98.1, 0.99.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: HBASE-10548.00.patch
>
>
> From my 
> [comment|https://issues.apache.org/jira/browse/HBASE-10511?focusedCommentId=13902238&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13902238]
>  on HBASE-10511:
> I have hadoop-1.2.1 installed from tgz, which packages commons-math-2.1. This 
> is *different* from the listed maven dependency, 2.2.
> {noformat}
> $ tar tvf hadoop-1.2.1.tar.gz | grep commons-math
> -rw-rw-r--  0 0  0  832410 Jul 22  2013 
> hadoop-1.2.1/lib/commons-math-2.1.jar
> $ mvn -f pom.xml.hadoop1 dependency:tree | grep commons-math
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> {noformat}
> This is a problem because the 2.1 version of 
> [DescriptiveStatistics|http://commons.apache.org/proper/commons-math/javadocs/api-2.1/org/apache/commons/math/stat/descriptive/DescriptiveStatistics.html]
>  doesn't have a double[] constructor. Running the MR job, mappers fail:
> {noformat}
> java.lang.NoSuchMethodError: 
> org.apache.commons.math.stat.descriptive.DescriptiveStatistics.([D)V
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$RandomReadTest.testTakedown(PerformanceEvaluation.java:1163)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:984)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:1401)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:522)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:474)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> {noformat}

[jira] [Commented] (HBASE-10296) Replace ZK with a consensus lib(paxos,zab or raft) running within master processes to provide better master failover performance and state consistency

2014-02-14 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902322#comment-13902322
 ] 

Feng Honghua commented on HBASE-10296:
--

bq.ZK is not good enough, but do it by your own will make things worse.
Could you list the detailed reasons for this statement? By 'will make things 
worse', do you mean the coding complexity and correctness risk of implementing 
our own consensus lib? Or something else? :-)
bq.The only real problem I can see is that ZK is not strong consistent.
ZK itself should be strongly consistent, right? The inconsistency in HMaster 
comes from our usage pattern for maintaining state-machine logic (especially 
the assign state machine): 'process A changes a znode; process B watches that 
znode and then reads its value to drive its state machine'. The data/states we 
put in ZK themselves are still consistent, right?
bq.This can be done with the existed API (but performance is much inefficient 
than chubby).
Actually, if we make HMaster the sole arbitrator and the only writer to ZK, 
with ZK acting as the single source of truth, then regionservers would not 
write/update states directly to ZK but would instead report to HMaster, which 
updates ZK on their behalf. This would remarkably alleviate the current 
inconsistency issues in HMaster. Maintaining consistency between ZK and 
HMaster's in-memory data would still need careful handling, though...
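The single-writer pattern described above can be sketched in a few lines. This is a hypothetical, self-contained illustration (a plain map stands in for ZK; none of these types are real HBase classes): regionservers never touch the shared store, and the master updates its view and the store under one lock so they cannot diverge.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sole arbitrator: the only component allowed to write to the shared store.
class Master {
    private final Map<String, String> sharedStore = new ConcurrentHashMap<>(); // stands in for ZK

    // Store and in-memory view are updated under one lock, so readers see
    // a single consistent truth instead of racing watch notifications.
    synchronized void updateRegionState(String region, String state) {
        sharedStore.put(region, state);
    }

    synchronized String regionState(String region) {
        return sharedStore.get(region);
    }
}

// Regionservers report state changes to the master instead of writing to ZK.
class RegionServer {
    private final Master master;
    RegionServer(Master master) { this.master = master; }

    void reportOpened(String region) {
        master.updateRegionState(region, "OPEN"); // talk to HMaster, not to ZK
    }
}

public class Main {
    public static void main(String[] args) {
        Master m = new Master();
        new RegionServer(m).reportOpened("region-1");
        System.out.println(m.regionState("region-1")); // OPEN
    }
}
```

The design choice being illustrated: with one writer, the watch-and-re-read race disappears, at the cost of funneling all state updates through the master.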

> Replace ZK with a consensus lib(paxos,zab or raft) running within master 
> processes to provide better master failover performance and state consistency
> --
>
> Key: HBASE-10296
> URL: https://issues.apache.org/jira/browse/HBASE-10296
> Project: HBase
>  Issue Type: Brainstorming
>  Components: master, Region Assignment, regionserver
>Reporter: Feng Honghua
>
> Currently the master relies on ZK to elect the active master, monitor 
> liveness and store almost all of its state, such as region states, table 
> info, replication info and so on. ZK also serves as a channel for 
> master-regionserver communication (such as region assignment) and 
> client-regionserver communication (such as replication state/behavior 
> changes). But ZK as a communication channel is fragile due to its one-time 
> watches and asynchronous notification mechanism, which together can lead to 
> missed events (hence missed messages); for example, the master must rely on 
> the idempotence of the state-transition logic to keep the region-assignment 
> state machine correct. In fact, almost all of the trickiest inconsistency 
> issues trace their root cause back to the fragility of ZK as a communication 
> channel.
> Replacing ZK with Paxos running within the master processes has the 
> following benefits:
> 1. Better master failover performance: all masters, whether active or 
> standby, hold the same latest state in memory (except lagging ones, which 
> can eventually catch up). Whenever the active master dies, the newly elected 
> active master can immediately take over, without failover work such as 
> rebuilding its in-memory state by consulting the meta table and ZK.
> 2. Better state consistency: the master's in-memory state is the only truth 
> about the system, which eliminates inconsistency from the very beginning. 
> And although the state is held by all masters, Paxos guarantees the copies 
> are identical at any time.
> 3. A more direct and simpler communication pattern: clients change state by 
> sending requests to the master; the master and regionservers talk directly 
> to each other via request and response. None of this needs a third-party 
> store like ZK, which can introduce more uncertainty, worse latency and more 
> complexity.
> 4. ZK would only be used for liveness monitoring, to determine whether a 
> regionserver is dead, and later on we could eliminate ZK entirely once we 
> build heartbeats between the master and regionservers.
> I know this might look like a very crazy re-architecture, but it deserves 
> deep thinking and serious discussion, right?





[jira] [Commented] (HBASE-10296) Replace ZK with a consensus lib(paxos,zab or raft) running within master processes to provide better master failover performance and state consistency

2014-02-14 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902319#comment-13902319
 ] 

Feng Honghua commented on HBASE-10296:
--

bq.Zookeeper is used also by HDFS, Kafka, Storm as well as several other 
systems. Is it realistic (or desirable) to assume it would go away (from an 
operations standpoint)?
Maybe my answer above to this question was a bit too general :-). To be 
specific about HDFS: personally I think the NameNode could also apply the idea 
of this JIRA. The metadata maintained by the NameNodes could be replicated 
among all NameNodes using a consensus lib, removing the ZKFC and JournalNodes. 
I was surprised, the first time I looked, by how many roles/processes are 
introduced to accomplish HDFS HA; that is probably due to historical reasons, 
right? :-)

> Replace ZK with a consensus lib(paxos,zab or raft) running within master 
> processes to provide better master failover performance and state consistency
> --
>
> Key: HBASE-10296
> URL: https://issues.apache.org/jira/browse/HBASE-10296
> Project: HBase
>  Issue Type: Brainstorming
>  Components: master, Region Assignment, regionserver
>Reporter: Feng Honghua
>
> Currently the master relies on ZK to elect the active master, monitor 
> liveness and store almost all of its state, such as region states, table 
> info, replication info and so on. ZK also serves as a channel for 
> master-regionserver communication (such as region assignment) and 
> client-regionserver communication (such as replication state/behavior 
> changes). But ZK as a communication channel is fragile due to its one-time 
> watches and asynchronous notification mechanism, which together can lead to 
> missed events (hence missed messages); for example, the master must rely on 
> the idempotence of the state-transition logic to keep the region-assignment 
> state machine correct. In fact, almost all of the trickiest inconsistency 
> issues trace their root cause back to the fragility of ZK as a communication 
> channel.
> Replacing ZK with Paxos running within the master processes has the 
> following benefits:
> 1. Better master failover performance: all masters, whether active or 
> standby, hold the same latest state in memory (except lagging ones, which 
> can eventually catch up). Whenever the active master dies, the newly elected 
> active master can immediately take over, without failover work such as 
> rebuilding its in-memory state by consulting the meta table and ZK.
> 2. Better state consistency: the master's in-memory state is the only truth 
> about the system, which eliminates inconsistency from the very beginning. 
> And although the state is held by all masters, Paxos guarantees the copies 
> are identical at any time.
> 3. A more direct and simpler communication pattern: clients change state by 
> sending requests to the master; the master and regionservers talk directly 
> to each other via request and response. None of this needs a third-party 
> store like ZK, which can introduce more uncertainty, worse latency and more 
> complexity.
> 4. ZK would only be used for liveness monitoring, to determine whether a 
> regionserver is dead, and later on we could eliminate ZK entirely once we 
> build heartbeats between the master and regionservers.
> I know this might look like a very crazy re-architecture, but it deserves 
> deep thinking and serious discussion, right?





[jira] [Commented] (HBASE-10546) Two scanner objects are open for each hbase map task but only one scanner object is closed

2014-02-14 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902300#comment-13902300
 ] 

Lars Hofhansl commented on HBASE-10546:
---

Looks good. +1

> Two scanner objects are open for each hbase map task but only one scanner 
> object is closed
> --
>
> Key: HBASE-10546
> URL: https://issues.apache.org/jira/browse/HBASE-10546
> Project: HBase
>  Issue Type: Bug
>Reporter: Vasu Mariyala
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18
>
> Attachments: 0.94-HBASE-10546.patch, trunk-HBASE-10546.patch
>
>
> The MapReduce framework calls createRecordReader of 
> TableInputFormat/MultiTableInputFormat to get the record reader instance. In 
> this method, we initialize the TableRecordReaderImpl (the restart method), 
> which creates the scanner object. After this, the MapReduce framework calls 
> initialize on the RecordReader, which in our case calls restart on the 
> TableRecordReaderImpl again. Here, it doesn't close the first scanner, so at 
> the end of the task only the second scanner object is closed. Because of 
> this, the smallest read point of the HRegion is affected.
> We don't need to initialize the RecordReader in the createRecordReader 
> method, and we need to close the scanner object in the restart method (in 
> case restart is called because of exceptions in the nextKeyValue method).
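The leak and the fix described above can be shown with a minimal self-contained sketch. These are stand-in types, not the real HBase API: the point is that restart() closes any scanner it already holds before opening a new one, so the scanner created in createRecordReader() is not leaked when initialize() calls restart() a second time.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Counts open scanners so the leak (or its absence) is observable.
class FakeScanner implements AutoCloseable {
    static final AtomicInteger OPEN = new AtomicInteger();
    FakeScanner() { OPEN.incrementAndGet(); }
    @Override public void close() { OPEN.decrementAndGet(); }
}

class RecordReaderSketch {
    private FakeScanner scanner;

    // The fix: close the previous scanner before opening a new one.
    // Without the close() below, the first restart()'s scanner stays open
    // for the whole task, holding back the region's smallest read point.
    void restart() {
        if (scanner != null) {
            scanner.close();
        }
        scanner = new FakeScanner();
    }

    void close() {
        if (scanner != null) scanner.close();
    }
}

public class Main {
    public static void main(String[] args) {
        RecordReaderSketch rr = new RecordReaderSketch();
        rr.restart();  // as called from createRecordReader()
        rr.restart();  // as called again from initialize()
        rr.close();
        System.out.println(FakeScanner.OPEN.get()); // 0: no scanner leaked
    }
}
```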





[jira] [Updated] (HBASE-10546) Two scanner objects are open for each hbase map task but only one scanner object is closed

2014-02-14 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10546:
--

Fix Version/s: 0.94.18
   0.99.0
   0.98.1
   0.96.2

> Two scanner objects are open for each hbase map task but only one scanner 
> object is closed
> --
>
> Key: HBASE-10546
> URL: https://issues.apache.org/jira/browse/HBASE-10546
> Project: HBase
>  Issue Type: Bug
>Reporter: Vasu Mariyala
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18
>
> Attachments: 0.94-HBASE-10546.patch, trunk-HBASE-10546.patch
>
>
> The MapReduce framework calls createRecordReader of 
> TableInputFormat/MultiTableInputFormat to get the record reader instance. In 
> this method, we initialize the TableRecordReaderImpl (the restart method), 
> which creates the scanner object. After this, the MapReduce framework calls 
> initialize on the RecordReader, which in our case calls restart on the 
> TableRecordReaderImpl again. Here, it doesn't close the first scanner, so at 
> the end of the task only the second scanner object is closed. Because of 
> this, the smallest read point of the HRegion is affected.
> We don't need to initialize the RecordReader in the createRecordReader 
> method, and we need to close the scanner object in the restart method (in 
> case restart is called because of exceptions in the nextKeyValue method).





[jira] [Commented] (HBASE-10356) Failover RPC's for multi-get

2014-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902296#comment-13902296
 ] 

Hadoop QA commented on HBASE-10356:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12629178/HBASE-10356.01.patch
  against trunk revision .
  ATTACHMENT ID: 12629178

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8714//console

This message is automatically generated.

> Failover RPC's for multi-get 
> -
>
> Key: HBASE-10356
> URL: https://issues.apache.org/jira/browse/HBASE-10356
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Enis Soztutar
>Assignee: Sergey Shelukhin
> Fix For: 0.99.0
>
> Attachments: HBASE-10356.01.patch, HBASE-10356.01.patch, 
> HBASE-10356.patch, HBASE-10356.reference.patch, HBASE-10356.reference.patch
>
>
> This is extension of HBASE-10355 to add failover support for multi-gets. 





[jira] [Commented] (HBASE-10526) Using Cell instead of KeyValue in HFileOutputFormat

2014-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902284#comment-13902284
 ] 

Hadoop QA commented on HBASE-10526:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12629168/hbase-10526_v1.1.patch
  against trunk revision .
  ATTACHMENT ID: 12629168

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:red}-1 hadoop1.0{color}.  The patch failed to compile against the 
hadoop 1.0 profile.
Here is snippet of errors:
{code}[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile 
(default-testCompile) on project hbase-server: Compilation failure: Compilation 
failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java:[184,19]
 cannot find symbol
[ERROR] symbol  : method getInstance(org.apache.hadoop.conf.Configuration)
[ERROR] location: class org.apache.hadoop.mapreduce.Job
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java:[236,19]
 cannot find symbol
[ERROR] symbol  : method getInstance(org.apache.hadoop.conf.Configuration)
--
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile 
(default-testCompile) on project hbase-server: Compilation failure
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
--
Caused by: org.apache.maven.plugin.CompilationFailureException: Compilation 
failure
at 
org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:729)
at 
org.apache.maven.plugin.TestCompilerMojo.execute(TestCompilerMojo.java:161)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more{code}
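The `cannot find symbol: Job.getInstance(Configuration)` errors above are characteristic of calling a Hadoop 2-only factory method while compiling against the hadoop-1.0 profile; the older `new Job(conf)` constructor form compiles on both lines (deprecated in 2.x but still present). When a single binary must cope with either API at runtime, one general technique is a reflective probe for the method before calling it. A self-contained sketch of that probe (using `java.util.List` as a stand-in target, since Hadoop jars are not assumed available here):

```java
// Reflective probe: does a class expose a given public static method?
// The same pattern could check for Job.getInstance(Configuration) before
// choosing between the factory and the constructor at runtime.
public class Main {
    static boolean hasMethod(Class<?> cls, String name, Class<?>... paramTypes) {
        try {
            cls.getMethod(name, paramTypes);  // finds public methods, incl. static
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // List.of() exists on Java 9+; a made-up name does not.
        System.out.println(hasMethod(java.util.List.class, "of"));
        System.out.println(hasMethod(java.util.List.class, "noSuchFactory"));
    }
}
```

In practice the simpler fix for a precommit failure like this one is usually to stick to the API shared by both profiles rather than to probe at runtime.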

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8713//console

This message is automatically generated.

> Using Cell instead of KeyValue in HFileOutputFormat
> ---
>
> Key: HBASE-10526
> URL: https://issues.apache.org/jira/browse/HBASE-10526
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-10526.patch, hbase-10526_v1.1.patch
>
>
> HFileOutputFormat/KeyValueSortReducer use KeyValue. We should deprecate them 
> and use Cell instead.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10548) PerfEval work around wrong runtime dependency version

2014-02-14 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-10548:
-

Priority: Minor  (was: Major)

> PerfEval work around wrong runtime dependency version
> -
>
> Key: HBASE-10548
> URL: https://issues.apache.org/jira/browse/HBASE-10548
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.96.2, 0.98.1, 0.99.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: HBASE-10548.00.patch
>
>
> From my 
> [comment|https://issues.apache.org/jira/browse/HBASE-10511?focusedCommentId=13902238&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13902238]
>  on HBASE-10511:
> I have hadoop-1.2.1 installed from tgz, which packages commons-math-2.1. This 
> is *different* from the listed maven dependency, 2.2.
> {noformat}
> $ tar tvf hadoop-1.2.1.tar.gz | grep commons-math
> -rw-rw-r--  0 0  0  832410 Jul 22  2013 
> hadoop-1.2.1/lib/commons-math-2.1.jar
> $ mvn -f pom.xml.hadoop1 dependency:tree | grep commons-math
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> {noformat}
> This is a problem because the 2.1 version of 
> [DescriptiveStatistics|http://commons.apache.org/proper/commons-math/javadocs/api-2.1/org/apache/commons/math/stat/descriptive/DescriptiveStatistics.html]
>  doesn't have a double[] constructor. Running the MR job, mappers fail:
> {noformat}
> java.lang.NoSuchMethodError: 
> org.apache.commons.math.stat.descriptive.DescriptiveStatistics.([D)V
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$RandomReadTest.testTakedown(PerformanceEvaluation.java:1163)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:984)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:1401)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:522)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:474)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> {noformat}
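One version-agnostic way to sidestep the 2.2-only `double[]` constructor is to accumulate values one at a time, since a no-arg constructor plus `addValue(double)` is available in both commons-math 2.1 and 2.2. This is a sketch of that general pattern, not the actual patch; a stand-in accumulator is used so the snippet runs without commons-math on the classpath:

```java
// Stand-in for DescriptiveStatistics: mirrors the addValue() accumulation
// style that works on both commons-math 2.1 and 2.2, instead of the
// new DescriptiveStatistics(double[]) constructor that only 2.2 provides.
class StatsAccumulator {
    private double sum = 0.0;
    private long n = 0;

    void addValue(double v) { sum += v; n++; }
    double getMean() { return n == 0 ? Double.NaN : sum / n; }
}

public class Main {
    public static void main(String[] args) {
        double[] latencies = {1.0, 2.0, 3.0, 4.0};
        StatsAccumulator stats = new StatsAccumulator();
        for (double v : latencies) {
            stats.addValue(v);  // instead of new DescriptiveStatistics(latencies)
        }
        System.out.println(stats.getMean()); // 2.5
    }
}
```

Accumulating avoids any constructor overload that only one version defines, so the same bytecode links against whichever jar the cluster actually ships.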





[jira] [Updated] (HBASE-10548) PerfEval work around wrong runtime dependency version

2014-02-14 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-10548:
-

Attachment: HBASE-10548.00.patch

ping [~nkeywal] [~jmspaggi]. Either of you guys able to repro this?

> PerfEval work around wrong runtime dependency version
> -
>
> Key: HBASE-10548
> URL: https://issues.apache.org/jira/browse/HBASE-10548
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.96.2, 0.98.1, 0.99.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: HBASE-10548.00.patch
>
>
> From my 
> [comment|https://issues.apache.org/jira/browse/HBASE-10511?focusedCommentId=13902238&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13902238]
>  on HBASE-10511:
> I have hadoop-1.2.1 installed from tgz, which packages commons-math-2.1. This 
> is *different* from the listed maven dependency, 2.2.
> {noformat}
> $ tar tvf hadoop-1.2.1.tar.gz | grep commons-math
> -rw-rw-r--  0 0  0  832410 Jul 22  2013 
> hadoop-1.2.1/lib/commons-math-2.1.jar
> $ mvn -f pom.xml.hadoop1 dependency:tree | grep commons-math
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> {noformat}
> This is a problem because the 2.1 version of 
> [DescriptiveStatistics|http://commons.apache.org/proper/commons-math/javadocs/api-2.1/org/apache/commons/math/stat/descriptive/DescriptiveStatistics.html]
>  doesn't have a double[] constructor. Running the MR job, mappers fail:
> {noformat}
> java.lang.NoSuchMethodError: 
> org.apache.commons.math.stat.descriptive.DescriptiveStatistics.([D)V
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$RandomReadTest.testTakedown(PerformanceEvaluation.java:1163)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:984)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:1401)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:522)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:474)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> {noformat}





[jira] [Updated] (HBASE-10548) PerfEval work around wrong runtime dependency version

2014-02-14 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-10548:
-

Status: Patch Available  (was: Open)

> PerfEval work around wrong runtime dependency version
> -
>
> Key: HBASE-10548
> URL: https://issues.apache.org/jira/browse/HBASE-10548
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.96.2, 0.98.1, 0.99.0
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: HBASE-10548.00.patch
>
>
> From my 
> [comment|https://issues.apache.org/jira/browse/HBASE-10511?focusedCommentId=13902238&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13902238]
>  on HBASE-10511:
> I have hadoop-1.2.1 installed from tgz, which packages commons-math-2.1. This 
> is *different* from the listed maven dependency, 2.2.
> {noformat}
> $ tar tvf hadoop-1.2.1.tar.gz | grep commons-math
> -rw-rw-r--  0 0  0  832410 Jul 22  2013 
> hadoop-1.2.1/lib/commons-math-2.1.jar
> $ mvn -f pom.xml.hadoop1 dependency:tree | grep commons-math
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
> from 2.1)
> [INFO] +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> [INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
> {noformat}
> This is a problem because the 2.1 version of 
> [DescriptiveStatistics|http://commons.apache.org/proper/commons-math/javadocs/api-2.1/org/apache/commons/math/stat/descriptive/DescriptiveStatistics.html]
>  doesn't have a double[] constructor. Running the MR job, mappers fail:
> {noformat}
> java.lang.NoSuchMethodError: 
> org.apache.commons.math.stat.descriptive.DescriptiveStatistics.<init>([D)V
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$RandomReadTest.testTakedown(PerformanceEvaluation.java:1163)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:984)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:1401)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:522)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:474)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> {noformat}





[jira] [Commented] (HBASE-10511) Add latency percentiles on PerformanceEvaluation

2014-02-14 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902280#comment-13902280
 ] 

Nick Dimiduk commented on HBASE-10511:
--

Filed HBASE-10548.

> Add latency percentiles on PerformanceEvaluation
> 
>
> Key: HBASE-10511
> URL: https://issues.apache.org/jira/browse/HBASE-10511
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.99.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.96.2, 0.98.1, 0.99.0, hbase-10070
>
> Attachments: 10511.v1.patch
>
>
> The latency is reported as an array of floats. It's easier to deal with 
> percentiles :-)





[jira] [Created] (HBASE-10548) PerfEval work around wrong runtime dependency version

2014-02-14 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-10548:


 Summary: PerfEval work around wrong runtime dependency version
 Key: HBASE-10548
 URL: https://issues.apache.org/jira/browse/HBASE-10548
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.2, 0.98.1, 0.99.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk


From my 
[comment|https://issues.apache.org/jira/browse/HBASE-10511?focusedCommentId=13902238&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13902238]
 on HBASE-10511:

I have hadoop-1.2.1 installed from tgz, which packages commons-math-2.1. This 
is *different* from the listed maven dependency, 2.2.

{noformat}
$ tar tvf hadoop-1.2.1.tar.gz | grep commons-math
-rw-rw-r--  0 0  0  832410 Jul 22  2013 
hadoop-1.2.1/lib/commons-math-2.1.jar
$ mvn -f pom.xml.hadoop1 dependency:tree | grep commons-math
[INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
from 2.1)
[INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
from 2.1)
[INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
from 2.1)
[INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile (version managed 
from 2.1)
[INFO] +- org.apache.commons:commons-math:jar:2.2:compile
[INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
[INFO] +- org.apache.commons:commons-math:jar:2.2:compile
[INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
[INFO] +- org.apache.commons:commons-math:jar:2.2:compile
[INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
[INFO] |  +- org.apache.commons:commons-math:jar:2.2:compile
{noformat}

This is a problem because the 2.1 version of 
[DescriptiveStatistics|http://commons.apache.org/proper/commons-math/javadocs/api-2.1/org/apache/commons/math/stat/descriptive/DescriptiveStatistics.html]
 doesn't have a double[] constructor. Running the MR job, mappers fail:

{noformat}
java.lang.NoSuchMethodError: 
org.apache.commons.math.stat.descriptive.DescriptiveStatistics.<init>([D)V
at 
org.apache.hadoop.hbase.PerformanceEvaluation$RandomReadTest.testTakedown(PerformanceEvaluation.java:1163)
at 
org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:984)
at 
org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:1401)
at 
org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:522)
at 
org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:474)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
{noformat}





[jira] [Commented] (HBASE-10518) DirectMemoryUtils.getDirectMemoryUsage spams when none is configured

2014-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902261#comment-13902261
 ] 

Hudson commented on HBASE-10518:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #90 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/90/])
HBASE-10518 DirectMemoryUtils.getDirectMemoryUsage spams when none is 
configured (ndimiduk: rev 1568417)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/DirectMemoryUtils.java


> DirectMemoryUtils.getDirectMemoryUsage spams when none is configured
> 
>
> Key: HBASE-10518
> URL: https://issues.apache.org/jira/browse/HBASE-10518
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>Assignee: Nick Dimiduk
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10518.00.patch
>
>
> My logs are full of "Failed to retrieve nio.BufferPool direct MemoryUsed". 
> Even if it's DEBUG, it adds no value. I'd just remove it.





[jira] [Commented] (HBASE-10498) Add new APIs to load balancer interface

2014-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902260#comment-13902260
 ] 

Hudson commented on HBASE-10498:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #90 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/90/])
HBASE-10498 Add new APIs to load balancer interface(Rajesh) (rajeshbabu: rev 
1568188)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/LoadBalancer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java


> Add new APIs to load balancer interface
> ---
>
> Key: HBASE-10498
> URL: https://issues.apache.org/jira/browse/HBASE-10498
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-10498.patch
>
>
> If a custom load balancer is required to maintain region and corresponding 
> server locations,
> we can capture this information when we run any balancer algorithm before 
> assignment (like random or retain).
> But during master startup we will not call any balancer algorithm if a 
> region is already assigned.
> During a split we also open the child regions in the RS first and then 
> notify the master through zookeeper, 
> so split region information cannot be captured by the balancer.
> Since the balancer has access to the master, we can get the information from 
> the online regions or the region plan data structures in the AM.
> But in some use cases we cannot rely on this information (mainly to maintain 
> colocation of the regions of two tables). 
> So it's better to add APIs to the load balancer to notify it when a 
> *region is online or offline*.
> These APIs help a lot in maintaining *region colocation through a custom 
> load balancer*, which is very important for secondary indexing. 





[jira] [Updated] (HBASE-10419) Add multiget support to PerformanceEvaluation

2014-02-14 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-10419:
-

Attachment: HBASE-10419-v3-trunk.patch

This patch fixes mapred mode and adds a note about the new feature to the help 
output.

I assume the +1s carry forward. I will commit to trunk, 0.98, and 0.96 after a 
successful HadoopQA report.

> Add multiget support to PerformanceEvaluation
> -
>
> Key: HBASE-10419
> URL: https://issues.apache.org/jira/browse/HBASE-10419
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: HBASE-10419-v2-trunk.patch, HBASE-10419-v3-trunk.patch, 
> HBASE-10419.0.patch, HBASE-10419.1.patch
>
>
> Folks planning to use multiget may find this useful.





[jira] [Commented] (HBASE-10451) Enable back Tag compression on HFiles

2014-02-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902255#comment-13902255
 ] 

Andrew Purtell commented on HBASE-10451:


+1 for patch v2

I like the new test.

Thanks Anoop!

> Enable back Tag compression on HFiles
> -
>
> Key: HBASE-10451
> URL: https://issues.apache.org/jira/browse/HBASE-10451
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-10451.patch, HBASE-10451_V2.patch
>
>
> HBASE-10443 disables tag compression on HFiles. This Jira is to fix the 
> issues we found in HBASE-10443 and enable it again.





[jira] [Updated] (HBASE-10356) Failover RPC's for multi-get

2014-02-14 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-10356:


Attachment: HBASE-10356.01.patch

This is the same as what Sergey last uploaded with some minor conflicts fixed.

> Failover RPC's for multi-get 
> -
>
> Key: HBASE-10356
> URL: https://issues.apache.org/jira/browse/HBASE-10356
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Enis Soztutar
>Assignee: Sergey Shelukhin
> Fix For: 0.99.0
>
> Attachments: HBASE-10356.01.patch, HBASE-10356.01.patch, 
> HBASE-10356.patch, HBASE-10356.reference.patch, HBASE-10356.reference.patch
>
>
> This is an extension of HBASE-10355 to add failover support for multi-gets. 





[jira] [Updated] (HBASE-10451) Enable back Tag compression on HFiles

2014-02-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10451:
---

Affects Version/s: 0.98.0
Fix Version/s: 0.99.0

> Enable back Tag compression on HFiles
> -
>
> Key: HBASE-10451
> URL: https://issues.apache.org/jira/browse/HBASE-10451
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-10451.patch, HBASE-10451_V2.patch
>
>
> HBASE-10443 disables tag compression on HFiles. This Jira is to fix the 
> issues we found in HBASE-10443 and enable it again.





[jira] [Commented] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK

2014-02-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902252#comment-13902252
 ] 

Andrew Purtell commented on HBASE-10547:


Kind of a WTF at first glance. I did try substituting the assertArrayEquals 
call with assertTrue(Bytes.equals(..)), with no difference - not that I was 
expecting one. Any ideas [~ndimiduk]?

> TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
> 
>
> Key: HBASE-10547
> URL: https://issues.apache.org/jira/browse/HBASE-10547
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
> Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 
> Compressed References 20131114_175264 (JIT enabled, AOT enabled)
>Reporter: Andrew Purtell
>Priority: Minor
>
> Here's the trace.
> {noformat}
> Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec 
> <<< FAILURE!
> testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper)  Time 
> elapsed: 0.025 sec  <<< FAILURE!
> arrays first differed at element [8]; expected:<-40> but was:<0>
> at 
> org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50)
> at org.junit.Assert.internalArrayEquals(Assert.java:473)
> at org.junit.Assert.assertArrayEquals(Assert.java:294)
> at org.junit.Assert.assertArrayEquals(Assert.java:305)
> at 
> org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60)
> {noformat}
> This is with 0.98.0.





[jira] [Created] (HBASE-10547) TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK

2014-02-14 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-10547:
--

 Summary: TestFixedLengthWrapper#testReadWrite occasionally fails 
with the IBM JDK
 Key: HBASE-10547
 URL: https://issues.apache.org/jira/browse/HBASE-10547
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
 Environment: IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 Compressed 
References 20131114_175264 (JIT enabled, AOT enabled)
Reporter: Andrew Purtell
Priority: Minor


Here's the trace.

{noformat}
Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 29.288 sec <<< 
FAILURE!
testReadWrite(org.apache.hadoop.hbase.types.TestFixedLengthWrapper)  Time 
elapsed: 0.025 sec  <<< FAILURE!
arrays first differed at element [8]; expected:<-40> but was:<0>
at 
org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50)
at org.junit.Assert.internalArrayEquals(Assert.java:473)
at org.junit.Assert.assertArrayEquals(Assert.java:294)
at org.junit.Assert.assertArrayEquals(Assert.java:305)
at 
org.apache.hadoop.hbase.types.TestFixedLengthWrapper.testReadWrite(TestFixedLengthWrapper.java:60)
{noformat}

This is with 0.98.0.





[jira] [Commented] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902247#comment-13902247
 ] 

Hadoop QA commented on HBASE-10541:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12629149/trunk-HBASE-10541-rev2.patch
  against trunk revision .
  ATTACHMENT ID: 12629149

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8712//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8712//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8712//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8712//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8712//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8712//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8712//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8712//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8712//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8712//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8712//console

This message is automatically generated.

> Make the file system properties customizable per table/column family
> 
>
> Key: HBASE-10541
> URL: https://issues.apache.org/jira/browse/HBASE-10541
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vasu Mariyala
> Attachments: trunk-HBASE-10541-rev1.patch, 
> trunk-HBASE-10541-rev2.patch, trunk-HBASE-10541.patch
>
>
> File system properties like replication (the number of nodes to which the 
> hfile needs to be replicated) and block size need to be customizable per 
> table/column family. This is especially important in testing scenarios, or 
> for test tables where we don't want the hfile to be replicated 3 times.





[jira] [Commented] (HBASE-10511) Add latency percentiles on PerformanceEvaluation

2014-02-14 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902239#comment-13902239
 ] 

Nick Dimiduk commented on HBASE-10511:
--

Good news is, 2.2 appears to be a drop-in replacement.

> Add latency percentiles on PerformanceEvaluation
> 
>
> Key: HBASE-10511
> URL: https://issues.apache.org/jira/browse/HBASE-10511
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.99.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.96.2, 0.98.1, 0.99.0, hbase-10070
>
> Attachments: 10511.v1.patch
>
>
> The latency is reported as an array of floats. It's easier to deal with 
> percentiles :-)





[jira] [Commented] (HBASE-10511) Add latency percentiles on PerformanceEvaluation

2014-02-14 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902238#comment-13902238
 ] 

Nick Dimiduk commented on HBASE-10511:
--

So this is bad. I have hadoop-1.2.1 installed from tgz, which packages 
commons-math-2.1. This is *different* from the listed maven dependency, 2.2. 
This is a problem because the 2.1 version of 
[DescriptiveStatistics|http://commons.apache.org/proper/commons-math/javadocs/api-2.1/org/apache/commons/math/stat/descriptive/DescriptiveStatistics.html]
 doesn't have a double[] constructor. Running the MR job, mappers fail:

{noformat}
java.lang.NoSuchMethodError: 
org.apache.commons.math.stat.descriptive.DescriptiveStatistics.<init>([D)V
at 
org.apache.hadoop.hbase.PerformanceEvaluation$RandomReadTest.testTakedown(PerformanceEvaluation.java:1163)
at 
org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:984)
at 
org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:1401)
at 
org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:522)
at 
org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:474)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
{noformat}
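One version-agnostic way around the missing double[] constructor is to avoid it entirely and compute the percentile directly. A minimal stdlib-only Java sketch of a nearest-rank percentile (class and method names here are illustrative, not from any HBase patch):

```java
import java.util.Arrays;

// Stdlib-only percentile computation: avoids the DescriptiveStatistics
// double[] constructor that commons-math 2.1 lacks.
public class LatencyPercentiles {

    // Nearest-rank percentile over the samples; p is in (0, 100].
    // Sorts a copy so the caller's array is left untouched.
    static double percentile(double[] samples, double p) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank, 1) - 1];
    }

    public static void main(String[] args) {
        double[] latencies = new double[100];
        for (int i = 0; i < latencies.length; i++) {
            latencies[i] = i + 1;  // pretend latencies of 1..100 ms
        }
        System.out.println(percentile(latencies, 95));    // prints 95.0
        System.out.println(percentile(latencies, 99.9));  // prints 100.0
    }
}
```

DescriptiveStatistics in both 2.1 and 2.2 also offers a no-argument constructor plus addValue(double), which sidesteps the same incompatibility without dropping the library.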

> Add latency percentiles on PerformanceEvaluation
> 
>
> Key: HBASE-10511
> URL: https://issues.apache.org/jira/browse/HBASE-10511
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.99.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.96.2, 0.98.1, 0.99.0, hbase-10070
>
> Attachments: 10511.v1.patch
>
>
> The latency is reported as an array of floats. It's easier to deal with 
> percentiles :-)





[jira] [Commented] (HBASE-10546) Two scanner objects are open for each hbase map task but only one scanner object is closed

2014-02-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902231#comment-13902231
 ] 

Ted Yu commented on HBASE-10546:


+1

> Two scanner objects are open for each hbase map task but only one scanner 
> object is closed
> --
>
> Key: HBASE-10546
> URL: https://issues.apache.org/jira/browse/HBASE-10546
> Project: HBase
>  Issue Type: Bug
>Reporter: Vasu Mariyala
> Attachments: 0.94-HBASE-10546.patch, trunk-HBASE-10546.patch
>
>
> The map reduce framework calls createRecordReader of the 
> TableInputFormat/MultiTableInputFormat to get the record reader instance. In 
> this method, we initialize the TableRecordReaderImpl (restart method), which 
> initializes the scanner object. After this, the map reduce framework calls 
> initialize on the RecordReader. In our case, this calls restart of the 
> TableRecordReaderImpl again, and it doesn't close the first scanner. At the 
> end of the task, only the second scanner object is closed. Because of this, 
> the smallest read point of the HRegion is affected.
> We don't need to initialize the RecordReader in the createRecordReader 
> method, and we need to close the scanner object in the restart method (in 
> case the restart method is called because of exceptions in the nextKeyValue 
> method).





[jira] [Updated] (HBASE-10546) Two scanner objects are open for each hbase map task but only one scanner object is closed

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10546:
--

Attachment: trunk-HBASE-10546.patch

> Two scanner objects are open for each hbase map task but only one scanner 
> object is closed
> --
>
> Key: HBASE-10546
> URL: https://issues.apache.org/jira/browse/HBASE-10546
> Project: HBase
>  Issue Type: Bug
>Reporter: Vasu Mariyala
> Attachments: 0.94-HBASE-10546.patch, trunk-HBASE-10546.patch
>
>
> The map reduce framework calls createRecordReader of the 
> TableInputFormat/MultiTableInputFormat to get the record reader instance. In 
> this method, we initialize the TableRecordReaderImpl (restart method), which 
> initializes the scanner object. After this, the map reduce framework calls 
> initialize on the RecordReader. In our case, this calls restart of the 
> TableRecordReaderImpl again, and it doesn't close the first scanner. At the 
> end of the task, only the second scanner object is closed. Because of this, 
> the smallest read point of the HRegion is affected.
> We don't need to initialize the RecordReader in the createRecordReader 
> method, and we need to close the scanner object in the restart method (in 
> case the restart method is called because of exceptions in the nextKeyValue 
> method).





[jira] [Updated] (HBASE-10352) Region and RegionServer changes for opening region replicas, and refreshing store files

2014-02-14 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-10352:
--

Attachment: hbase-10352_v3.patch

Patch from RB. 

> Region and RegionServer changes for opening region replicas, and refreshing 
> store files
> ---
>
> Key: HBASE-10352
> URL: https://issues.apache.org/jira/browse/HBASE-10352
> Project: HBase
>  Issue Type: Sub-task
>  Components: Region Assignment, regionserver
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.99.0
>
> Attachments: hbase-10352_v2.patch, hbase-10352_v3.patch
>
>
> Region replicas should be opened in read-only mode, and in "replica" mode, 
> so that they serve queries from the primary regions' files. 
> This jira will also capture periodic refreshing of the store files from the 
> secondary regions so that they can get flushed and compacted files according 
> to the "region snapshots" section in the design doc for the parent jira. 





[jira] [Commented] (HBASE-4047) [Coprocessors] Generic external process host

2014-02-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902224#comment-13902224
 ] 

Andrew Purtell commented on HBASE-4047:
---

This is just waiting for a strong enough requirement to justify all the work 
involved, [~adela], or a contribution from another project where the same 
requirement came up. I'd say it could be a post-1.0 feature. 

> [Coprocessors] Generic external process host
> 
>
> Key: HBASE-4047
> URL: https://issues.apache.org/jira/browse/HBASE-4047
> Project: HBase
>  Issue Type: New Feature
>  Components: Coprocessors
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>
> Where HBase coprocessors deviate substantially from the design (as I 
> understand it) of Google's BigTable coprocessors is we've reimagined it as a 
> framework for internal extension. In contrast BigTable coprocessors run as 
> separate processes colocated with tablet servers. The essential trade off is 
> between performance, flexibility and possibility, and the ability to control 
> and enforce resource usage.
> Since the initial design of HBase coprocessors some additional considerations 
> are in play:
> - Developing computational frameworks sitting directly on top of HBase hosted 
> in coprocessor(s);
> - Introduction of the map reduce next generation (mrng) resource management 
> model, and the probability that limits will be enforced via cgroups at the OS 
> level after this is generally available, e.g. when RHEL 6 deployments are 
> common;
> - The possibility of deployment of HBase onto mrng-enabled Hadoop clusters 
> via the mrng resource manager and a HBase-specific application controller.
> Therefore we should consider developing a coprocessor that is a generic host 
> for another coprocessor, but one that forks a child process, loads the target 
> coprocessor into the child, establishes a bidirectional pipe and uses an 
> eventing model and umbilical protocol to provide for the coprocessor loaded 
> into the child the same semantics as if it was loaded internally to the 
> parent, and (eventually) use available resource management capabilities on 
> the platform -- perhaps via the mrng resource controller or directly with 
> cgroups -- to limit the child as desired by system administrators or the 
> application designer.





[jira] [Updated] (HBASE-10545) RS Hangs waiting on region to close on shutdown; has to timeout before can go down

2014-02-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10545:
---

Fix Version/s: 0.99.0
   0.98.1

> RS Hangs waiting on region to close on shutdown; has to timeout before can go 
> down
> --
>
> Key: HBASE-10545
> URL: https://issues.apache.org/jira/browse/HBASE-10545
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: stack
> Fix For: 0.98.1, 0.99.0
>
>
> I am seeing that the cluster sometimes fails to go down, hanging while 
> waiting on region close - it looks like hbase:meta is the region at issue. 
> I am running 0.98.0RC3 and hadoop-2.4.0-SNAPSHOT. Might be my setup. Filing 
> this issue to keep an eye on it as I go.
> It looks like we are not calling close on the region that is holding us up. 
> The log is full of this:
> {code}
> 2014-02-14 16:07:21,095 DEBUG [regionserver60020] regionserver.HRegionServer: 
> Waiting on 1588230740
> {code}





[jira] [Updated] (HBASE-10546) Two scanner objects are open for each hbase map task but only one scanner object is closed

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10546:
--

Attachment: 0.94-HBASE-10546.patch

> Two scanner objects are open for each hbase map task but only one scanner 
> object is closed
> --
>
> Key: HBASE-10546
> URL: https://issues.apache.org/jira/browse/HBASE-10546
> Project: HBase
>  Issue Type: Bug
>Reporter: Vasu Mariyala
> Attachments: 0.94-HBASE-10546.patch
>
>
> The map reduce framework calls createRecordReader of the 
> TableInputFormat/MultiTableInputFormat to get the record reader instance. In 
> this method, we initialize the TableRecordReaderImpl (restart method), which 
> initializes the scanner object. After this, the map reduce framework calls 
> initialize on the RecordReader. In our case, this calls restart of the 
> TableRecordReaderImpl again, and it doesn't close the first scanner. At the 
> end of the task, only the second scanner object is closed. Because of this, 
> the smallest read point of the HRegion is affected.
> We don't need to initialize the RecordReader in the createRecordReader 
> method, and we need to close the scanner object in the restart method (in 
> case the restart method is called because of exceptions in the nextKeyValue 
> method).





[jira] [Created] (HBASE-10546) Two scanner objects are open for each hbase map task but only one scanner object is closed

2014-02-14 Thread Vasu Mariyala (JIRA)
Vasu Mariyala created HBASE-10546:
-

 Summary: Two scanner objects are open for each hbase map task but 
only one scanner object is closed
 Key: HBASE-10546
 URL: https://issues.apache.org/jira/browse/HBASE-10546
 Project: HBase
  Issue Type: Bug
Reporter: Vasu Mariyala


The MapReduce framework calls createRecordReader of 
TableInputFormat/MultiTableInputFormat to get the record reader instance. In 
this method, we initialize the TableRecordReaderImpl (restart method), which 
opens the scanner object. After this, the MapReduce framework calls 
initialize on the RecordReader, which in our case calls restart of the 
TableRecordReaderImpl again. Here, it doesn't close the first scanner. At the 
end of the task, only the second scanner object is closed. Because of this, the 
smallest read point of the HRegion is affected.

We don't need to initialize the RecordReader in the createRecordReader method, 
and we need to close the scanner object in the restart method (in case the 
restart method is called because of exceptions in the nextKeyValue method).
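The leak pattern described above can be sketched with a minimal, self-contained example (hypothetical stand-in classes, not the actual HBase TableRecordReaderImpl): restart() must close any previously opened scanner before opening a new one, otherwise the scanner opened from createRecordReader leaks when the framework calls initialize, which calls restart a second time.

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Minimal sketch (hypothetical classes, not HBase code) of the fix described
 * above: restart() closes the previous scanner before opening a new one.
 */
public class ScannerRestartSketch {
  static final AtomicInteger OPEN_SCANNERS = new AtomicInteger();

  // Stand-in for a ResultScanner: counts how many scanners are open.
  static class Scanner implements AutoCloseable {
    Scanner() { OPEN_SCANNERS.incrementAndGet(); }
    @Override public void close() { OPEN_SCANNERS.decrementAndGet(); }
  }

  static class RecordReader {
    private Scanner scanner;

    // The fix: close the old scanner (if any) before creating a new one.
    void restart() {
      if (scanner != null) {
        scanner.close();
      }
      scanner = new Scanner();
    }

    void close() {
      if (scanner != null) {
        scanner.close();
        scanner = null;
      }
    }
  }

  public static void main(String[] args) {
    RecordReader reader = new RecordReader();
    reader.restart();  // as called from createRecordReader()
    reader.restart();  // as called again from initialize()
    reader.close();
    // With the close-before-reopen guard, no scanner leaks:
    System.out.println("open scanners: " + OPEN_SCANNERS.get());
  }
}
```

Without the `if (scanner != null)` guard, the counter would end at 1, which models the leaked first scanner holding down the region's smallest read point.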





[jira] [Commented] (HBASE-10545) RS Hangs waiting on region to close on shutdown; has to timeout before can go down

2014-02-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902221#comment-13902221
 ] 

stack commented on HBASE-10545:
---

Seems pretty easy to reproduce.  Just happened to me again.   The hbase:meta 
again.   Looks like it is not being closed.

> RS Hangs waiting on region to close on shutdown; has to timeout before can go 
> down
> --
>
> Key: HBASE-10545
> URL: https://issues.apache.org/jira/browse/HBASE-10545
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: stack
>
> I am seeing the cluster sometimes fail to go down, hanging around waiting on 
> close - it looks like waiting on hbase:meta is the issue.  I am running 
> 0.98.0RC3 and hadoop-2.4.0-SNAPSHOT.  Might be my setup.  Filing this issue 
> to keep an eye on this as I go.
> It looks like we are not calling close on the region that is holding us up.  
> The log is full of this:
> {code}
> 2014-02-14 16:07:21,095 DEBUG [regionserver60020] regionserver.HRegionServer: 
> Waiting on 1588230740
> {code}





[jira] [Commented] (HBASE-10518) DirectMemoryUtils.getDirectMemoryUsage spams when none is configured

2014-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902214#comment-13902214
 ] 

Hudson commented on HBASE-10518:


SUCCESS: Integrated in hbase-0.96-hadoop2 #204 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/204/])
HBASE-10518 DirectMemoryUtils.getDirectMemoryUsage spams when none is 
configured (ndimiduk: rev 1568420)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/util/DirectMemoryUtils.java


> DirectMemoryUtils.getDirectMemoryUsage spams when none is configured
> 
>
> Key: HBASE-10518
> URL: https://issues.apache.org/jira/browse/HBASE-10518
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>Assignee: Nick Dimiduk
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10518.00.patch
>
>
> My logs are full of "Failed to retrieve nio.BufferPool direct MemoryUsed". 
> Even if it's DEBUG, it adds no value. I'd just remove it.





[jira] [Updated] (HBASE-10526) Using Cell instead of KeyValue in HFileOutputFormat

2014-02-14 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10526:


Attachment: hbase-10526_v1.1.patch

Attached v1.1, using the deprecated method from SequenceFile.

> Using Cell instead of KeyValue in HFileOutputFormat
> ---
>
> Key: HBASE-10526
> URL: https://issues.apache.org/jira/browse/HBASE-10526
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-10526.patch, hbase-10526_v1.1.patch
>
>
> HFileOutputFormat/KeyValueSortReducer use KeyValue. We should deprecate them 
> and use Cell instead.





[jira] [Commented] (HBASE-10543) Two rare test failures with TestLogsCleaner and TestSplitLogWorker

2014-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902192#comment-13902192
 ] 

Hadoop QA commented on HBASE-10543:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12629130/hbase-10543.patch
  against trunk revision .
  ATTACHMENT ID: 12629130

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.util.TestHBaseFsck

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8710//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8710//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8710//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8710//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8710//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8710//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8710//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8710//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8710//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8710//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8710//console

This message is automatically generated.

> Two rare test failures with TestLogsCleaner and TestSplitLogWorker
> --
>
> Key: HBASE-10543
> URL: https://issues.apache.org/jira/browse/HBASE-10543
> Project: HBase
>  Issue Type: Test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Attachments: hbase-10543.patch
>
>
> TestSplitLogWorker#testPreemptTask timed out in waiting for a task prempted.  
> TestLogsCleaner#testLogCleaning failed to check the files remaining.





[jira] [Created] (HBASE-10545) RS Hangs waiting on region to close on shutdown; has to timeout before can go down

2014-02-14 Thread stack (JIRA)
stack created HBASE-10545:
-

 Summary: RS Hangs waiting on region to close on shutdown; has to 
timeout before can go down
 Key: HBASE-10545
 URL: https://issues.apache.org/jira/browse/HBASE-10545
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: stack


I am seeing the cluster sometimes fail to go down, hanging around waiting on 
close - it looks like waiting on hbase:meta is the issue.  I am running 
0.98.0RC3 and hadoop-2.4.0-SNAPSHOT.  Might be my setup.  Filing this issue to 
keep an eye on this as I go.

It looks like we are not calling close on the region that is holding us up.  
The log is full of this:

{code}
2014-02-14 16:07:21,095 DEBUG [regionserver60020] regionserver.HRegionServer: 
Waiting on 1588230740
{code}





[jira] [Commented] (HBASE-10526) Using Cell instead of KeyValue in HFileOutputFormat

2014-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902169#comment-13902169
 ] 

Hadoop QA commented on HBASE-10526:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12629154/hbase-10526.patch
  against trunk revision .
  ATTACHMENT ID: 12629154

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:red}-1 hadoop1.0{color}.  The patch failed to compile against the 
hadoop 1.0 profile.
Here is snippet of errors:
{code}[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) 
on project hbase-server: Compilation failure: Compilation failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java:[328,12]
 cannot find symbol
[ERROR] symbol  : method file(org.apache.hadoop.fs.Path)
[ERROR] location: class org.apache.hadoop.io.SequenceFile.Writer
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java:[328,41]
 cannot find symbol
[ERROR] symbol  : method 
keyClass(java.lang.Class)
--
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) 
on project hbase-server: Compilation failure
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
--
Caused by: org.apache.maven.plugin.CompilationFailureException: Compilation 
failure
at 
org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:729)
at org.apache.maven.plugin.CompilerMojo.execute(CompilerMojo.java:128)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more{code}

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8711//console

This message is automatically generated.

> Using Cell instead of KeyValue in HFileOutputFormat
> ---
>
> Key: HBASE-10526
> URL: https://issues.apache.org/jira/browse/HBASE-10526
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-10526.patch
>
>
> HFileOutputFormat/KeyValueSortReducer use KeyValue. We should deprecate them 
> and use Cell instead.





[jira] [Commented] (HBASE-10351) LoadBalancer changes for supporting region replicas

2014-02-14 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902168#comment-13902168
 ] 

Enis Soztutar commented on HBASE-10351:
---

I could not get RB to apply my patch (with a parent patch). 

You can review from github:
https://github.com/enis/hbase/commit/276f8256a7241247d2fc9f7c02adf28f3fbd9ba5

> LoadBalancer changes for supporting region replicas
> ---
>
> Key: HBASE-10351
> URL: https://issues.apache.org/jira/browse/HBASE-10351
> Project: HBase
>  Issue Type: Sub-task
>  Components: master
>Affects Versions: 0.99.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Attachments: hbase-10351_v0.patch, hbase-10351_v1.patch, 
> hbase-10351_v3.patch
>
>
> LoadBalancer has to be aware of and enforce placement of region replicas so 
> that the replicas are not co-hosted in the same server, host or rack. This 
> will ensure that the region is highly available during process / host / rack 
> failover. 
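The placement constraint described above - no two replicas of the same region co-hosted on the same server - can be sketched as a simple validity check (hypothetical types and naming convention, not the actual LoadBalancer API; a real implementation would also check host and rack):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Hypothetical sketch (not the actual LoadBalancer code) of the replica
 * placement constraint described above: no two replicas of the same region
 * may land on the same server.
 */
public class ReplicaPlacementSketch {
  /** assignment: replica id (e.g. "region1_replica0") -> server name. */
  static boolean placementOk(Map<String, String> assignment) {
    Map<String, Set<String>> serversPerRegion = new HashMap<>();
    for (Map.Entry<String, String> e : assignment.entrySet()) {
      String region = e.getKey().split("_")[0];  // strip the replica suffix
      Set<String> servers =
          serversPerRegion.computeIfAbsent(region, k -> new HashSet<>());
      if (!servers.add(e.getValue())) {
        return false;  // two replicas of this region on the same server
      }
    }
    return true;
  }

  public static void main(String[] args) {
    Map<String, String> good = new HashMap<>();
    good.put("region1_replica0", "serverA");
    good.put("region1_replica1", "serverB");

    Map<String, String> bad = new HashMap<>();
    bad.put("region1_replica0", "serverA");
    bad.put("region1_replica1", "serverA");  // violates the constraint

    System.out.println(placementOk(good));  // true
    System.out.println(placementOk(bad));   // false
  }
}
```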





[jira] [Updated] (HBASE-10351) LoadBalancer changes for supporting region replicas

2014-02-14 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-10351:
--

Attachment: hbase-10351_v3.patch

Updated patch that works on top of HBASE-10350. I've fixed a couple of issues 
with retainAssignment() and roundRobinAssignment().

> LoadBalancer changes for supporting region replicas
> ---
>
> Key: HBASE-10351
> URL: https://issues.apache.org/jira/browse/HBASE-10351
> Project: HBase
>  Issue Type: Sub-task
>  Components: master
>Affects Versions: 0.99.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Attachments: hbase-10351_v0.patch, hbase-10351_v1.patch, 
> hbase-10351_v3.patch
>
>
> LoadBalancer has to be aware of and enforce placement of region replicas so 
> that the replicas are not co-hosted in the same server, host or rack. This 
> will ensure that the region is highly available during process / host / rack 
> failover. 





[jira] [Updated] (HBASE-10501) Make IncreasingToUpperBoundRegionSplitPolicy configurable

2014-02-14 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10501:
--

Priority: Critical  (was: Major)

Marking critical, as the current default will lead to a bad experience for 
anyone using HBase for the first time and loading a lot of data.

> Make IncreasingToUpperBoundRegionSplitPolicy configurable
> -
>
> Key: HBASE-10501
> URL: https://issues.apache.org/jira/browse/HBASE-10501
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Critical
> Attachments: 10501-0.94-v2.txt, 10501-0.94.txt
>
>
> During some (admittedly artificial) load testing we found a large amount of 
> split activity, which we tracked down to the 
> IncreasingToUpperBoundRegionSplitPolicy.
> The current logic is this (from the comments):
> "regions that are on this server that all are of the same table, squared, 
> times the region flush size OR the maximum region split size, whichever is 
> smaller"
> So with a flush size of 128mb and a max file size of 20gb, we'd need 13 
> regions of the same table on an RS to reach the max size.
> With a 10gb max file size it is still 9 regions of the same table.
> Considering that the number of regions an RS can carry is limited and 
> there might be multiple tables, this should be more configurable.
> I think the squaring is smart and we do not need to change it.
> We could
> * Make the start size configurable and default it to the flush size
> * Add multiplier for the initial size, i.e. start with n * flushSize
> * Also change the default to start with 2*flush size
> Of course one can override the default split policy, but these seem like 
> simple tweaks.
> Or we could instead set the goal of how many regions of the same table would 
> need to be present in order to reach the max size. In that case we'd start 
> with maxSize/goal^2. So if max size is 20gb and the goal is three we'd start 
> with 20g/9 = 2.2g for the initial region size.
> [~stack], I'm especially interested in your opinion.
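The sizing arithmetic in the description above can be checked with a small sketch (hypothetical helper names, not the actual HBase split policy code): splitSize = min(flushSize * n^2, maxFileSize), where n is the number of regions of the table on the server, plus the proposed goal-based initial size maxFileSize / goal^2.

```java
/**
 * Hypothetical sketch (not HBase code) of the sizing arithmetic behind
 * IncreasingToUpperBoundRegionSplitPolicy as described above.
 */
public class SplitSizeSketch {
  static final long MB = 1024L * 1024L;
  static final long GB = 1024L * MB;

  // splitSize = min(flushSize * regionCount^2, maxFileSize)
  static long splitSize(int regionCount, long flushSize, long maxFileSize) {
    return Math.min(flushSize * regionCount * regionCount, maxFileSize);
  }

  // Smallest region count at which splitSize reaches maxFileSize.
  static int regionsToReachMax(long flushSize, long maxFileSize) {
    return (int) Math.ceil(Math.sqrt((double) maxFileSize / flushSize));
  }

  public static void main(String[] args) {
    // 128mb flush size, 20gb max file size -> 13 regions needed.
    System.out.println(regionsToReachMax(128 * MB, 20 * GB));
    // Same flush size, 10gb max file size -> still 9 regions.
    System.out.println(regionsToReachMax(128 * MB, 10 * GB));
    // Proposed alternative: start with maxFileSize / goal^2;
    // 20gb max size with a goal of 3 regions gives roughly 2.2gb.
    System.out.println((20 * GB / 9) / MB + " mb");
  }
}
```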





[jira] [Updated] (HBASE-10533) commands.rb is giving wrong error messages on exceptions

2014-02-14 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10533:
--

Fix Version/s: (was: 0.94.17)
   0.94.18

> commands.rb is giving wrong error messages on exceptions
> 
>
> Key: HBASE-10533
> URL: https://issues.apache.org/jira/browse/HBASE-10533
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18
>
> Attachments: HBASE-10533_trunk.patch
>
>
> 1) Cloning a snapshot onto an existing table name prints the snapshot name 
> instead of the table name.
> {code}
> hbase(main):004:0> clone_snapshot 'myTableSnapshot-122112','table'
> ERROR: Table already exists: myTableSnapshot-122112!
> {code}
> The reason for this is that we print the first argument instead of the 
> exception message.
> {code}
> if cause.kind_of?(org.apache.hadoop.hbase.TableExistsException) then
>   raise "Table already exists: #{args.first}!"
> end
> {code}
> 2) If we give a wrong column family in a put or delete, the expectation is to 
> print the actual column families in the table, but instead the raw exception 
> is thrown.
> {code}
> hbase(main):002:0> put 't1','r','unkwown_cf','value'
> 2014-02-14 15:51:10,037 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2014-02-14 15:51:10,640 INFO  [main] hdfs.PeerCache: SocketCache disabled.
> ERROR: Failed 1 action: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family unkwown_cf does not exist in region 
> t1,,1392118273512.c7230b923c58f1af406a6d84930e40c1. in table 't1', 
> {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '6', TTL => 
> '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE 
> => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4206)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3441)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3345)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28460)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> : 1 time,
> {code}
> The reason for this is that the server will not throw 
> NoSuchColumnFamilyException directly; instead 
> RetriesExhaustedWithDetailsException will be thrown.





[jira] [Updated] (HBASE-10536) ImportTsv should fail fast if any of the column family passed to the job is not present in the table

2014-02-14 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10536:
--

Fix Version/s: (was: 0.94.17)
   0.94.18

> ImportTsv should fail fast if any of the column family passed to the job is 
> not present in the table
> 
>
> Key: HBASE-10536
> URL: https://issues.apache.org/jira/browse/HBASE-10536
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 0.98.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18
>
>
> While checking the 0.98 RC and running the bulkload tools, I passed a wrong 
> column family to importtsv by mistake. LoadIncrementalHFiles failed with the 
> following exception:
> {code}
> Exception in thread "main" java.io.IOException: Unmatched family names found: 
> unmatched family names in HFiles to be bulkloaded: [f1]; valid family names 
> of table test are: [f]
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:241)
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:823)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.main(LoadIncrementalHFiles.java:828)
> {code}
>  
> It is better to fail fast if any of the passed column families is not present 
> in the table.
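The fail-fast check proposed above can be sketched as follows (hypothetical helper, not the actual ImportTsv code): before launching the job, verify every column family passed on the command line against the target table's families.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/**
 * Hypothetical sketch (not the actual ImportTsv code) of the fail-fast check
 * proposed above: reject unknown column families before the job runs, rather
 * than failing at bulk-load time.
 */
public class FamilyCheckSketch {
  static void checkFamilies(Set<String> tableFamilies, List<String> jobFamilies) {
    for (String family : jobFamilies) {
      if (!tableFamilies.contains(family)) {
        throw new IllegalArgumentException(
            "Unmatched family name: " + family
                + "; valid family names of table are: " + tableFamilies);
      }
    }
  }

  public static void main(String[] args) {
    Set<String> valid = new HashSet<>(Arrays.asList("f"));
    checkFamilies(valid, Arrays.asList("f"));     // ok, family exists
    try {
      checkFamilies(valid, Arrays.asList("f1")); // fails fast, before any job work
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```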





[jira] [Commented] (HBASE-10399) Add documentation for VerifyReplication to refguide

2014-02-14 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902134#comment-13902134
 ] 

Jean-Daniel Cryans commented on HBASE-10399:


It's fine if you want to do that, but it's going to be close to a revamp :)

> Add documentation for VerifyReplication to refguide
> ---
>
> Key: HBASE-10399
> URL: https://issues.apache.org/jira/browse/HBASE-10399
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Gustavo Anatoly
>Priority: Minor
> Attachments: HBASE-10399.patch
>
>
> HBase refguide currently doesn't document how VerifyReplication is used for 
> comparing local table with remote table.
> Document for VerifyReplication should be added so that users know how to use 
> it.





[jira] [Comment Edited] (HBASE-10544) Surface completion state of global administrative actions

2014-02-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902126#comment-13902126
 ] 

Andrew Purtell edited comment on HBASE-10544 at 2/14/14 11:34 PM:
--

bq. I think we need to redo flush and compaction requests as Procedures.

I don't relish the thought of the work involved but there are operational and 
security risks involved with incomplete major compaction after schema changes, 
such as changing compression settings, enabling encryption, etc. Assigning 
myself.

Thoughts?


was (Author: apurtell):
bq. I think we need to redo flush and compaction requests as Procedures.

I don't relish the thought of the work involved but there are operational and 
security risks involved with incomplete major compaction after schema changes, 
such as changing compaction, enabling encryption, etc. 

Thoughts?

> Surface completion state of global administrative actions
> -
>
> Key: HBASE-10544
> URL: https://issues.apache.org/jira/browse/HBASE-10544
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 0.98.1, 0.99.0
>
>
> When issuing requests for global administrative actions, such as major 
> compaction, users have to look for indirect evidence the action has 
> completed, and cannot really be sure of the final state. 
> Hat tip to J-D and Stack.
> We can approach this a couple of ways. We could add a per regionserver metric 
> for percentage of admin requests complete, maybe also aggregated by the 
> master. This would provide a single point of reference. However if we also 
> want to ensure 100% completion even in the presence of node failures, or 
> provide separate completion feedback for each request, I think we need to 
> redo flush and compaction requests as Procedures. 





[jira] [Commented] (HBASE-10544) Surface completion state of global administrative actions

2014-02-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902126#comment-13902126
 ] 

Andrew Purtell commented on HBASE-10544:


bq. I think we need to redo flush and compaction requests as Procedures.

I don't relish the thought of the work involved but there are operational and 
security risks involved with incomplete major compaction after schema changes, 
such as changing compaction, enabling encryption, etc. 

Thoughts?

> Surface completion state of global administrative actions
> -
>
> Key: HBASE-10544
> URL: https://issues.apache.org/jira/browse/HBASE-10544
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 0.98.1, 0.99.0
>
>
> When issuing requests for global administrative actions, such as major 
> compaction, users have to look for indirect evidence the action has 
> completed, and cannot really be sure of the final state. 
> Hat tip to J-D and Stack.
> We can approach this a couple of ways. We could add a per regionserver metric 
> for percentage of admin requests complete, maybe also aggregated by the 
> master. This would provide a single point of reference. However if we also 
> want to ensure 100% completion even in the presence of node failures, or 
> provide separate completion feedback for each request, I think we need to 
> redo flush and compaction requests as Procedures. 





[jira] [Assigned] (HBASE-10544) Surface completion state of global administrative actions

2014-02-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reassigned HBASE-10544:
--

Assignee: Andrew Purtell

> Surface completion state of global administrative actions
> -
>
> Key: HBASE-10544
> URL: https://issues.apache.org/jira/browse/HBASE-10544
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 0.98.1, 0.99.0
>
>
> When issuing requests for global administrative actions, such as major 
> compaction, users have to look for indirect evidence the action has 
> completed, and cannot really be sure of the final state. 
> Hat tip to J-D and Stack.
> We can approach this a couple of ways. We could add a per regionserver metric 
> for percentage of admin requests complete, maybe also aggregated by the 
> master. This would provide a single point of reference. However if we also 
> want to ensure 100% completion even in the presence of node failures, or 
> provide separate completion feedback for each request, I think we need to 
> redo flush and compaction requests as Procedures. 





[jira] [Updated] (HBASE-10526) Using Cell instead of KeyValue in HFileOutputFormat

2014-02-14 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10526:


Status: Patch Available  (was: Open)

Attached a patch that adds a new class, HFileOutputFormat2. To share code, 
most of the functions in HFileOutputFormat have been moved to HFileOutputFormat2.

As long as old client applications don't use the package-protected 
methods/classes in HFileOutputFormat, they will still work as before.

It's recommended to use HFileOutputFormat2.configureIncrementalLoad to 
configure the corresponding Job.  If users want to configure the Job themselves, 
they still need to set the output value class to KeyValue 
(setOutputValueClass), because MR requires the output value class to be a 
concrete class rather than an interface or abstract class.
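A configuration sketch of the two paths described above (assumes the HBase/Hadoop MapReduce classes named in the patch discussion; this fragment is not runnable without those jars, and the manual wiring shown is an illustration, not a complete job setup):

```java
// Recommended path: let configureIncrementalLoad wire up the partitioner,
// reducer, and output classes for the Job.
Job job = Job.getInstance(conf, "bulk-load");
HFileOutputFormat2.configureIncrementalLoad(job, table);

// Manual alternative: the output value class must still be KeyValue, a
// concrete class, because MR rejects an interface/abstract class like Cell.
job.setOutputFormatClass(HFileOutputFormat2.class);
job.setOutputKeyClass(ImmutableBytesWritable.class);
job.setOutputValueClass(KeyValue.class);
```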

> Using Cell instead of KeyValue in HFileOutputFormat
> ---
>
> Key: HBASE-10526
> URL: https://issues.apache.org/jira/browse/HBASE-10526
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-10526.patch
>
>
> HFileOutputFormat/KeyValueSortReducer use KeyValue. We should deprecate them 
> and use Cell instead.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10526) Using Cell instead of KeyValue in HFileOutputFormat

2014-02-14 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10526:


Attachment: hbase-10526.patch

> Using Cell instead of KeyValue in HFileOutputFormat
> ---
>
> Key: HBASE-10526
> URL: https://issues.apache.org/jira/browse/HBASE-10526
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-10526.patch
>
>
> HFileOutputFormat/KeyValueSortReducer use KeyValue. We should deprecate them 
> and use Cell instead.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10544) Surface completion state of global administrative actions

2014-02-14 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-10544:
--

 Summary: Surface completion state of global administrative actions
 Key: HBASE-10544
 URL: https://issues.apache.org/jira/browse/HBASE-10544
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
 Fix For: 0.98.1, 0.99.0


When issuing requests for global administrative actions, such as major 
compaction, users have to look for indirect evidence the action has completed, 
and cannot really be sure of the final state. 

Hat tip to J-D and Stack.

We can approach this a couple of ways. We could add a per regionserver metric 
for percentage of admin requests complete, maybe also aggregated by the master. 
This would provide a single point of reference. However if we also want to 
ensure 100% completion even in the presence of node failures, or provide 
separate completion feedback for each request, I think we need to redo flush 
and compaction requests as Procedures. 





[jira] [Updated] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10541:
--

Attachment: trunk-HBASE-10541-rev2.patch

Thanks [~yuzhih...@gmail.com] for the review.  Uploaded the rev2 patch which 
addresses your comments.

> Make the file system properties customizable per table/column family
> 
>
> Key: HBASE-10541
> URL: https://issues.apache.org/jira/browse/HBASE-10541
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vasu Mariyala
> Attachments: trunk-HBASE-10541-rev1.patch, 
> trunk-HBASE-10541-rev2.patch, trunk-HBASE-10541.patch
>
>
> The file system properties, like replication (the number of nodes to which the 
> hfile needs to be replicated) and block size, need to be customizable per 
> table/column family. This is especially important in testing scenarios or 
> for test tables where we don't want the hfile to be replicated 3 times.





[jira] [Commented] (HBASE-4047) [Coprocessors] Generic external process host

2014-02-14 Thread Adela Maznikar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902088#comment-13902088
 ] 

Adela Maznikar commented on HBASE-4047:
---

I am curious if there is any additional progress here. Really exciting idea!

> [Coprocessors] Generic external process host
> 
>
> Key: HBASE-4047
> URL: https://issues.apache.org/jira/browse/HBASE-4047
> Project: HBase
>  Issue Type: New Feature
>  Components: Coprocessors
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>
> Where HBase coprocessors deviate substantially from the design (as I 
> understand it) of Google's BigTable coprocessors is that we've reimagined them 
> as a framework for internal extension. In contrast, BigTable coprocessors run as 
> separate processes colocated with tablet servers. The essential trade-off is 
> between performance, flexibility, and possibility on one hand, and the ability 
> to control and enforce resource usage on the other.
> Since the initial design of HBase coprocessors some additional considerations 
> are in play:
> - Developing computational frameworks sitting directly on top of HBase hosted 
> in coprocessor(s);
> - Introduction of the map reduce next generation (mrng) resource management 
> model, and the probability that limits will be enforced via cgroups at the OS 
> level after this is generally available, e.g. when RHEL 6 deployments are 
> common;
> - The possibility of deploying HBase onto mrng-enabled Hadoop clusters 
> via the mrng resource manager and an HBase-specific application controller.
> Therefore we should consider developing a coprocessor that is a generic host 
> for another coprocessor, but one that forks a child process, loads the target 
> coprocessor into the child, establishes a bidirectional pipe, and uses an 
> eventing model and umbilical protocol to give the coprocessor loaded 
> into the child the same semantics as if it were loaded internally to the 
> parent. It would (eventually) use available resource management capabilities on 
> the platform -- perhaps via the mrng resource controller or directly with 
> cgroups -- to limit the child as desired by system administrators or the 
> application designer.





[jira] [Commented] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902068#comment-13902068
 ] 

Hadoop QA commented on HBASE-10541:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12629108/trunk-HBASE-10541-rev1.patch
  against trunk revision .
  ATTACHMENT ID: 12629108

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8709//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8709//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8709//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8709//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8709//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8709//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8709//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8709//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8709//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8709//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8709//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8709//console

This message is automatically generated.

> Make the file system properties customizable per table/column family
> 
>
> Key: HBASE-10541
> URL: https://issues.apache.org/jira/browse/HBASE-10541
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vasu Mariyala
> Attachments: trunk-HBASE-10541-rev1.patch, trunk-HBASE-10541.patch
>
>
> The file system properties, like replication (the number of nodes to which the 
> hfile needs to be replicated) and block size, need to be customizable per 
> table/column family. This is especially important in testing scenarios or 
> for test tables where we don't want the hfile to be replicated 3 times.





[jira] [Commented] (HBASE-7320) Remove KeyValue.getBuffer()

2014-02-14 Thread Matt Corgan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902066#comment-13902066
 ] 

Matt Corgan commented on HBASE-7320:


{quote}I am just trying to make a patch where in every place possible we will 
refer as Cell rather than KV{quote}
I think maybe changing *every* occurrence to 
Cell is going too far.  There are places where we know it is a KeyValue, like 
the memstore, so a method that gets a KeyValue from the memstore should have a 
return type of KeyValue.  This return type will be accepted by callers who want 
a Cell, but it's better because it contains more information.

Because of the above, you can rely on the KeyValue.heapSize() method from the 
memstore, but anywhere you get a Cell, you couldn't rely on heapSize.  If you 
are dealing with Cells, then heapSize should be calculated on a more granular 
basis (the size of the block of encoded bytes that contains the cells).  So I'm 
basically proposing that Cell should not implement heapSize().

I'm not sure if that helps with every situation Ram, just trying to illustrate 
some general thoughts.
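The covariant-return-type point above can be sketched with hypothetical, heavily simplified stand-ins for HBase's Cell and KeyValue (these are not the real classes): a method declared to return KeyValue satisfies Cell-typed callers while keeping heapSize() available to callers that need it.

```java
// Hypothetical, simplified sketch -- not HBase's actual Cell/KeyValue classes.
interface Cell {
    byte[] getValueArray();
}

class KeyValue implements Cell {
    private final byte[] bytes;

    KeyValue(byte[] bytes) { this.bytes = bytes; }

    @Override
    public byte[] getValueArray() { return bytes; }

    // Capability only KeyValue has in this sketch: a per-object heap size.
    long heapSize() { return 16 + bytes.length; }
}

class Memstore {
    private final KeyValue head = new KeyValue(new byte[] {1, 2, 3});

    // Declare the narrower type KeyValue, not Cell: callers that only need a
    // Cell still compile, while callers that need heapSize() keep access to it.
    KeyValue peek() { return head; }
}
```

A Cell-typed caller (`Cell c = memstore.peek();`) compiles unchanged, which is why the narrower return type loses nothing.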

> Remove KeyValue.getBuffer()
> ---
>
> Key: HBASE-7320
> URL: https://issues.apache.org/jira/browse/HBASE-7320
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 7320-simple.txt
>
>
> In many places this is a simple task of just replacing the method name.
> There are, however, quite a few places where we assume that:
> # the entire KV is backed by a single byte array
> # the KV's key portion is backed by a single byte array
> Some of those can easily be fixed; others will need their own jiras.





[jira] [Resolved] (HBASE-8181) WebUIs HTTPS support

2014-02-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-8181.
---

Resolution: Implemented

HBASE-9986 added HTTPS support in 0.94.
HBASE-9954 did the same for other branches.

> WebUIs HTTPS support
> 
>
> Key: HBASE-8181
> URL: https://issues.apache.org/jira/browse/HBASE-8181
> Project: HBase
>  Issue Type: New Feature
>  Components: UI
>Affects Versions: 0.94.5
>Reporter: Michael Weng
>Assignee: Michael Weng
>Priority: Minor
> Attachments: HBASE-8181-0.94.txt, HBASE-8181-trunk.txt
>
>
> With https enabled on hadoop 1.2
>https://issues.apache.org/jira/browse/MAPREDUCE-4661
> HBase automatically inherits the feature. However, there are some hardcoded 
> places that need to be fixed.





[jira] [Commented] (HBASE-10533) commands.rb is giving wrong error messages on exceptions

2014-02-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902039#comment-13902039
 ] 

Ted Yu commented on HBASE-10533:


+1

> commands.rb is giving wrong error messages on exceptions
> 
>
> Key: HBASE-10533
> URL: https://issues.apache.org/jira/browse/HBASE-10533
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
> Attachments: HBASE-10533_trunk.patch
>
>
> 1) Cloning a snapshot into an existing table name prints the snapshot name 
> instead of the table name.
> {code}
> hbase(main):004:0> clone_snapshot 'myTableSnapshot-122112','table'
> ERROR: Table already exists: myTableSnapshot-122112!
> {code}
> The reason for this is that we print the first argument instead of the 
> exception message.
> {code}
> if cause.kind_of?(org.apache.hadoop.hbase.TableExistsException) then
>   raise "Table already exists: #{args.first}!"
> end
> {code}
> 2) If we give a wrong column family in a put or delete, the expectation is to 
> print the actual column families in the table, but instead the raw exception 
> is thrown.
> {code}
> hbase(main):002:0> put 't1','r','unkwown_cf','value'
> 2014-02-14 15:51:10,037 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2014-02-14 15:51:10,640 INFO  [main] hdfs.PeerCache: SocketCache disabled.
> ERROR: Failed 1 action: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family unkwown_cf does not exist in region 
> t1,,1392118273512.c7230b923c58f1af406a6d84930e40c1. in table 't1', 
> {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '6', TTL => 
> '2147483647', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE 
> => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4206)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3441)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3345)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28460)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> : 1 time,
> {code}
> The reason for this is that the server will not throw NoSuchColumnFamilyException 
> directly; instead, RetriesExhaustedWithDetailsException will be thrown.





[jira] [Commented] (HBASE-10534) Rowkey in TsvImporterTextMapper initializing with wrong length

2014-02-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902035#comment-13902035
 ] 

Ted Yu commented on HBASE-10534:


+1

> Rowkey in TsvImporterTextMapper initializing with wrong length
> --
>
> Key: HBASE-10534
> URL: https://issues.apache.org/jira/browse/HBASE-10534
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 0.96.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10534.patch
>
>
> In TsvImporterTextMapper the rowkey is initialized with the wrong length. 
> parser.parseRowKey gives a pair of start and end positions, so the rowkey 
> length is initialized with the rowkey's end position instead of its length.
> {code}
>   Pair<Integer, Integer> rowKeyOffests = 
> parser.parseRowKey(value.getBytes(), value.getLength());
>   ImmutableBytesWritable rowKey = new ImmutableBytesWritable(
>   value.getBytes(), rowKeyOffests.getFirst(), 
> rowKeyOffests.getSecond());
> {code}
> It's better to change TsvParser#parseRowKey to return the starting position 
> and length of the rowkey.
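A minimal sketch of the bug described above, using hypothetical helper names rather than the actual TsvParser/ImmutableBytesWritable API: when a (start, end) pair is treated as (offset, length), the computed key length is wrong whenever start > 0.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

// Hypothetical stand-in for the parse step -- not the actual TsvParser API.
class RowKeyLengthDemo {
    // Returns (start position, end position) of the row key: here, the bytes
    // between any leading spaces and the first tab.
    static Map.Entry<Integer, Integer> parseRowKey(byte[] line, int length) {
        int start = 0;
        while (start < length && line[start] == ' ') start++;
        int end = start;
        while (end < length && line[end] != '\t') end++;
        return new SimpleEntry<>(start, end);
    }

    // The bug: the end *position* was passed where a *length* was expected.
    static int buggyLength(Map.Entry<Integer, Integer> offsets) {
        return offsets.getValue();
    }

    // The fix: a length is end minus start.
    static int fixedLength(Map.Entry<Integer, Integer> offsets) {
        return offsets.getValue() - offsets.getKey();
    }
}
```

For a line like "  key\tval" the pair is (2, 5): the buggy length of 5 would read past the key into the value, while the fixed length is 3.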





[jira] [Updated] (HBASE-10543) Two rare test failures with TestLogsCleaner and TestSplitLogWorker

2014-02-14 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10543:


Status: Patch Available  (was: Open)

> Two rare test failures with TestLogsCleaner and TestSplitLogWorker
> --
>
> Key: HBASE-10543
> URL: https://issues.apache.org/jira/browse/HBASE-10543
> Project: HBase
>  Issue Type: Test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Attachments: hbase-10543.patch
>
>
> TestSplitLogWorker#testPreemptTask timed out waiting for a task to be preempted.
> TestLogsCleaner#testLogCleaning failed its check of the remaining files.





[jira] [Updated] (HBASE-10543) Two rare test failures with TestLogsCleaner and TestSplitLogWorker

2014-02-14 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10543:


Attachment: hbase-10543.patch

> Two rare test failures with TestLogsCleaner and TestSplitLogWorker
> --
>
> Key: HBASE-10543
> URL: https://issues.apache.org/jira/browse/HBASE-10543
> Project: HBase
>  Issue Type: Test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Attachments: hbase-10543.patch
>
>
> TestSplitLogWorker#testPreemptTask timed out waiting for a task to be preempted.
> TestLogsCleaner#testLogCleaning failed its check of the remaining files.





[jira] [Created] (HBASE-10543) Two rare test failures with TestLogsCleaner and TestSplitLogWorker

2014-02-14 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-10543:
---

 Summary: Two rare test failures with TestLogsCleaner and 
TestSplitLogWorker
 Key: HBASE-10543
 URL: https://issues.apache.org/jira/browse/HBASE-10543
 Project: HBase
  Issue Type: Test
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor


TestSplitLogWorker#testPreemptTask timed out waiting for a task to be preempted.
TestLogsCleaner#testLogCleaning failed its check of the remaining files.





[jira] [Updated] (HBASE-10359) Master/RS WebUI changes for region replicas

2014-02-14 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-10359:


Attachment: 10359-3.txt

> Master/RS WebUI changes for region replicas 
> 
>
> Key: HBASE-10359
> URL: https://issues.apache.org/jira/browse/HBASE-10359
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Devaraj Das
> Fix For: 0.99.0
>
> Attachments: 10359-2.txt, 10359-3.txt
>
>
> Some UI changes to make region replicas more visible. 





[jira] [Commented] (HBASE-10359) Master/RS WebUI changes for region replicas

2014-02-14 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902018#comment-13902018
 ] 

Devaraj Das commented on HBASE-10359:
-

The last patch addresses Enis's comment.

> Master/RS WebUI changes for region replicas 
> 
>
> Key: HBASE-10359
> URL: https://issues.apache.org/jira/browse/HBASE-10359
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Devaraj Das
> Fix For: 0.99.0
>
> Attachments: 10359-2.txt, 10359-3.txt
>
>
> Some UI changes to make region replicas more visible. 





[jira] [Commented] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902007#comment-13902007
 ] 

Ted Yu commented on HBASE-10541:


{code}
+   * Configuration key for setting the number of nodes to while hfile would be
+   * replicated. This can set at the table or the column family level.
{code}
'to while hfile': while -> which.
'This can set': This can be set
{code}
+String value = getValue(HFILE_FS_REPLICATION);
+return value == null ? -1 : Short.valueOf(value);
{code}
How was the default value of -1 determined?

TestTableFSProperties needs license.
{code}
+  private static final String TABLE_NAME_REPLICATION = "testTableReplication";
{code}
nit: testTableReplication -> testTableWithCustomReplication

> Make the file system properties customizable per table/column family
> 
>
> Key: HBASE-10541
> URL: https://issues.apache.org/jira/browse/HBASE-10541
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vasu Mariyala
> Attachments: trunk-HBASE-10541-rev1.patch, trunk-HBASE-10541.patch
>
>
> The file system properties, like replication (the number of nodes to which the 
> hfile needs to be replicated) and block size, need to be customizable per 
> table/column family. This is especially important in testing scenarios or 
> for test tables where we don't want the hfile to be replicated 3 times.





[jira] [Commented] (HBASE-10169) Batch coprocessor

2014-02-14 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902000#comment-13902000
 ] 

Gary Helmling commented on HBASE-10169:
---

[~jingcheng...@intel.com] Thanks for the comments.

I understand that, for a given request, executing the coprocessor service 
invocations in parallel would be beneficial.  But I think it would make sense 
to add this in a follow-on JIRA for a couple of reasons:
# If it's beneficial for coprocessor service invocations, would it also be 
beneficial for other batch operations?  The answer might be "no" or we might 
want to use a separate thread pool for coprocessor service invocations anyway 
(so that they don't block / starve other operations).  But I think this might 
deserve broader discussion.
# I believe it's worth thinking through exactly how we structure the thread 
pool given the interaction with RPC handler threads and RPC call queue.

In both cases, I think we can have a more focused discussion and maybe get more 
participants by handling the parallelization in a separate JIRA instead of 
tacking it on here.  Are you okay with that approach?

> Batch coprocessor
> -
>
> Key: HBASE-10169
> URL: https://issues.apache.org/jira/browse/HBASE-10169
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Affects Versions: 0.99.0
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: Batch Coprocessor Design Document.docx, 
> HBASE-10169-V2.patch, HBASE-10169-V3.patch, HBASE-10169-V3.patch, 
> HBASE-10169-V4.patch, HBASE-10169-V5.patch, HBASE-10169-alternate-2.patch, 
> HBASE-10169-alternate-3.patch, HBASE-10169-alternate-4.patch, 
> HBASE-10169-alternate.patch, HBASE-10169.patch
>
>
> This is designed to improve coprocessor invocation on the client side. 
> Currently a coprocessor invocation sends a call to each region. If 
> there's one region server, and 100 regions are located on this server, each 
> coprocessor invocation will send 100 calls, and each call uses a single thread 
> on the client side. The threads will run out soon when the coprocessor 
> invocations are heavy. 
> In this design, all the calls to the same region server will be grouped into 
> one in a single coprocessor invocation. This call will be spread to each 
> region on the server side.
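The grouping step in the description above can be sketched as follows (hypothetical names, not the patch's actual API): collapse the per-region calls into one bucket per hosting server, so the client issues one RPC per server instead of one per region.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the batching idea -- not the patch's actual API.
class BatchingSketch {
    // Group region names by the server hosting each region, so the client can
    // send one grouped call per server instead of one call per region.
    static Map<String, List<String>> groupByServer(Map<String, String> regionToServer) {
        Map<String, List<String>> byServer = new HashMap<>();
        for (Map.Entry<String, String> e : regionToServer.entrySet()) {
            byServer.computeIfAbsent(e.getValue(), s -> new ArrayList<>())
                    .add(e.getKey());
        }
        return byServer;  // one entry (and hence one RPC) per server
    }
}
```

With 100 regions on one server, the map has a single entry holding all 100 region names, matching the "100 calls become 1" goal in the description.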





[jira] [Commented] (HBASE-10504) Define Replication Interface

2014-02-14 Thread Gabriel Reid (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902005#comment-13902005
 ] 

Gabriel Reid commented on HBASE-10504:
--

Making the NRT listening of changes on HBase into a first-class API as 
[~toffer] suggested sounds like an interesting idea, if there's interest in 
going that far in actively supporting this use case. 

If there is interest in doing that, the hbase-sep library (the library the 
hbase-indexer project currently uses to hook in via replication and respond to 
changes) could be a possible starting point. The current 
API interfaces[1] as well as the implementation are within sub-modules[2] of the 
hbase-indexer project on GitHub.

[1] 
https://github.com/NGDATA/hbase-indexer/tree/master/hbase-sep/hbase-sep-api/src/main/java/com/ngdata/sep
[2] https://github.com/NGDATA/hbase-indexer/tree/master/hbase-sep

> Define Replication Interface
> 
>
> Key: HBASE-10504
> URL: https://issues.apache.org/jira/browse/HBASE-10504
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
>
> HBase has replication.  Fellas have been hijacking the replication apis to do 
> all kinds of perverse stuff like indexing hbase content (hbase-indexer 
> https://github.com/NGDATA/hbase-indexer) and our [~toffer] just showed up w/ 
> overrides that replicate via an alternate channel (over a secure thrift 
> channel between dcs over on HBASE-9360).  This issue is about surfacing these 
> APIs as public with guarantees to downstreamers similar to those we have on 
> our public client-facing APIs (and so we don't break them for downstreamers).
> Any input [~phunt] or [~gabriel.reid] or [~toffer]?
> Thanks.
>  





[jira] [Commented] (HBASE-10518) DirectMemoryUtils.getDirectMemoryUsage spams when none is configured

2014-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13902002#comment-13902002
 ] 

Hudson commented on HBASE-10518:


SUCCESS: Integrated in hbase-0.96 #295 (See 
[https://builds.apache.org/job/hbase-0.96/295/])
HBASE-10518 DirectMemoryUtils.getDirectMemoryUsage spams when none is 
configured (ndimiduk: rev 1568420)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/util/DirectMemoryUtils.java


> DirectMemoryUtils.getDirectMemoryUsage spams when none is configured
> 
>
> Key: HBASE-10518
> URL: https://issues.apache.org/jira/browse/HBASE-10518
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>Assignee: Nick Dimiduk
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10518.00.patch
>
>
> My logs are full of "Failed to retrieve nio.BufferPool direct MemoryUsed". 
> Even if it's DEBUG, it adds no value. I'd just remove it.





[jira] [Commented] (HBASE-10359) Master/RS WebUI changes for region replicas

2014-02-14 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901984#comment-13901984
 ] 

Enis Soztutar commented on HBASE-10359:
---

Should we instead show the replicaId as the column? For replicaId == 0, we can 
display 0 (default) or something like that. 

> Master/RS WebUI changes for region replicas 
> 
>
> Key: HBASE-10359
> URL: https://issues.apache.org/jira/browse/HBASE-10359
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Devaraj Das
> Fix For: 0.99.0
>
> Attachments: 10359-2.txt
>
>
> Some UI changes to make region replicas more visible. 





[jira] [Commented] (HBASE-10356) Failover RPC's for multi-get

2014-02-14 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901972#comment-13901972
 ] 

Sergey Shelukhin commented on HBASE-10356:
--

sorry not trunk

> Failover RPC's for multi-get 
> -
>
> Key: HBASE-10356
> URL: https://issues.apache.org/jira/browse/HBASE-10356
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Enis Soztutar
>Assignee: Sergey Shelukhin
> Fix For: 0.99.0
>
> Attachments: HBASE-10356.01.patch, HBASE-10356.patch, 
> HBASE-10356.reference.patch, HBASE-10356.reference.patch
>
>
> This is an extension of HBASE-10355 to add failover support for multi-gets. 





[jira] [Commented] (HBASE-10356) Failover RPC's for multi-get

2014-02-14 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901970#comment-13901970
 ] 

Sergey Shelukhin commented on HBASE-10356:
--

will commit to trunk and hbase-10070 today

> Failover RPC's for multi-get 
> -
>
> Key: HBASE-10356
> URL: https://issues.apache.org/jira/browse/HBASE-10356
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Enis Soztutar
>Assignee: Sergey Shelukhin
> Fix For: 0.99.0
>
> Attachments: HBASE-10356.01.patch, HBASE-10356.patch, 
> HBASE-10356.reference.patch, HBASE-10356.reference.patch
>
>
> This is an extension of HBASE-10355 to add failover support for multi-gets. 





[jira] [Updated] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10541:
--

Status: Open  (was: Patch Available)

> Make the file system properties customizable per table/column family
> 
>
> Key: HBASE-10541
> URL: https://issues.apache.org/jira/browse/HBASE-10541
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vasu Mariyala
> Attachments: trunk-HBASE-10541-rev1.patch, trunk-HBASE-10541.patch
>
>
> The file system properties, like replication (the number of nodes to which the 
> hfile needs to be replicated) and block size, need to be customizable per 
> table/column family. This is especially important in testing scenarios or 
> for test tables where we don't want the hfile to be replicated 3 times.





[jira] [Updated] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10541:
--

Status: Patch Available  (was: Open)

> Make the file system properties customizable per table/column family
> 
>
> Key: HBASE-10541
> URL: https://issues.apache.org/jira/browse/HBASE-10541
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vasu Mariyala
> Attachments: trunk-HBASE-10541-rev1.patch, trunk-HBASE-10541.patch
>
>
> The file system properties, like replication (the number of nodes to which the 
> hfile needs to be replicated) and block size, need to be customizable per 
> table/column family. This is especially important in testing scenarios or 
> for test tables where we don't want the hfile to be replicated 3 times.





[jira] [Commented] (HBASE-10356) Failover RPC's for multi-get

2014-02-14 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901966#comment-13901966
 ] 

Devaraj Das commented on HBASE-10356:
-

LGTM overall.

> Failover RPC's for multi-get 
> -
>
> Key: HBASE-10356
> URL: https://issues.apache.org/jira/browse/HBASE-10356
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Enis Soztutar
>Assignee: Sergey Shelukhin
> Fix For: 0.99.0
>
> Attachments: HBASE-10356.01.patch, HBASE-10356.patch, 
> HBASE-10356.reference.patch, HBASE-10356.reference.patch
>
>
> This is an extension of HBASE-10355 to add failover support for multi-gets. 





[jira] [Updated] (HBASE-10359) Master/RS WebUI changes for region replicas

2014-02-14 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-10359:


Attachment: 10359-2.txt

Prelim patch

> Master/RS WebUI changes for region replicas 
> 
>
> Key: HBASE-10359
> URL: https://issues.apache.org/jira/browse/HBASE-10359
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Devaraj Das
> Fix For: 0.99.0
>
> Attachments: 10359-2.txt
>
>
> Some UI changes to make region replicas more visible. 





[jira] [Commented] (HBASE-10518) DirectMemoryUtils.getDirectMemoryUsage spams when none is configured

2014-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901955#comment-13901955
 ] 

Hudson commented on HBASE-10518:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #148 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/148/])
HBASE-10518 DirectMemoryUtils.getDirectMemoryUsage spams when none is 
configured (ndimiduk: rev 1568419)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/util/DirectMemoryUtils.java


> DirectMemoryUtils.getDirectMemoryUsage spams when none is configured
> 
>
> Key: HBASE-10518
> URL: https://issues.apache.org/jira/browse/HBASE-10518
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>Assignee: Nick Dimiduk
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10518.00.patch
>
>
> My logs are full of "Failed to retrieve nio.BufferPool direct MemoryUsed". 
> Even if it's DEBUG, it adds no value. I'd just remove it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10518) DirectMemoryUtils.getDirectMemoryUsage spams when none is configured

2014-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901942#comment-13901942
 ] 

Hudson commented on HBASE-10518:


FAILURE: Integrated in HBase-0.98 #159 (See 
[https://builds.apache.org/job/HBase-0.98/159/])
HBASE-10518 DirectMemoryUtils.getDirectMemoryUsage spams when none is 
configured (ndimiduk: rev 1568419)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/util/DirectMemoryUtils.java


> DirectMemoryUtils.getDirectMemoryUsage spams when none is configured
> 
>
> Key: HBASE-10518
> URL: https://issues.apache.org/jira/browse/HBASE-10518
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>Assignee: Nick Dimiduk
> Fix For: 0.96.2, 0.98.1, 0.99.0
>
> Attachments: HBASE-10518.00.patch
>
>
> My logs are full of "Failed to retrieve nio.BufferPool direct MemoryUsed". 
> Even if it's DEBUG, it adds no value. I'd just remove it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10451) Enable back Tag compression on HFiles

2014-02-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901943#comment-13901943
 ] 

Ted Yu commented on HBASE-10451:


+1

Thanks for the quick response.

> Enable back Tag compression on HFiles
> -
>
> Key: HBASE-10451
> URL: https://issues.apache.org/jira/browse/HBASE-10451
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 0.98.1
>
> Attachments: HBASE-10451.patch, HBASE-10451_V2.patch
>
>
> HBASE-10443 disables tag compression on HFiles. This Jira is to fix the 
> issues we have found out in HBASE-10443 and enable it back.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10451) Enable back Tag compression on HFiles

2014-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901939#comment-13901939
 ] 

Hadoop QA commented on HBASE-10451:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12629091/HBASE-10451_V2.patch
  against trunk revision .
  ATTACHMENT ID: 12629091

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8707//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8707//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8707//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8707//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8707//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8707//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8707//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8707//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8707//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8707//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8707//console

This message is automatically generated.

> Enable back Tag compression on HFiles
> -
>
> Key: HBASE-10451
> URL: https://issues.apache.org/jira/browse/HBASE-10451
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 0.98.1
>
> Attachments: HBASE-10451.patch, HBASE-10451_V2.patch
>
>
> HBASE-10443 disables tag compression on HFiles. This Jira is to fix the 
> issues we have found out in HBASE-10443 and enable it back.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10541:
--

Attachment: trunk-HBASE-10541-rev1.patch

> Make the file system properties customizable per table/column family
> 
>
> Key: HBASE-10541
> URL: https://issues.apache.org/jira/browse/HBASE-10541
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vasu Mariyala
> Attachments: trunk-HBASE-10541-rev1.patch, trunk-HBASE-10541.patch
>
>
> The file system properties like replication (the number of nodes to which the 
> hfile needs to be replicated), block size need to be customizable per 
> table/column family. This is important especially in the testing scenarios or 
> for test tables where we don't want the hfile to be replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10542) ConcurrentModificationException in TestHTraceHooks

2014-02-14 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-10542:
-

 Summary: ConcurrentModificationException in TestHTraceHooks
 Key: HBASE-10542
 URL: https://issues.apache.org/jira/browse/HBASE-10542
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
 Fix For: 0.99.0


I got this in one of my test runs: 
{code}
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextEntry(HashMap.java:894)
at java.util.HashMap$KeyIterator.next(HashMap.java:928)
at org.cloudera.htrace.TraceTree.<init>(TraceTree.java:48)
at 
org.apache.hadoop.hbase.trace.TestHTraceHooks.testTraceCreateTable(TestHTraceHooks.java:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
{code}

It looks like the TraceTree constructor clones the spans collection, but iterates 
over the original argument rather than the clone. 
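The bug pattern described above can be sketched in isolation. This is an illustrative reconstruction, not the actual HTrace source; the class and field names are hypothetical:

```java
import java.util.ArrayList;
import java.util.Collection;

// Illustrative reconstruction of the bug pattern: the constructor takes a
// defensive copy of the caller's collection and must also iterate that copy.
// Iterating the caller's live collection while another thread mutates it is
// what throws ConcurrentModificationException.
class SpanTreeSketch {
    private final Collection<String> spans;

    SpanTreeSketch(Collection<String> input) {
        this.spans = new ArrayList<>(input); // clone the spans collection
        // Fix: iterate this.spans (the snapshot), not 'input' (the live argument).
        for (String span : this.spans) {
            // build parent/child links from the stable snapshot
        }
    }

    int size() {
        return spans.size();
    }
}
```

Once the snapshot is used for iteration, the caller may keep mutating its own collection without affecting construction.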

I don't know enough of HTrace to fix it, so just reporting it here. [~eclark] 
FYI. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901873#comment-13901873
 ] 

Hadoop QA commented on HBASE-10541:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12629098/trunk-HBASE-10541.patch
  against trunk revision .
  ATTACHMENT ID: 12629098

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:red}-1 hadoop1.0{color}.  The patch failed to compile against the 
hadoop 1.0 profile.
Here is snippet of errors:
{code}[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile 
(default-testCompile) on project hbase-server: Compilation failure: Compilation 
failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTableFSProperties.java:[81,26]
 getDefaultReplication() in org.apache.hadoop.fs.FileSystem cannot be applied 
to (org.apache.hadoop.fs.Path)
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTableFSProperties.java:[82,23]
 getDefaultBlockSize() in org.apache.hadoop.fs.FileSystem cannot be applied to 
(org.apache.hadoop.fs.Path)
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile 
(default-testCompile) on project hbase-server: Compilation failure
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
--
Caused by: org.apache.maven.plugin.CompilationFailureException: Compilation 
failure
at 
org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:729)
at 
org.apache.maven.plugin.TestCompilerMojo.execute(TestCompilerMojo.java:161)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more{code}

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8708//console

This message is automatically generated.

> Make the file system properties customizable per table/column family
> 
>
> Key: HBASE-10541
> URL: https://issues.apache.org/jira/browse/HBASE-10541
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vasu Mariyala
> Attachments: trunk-HBASE-10541.patch
>
>
> The file system properties like replication (the number of nodes to which the 
> hfile needs to be replicated), block size need to be customizable per 
> table/column family. This is important especially in the testing scenarios or 
> for test tables where we don't want the hfile to be replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901860#comment-13901860
 ] 

Vasu Mariyala commented on HBASE-10541:
---

The FSUtils.create method doesn't honor the permissions passed to it if the file 
system is a DistributedFileSystem; it uses FsPermission.getDefault() instead. I'm 
not sure whether this is intended, but I'm fixing this issue in this patch as well.

> Make the file system properties customizable per table/column family
> 
>
> Key: HBASE-10541
> URL: https://issues.apache.org/jira/browse/HBASE-10541
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vasu Mariyala
> Attachments: trunk-HBASE-10541.patch
>
>
> The file system properties like replication (the number of nodes to which the 
> hfile needs to be replicated), block size need to be customizable per 
> table/column family. This is important especially in the testing scenarios or 
> for test tables where we don't want the hfile to be replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10541:
--

Status: Patch Available  (was: Open)

> Make the file system properties customizable per table/column family
> 
>
> Key: HBASE-10541
> URL: https://issues.apache.org/jira/browse/HBASE-10541
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vasu Mariyala
> Attachments: trunk-HBASE-10541.patch
>
>
> The file system properties like replication (the number of nodes to which the 
> hfile needs to be replicated), block size need to be customizable per 
> table/column family. This is important especially in the testing scenarios or 
> for test tables where we don't want the hfile to be replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-10541:
--

Attachment: trunk-HBASE-10541.patch

> Make the file system properties customizable per table/column family
> 
>
> Key: HBASE-10541
> URL: https://issues.apache.org/jira/browse/HBASE-10541
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vasu Mariyala
> Attachments: trunk-HBASE-10541.patch
>
>
> The file system properties like replication (the number of nodes to which the 
> hfile needs to be replicated), block size need to be customizable per 
> table/column family. This is important especially in the testing scenarios or 
> for test tables where we don't want the hfile to be replicated 3 times.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10541) Make the file system properties customizable per table/column family

2014-02-14 Thread Vasu Mariyala (JIRA)
Vasu Mariyala created HBASE-10541:
-

 Summary: Make the file system properties customizable per 
table/column family
 Key: HBASE-10541
 URL: https://issues.apache.org/jira/browse/HBASE-10541
 Project: HBase
  Issue Type: New Feature
Reporter: Vasu Mariyala
 Attachments: trunk-HBASE-10541.patch

The file system properties like replication (the number of nodes to which the 
hfile needs to be replicated) and block size need to be customizable per 
table/column family. This is especially important in testing scenarios, or for 
test tables where we don't want the hfile to be replicated 3 times.
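One way to express such overrides is a fallback chain from column family to table to cluster defaults. The sketch below is hypothetical; the class and key names are illustrative and not the patch's actual API:

```java
import java.util.Map;

// Hypothetical sketch of per-column-family file-system property resolution with
// fallback from family to table to cluster defaults. Names are illustrative,
// not HBase's actual API.
class FsPropertyResolver {
    private final Map<String, String> familyProps;
    private final Map<String, String> tableProps;
    private final Map<String, String> clusterDefaults;

    FsPropertyResolver(Map<String, String> familyProps,
                       Map<String, String> tableProps,
                       Map<String, String> clusterDefaults) {
        this.familyProps = familyProps;
        this.tableProps = tableProps;
        this.clusterDefaults = clusterDefaults;
    }

    // Most specific scope wins: family, then table, then cluster default.
    String resolve(String key) {
        if (familyProps.containsKey(key)) return familyProps.get(key);
        if (tableProps.containsKey(key)) return tableProps.get(key);
        return clusterDefaults.get(key);
    }
}
```

With "replication" set to "1" at the family level, resolve("replication") returns the override for a test table even when the cluster default is "3", while unset keys fall through to the defaults.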



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10399) Add documentation for VerifyReplication to refguide

2014-02-14 Thread Gustavo Anatoly (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901853#comment-13901853
 ] 

Gustavo Anatoly commented on HBASE-10399:
-

Hi, [~jdcryans]

Thank you for the review. An alternative would be to fix the points listed above 
and publish the documentation while it is being rewritten; that way users would 
have something to guide them on verifying replication.

[~yuzhih...@gmail.com] and [~jdcryans] what do you think about this alternative?




> Add documentation for VerifyReplication to refguide
> ---
>
> Key: HBASE-10399
> URL: https://issues.apache.org/jira/browse/HBASE-10399
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Gustavo Anatoly
>Priority: Minor
> Attachments: HBASE-10399.patch
>
>
> HBase refguide currently doesn't document how VerifyReplication is used for 
> comparing local table with remote table.
> Document for VerifyReplication should be added so that users know how to use 
> it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10451) Enable back Tag compression on HFiles

2014-02-14 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10451:
---

Attachment: HBASE-10451_V2.patch

> Enable back Tag compression on HFiles
> -
>
> Key: HBASE-10451
> URL: https://issues.apache.org/jira/browse/HBASE-10451
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 0.98.1
>
> Attachments: HBASE-10451.patch, HBASE-10451_V2.patch
>
>
> HBASE-10443 disables tag compression on HFiles. This Jira is to fix the 
> issues we have found out in HBASE-10443 and enable it back.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10419) Add multiget support to PerformanceEvaluation

2014-02-14 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901828#comment-13901828
 ] 

Jean-Marc Spaggiari commented on HBASE-10419:
-

Are you going to give it a try with both modes and confirm?

> Add multiget support to PerformanceEvaluation
> -
>
> Key: HBASE-10419
> URL: https://issues.apache.org/jira/browse/HBASE-10419
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: HBASE-10419-v2-trunk.patch, HBASE-10419.0.patch, 
> HBASE-10419.1.patch
>
>
> Folks planning to use multiget may find this useful.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10501) Make IncreasingToUpperBoundRegionSplitPolicy configurable

2014-02-14 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901827#comment-13901827
 ] 

Enis Soztutar commented on HBASE-10501:
---

+1 for changing the default; 2*flushSize + cube looks good. 
I think we should also go for more global split decisions rather than local 
decisions, but that is for another issue.  

> Make IncreasingToUpperBoundRegionSplitPolicy configurable
> -
>
> Key: HBASE-10501
> URL: https://issues.apache.org/jira/browse/HBASE-10501
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 10501-0.94-v2.txt, 10501-0.94.txt
>
>
> During some (admittedly artificial) load testing we found a large amount of 
> split activity, which we tracked down to the 
> IncreasingToUpperBoundRegionSplitPolicy.
> The current logic is this (from the comments):
> "regions that are on this server that all are of the same table, squared, 
> times the region flush size OR the maximum region split size, whichever is 
> smaller"
> So with a flush size of 128mb and a max file size of 20gb, we'd need 13 regions 
> of the same table on an RS to reach the max size.
> With a 10gb file size it is still 9 regions of the same table.
> Considering that the number of regions that an RS can carry is limited and 
> there might be multiple tables, this should be more configurable.
> I think the squaring is smart and we do not need to change it.
> We could
> * Make the start size configurable and default it to the flush size
> * Add multiplier for the initial size, i.e. start with n * flushSize
> * Also change the default to start with 2*flush size
> Of course one can override the default split policy, but these seem like 
> simple tweaks.
> Or we could instead set the goal of how many regions of the same table would 
> need to be present in order to reach the max size. In that case we'd start 
> with maxSize/goal^2. So if max size is 20gb and the goal is three we'd start 
> with 20g/9 = 2.2g for the initial region size.
> [~stack], I'm especially interested in your opinion.
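The sizing rule quoted above can be checked with a quick calculation. This sketch assumes the min(regionCount² × flushSize, maxFileSize) formula from the comments in the issue description:

```java
// Quick check of the sizing rule described above: a region splits at
// min(regionCount^2 * flushSize, maxFileSize), so we can compute how many
// same-table regions an RS needs before the cap applies.
public class SplitSizeCheck {
    static long splitSize(int regionCount, long flushSize, long maxFileSize) {
        return Math.min((long) regionCount * regionCount * flushSize, maxFileSize);
    }

    public static void main(String[] args) {
        final long MB = 1024L * 1024L;
        long flushSize = 128 * MB;
        long maxFileSize = 20L * 1024 * MB; // 20gb
        int regions = 1;
        while (splitSize(regions, flushSize, maxFileSize) < maxFileSize) {
            regions++;
        }
        System.out.println(regions); // prints 13, matching the figure above
    }
}
```

With a 10gb max file size the same loop stops at 9 regions, also matching the description.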



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10539) HRegion.addAndGetGlobalMemstoreSize() is expected to return the new memstore size after added, but actually the previous size before added is returned instead

2014-02-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901813#comment-13901813
 ] 

Ted Yu commented on HBASE-10539:


+1

> HRegion.addAndGetGlobalMemstoreSize() is expected to return the new memstore 
> size after added, but actually the previous size before added is returned 
> instead
> --
>
> Key: HBASE-10539
> URL: https://issues.apache.org/jira/browse/HBASE-10539
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Attachments: HBASE-10539-trunk_v1.patch
>
>
> HRegion.addAndGetGlobalMemstoreSize(addedSize) is called once some write 
> succeeds and 'addedSize' is the size of the edits newly put to the memstore, 
> the returned value of HRegion.addAndGetGlobalMemstoreSize(addedSize) is then 
> checked against the flush threshold to determine if a flush for the region 
> should be triggered.
> By design the returned value should be the updated memstore size after adding 
> 'addedSize', but the current implementation uses this.memstoreSize.getAndAdd, 
> which returns the previous size before adding; 'addAndGet' rather than 
> 'getAndAdd' should be used here.
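The getAndAdd/addAndGet distinction above is easy to demonstrate with a plain java.util.concurrent.atomic.AtomicLong (the sizes used here are arbitrary):

```java
import java.util.concurrent.atomic.AtomicLong;

// Demonstrates why the description above calls for addAndGet: getAndAdd returns
// the size *before* the addition, so a flush-threshold check against that value
// would lag by one write's worth of data.
public class AtomicAddDemo {
    public static void main(String[] args) {
        AtomicLong memstoreSize = new AtomicLong(100L);
        long previous = memstoreSize.getAndAdd(25L); // returns the old size
        System.out.println(previous);                // 100
        System.out.println(memstoreSize.get());      // 125

        memstoreSize.set(100L);
        long updated = memstoreSize.addAndGet(25L);  // returns the new size
        System.out.println(updated);                 // 125
    }
}
```

Both calls update the counter identically; only the return value differs, which is exactly the value the flush check consumes.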



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10535) Table trash to recover table deleted by mistake

2014-02-14 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901815#comment-13901815
 ] 

Enis Soztutar commented on HBASE-10535:
---

I think we should not have yet another directory for trash. The current .archive 
directory is for deleted files, and we already have the TTL mechanism, so we 
should build trash on top of that. A simple approach would be to move the table's 
HTD to the archive directory as well, and delete it based on the same TTL. We 
can have an "untrash" command to retrieve from this dir. 
A trashed table SHOULD NOT prevent creating tables of the same name. 
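The TTL-based expiry this proposal builds on can be sketched as a simple purge pass. This is a hypothetical illustration of the mechanism, not HBase's actual cleaner chore; the names are invented:

```java
import java.time.Instant;
import java.util.Iterator;
import java.util.Map;

// Hypothetical sketch of TTL-based archive cleanup: entries recorded with an
// archive timestamp are purged once they are older than the TTL. In the real
// system the purge would delete the archived table dir/HTD from HDFS.
class ArchiveTtlCleaner {
    static int purgeExpired(Map<String, Instant> archivedAt, Instant now, long ttlSeconds) {
        int purged = 0;
        Iterator<Map.Entry<String, Instant>> it = archivedAt.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Instant> e = it.next();
            if (e.getValue().plusSeconds(ttlSeconds).isBefore(now)) {
                it.remove(); // here: delete the archived table dir and its HTD
                purged++;
            }
        }
        return purged;
    }
}
```

An "untrash" command would simply move an entry out of the archive before the purge pass reaches it.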

> Table trash to recover table deleted by mistake
> ---
>
> Key: HBASE-10535
> URL: https://issues.apache.org/jira/browse/HBASE-10535
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liu Shaohui
>Priority: Minor
>
> When a table is deleted, only the HFiles are moved to the archive dir; table and 
> region infos are deleted immediately, so it's very difficult to recover 
> tables that were deleted by mistake.
> I think we could introduce a table trash dir in HDFS. When a table is 
> deleted, the entire table dir is moved to the trash dir, and after a 
> configurable TTL the dir is actually deleted. This can be done by the HMaster.
> If we want to recover a deleted table, we can use a tool that moves the table 
> dir out of trash and recovers the table's metadata. There are many problems 
> the recovery tool will encounter, e.g. parent and daughter regions 
> are both in the table dir. But I think this feature is useful for handling some 
> special cases.
> Discussions are welcome.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-7320) Remove KeyValue.getBuffer()

2014-02-14 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901805#comment-13901805
 ] 

Nick Dimiduk commented on HBASE-7320:
-

For the heapsize question, there's further discussion over on HBASE-9383.

> Remove KeyValue.getBuffer()
> ---
>
> Key: HBASE-7320
> URL: https://issues.apache.org/jira/browse/HBASE-7320
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 7320-simple.txt
>
>
> In many places this is a simple task of just replacing the method name.
> There are, however, quite a few places where we assume that:
> # the entire KV is backed by a single byte array
> # the KVs key portion is backed by a single byte array
> Some of those can easily be fixed, others will need their own jiras.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10539) HRegion.addAndGetGlobalMemstoreSize() is expected to return the new memstore size after added, but actually the previous size before added is returned instead

2014-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901798#comment-13901798
 ] 

Hadoop QA commented on HBASE-10539:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12629038/HBASE-10539-trunk_v1.patch
  against trunk revision .
  ATTACHMENT ID: 12629038

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8706//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8706//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8706//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8706//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8706//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8706//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8706//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8706//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8706//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8706//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8706//console

This message is automatically generated.

> HRegion.addAndGetGlobalMemstoreSize() is expected to return the new memstore 
> size after added, but actually the previous size before added is returned 
> instead
> --
>
> Key: HBASE-10539
> URL: https://issues.apache.org/jira/browse/HBASE-10539
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Attachments: HBASE-10539-trunk_v1.patch
>
>
> HRegion.addAndGetGlobalMemstoreSize(addedSize) is called once some write 
> succeeds and 'addedSize' is the size of the edits newly put to the memstore, 
> the returned value of HRegion.addAndGetGlobalMemstoreSize(addedSize) is then 
> checked against the flush threshold to determine if a flush for the region 
> should be triggered.
> By design the returned value should be the updated memstore size after adding 
> 'addedSize', but the current implementation uses this.memstoreSize.getAndAdd, 
> which returns the previous size before adding; 'addAndGet' rather than 
> 'getAndAdd' should be used here.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10419) Add multiget support to PerformanceEvaluation

2014-02-14 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901799#comment-13901799
 ] 

Nick Dimiduk commented on HBASE-10419:
--

Let me make sure it works in both mapred and nomapred modes and I'll get it 
committed. Thanks for reminding me of it [~jmspaggi], [~stack].

> Add multiget support to PerformanceEvaluation
> -
>
> Key: HBASE-10419
> URL: https://issues.apache.org/jira/browse/HBASE-10419
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: HBASE-10419-v2-trunk.patch, HBASE-10419.0.patch, 
> HBASE-10419.1.patch
>
>
> Folks planning to use multiget may find this useful.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10361) Enable/AlterTable support for region replicas

2014-02-14 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901797#comment-13901797
 ] 

Enis Soztutar commented on HBASE-10361:
---

v4 patch looks good. +1. 

> Enable/AlterTable support for region replicas
> -
>
> Key: HBASE-10361
> URL: https://issues.apache.org/jira/browse/HBASE-10361
> Project: HBase
>  Issue Type: Sub-task
>  Components: master
>Reporter: Enis Soztutar
>Assignee: Devaraj Das
> Fix For: 0.99.0
>
> Attachments: 10361-1.txt, 10361-3.txt, 10361-4-add.txt, 10361-4.txt
>
>
> Add support for region replicas in master operations enable table and modify 
> table.





[jira] [Commented] (HBASE-10419) Add multiget support to PerformanceEvaluation

2014-02-14 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901786#comment-13901786
 ] 

Jean-Marc Spaggiari commented on HBASE-10419:
-

Got it! So we should be fine with the last attached updated version.

> Add multiget support to PerformanceEvaluation
> -
>
> Key: HBASE-10419
> URL: https://issues.apache.org/jira/browse/HBASE-10419
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: HBASE-10419-v2-trunk.patch, HBASE-10419.0.patch, 
> HBASE-10419.1.patch
>
>
> Folks planning to use multiget may find this useful.





[jira] [Commented] (HBASE-10419) Add multiget support to PerformanceEvaluation

2014-02-14 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901781#comment-13901781
 ] 

Nick Dimiduk commented on HBASE-10419:
--

bq. Any specific reason why you used float?

[~nkeywal] / HBASE-10511 just slipped in the summary statistics business, which 
changed my floats into doubles.
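As an aside on the float-vs-double question: accumulating many samples in a float silently stops growing once the running sum reaches 2^24, which is one reason summary statistics generally use doubles. A standalone illustration (not code from the patch):

```java
public class FloatVsDouble {
    // Sum 'n' copies of 'v' in single precision.
    static float sumFloat(int n, float v) {
        float s = 0f;
        for (int i = 0; i < n; i++) s += v;
        return s;
    }

    // The same accumulation in double precision.
    static double sumDouble(int n, double v) {
        double s = 0d;
        for (int i = 0; i < n; i++) s += v;
        return s;
    }

    public static void main(String[] args) {
        // The float saturates at 2^24 = 16777216, because 16777217
        // is not representable in single precision; the double is exact.
        System.out.println(sumFloat(20_000_000, 1f));
        System.out.println(sumDouble(20_000_000, 1d));
    }
}
```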

> Add multiget support to PerformanceEvaluation
> -
>
> Key: HBASE-10419
> URL: https://issues.apache.org/jira/browse/HBASE-10419
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: HBASE-10419-v2-trunk.patch, HBASE-10419.0.patch, 
> HBASE-10419.1.patch
>
>
> Folks planning to use multiget may find this useful.





[jira] [Commented] (HBASE-10535) Table trash to recover table deleted by mistake

2014-02-14 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901773#comment-13901773
 ] 

Jean-Marc Spaggiari commented on HBASE-10535:
-

I see this as "trash" being the namespace and "myspace:mytable" being the 
table name. Not really a nested namespace.

> Table trash to recover table deleted by mistake
> ---
>
> Key: HBASE-10535
> URL: https://issues.apache.org/jira/browse/HBASE-10535
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liu Shaohui
>Priority: Minor
>
> When a table is deleted, only HFiles are moved to the archive dir; table and 
> region infos are deleted immediately. So it's very difficult to recover 
> tables that were deleted by mistake.
> I think we can introduce a table trash dir in HDFS. When a table is 
> deleted, the entire table dir is moved to the trash dir, and after a 
> configurable TTL the dir is actually deleted. This can be done by the HMaster.
> If we want to recover a deleted table, we can use a tool that moves the table 
> dir out of the trash and recovers the table's metadata. There are many 
> problems the recovery tool will encounter, e.g. parent and daughter regions 
> both being present in the table dir. But I think this feature is useful for 
> handling some special cases.
> Discussions are welcome.
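A rough sketch of the proposal's mechanics, using java.nio.file rather than Hadoop's FileSystem API for brevity; all names here (moveToTrash, purgeExpired, the timestamp-suffix naming scheme) are hypothetical, not part of any patch:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class TableTrash {
    // Instead of deleting a table dir, move it under a trash dir,
    // stamping the deletion time into the name so a purge chore can
    // compute the entry's age without extra metadata.
    static Path moveToTrash(Path tableDir, Path trashDir, long deleteTimeMillis)
            throws IOException {
        Files.createDirectories(trashDir);
        Path target = trashDir.resolve(
            tableDir.getFileName() + "." + deleteTimeMillis);
        return Files.move(tableDir, target);
    }

    // Delete trash entries whose age exceeds the configurable TTL.
    static void purgeExpired(Path trashDir, long ttlMillis, long nowMillis)
            throws IOException {
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(trashDir)) {
            for (Path entry : entries) {
                String name = entry.getFileName().toString();
                long deletedAt =
                    Long.parseLong(name.substring(name.lastIndexOf('.') + 1));
                if (nowMillis - deletedAt > ttlMillis) {
                    deleteRecursively(entry);
                }
            }
        }
    }

    static void deleteRecursively(Path p) throws IOException {
        if (Files.isDirectory(p)) {
            try (DirectoryStream<Path> children = Files.newDirectoryStream(p)) {
                for (Path child : children) deleteRecursively(child);
            }
        }
        Files.delete(p);
    }
}
```

Recovery would be the inverse: move the entry back out of the trash dir and re-register the table's metadata.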





[jira] [Commented] (HBASE-10535) Table trash to recover table deleted by mistake

2014-02-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901768#comment-13901768
 ] 

Ted Yu commented on HBASE-10535:


bq. Move it under "trash:myspace:mytable"

Is nested namespace supported ?

> Table trash to recover table deleted by mistake
> ---
>
> Key: HBASE-10535
> URL: https://issues.apache.org/jira/browse/HBASE-10535
> Project: HBase
>  Issue Type: Improvement
>Reporter: Liu Shaohui
>Priority: Minor
>
> When a table is deleted, only HFiles are moved to the archive dir; table and 
> region infos are deleted immediately. So it's very difficult to recover 
> tables that were deleted by mistake.
> I think we can introduce a table trash dir in HDFS. When a table is 
> deleted, the entire table dir is moved to the trash dir, and after a 
> configurable TTL the dir is actually deleted. This can be done by the HMaster.
> If we want to recover a deleted table, we can use a tool that moves the table 
> dir out of the trash and recovers the table's metadata. There are many 
> problems the recovery tool will encounter, e.g. parent and daughter regions 
> both being present in the table dir. But I think this feature is useful for 
> handling some special cases.
> Discussions are welcome.





[jira] [Commented] (HBASE-10536) ImportTsv should fail fast if any of the column family passed to the job is not present in the table

2014-02-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901766#comment-13901766
 ] 

Ted Yu commented on HBASE-10536:


bq. a "--no-strict" flag instead

+1

> ImportTsv should fail fast if any of the column family passed to the job is 
> not present in the table
> 
>
> Key: HBASE-10536
> URL: https://issues.apache.org/jira/browse/HBASE-10536
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 0.98.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.17
>
>
> While checking the 0.98 RC and running the bulkload tools, I passed a wrong 
> column family to importtsv by mistake. LoadIncrementalHFiles failed with the 
> following exception:
> {code}
> Exception in thread "main" java.io.IOException: Unmatched family names found: 
> unmatched family names in HFiles to be bulkloaded: [f1]; valid family names 
> of table test are: [f]
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:241)
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:823)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at 
> org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.main(LoadIncrementalHFiles.java:828)
> {code}
>  
> It's better to fail fast if any of the passed column families is not present 
> in the table.
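The fail-fast check could look roughly like the following; this is a standalone sketch using plain collections rather than the actual ImportTsv/HTableDescriptor API, and `missingFamilies` is a hypothetical helper:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FamilyCheck {
    // Compare the families named in the job's column spec against the
    // table's actual families, returning the ones the table lacks.
    static List<String> missingFamilies(Set<String> tableFamilies,
                                        Collection<String> jobFamilies) {
        List<String> missing = new ArrayList<>();
        for (String family : jobFamilies) {
            if (!tableFamilies.contains(family)) {
                missing.add(family);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        // The scenario from the report: table has family "f",
        // the job asks for "f1". Fail before launching the MR job
        // instead of letting LoadIncrementalHFiles blow up at the end.
        Set<String> tableFamilies = new HashSet<>(Arrays.asList("f"));
        System.out.println(missingFamilies(tableFamilies, Arrays.asList("f1")));
    }
}
```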





[jira] [Updated] (HBASE-10362) HBCK changes for supporting region replicas

2014-02-14 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-10362:


Attachment: 10362-1.txt

> HBCK changes for supporting region replicas
> ---
>
> Key: HBASE-10362
> URL: https://issues.apache.org/jira/browse/HBASE-10362
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbck
>Reporter: Enis Soztutar
>Assignee: Devaraj Das
> Fix For: 0.99.0
>
> Attachments: 10362-1.txt
>
>
> We should support region replicas in HBCK. The changes are probably not that 
> intrusive. 





[jira] [Commented] (HBASE-10399) Add documentation for VerifyReplication to refguide

2014-02-14 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901738#comment-13901738
 ] 

Jean-Daniel Cryans commented on HBASE-10399:


{noformat}
+  This package is experimental quality software and is only meant to be a 
base
+  for future developments. The current implementation offers the following
+  features:
{noformat}

We can stop saying that now; it's even been enabled by default since 0.96.

{noformat}
+  
+   hbase.replication
+   true
+  
{noformat}

Not necessary anymore.

{noformat}
+  Considering 1 rs, with ratio 0.1
+  Getting 1 rs from peer cluster # 0
+  Choosing peer 10.10.1.49:62020
{noformat}

Those log lines don't even exist anymore.

{noformat}
+$HBASE_HOME/bin/hbase 
org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication --help
{noformat}

Looking at the current code, it expects -h or --h. Not sure if --help ever 
worked or when it was changed.

So that documentation needs a complete revamp. I'm fine if we want to just 
document VerifyReplication in the ref guide, but the right thing to do would 
be to rewrite it.

> Add documentation for VerifyReplication to refguide
> ---
>
> Key: HBASE-10399
> URL: https://issues.apache.org/jira/browse/HBASE-10399
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Gustavo Anatoly
>Priority: Minor
> Attachments: HBASE-10399.patch
>
>
> HBase refguide currently doesn't document how VerifyReplication is used for 
> comparing local table with remote table.
> Document for VerifyReplication should be added so that users know how to use 
> it.




