[jira] [Commented] (HBASE-5306) Add support for protocol buffer based RPC

2012-09-12 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454688#comment-13454688
 ] 

Gregory Chanan commented on HBASE-5306:
---

Do you think there is more to do here, Devaraj?  Or do HBASE-5705 and 
HBASE-5451 cover this?

> Add support for protocol buffer based RPC
> -
>
> Key: HBASE-5306
> URL: https://issues.apache.org/jira/browse/HBASE-5306
> Project: HBase
>  Issue Type: Sub-task
>  Components: ipc, master, migration, regionserver
>Reporter: Devaraj Das
>Assignee: Devaraj Das
>
> This will help HBase to achieve wire compatibility across versions. The idea 
> (to start with) is to leverage the recent work that has gone into Hadoop core 
> in this area.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454685#comment-13454685
 ] 

Elliott Clark commented on HBASE-6769:
--

Haha, yeah, I guess I have more context than the 0.94 source would give a reader.  
Sorry about that.

> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.94-0.patch, HBASE-6769-0.94-1.patch, 
> HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454683#comment-13454683
 ] 

Lars Hofhansl commented on HBASE-6769:
--

+1 on 0.94 patch as well.

This comment is weird, as it refers to a non-existent exception:
{code}
+  // Don't send a FailedSanityCheckException as older clients will not know about
+  // that class being a subclass of DoNotRetryIOException
+  // and will retry mutations that will never succeed.
{code}

Don't post a new patch :) ... I'll change the comment on commit:
{code}
// Use generic DoNotRetryIOException so that older clients know how to deal with it.
{code}


> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.94-0.patch, HBASE-6769-0.94-1.patch, 
> HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454679#comment-13454679
 ] 

Hadoop QA commented on HBASE-6769:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12544945/HBASE-6769-0.94-1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2856//console

This message is automatically generated.

> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.94-0.patch, HBASE-6769-0.94-1.patch, 
> HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-5447) Support for custom filters with PB-based RPC

2012-09-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-5447.
--

   Resolution: Fixed
Fix Version/s: 0.96.0
 Assignee: Gregory Chanan  (was: Todd Lipcon)
 Hadoop Flags: Reviewed

Closing.  Assigned Gregory.

Regarding custom filters, let them come out of the woodwork.  We'll help them 
make the conversion to pb.
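
As a rough illustration of what that conversion could look like for a custom filter (a sketch only, not from any patch here: it assumes the 0.96-style toByteArray()/parseFrom(byte[]) serialization hooks on Filter, while PrefixOnlyFilter and its generated MyFilterProtos message are made-up examples from a user's own .proto):

{code}
import com.google.protobuf.ByteString;
import com.google.protobuf.InvalidProtocolBufferException;
import org.apache.hadoop.hbase.filter.FilterBase;
// DeserializationException import omitted here.

public class PrefixOnlyFilter extends FilterBase {
  private final byte[] prefix;

  public PrefixOnlyFilter(byte[] prefix) {
    this.prefix = prefix;
  }

  // Serialize the filter's state as protobuf bytes for the wire.
  @Override
  public byte[] toByteArray() {
    return MyFilterProtos.PrefixOnlyFilter.newBuilder()        // hypothetical generated message
        .setPrefix(ByteString.copyFrom(prefix))
        .build()
        .toByteArray();
  }

  // Rebuild the filter on the server side from the protobuf bytes.
  public static PrefixOnlyFilter parseFrom(byte[] pbBytes) throws DeserializationException {
    try {
      MyFilterProtos.PrefixOnlyFilter proto = MyFilterProtos.PrefixOnlyFilter.parseFrom(pbBytes);
      return new PrefixOnlyFilter(proto.getPrefix().toByteArray());
    } catch (InvalidProtocolBufferException e) {
      throw new DeserializationException(e);
    }
  }
}
{code}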

> Support for custom filters with PB-based RPC
> 
>
> Key: HBASE-5447
> URL: https://issues.apache.org/jira/browse/HBASE-5447
> Project: HBase
>  Issue Type: Sub-task
>  Components: ipc, master, migration, regionserver
>Reporter: Todd Lipcon
>Assignee: Gregory Chanan
> Fix For: 0.96.0
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-6769:
-

Attachment: HBASE-6769-0.94-1.patch

Here's the patch without the FailedSanityCheckException.  I put in comments 
around every place that catches the exception, so hopefully that will keep things 
sane.

TestHRegion and TestFromClientSide are both passing on my machine locally.  
Running the rest of the suite right now.

> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.94-0.patch, HBASE-6769-0.94-1.patch, 
> HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6500) hbck complaining, Exception in thread "main" java.lang.NullPointerException

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454673#comment-13454673
 ] 

Lars Hofhansl commented on HBASE-6500:
--

@liuli: Can you confirm that HBASE-6464 fixes this issue?

> hbck complaining, Exception in thread "main" java.lang.NullPointerException 
> ---
>
> Key: HBASE-6500
> URL: https://issues.apache.org/jira/browse/HBASE-6500
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.94.0
> Environment: Hadoop 0.20.205.0
> Zookeeper: zookeeper-3.3.5.jar
> Hbase: hbase-0.94.0
>Reporter: liuli
>
> I met a problem when starting HBase:
> I have 5 machines (Ubuntu)
> 109.123.121.23 rsmm-master.example.com
> 109.123.121.24 rsmm-slave-1.example.com
> 109.123.121.25 rsmm-slave-2.example.com
> 109.123.121.26 rsmm-slave-3.example.com
> 109.123.121.27 rsmm-slave-4.example.com
> Hadoop 0.20.205.0
> Zookeeper: zookeeper-3.3.5.jar
> Hbase: hbase-0.94.0
> After starting HBase, running hbck
> hduser@rsmm-master:~/hbase/bin$ ./hbase hbck
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:zookeeper.version=3.3.5-1301095, built on 03/15/2012 19:48 GMT
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:host.name=rsmm-master.example.com
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:java.version=1.6.0_33
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:java.vendor=Sun Microsystems Inc.
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:java.home=/usr/java/jre1.6.0_33
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:java.class.path=/home/hduser/hbase/conf:/usr/java/jre1.6.0_33/lib/tools.jar:/home/hduser/hbase:/home/hduser/hbase/hbase-0.94.0.jar:/home/hduser/hbase/hbase-0.94.0-tests.jar:/home/hduser/hbase/lib/activation-1.1.jar:/home/hduser/hbase/lib/asm-3.1.jar:/home/hduser/hbase/lib/avro-1.5.3.jar:/home/hduser/hbase/lib/avro-ipc-1.5.3.jar:/home/hduser/hbase/lib/commons-beanutils-1.7.0.jar:/home/hduser/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/hduser/hbase/lib/commons-cli-1.2.jar:/home/hduser/hbase/lib/commons-codec-1.4.jar:/home/hduser/hbase/lib/commons-collections-3.2.1.jar:/home/hduser/hbase/lib/commons-configuration-1.6.jar:/home/hduser/hbase/lib/commons-digester-1.8.jar:/home/hduser/hbase/lib/commons-el-1.0.jar:/home/hduser/hbase/lib/commons-httpclient-3.1.jar:/home/hduser/hbase/lib/commons-io-2.1.jar:/home/hduser/hbase/lib/commons-lang-2.5.jar:/home/hduser/hbase/lib/commons-logging-1.1.1.jar:/home/hduser/hbase/lib/commons-math-2.1.jar:/home/hduser/hbase/lib/commons-net-1.4.1.jar:/home/hduser/hbase/lib/core-3.1.1.jar:/home/hduser/hbase/lib/guava-r09.jar:/home/hduser/hbase/lib/hadoop-core-0.20.205.0.jar:/home/hduser/hbase/lib/high-scale-lib-1.1.1.jar:/home/hduser/hbase/lib/httpclient-4.1.2.jar:/home/hduser/hbase/lib/httpcore-4.1.3.jar:/home/hduser/hbase/lib/jackson-core-asl-1.5.5.jar:/home/hduser/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/hduser/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/hduser/hbase/lib/jackson-xc-1.5.5.jar:/home/hduser/hbase/lib/jamon-runtime-2.3.1.jar:/home/hduser/hbase/lib/jasper-compiler-5.5.23.jar:/home/hduser/hbase/lib/jasper-runtime-5.5.23.jar:/home/hduser/hbase/lib/jaxb-api-2.1.jar:/home/hduser/hbase/lib/jaxb-impl-2.1.12.jar:/home/hduser/hbase/lib/jersey-core-1.4.jar:/home/hduser/hbase/lib/jersey-json-1.4.jar:/home/hduser/hbase/lib/jersey-server-1.4.jar:/home/hduser/hbase/lib/jettison-1.1.jar:/home/hduser/hbase/lib/jetty-6.1.26.jar:/home/hduser/hbase/lib/jetty-util-6.1.26.jar:/home/hduser/hbase/lib/jruby-complete-1.6.5.jar:/home/hduser/hbase/lib/jsp-2.1-6.1.14.jar:/home/hduser/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/hduser/hbase/lib/libthrift-0.8.0.jar:/home/hduser/hbase/lib/log4j-1.2.16.jar:/home/hduser/hbase/lib/netty-3.2.4.Final.jar:/home/hduser/hbase/lib/protobuf-java-2.4.0a.jar:/home/hduser/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/hduser/hbase/lib/slf4j-api-1.5.8.jar:/home/hduser/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/hduser/hbase/lib/snappy-java-1.0.3.2.jar:/home/hduser/hbase/lib/stax-api-1.0.1.jar:/home/hduser/hbase/lib/velocity-1.7.jar:/home/hduser/hbase/lib/xmlenc-0.52.jar:/home/hduser/hbase/lib/zookeeper-3.3.5.jar:/home/hduser/hadoop-0.20.205.0/conf:/home/hduser/hadoop-0.20.205.0/libexec/../conf:/usr/java/jre1.6.0_33/lib/tools.jar:/home/hduser/hadoop-0.20.205.0/libexec/../share/hadoop:/home/hduser/hadoop-0.20.205.0/libexec/../share/hadoop/hadoop-core-0.20.205.0.jar:/home/hduser/hadoop-0.20.205.0/libexec/../share/hadoop/lib/asm-3.2.jar:/home/hduser/hadoop-0.20.205.0/libexec/../share/hadoop/lib/aspectjrt-1.6.5.jar:/home/hduser/hadoop-0.20.205.0/libexec/../share/hadoop/lib/aspectjtools-1.6.5.jar:/home/hduser/hadoop-0.20.205.0/libexec/../share/hadoop

[jira] [Commented] (HBASE-5447) Support for custom filters with PB-based RPC

2012-09-12 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454666#comment-13454666
 ] 

Gregory Chanan commented on HBASE-5447:
---

Should we close this now that HBASE-6454 and HBASE-6477 are in?  Or do we think 
there is more to do?

> Support for custom filters with PB-based RPC
> 
>
> Key: HBASE-5447
> URL: https://issues.apache.org/jira/browse/HBASE-5447
> Project: HBase
>  Issue Type: Sub-task
>  Components: ipc, master, migration, regionserver
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6500) hbck complaining, Exception in thread "main" java.lang.NullPointerException

2012-09-12 Thread liuli (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454663#comment-13454663
 ] 

liuli commented on HBASE-6500:
--

Mr. Michael Drzal: what can I do to help with this issue?
If you want to close this one because of HBASE-6464, that is OK with me.

Anyway, if you need any information or operations from me, let me know!

> hbck complaining, Exception in thread "main" java.lang.NullPointerException 
> ---
>
> Key: HBASE-6500
> URL: https://issues.apache.org/jira/browse/HBASE-6500
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.94.0
> Environment: Hadoop 0.20.205.0
> Zookeeper: zookeeper-3.3.5.jar
> Hbase: hbase-0.94.0
>Reporter: liuli
>
> I met a problem when starting HBase:
> I have 5 machines (Ubuntu)
> 109.123.121.23 rsmm-master.example.com
> 109.123.121.24 rsmm-slave-1.example.com
> 109.123.121.25 rsmm-slave-2.example.com
> 109.123.121.26 rsmm-slave-3.example.com
> 109.123.121.27 rsmm-slave-4.example.com
> Hadoop 0.20.205.0
> Zookeeper: zookeeper-3.3.5.jar
> Hbase: hbase-0.94.0
> After starting HBase, running hbck
> hduser@rsmm-master:~/hbase/bin$ ./hbase hbck
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:zookeeper.version=3.3.5-1301095, built on 03/15/2012 19:48 GMT
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:host.name=rsmm-master.example.com
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:java.version=1.6.0_33
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:java.vendor=Sun Microsystems Inc.
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:java.home=/usr/java/jre1.6.0_33
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:java.class.path=/home/hduser/hbase/conf:/usr/java/jre1.6.0_33/lib/tools.jar:/home/hduser/hbase:/home/hduser/hbase/hbase-0.94.0.jar:/home/hduser/hbase/hbase-0.94.0-tests.jar:/home/hduser/hbase/lib/activation-1.1.jar:/home/hduser/hbase/lib/asm-3.1.jar:/home/hduser/hbase/lib/avro-1.5.3.jar:/home/hduser/hbase/lib/avro-ipc-1.5.3.jar:/home/hduser/hbase/lib/commons-beanutils-1.7.0.jar:/home/hduser/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/hduser/hbase/lib/commons-cli-1.2.jar:/home/hduser/hbase/lib/commons-codec-1.4.jar:/home/hduser/hbase/lib/commons-collections-3.2.1.jar:/home/hduser/hbase/lib/commons-configuration-1.6.jar:/home/hduser/hbase/lib/commons-digester-1.8.jar:/home/hduser/hbase/lib/commons-el-1.0.jar:/home/hduser/hbase/lib/commons-httpclient-3.1.jar:/home/hduser/hbase/lib/commons-io-2.1.jar:/home/hduser/hbase/lib/commons-lang-2.5.jar:/home/hduser/hbase/lib/commons-logging-1.1.1.jar:/home/hduser/hbase/lib/commons-math-2.1.jar:/home/hduser/hbase/lib/commons-net-1.4.1.jar:/home/hduser/hbase/lib/core-3.1.1.jar:/home/hduser/hbase/lib/guava-r09.jar:/home/hduser/hbase/lib/hadoop-core-0.20.205.0.jar:/home/hduser/hbase/lib/high-scale-lib-1.1.1.jar:/home/hduser/hbase/lib/httpclient-4.1.2.jar:/home/hduser/hbase/lib/httpcore-4.1.3.jar:/home/hduser/hbase/lib/jackson-core-asl-1.5.5.jar:/home/hduser/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/hduser/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/hduser/hbase/lib/jackson-xc-1.5.5.jar:/home/hduser/hbase/lib/jamon-runtime-2.3.1.jar:/home/hduser/hbase/lib/jasper-compiler-5.5.23.jar:/home/hduser/hbase/lib/jasper-runtime-5.5.23.jar:/home/hduser/hbase/lib/jaxb-api-2.1.jar:/home/hduser/hbase/lib/jaxb-impl-2.1.12.jar:/home/hduser/hbase/lib/jersey-core-1.4.jar:/home/hduser/hbase/lib/jersey-json-1.4.jar:/home/hduser/hbase/lib/jersey-server-1.4.jar:/home/hduser/hbase/lib/jettison-1.1.jar:/home/hduser/hbase/lib/jetty-6.1.26.jar:/home/hduser/hbase/lib/jetty-util-6.1.26.jar:/home/hduser/hbase/lib/jruby-complete-1.6.5.jar:/home/hduser/hbase/lib/jsp-2.1-6.1.14.jar:/home/hduser/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/hduser/hbase/lib/libthrift-0.8.0.jar:/home/hduser/hbase/lib/log4j-1.2.16.jar:/home/hduser/hbase/lib/netty-3.2.4.Final.jar:/home/hduser/hbase/lib/protobuf-java-2.4.0a.jar:/home/hduser/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/hduser/hbase/lib/slf4j-api-1.5.8.jar:/home/hduser/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/hduser/hbase/lib/snappy-java-1.0.3.2.jar:/home/hduser/hbase/lib/stax-api-1.0.1.jar:/home/hduser/hbase/lib/velocity-1.7.jar:/home/hduser/hbase/lib/xmlenc-0.52.jar:/home/hduser/hbase/lib/zookeeper-3.3.5.jar:/home/hduser/hadoop-0.20.205.0/conf:/home/hduser/hadoop-0.20.205.0/libexec/../conf:/usr/java/jre1.6.0_33/lib/tools.jar:/home/hduser/hadoop-0.20.205.0/libexec/../share/hadoop:/home/hduser/hadoop-0.20.205.0/libexec/../share/hadoop/hadoop-core-0.20.205.0.jar:/home/hduser/hadoop-0.20.205.0/libexec/../share/hadoop/lib/asm-3.2.jar:/home/hduser/hadoop-0.20.205.0/libexec/../share/hadoop/lib/aspectjrt-1.6.5.jar:/home/hduser/hadoop-0.2

[jira] [Commented] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454658#comment-13454658
 ] 

Lars Hofhansl commented on HBASE-6769:
--

Either way is fine. I'm leaning slightly towards less code for 0.94.
But if you think readability on the server side is improved even if that exception 
is ultimately not passed to the client... go for it :)


> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.94-0.patch, HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454656#comment-13454656
 ] 

Elliott Clark commented on HBASE-6769:
--

Keeping the exception and using it in HRegion, but not using it when sending 
data to the client, makes the try/catch logic a little cleaner to read. But it does 
add a class that's not used for much.  I'm up for whichever you think would be 
best.
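
For readers following along, a minimal sketch of the shape being discussed (illustrative only, not the actual patch; applyMutation and recordOutcome are hypothetical stand-ins for the real HRegionServer.multi bookkeeping):

{code}
Object outcome;
try {
  // HRegion keeps throwing the specific exception internally, which keeps its
  // own try/catch logic readable.
  outcome = applyMutation(region, mutation);                 // hypothetical helper
} catch (FailedSanityCheckException e) {
  // Older 0.92/0.94.{0|1} clients cannot tell that this class extends
  // DoNotRetryIOException, so re-wrap it in the generic parent before it goes
  // into the multi response; otherwise they retry mutations that can never succeed.
  outcome = new DoNotRetryIOException(e.getMessage());
} catch (NoSuchColumnFamilyException e) {
  // Already a DoNotRetryIOException subclass that old clients know about, so
  // pass it through and the client finally sees the real cause.
  outcome = e;
}
recordOutcome(regionName, originalIndex, outcome);           // hypothetical helper
{code}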

> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.94-0.patch, HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454654#comment-13454654
 ] 

Lars Hofhansl commented on HBASE-6769:
--

Yep... I was thinking that something like this could happen. Maybe in 0.94 we 
just go without the new exception and just throw DoNotRetryIOException when the 
timestamp check fails...? That is the current behavior anyway.
(But we do throw NoSuchColumnFamilyException in the case of a wrong CF.)


> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.94-0.patch, HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454652#comment-13454652
 ] 

Elliott Clark commented on HBASE-6769:
--

Hmmm, so looking at the logs: while the client does seem to function, it is 
retrying multiple times if the error is the new exception.  So it looks like, 
since the client can't determine that the new exception class is a subclass of 
DoNotRetryIOException, all of the retries are being issued.

I'll post a new 0.94 patch.
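
Roughly, the client-side decision works like this (a sketch, not the exact HConnectionManager code; decodeExceptionFromResponse, reportFailure and scheduleRetry are hypothetical helpers):

{code}
Throwable serverSideError = decodeExceptionFromResponse(response);  // hypothetical helper
if (serverSideError instanceof DoNotRetryIOException) {
  // Fail fast and surface the error to the caller.
  reportFailure(row, serverAddress, serverSideError);               // hypothetical helper
} else {
  // Everything else is treated as retriable and re-queued, which is why an
  // exception class the client cannot recognize as a DoNotRetryIOException
  // subtype ends up burning through all of the retries.
  scheduleRetry(row, serverAddress);                                // hypothetical helper
}
{code}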

> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.94-0.patch, HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454646#comment-13454646
 ] 

Elliott Clark commented on HBASE-6769:
--

Yes, I totally forgot to git add the FailedSanityCheckException.

I just spun up a local 0.94 build and a 0.92.1 hbase shell, resulting in:

{code}
1.9.3-p194 :001 > list
TABLE   


0 row(s) in 0.7450 seconds

1.9.3-p194 :002 > create 'test_table', 'd'
0 row(s) in 0.1560 seconds

1.9.3-p194 :003 > put 'test_table', 'rk', 'b:1', '0'

ERROR: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: 
Failed 1 action: 
org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family 
b does not exist in region 
test_table,,1347513173188.d9d026b9ac9c899bd21cff52682905bd. in table {NAME => 
'test_table', FAMILIES => [{NAME => 'd', BLOOMFILTER => 'NONE', 
REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL => 
'2147483647', MIN_VERSIONS => '0', BLOCKSIZE => '65536', IN_MEMORY => 'false', 
BLOCKCACHE => 'true'}]}
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3470)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)
: 1 time, servers with issues: android-c6740ac93630ee19:57140, 

Here is some help for this command:
Put a cell 'value' at specified table/row/column and optionally
timestamp coordinates.  To put a cell value into table 't1' at
row 'r1' under column 'c1' marked with the time 'ts1', do:

  hbase> put 't1', 'r1', 'c1', 'value', ts1
{code}

So it looks like everything works well.

> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.94-0.patch, HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454642#comment-13454642
 ] 

Hadoop QA commented on HBASE-6769:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12544939/HBASE-6769-0.94-0.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2855//console

This message is automatically generated.

> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.94-0.patch, HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454639#comment-13454639
 ] 

Lars Hofhansl commented on HBASE-6769:
--

Thanks Elliott! Did you forget to add FailedSanityCheckException to the 0.94 
patch? If it's the same as in 0.96, I'll just add it at commit.

Will the new exception cause compatibility issues in old 0.92.x and 0.94.{0|1} 
clients? I don't think so... Just making sure.


> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.94-0.patch, HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-6769:
-

Attachment: HBASE-6769-0.94-0.patch

The 0.94 version is a little simpler, because it's not likely that we will add 
different exception types on 0.94. (We should in trunk, and I have some issues 
to file.)

> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.94-0.patch, HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6592) [shell] Add means of custom formatting output by column

2012-09-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454631#comment-13454631
 ] 

stack commented on HBASE-6592:
--

Sorry Jie.  Patch looks good.

There are some issues with the help but I can fix those on commit.  When
you say 'converterFun' in the help, what do you mean?  Is it a
placeholder for 'toInt' or 'c(CLASSNAME).METHOD'?

I tried it and had following issue:

{code}
hbase(main):021:0> get 'x', 'y', 'x:x'
COLUMNCELL
 x:x  timestamp=1347496098567,
value=1
1 row(s) in 0.0110 seconds

hbase(main):022:0> get 'x', 'y', {COLUMN => ['x:x:toInt']}

ERROR: java.lang.IllegalArgumentException: offset (0) + length (4)
exceed the capacity of the array: 1

Here is some help for this command:
Get row or cell contents; pass table name, row, and optionally
a dictionary of column(s), timestamp, timerange and versions. Examples:

  hbase> get 't1', 'r1'
  hbase> get 't1', 'r1', {TIMERANGE => [ts1, ts2]}

{code}

What do you think I'm doing wrong?  Thanks Jie.
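
One possible explanation for that error (a guess, not something confirmed in this thread): the shell's put stores the value as the bytes of the string "1", which is a single byte, while a toInt converter presumably calls Bytes.toInt and needs exactly 4 bytes, hence the complaint that offset (0) + length (4) exceeds the capacity of a 1-byte array. A quick way to see the difference:

{code}
import org.apache.hadoop.hbase.util.Bytes;

public class ToIntProbe {
  public static void main(String[] args) {
    byte[] asString = Bytes.toBytes("1");  // length 1: what the shell put wrote
    byte[] asInt    = Bytes.toBytes(1);    // length 4: what a toInt converter can decode

    System.out.println(Bytes.toInt(asInt));    // prints 1
    System.out.println(Bytes.toInt(asString)); // throws an IllegalArgumentException like the one above
  }
}
{code}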


> [shell] Add means of custom formatting output by column
> ---
>
> Key: HBASE-6592
> URL: https://issues.apache.org/jira/browse/HBASE-6592
> Project: HBase
>  Issue Type: New Feature
>  Components: shell
>Reporter: stack
>Priority: Minor
>  Labels: noob
> Attachments: hbase-6592.patch, hbase-6592-v2.patch, 
> hbase-6952-v1.patch
>
>
> See Jacques suggestion toward end of this thread for how we should allow 
> adding a custom formatter per column to use outputting column content in 
> shell: 
> http://search-hadoop.com/m/2WxUB1fuxL11/Printing+integers+in+the+Hbase+shell&subj=Printing+integers+in+the+Hbase+shell

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-6771) To recover parts of the properties of .tableinfo file by reading HFile

2012-09-12 Thread Jie Huang (JIRA)
Jie Huang created HBASE-6771:


 Summary: To recover parts of the properties of .tableinfo file by 
reading HFile 
 Key: HBASE-6771
 URL: https://issues.apache.org/jira/browse/HBASE-6771
 Project: HBase
  Issue Type: Improvement
  Components: hbck
Affects Versions: 0.96.0
Reporter: Jie Huang
Assignee: Jie Huang


Currently, hbck only fabricates a bare-minimum .tableinfo when it is missing, 
and the end user still needs to correct it later. Per [~jmhsieh]'s proposal in 
HBASE-5631, it would be better to recover it by reading some properties (e.g., 
compression settings, encodings, etc.) from the newest existing HFile under 
each region folder.
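
A very rough sketch of the shape this could take (pickNewestHFile, readCompression and readEncoding are placeholders for the actual HFile-reading code; only the descriptor calls are existing API):

{code}
HTableDescriptor fabricated = new HTableDescriptor(tableName);
for (Path familyDir : familyDirs) {
  HColumnDescriptor family = new HColumnDescriptor(familyDir.getName());
  Path newestHFile = pickNewestHFile(fs, familyDir);               // placeholder helper
  if (newestHFile != null) {
    // Pull what the newest file itself can tell us instead of defaulting everything.
    family.setCompressionType(readCompression(fs, newestHFile));   // placeholder helper
    family.setDataBlockEncoding(readEncoding(fs, newestHFile));    // placeholder helper
  }
  fabricated.addFamily(family);
}
// hbck would then write the fabricated descriptor back out as the new .tableinfo.
{code}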

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6658) Rename WritableByteArrayComparable to something not mentioning Writable

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454627#comment-13454627
 ] 

Lars Hofhansl commented on HBASE-6658:
--

Nope. This was the 2nd already. HBASE-6710 was the first :)

> Rename WritableByteArrayComparable to something not mentioning Writable
> ---
>
> Key: HBASE-6658
> URL: https://issues.apache.org/jira/browse/HBASE-6658
> Project: HBase
>  Issue Type: Bug
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: HBASE-6658.patch, HBASE-6658-v3.patch, 
> HBASE-6658-v4.patch, HBASE-6658-v5.patch, HBASE-6658-v6.patch
>
>
> After HBASE-6477, WritableByteArrayComparable will no longer be Writable, so 
> should be renamed.
> Current idea is ByteArrayComparator (since all the derived classes are 
> *Comparator not *Comparable), but I'm open to suggestions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6658) Rename WritableByteArrayComparable to something not mentioning Writable

2012-09-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454624#comment-13454624
 ] 

stack commented on HBASE-6658:
--

Hurray!  First commit!

> Rename WritableByteArrayComparable to something not mentioning Writable
> ---
>
> Key: HBASE-6658
> URL: https://issues.apache.org/jira/browse/HBASE-6658
> Project: HBase
>  Issue Type: Bug
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: HBASE-6658.patch, HBASE-6658-v3.patch, 
> HBASE-6658-v4.patch, HBASE-6658-v5.patch, HBASE-6658-v6.patch
>
>
> After HBASE-6477, WritableByteArrayComparable will no longer be Writable, so 
> should be renamed.
> Current idea is ByteArrayComparator (since all the derived classes are 
> *Comparator not *Comparable), but I'm open to suggestions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454622#comment-13454622
 ] 

Elliott Clark commented on HBASE-6769:
--

I'll get a 0.94 patch done tonight so you can cut an RC whenever.  I just wanted 
Jenkins to take a crack at it first.

> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6658) Rename WritableByteArrayComparable to something not mentioning Writable

2012-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454616#comment-13454616
 ] 

Hudson commented on HBASE-6658:
---

Integrated in HBase-TRUNK #3326 (See 
[https://builds.apache.org/job/HBase-TRUNK/3326/])
HBASE-6658 Rename WritableByteArrayComparable to something not mentioning 
Writable (Revision 1384191)

 Result = FAILURE
gchanan : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/BinaryPrefixComparator.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/BitComparator.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/ByteArrayComparable.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/Filter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/NullComparator.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/SubstringComparator.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/WritableByteArrayComparable.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/model/ScannerModel.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFakeKeyInFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHbaseObjectWritable.java


> Rename WritableByteArrayComparable to something not mentioning Writable
> ---
>
> Key: HBASE-6658
> URL: https://issues.apache.org/jira/browse/HBASE-6658
> Project: HBase
>  Issue Type: Bug
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: HBASE-6658.patch, HBASE-6658-v3.patch, 
> HBASE-6658-v4.patch, HBASE-6658-v5.patch, HBASE-6658-v6.patch
>
>
> After HBASE-6477, WritableByteArrayComparable will no longer be Writable, so 
> should be renamed.
> Current idea is ByteArrayComparator (since all the derived classes are 
> *Comparator not *Comparable), but I'm open to suggestions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454611#comment-13454611
 ] 

Lars Hofhansl commented on HBASE-6769:
--

+1 on patch. Need a 0.94 version as well. I'm happy to make one.

> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454601#comment-13454601
 ] 

Hadoop QA commented on HBASE-6769:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12544923/HBASE-6769-0.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The patch appears to cause mvn compile goal to fail.

-1 findbugs.  The patch appears to cause Findbugs (version 1.3.9) to fail.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2854//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2854//console

This message is automatically generated.

> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5206) Port HBASE-5155 to 0.92, 0.94, and TRUNK

2012-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454598#comment-13454598
 ] 

Hudson commented on HBASE-5206:
---

Integrated in HBase-0.94 #465 (See 
[https://builds.apache.org/job/HBase-0.94/465/])
HBASE-6710 0.92/0.94 compatibility issues due to HBASE-5206 (Revision 
1384181)

 Result = SUCCESS
gchanan : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTable.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTableReadOnly.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKTable.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKTableReadOnly.java


> Port HBASE-5155 to 0.92, 0.94, and TRUNK
> 
>
> Key: HBASE-5206
> URL: https://issues.apache.org/jira/browse/HBASE-5206
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.2, 0.94.0, 0.96.0
>Reporter: Ted Yu
>Assignee: Ashutosh Jindal
> Fix For: 0.94.0, 0.96.0
>
> Attachments: 5206_92_1.patch, 5206_92_latest_1.patch, 
> 5206_92_latest_2.patch, 5206_92_latest_3.patch, 5206_trunk_1.patch, 
> 5206_trunk_latest_1.patch, 5206_trunk_latest_2.patch, 
> 5206_trunk_latest_3.patch
>
>
> This JIRA ports HBASE-5155 (ServerShutDownHandler And Disable/Delete should 
> not happen parallely leading to recreation of regions that were deleted) to 
> 0.92 and TRUNK

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6710) 0.92/0.94 compatibility issues due to HBASE-5206

2012-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454599#comment-13454599
 ] 

Hudson commented on HBASE-6710:
---

Integrated in HBase-0.94 #465 (See 
[https://builds.apache.org/job/HBase-0.94/465/])
HBASE-6710 0.92/0.94 compatibility issues due to HBASE-5206 (Revision 
1384181)

 Result = SUCCESS
gchanan : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTable.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTableReadOnly.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKTable.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKTableReadOnly.java


> 0.92/0.94 compatibility issues due to HBASE-5206
> 
>
> Key: HBASE-6710
> URL: https://issues.apache.org/jira/browse/HBASE-6710
> Project: HBase
>  Issue Type: Bug
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Critical
> Fix For: 0.94.2
>
> Attachments: HBASE-6710-v3.patch
>
>
> HBASE-5206 introduces some compatibility issues between {0.94,0.94.1} and
> {0.92.0,0.92.1}.  The release notes of HBASE-5155 describes the issue 
> (HBASE-5206 is a backport of HBASE-5155).
> I think we can make 0.94.2 compatible with both {0.94.0,0.94.1} and 
> {0.92.0,0.92.1}, although one of those sets will require configuration 
> changes.
> The basic problem is that there is a znode for each table 
> "zookeeper.znode.tableEnableDisable" that is handled differently.
> On 0.92.0 and 0.92.1 the states for this table are:
> [ disabled, disabling, enabling ] or deleted if the table is enabled
> On 0.94.1 and 0.94.2 the states for this table are:
> [ disabled, disabling, enabling, enabled ]
> What saves us is that the location of this znode is configurable.  So the 
> basic idea is to have the 0.94.2 master write two different znodes, 
> "zookeeper.znode.tableEnableDisabled92" and 
> "zookeeper.znode.tableEnableDisabled94" where the 92 node is in 92 format, 
> the 94 node is in 94 format.  And internally, the master would only use the 
> 94 format in order to solve the original bug HBASE-5155 solves.
> We can of course make one of these the same default as exists now, so we 
> don't need to make config changes for one of 0.92 or 0.94 clients.  I argue 
> that 0.92 clients shouldn't have to make config changes for the same reason I 
> argued above.  But that is debatable.
> Then, I think the only question left is the question of how to bring along 
> the {0.94.0, 0.94.1} crew.  A {0.94.0, 0.94.1} client would work against a 
> 0.94.2 cluster by just configuring "zookeeper.znode.tableEnableDisable" in 
> the client to be whatever "zookeeper.znode.tableEnableDisabled94" is in the 
> cluster.  A 0.94.2 client would work against both a {0.94.0, 0.94.1} and 
> {0.92.0, 0.92.1} cluster if it had HBASE-6268 applied.  About rolling upgrade 
> from {0.94.0, 0.94.1} to 0.94.2 -- I'd have to think about that.  Do the 
> regionservers ever read the tableEnableDisabled znode?
> On the mailing list, Lars H suggested the following:
> "The only input I'd have is that format we'll use going forward will not have 
> a version attached to it.
> So maybe the 92 version would still be called 
> "zookeeper.znode.tableEnableDisable" and the new node could have a different 
> name "zookeeper.znode.tableEnableDisableNew" (or something)."

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6658) Rename WritableByteArrayComparable to something not mentioning Writable

2012-09-12 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-6658:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the reviews, stack and Lars.

Committed to trunk.  I used "svn move" so hopefully the history is saved.  I'll 
double check once it makes it to git.

> Rename WritableByteArrayComparable to something not mentioning Writable
> ---
>
> Key: HBASE-6658
> URL: https://issues.apache.org/jira/browse/HBASE-6658
> Project: HBase
>  Issue Type: Bug
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: HBASE-6658.patch, HBASE-6658-v3.patch, 
> HBASE-6658-v4.patch, HBASE-6658-v5.patch, HBASE-6658-v6.patch
>
>
> After HBASE-6477, WritableByteArrayComparable will no longer be Writable, so 
> should be renamed.
> Current idea is ByteArrayComparator (since all the derived classes are 
> *Comparator not *Comparable), but I'm open to suggestions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-6769:
-

Status: Patch Available  (was: Open)

> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.1, 0.94.0
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6260) balancer state should be stored in ZK

2012-09-12 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454577#comment-13454577
 ] 

Gregory Chanan commented on HBASE-6260:
---

@stack:

- I don't know what's going on with ClientProtos.java.  I've noticed that a few 
times when I regenerate, the generated code changes.  I'll leave it out.
- I think PB_MAGIC is a good idea.  I noticed without it, an empty 
(newly-created) znode is valid for the purposes of mergeFrom.  Not likely to 
happen, but probably best to have it throw an error in that case.
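
As an editorial aside, here is a minimal sketch of the PB_MAGIC idea being discussed; the 
marker bytes and method names are invented for illustration and are not the actual HBase 
helpers. The point is that an empty or unrelated znode payload fails fast instead of 
silently parsing as an empty message via mergeFrom.

{code:title=PbMagicSketch.java|borderStyle=solid}
import java.io.IOException;

// Editorial sketch only: marker bytes and method names are illustrative,
// not the actual HBase helpers.
public class PbMagicSketch {
  // Hypothetical 4-byte marker written in front of every protobuf payload.
  private static final byte[] PB_MAGIC = new byte[] { 'P', 'B', 'U', 'F' };

  /** Prepend the marker before writing the serialized message to ZooKeeper. */
  public static byte[] prependMagic(byte[] serialized) {
    byte[] out = new byte[PB_MAGIC.length + serialized.length];
    System.arraycopy(PB_MAGIC, 0, out, 0, PB_MAGIC.length);
    System.arraycopy(serialized, 0, out, PB_MAGIC.length, serialized.length);
    return out;
  }

  /** Verify and strip the marker on read; an empty znode now fails loudly. */
  public static byte[] stripMagic(byte[] znodeData) throws IOException {
    if (znodeData == null || znodeData.length < PB_MAGIC.length) {
      throw new IOException("znode payload too short to carry the PB marker");
    }
    for (int i = 0; i < PB_MAGIC.length; i++) {
      if (znodeData[i] != PB_MAGIC[i]) {
        throw new IOException("znode payload is not marker-prefixed protobuf");
      }
    }
    byte[] out = new byte[znodeData.length - PB_MAGIC.length];
    System.arraycopy(znodeData, PB_MAGIC.length, out, 0, out.length);
    return out;
  }
}
{code}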

> balancer state should be stored in ZK
> -
>
> Key: HBASE-6260
> URL: https://issues.apache.org/jira/browse/HBASE-6260
> Project: HBase
>  Issue Type: Task
>  Components: master, zookeeper
>Affects Versions: 0.96.0
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Blocker
> Attachments: HBASE-6260.patch
>
>
> See: 
> https://issues.apache.org/jira/browse/HBASE-5953?focusedCommentId=13270200&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13270200
> And: 
> https://issues.apache.org/jira/browse/HBASE-5630?focusedCommentId=13399225&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13399225
> In short, we need to move the balancer state to ZK so that it won't have to 
> be restarted if the master dies.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-6769:
-

Attachment: HBASE-6769-0.patch

Made it so NoSuchColumnFamilyException is thrown again.
Made it so that FailedSanityCheck is thrown if timestamp sanity checks are 
enabled and fail.
Made it so that HRegionServer doesn't cast everything to a generic 
DoNotRetryIOException.
Added a test for HRegion batch puts with a bad timestamp.
Added a client-side test to make sure that the correct exception is the cause.
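
Editorial aside, not part of the patch: a minimal client-side sketch (table, family, and 
row names are invented) of what the change buys a caller once the concrete cause is 
propagated again.

{code:title=InspectMultiFailureSketch.java|borderStyle=solid}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.util.Bytes;

// Editorial sketch: table, family, and row names are made up. It shows how a
// client can read the per-action causes once the server reports the concrete
// exception again.
public class InspectMultiFailureSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "t1");
    try {
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("no_such_family"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      table.put(put);
    } catch (RetriesExhaustedWithDetailsException e) {
      for (int i = 0; i < e.getNumExceptions(); i++) {
        // With the fix, getCause(i) carries the real server-side exception
        // (e.g. NoSuchColumnFamilyException) instead of a bare
        // DoNotRetryIOException with no detail.
        System.out.println(e.getHostnamePort(i) + " -> " + e.getCause(i));
      }
    } finally {
      table.close();
    }
  }
}
{code}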


> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
> Attachments: HBASE-6769-0.patch
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6710) 0.92/0.94 compatibility issues due to HBASE-5206

2012-09-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6710:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> 0.92/0.94 compatibility issues due to HBASE-5206
> 
>
> Key: HBASE-6710
> URL: https://issues.apache.org/jira/browse/HBASE-6710
> Project: HBase
>  Issue Type: Bug
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Critical
> Fix For: 0.94.2
>
> Attachments: HBASE-6710-v3.patch
>
>
> HBASE-5206 introduces some compatibility issues between {0.94,0.94.1} and
> {0.92.0,0.92.1}.  The release notes of HBASE-5155 describes the issue 
> (HBASE-5206 is a backport of HBASE-5155).
> I think we can make 0.94.2 compatible with both {0.94.0,0.94.1} and 
> {0.92.0,0.92.1}, although one of those sets will require configuration 
> changes.
> The basic problem is that there is a znode for each table 
> "zookeeper.znode.tableEnableDisable" that is handled differently.
> On 0.92.0 and 0.92.1 the states for this table are:
> [ disabled, disabling, enabling ] or deleted if the table is enabled
> On 0.94.1 and 0.94.2 the states for this table are:
> [ disabled, disabling, enabling, enabled ]
> What saves us is that the location of this znode is configurable.  So the 
> basic idea is to have the 0.94.2 master write two different znodes, 
> "zookeeper.znode.tableEnableDisabled92" and 
> "zookeeper.znode.tableEnableDisabled94" where the 92 node is in 92 format, 
> the 94 node is in 94 format.  And internally, the master would only use the 
> 94 format in order to solve the original bug HBASE-5155 solves.
> We can of course make one of these the same default as exists now, so we 
> don't need to make config changes for one of 0.92 or 0.94 clients.  I argue 
> that 0.92 clients shouldn't have to make config changes for the same reason I 
> argued above.  But that is debatable.
> Then, I think the only question left is the question of how to bring along 
> the {0.94.0, 0.94.1} crew.  A {0.94.0, 0.94.1} client would work against a 
> 0.94.2 cluster by just configuring "zookeeper.znode.tableEnableDisable" in 
> the client to be whatever "zookeeper.znode.tableEnableDisabled94" is in the 
> cluster.  A 0.94.2 client would work against both a {0.94.0, 0.94.1} and 
> {0.92.0, 0.92.1} cluster if it had HBASE-6268 applied.  About rolling upgrade 
> from {0.94.0, 0.94.1} to 0.94.2 -- I'd have to think about that.  Do the 
> regionservers ever read the tableEnableDisabled znode?
> On the mailing list, Lars H suggested the following:
> "The only input I'd have is that format we'll use going forward will not have 
> a version attached to it.
> So maybe the 92 version would still be called 
> "zookeeper.znode.tableEnableDisable" and the new node could have a different 
> name "zookeeper.znode.tableEnableDisableNew" (or something)."

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6710) 0.92/0.94 compatibility issues due to HBASE-5206

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454563#comment-13454563
 ] 

Hadoop QA commented on HBASE-6710:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12544917/HBASE-6710-v3.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 8 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2853//console

This message is automatically generated.

> 0.92/0.94 compatibility issues due to HBASE-5206
> 
>
> Key: HBASE-6710
> URL: https://issues.apache.org/jira/browse/HBASE-6710
> Project: HBase
>  Issue Type: Bug
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Critical
> Fix For: 0.94.2
>
> Attachments: HBASE-6710-v3.patch
>
>
> HBASE-5206 introduces some compatibility issues between {0.94,0.94.1} and
> {0.92.0,0.92.1}.  The release notes of HBASE-5155 describes the issue 
> (HBASE-5206 is a backport of HBASE-5155).
> I think we can make 0.94.2 compatible with both {0.94.0,0.94.1} and 
> {0.92.0,0.92.1}, although one of those sets will require configuration 
> changes.
> The basic problem is that there is a znode for each table 
> "zookeeper.znode.tableEnableDisable" that is handled differently.
> On 0.92.0 and 0.92.1 the states for this table are:
> [ disabled, disabling, enabling ] or deleted if the table is enabled
> On 0.94.1 and 0.94.2 the states for this table are:
> [ disabled, disabling, enabling, enabled ]
> What saves us is that the location of this znode is configurable.  So the 
> basic idea is to have the 0.94.2 master write two different znodes, 
> "zookeeper.znode.tableEnableDisabled92" and 
> "zookeeper.znode.tableEnableDisabled94" where the 92 node is in 92 format, 
> the 94 node is in 94 format.  And internally, the master would only use the 
> 94 format in order to solve the original bug HBASE-5155 solves.
> We can of course make one of these the same default as exists now, so we 
> don't need to make config changes for one of 0.92 or 0.94 clients.  I argue 
> that 0.92 clients shouldn't have to make config changes for the same reason I 
> argued above.  But that is debatable.
> Then, I think the only question left is the question of how to bring along 
> the {0.94.0, 0.94.1} crew.  A {0.94.0, 0.94.1} client would work against a 
> 0.94.2 cluster by just configuring "zookeeper.znode.tableEnableDisable" in 
> the client to be whatever "zookeeper.znode.tableEnableDisabled94" is in the 
> cluster.  A 0.94.2 client would work against both a {0.94.0, 0.94.1} and 
> {0.92.0, 0.92.1} cluster if it had HBASE-6268 applied.  About rolling upgrade 
> from {0.94.0, 0.94.1} to 0.94.2 -- I'd have to think about that.  Do the 
> regionservers ever read the tableEnableDisabled znode?
> On the mailing list, Lars H suggested the following:
> "The only input I'd have is that format we'll use going forward will not have 
> a version attached to it.
> So maybe the 92 version would still be called 
> "zookeeper.znode.tableEnableDisable" and the new node could have a different 
> name "zookeeper.znode.tableEnableDisableNew" (or something)."

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6710) 0.92/0.94 compatibility issues due to HBASE-5206

2012-09-12 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-6710:
--

Attachment: HBASE-6710-v3.patch

> 0.92/0.94 compatibility issues due to HBASE-5206
> 
>
> Key: HBASE-6710
> URL: https://issues.apache.org/jira/browse/HBASE-6710
> Project: HBase
>  Issue Type: Bug
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Critical
> Fix For: 0.94.2
>
> Attachments: HBASE-6710-v3.patch
>
>
> HBASE-5206 introduces some compatibility issues between {0.94,0.94.1} and
> {0.92.0,0.92.1}.  The release notes of HBASE-5155 describes the issue 
> (HBASE-5206 is a backport of HBASE-5155).
> I think we can make 0.94.2 compatible with both {0.94.0,0.94.1} and 
> {0.92.0,0.92.1}, although one of those sets will require configuration 
> changes.
> The basic problem is that there is a znode for each table 
> "zookeeper.znode.tableEnableDisable" that is handled differently.
> On 0.92.0 and 0.92.1 the states for this table are:
> [ disabled, disabling, enabling ] or deleted if the table is enabled
> On 0.94.1 and 0.94.2 the states for this table are:
> [ disabled, disabling, enabling, enabled ]
> What saves us is that the location of this znode is configurable.  So the 
> basic idea is to have the 0.94.2 master write two different znodes, 
> "zookeeper.znode.tableEnableDisabled92" and 
> "zookeeper.znode.tableEnableDisabled94" where the 92 node is in 92 format, 
> the 94 node is in 94 format.  And internally, the master would only use the 
> 94 format in order to solve the original bug HBASE-5155 solves.
> We can of course make one of these the same default as exists now, so we 
> don't need to make config changes for one of 0.92 or 0.94 clients.  I argue 
> that 0.92 clients shouldn't have to make config changes for the same reason I 
> argued above.  But that is debatable.
> Then, I think the only question left is the question of how to bring along 
> the {0.94.0, 0.94.1} crew.  A {0.94.0, 0.94.1} client would work against a 
> 0.94.2 cluster by just configuring "zookeeper.znode.tableEnableDisable" in 
> the client to be whatever "zookeeper.znode.tableEnableDisabled94" is in the 
> cluster.  A 0.94.2 client would work against both a {0.94.0, 0.94.1} and 
> {0.92.0, 0.92.1} cluster if it had HBASE-6268 applied.  About rolling upgrade 
> from {0.94.0, 0.94.1} to 0.94.2 -- I'd have to think about that.  Do the 
> regionservers ever read the tableEnableDisabled znode?
> On the mailing list, Lars H suggested the following:
> "The only input I'd have is that format we'll use going forward will not have 
> a version attached to it.
> So maybe the 92 version would still be called 
> "zookeeper.znode.tableEnableDisable" and the new node could have a different 
> name "zookeeper.znode.tableEnableDisableNew" (or something)."

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6710) 0.92/0.94 compatibility issues due to HBASE-5206

2012-09-12 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-6710:
--

Status: Patch Available  (was: Open)

> 0.92/0.94 compatibility issues due to HBASE-5206
> 
>
> Key: HBASE-6710
> URL: https://issues.apache.org/jira/browse/HBASE-6710
> Project: HBase
>  Issue Type: Bug
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Critical
> Fix For: 0.94.2
>
> Attachments: HBASE-6710-v3.patch
>
>
> HBASE-5206 introduces some compatibility issues between {0.94,0.94.1} and
> {0.92.0,0.92.1}.  The release notes of HBASE-5155 describes the issue 
> (HBASE-5206 is a backport of HBASE-5155).
> I think we can make 0.94.2 compatible with both {0.94.0,0.94.1} and 
> {0.92.0,0.92.1}, although one of those sets will require configuration 
> changes.
> The basic problem is that there is a znode for each table 
> "zookeeper.znode.tableEnableDisable" that is handled differently.
> On 0.92.0 and 0.92.1 the states for this table are:
> [ disabled, disabling, enabling ] or deleted if the table is enabled
> On 0.94.1 and 0.94.2 the states for this table are:
> [ disabled, disabling, enabling, enabled ]
> What saves us is that the location of this znode is configurable.  So the 
> basic idea is to have the 0.94.2 master write two different znodes, 
> "zookeeper.znode.tableEnableDisabled92" and 
> "zookeeper.znode.tableEnableDisabled94" where the 92 node is in 92 format, 
> the 94 node is in 94 format.  And internally, the master would only use the 
> 94 format in order to solve the original bug HBASE-5155 solves.
> We can of course make one of these the same default as exists now, so we 
> don't need to make config changes for one of 0.92 or 0.94 clients.  I argue 
> that 0.92 clients shouldn't have to make config changes for the same reason I 
> argued above.  But that is debatable.
> Then, I think the only question left is the question of how to bring along 
> the {0.94.0, 0.94.1} crew.  A {0.94.0, 0.94.1} client would work against a 
> 0.94.2 cluster by just configuring "zookeeper.znode.tableEnableDisable" in 
> the client to be whatever "zookeeper.znode.tableEnableDisabled94" is in the 
> cluster.  A 0.94.2 client would work against both a {0.94.0, 0.94.1} and 
> {0.92.0, 0.92.1} cluster if it had HBASE-6268 applied.  About rolling upgrade 
> from {0.94.0, 0.94.1} to 0.94.2 -- I'd have to think about that.  Do the 
> regionservers ever read the tableEnableDisabled znode?
> On the mailing list, Lars H suggested the following:
> "The only input I'd have is that format we'll use going forward will not have 
> a version attached to it.
> So maybe the 92 version would still be called 
> "zookeeper.znode.tableEnableDisable" and the new node could have a different 
> name "zookeeper.znode.tableEnableDisableNew" (or something)."

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6710) 0.92/0.94 compatibility issues due to HBASE-5206

2012-09-12 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454558#comment-13454558
 ] 

Gregory Chanan commented on HBASE-6710:
---

Committed to 0.94.

Thanks for the review, Lars.
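
Editorial aside for operators following this thread: the client-side workaround described 
in the issue text below amounts to a single configuration override. A minimal sketch, with 
the caveat that the znode value "table94" is a placeholder; check the actual name the 
0.94.2 cluster uses in its hbase-site.xml.

{code:title=ClientZnodeOverrideSketch.java|borderStyle=solid}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Editorial sketch: points an older 0.94.x client at the 94-format
// table-state znode that a 0.94.2 master maintains. The value "table94"
// is a placeholder, not a shipped default.
public class ClientZnodeOverrideSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("zookeeper.znode.tableEnableDisable", "table94");
    HBaseAdmin admin = new HBaseAdmin(conf);
    System.out.println("tables visible: " + admin.listTables().length);
  }
}
{code}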

> 0.92/0.94 compatibility issues due to HBASE-5206
> 
>
> Key: HBASE-6710
> URL: https://issues.apache.org/jira/browse/HBASE-6710
> Project: HBase
>  Issue Type: Bug
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Critical
> Fix For: 0.94.2
>
> Attachments: HBASE-6710-v3.patch
>
>
> HBASE-5206 introduces some compatibility issues between {0.94,0.94.1} and
> {0.92.0,0.92.1}.  The release notes of HBASE-5155 describes the issue 
> (HBASE-5206 is a backport of HBASE-5155).
> I think we can make 0.94.2 compatible with both {0.94.0,0.94.1} and 
> {0.92.0,0.92.1}, although one of those sets will require configuration 
> changes.
> The basic problem is that there is a znode for each table 
> "zookeeper.znode.tableEnableDisable" that is handled differently.
> On 0.92.0 and 0.92.1 the states for this table are:
> [ disabled, disabling, enabling ] or deleted if the table is enabled
> On 0.94.1 and 0.94.2 the states for this table are:
> [ disabled, disabling, enabling, enabled ]
> What saves us is that the location of this znode is configurable.  So the 
> basic idea is to have the 0.94.2 master write two different znodes, 
> "zookeeper.znode.tableEnableDisabled92" and 
> "zookeeper.znode.tableEnableDisabled94" where the 92 node is in 92 format, 
> the 94 node is in 94 format.  And internally, the master would only use the 
> 94 format in order to solve the original bug HBASE-5155 solves.
> We can of course make one of these the same default as exists now, so we 
> don't need to make config changes for one of 0.92 or 0.94 clients.  I argue 
> that 0.92 clients shouldn't have to make config changes for the same reason I 
> argued above.  But that is debatable.
> Then, I think the only question left is the question of how to bring along 
> the {0.94.0, 0.94.1} crew.  A {0.94.0, 0.94.1} client would work against a 
> 0.94.2 cluster by just configuring "zookeeper.znode.tableEnableDisable" in 
> the client to be whatever "zookeeper.znode.tableEnableDisabled94" is in the 
> cluster.  A 0.94.2 client would work against both a {0.94.0, 0.94.1} and 
> {0.92.0, 0.92.1} cluster if it had HBASE-6268 applied.  About rolling upgrade 
> from {0.94.0, 0.94.1} to 0.94.2 -- I'd have to think about that.  Do the 
> regionservers ever read the tableEnableDisabled znode?
> On the mailing list, Lars H suggested the following:
> "The only input I'd have is that format we'll use going forward will not have 
> a version attached to it.
> So maybe the 92 version would still be called 
> "zookeeper.znode.tableEnableDisable" and the new node could have a different 
> name "zookeeper.znode.tableEnableDisableNew" (or something)."

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454555#comment-13454555
 ] 

Lars Hofhansl commented on HBASE-6769:
--

Sigh... Yeah, I'll hold 0.94.2 for this.

> HRS.multi eats NoSuchColumnFamilyException since HBASE-5021
> ---
>
> Key: HBASE-6769
> URL: https://issues.apache.org/jira/browse/HBASE-6769
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0, 0.94.1
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 0.96.0, 0.94.2
>
>
> I think this is a pretty major usability regression, since HBASE-5021 this is 
> what you get in the client when using a wrong family:
> {noformat}
> 2012-09-11 09:45:29,634 WARN 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
>   at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
> {noformat}
> Then you have to log on the server to understand what failed.
> Since everything is now a multi call, even single puts in the shell fail like 
> this.
> This is present since 0.94.0
> Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6770) Allow scanner setCaching to specify size instead of number of rows

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454552#comment-13454552
 ] 

Lars Hofhansl commented on HBASE-6770:
--

bq. that would take care of both of these use cases.

Not necessarily. It's still important for a caller to know whether it deals 
with whole rows or not.
Also some filters won't work when partial rows pass through the scanner.

> Allow scanner setCaching to specify size instead of number of rows
> --
>
> Key: HBASE-6770
> URL: https://issues.apache.org/jira/browse/HBASE-6770
> Project: HBase
>  Issue Type: Bug
>  Components: client, regionserver
>Reporter: Karthik Ranganathan
>
> Currently, we have the following api's to customize the behavior of scans:
> setCaching() - how many rows to cache on client to speed up scans
> setBatch() - max columns per row to return per row to prevent a very large 
> response.
> Ideally, we should be able to specify a memory buffer size because:
> 1. that would take care of both of these use cases.
> 2. it does not need any knowledge of the size of the rows or cells, as the 
> final thing we are worried about is the available memory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6755) HRegion.internalObtainRowLock uses unecessary AtomicInteger

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454540#comment-13454540
 ] 

Lars Hofhansl commented on HBASE-6755:
--

In that case I'd be afraid that we'd incur more operations on the concurrent 
map and on the random number generator, which would actually be slower.

> HRegion.internalObtainRowLock uses unecessary AtomicInteger
> ---
>
> Key: HBASE-6755
> URL: https://issues.apache.org/jira/browse/HBASE-6755
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.96.0, 0.94.3
>
> Attachments: 6755-0.96.txt
>
>
> I was looking at HBase's implementation of locks and saw that is 
> unnecessarily uses an AtomicInteger to obtain a unique lockid.
> The observation is that we only need a unique one and don't care if we happen 
> to skip one.
> In a very unscientific test I saw the %system CPU reduced when the 
> AtomicInteger is avoided.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6719) [replication] Data will lose if open a Hlog failed more than maxRetriesMultiplier

2012-09-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6719:
-

Attachment: 6719.txt

Can we rewrite the patch this way?
One concern I have: What if the file is actually gone for some reason? In that 
case it seems we'd never stop retrying.

@J-D: what do you think?
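
Editorial aside: the simplest possible existence probe, sketched below with invented names 
(this is not the patch), only answers half of the question, since a log that was moved to 
another directory still looks gone here, which is what the TODO in the quoted code is 
getting at.

{code:title=HlogGoneCheckSketch.java|borderStyle=solid}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Editorial sketch of one way to ask "is the file actually gone?" before
// giving up on it forever; illustrative only, not the patch.
public class HlogGoneCheckSketch {
  public static boolean reallyGone(Configuration conf, Path currentPath)
      throws IOException {
    FileSystem fs = FileSystem.get(conf);
    // If the log still exists, the IOException was probably transient (e.g. an
    // HDFS hiccup) and the source should keep retrying rather than skip data.
    // Caveat: a log moved elsewhere (e.g. archived) would still look "gone".
    return !fs.exists(currentPath);
  }
}
{code}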

> [replication] Data will lose if open a Hlog failed more than 
> maxRetriesMultiplier
> -
>
> Key: HBASE-6719
> URL: https://issues.apache.org/jira/browse/HBASE-6719
> Project: HBase
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.94.1
>Reporter: terry zhang
>Assignee: terry zhang
>Priority: Critical
> Fix For: 0.94.3
>
> Attachments: 6719.txt, hbase-6719.patch
>
>
> Please Take a look below code
> {code:title=ReplicationSource.java|borderStyle=solid}
> protected boolean openReader(int sleepMultiplier) {
> {
>   ...
>   catch (IOException ioe) {
>   LOG.warn(peerClusterZnode + " Got: ", ioe);
>   // TODO Need a better way to determinate if a file is really gone but
>   // TODO without scanning all logs dir
>   if (sleepMultiplier == this.maxRetriesMultiplier) {
> LOG.warn("Waited too long for this file, considering dumping");
> return !processEndOfFile(); // Open a file failed over 
> maxRetriesMultiplier(default 10)
>   }
> }
> return true;
>   ...
> }
>   protected boolean processEndOfFile() {
> if (this.queue.size() != 0) {// Skipped this Hlog . Data loss
>   this.currentPath = null;
>   this.position = 0;
>   return true;
> } else if (this.queueRecovered) {   // Terminate Failover Replication 
> source thread ,data loss
>   this.manager.closeRecoveredQueue(this);
>   LOG.info("Finished recovering the queue");
>   this.running = false;
>   return true;
> }
> return false;
>   }
> {code} 
> Some Time HDFS will meet some problem but actually Hlog file is OK , So after 
> HDFS back  ,Some data will lose and can not find them back in slave cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6500) hbck comlaining, Exception in thread "main" java.lang.NullPointerException

2012-09-12 Thread Jie Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454536#comment-13454536
 ] 

Jie Huang commented on HBASE-6500:
--

I guess so. Meanwhile, I am wondering if we need to add one more unit test case 
to verify this, since it keeps coming up. I can take that on. What do you think?

> hbck comlaining, Exception in thread "main" java.lang.NullPointerException 
> ---
>
> Key: HBASE-6500
> URL: https://issues.apache.org/jira/browse/HBASE-6500
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 0.94.0
> Environment: Hadoop 0.20.205.0
> Zookeeper: zookeeper-3.3.5.jar
> Hbase: hbase-0.94.0
>Reporter: liuli
>
> I met problem with starting Hbase:
> I have 5 machines (Ubuntu)
> 109.123.121.23 rsmm-master.example.com
> 109.123.121.24 rsmm-slave-1.example.com
> 109.123.121.25 rsmm-slave-2.example.com
> 109.123.121.26 rsmm-slave-3.example.com
> 109.123.121.27 rsmm-slave-4.example.com
> Hadoop 0.20.205.0
> Zookeeper: zookeeper-3.3.5.jar
> Hbase: hbase-0.94.0
> After starting HBase, running hbck
> hduser@rsmm-master:~/hbase/bin$ ./hbase hbck
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:zookeeper.version=3.3.5-1301095, built on 03/15/2012 19:48 GMT
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:host.name=rsmm-master.example.com
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:java.version=1.6.0_33
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:java.vendor=Sun Microsystems Inc.
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:java.home=/usr/java/jre1.6.0_33
> 28/12/12 17:13:29 INFO zookeeper.ZooKeeper: Client 
> environment:java.class.path=/home/hduser/hbase/conf:/usr/java/jre1.6.0_33/lib/tools.jar:/home/hduser/hbase:/home/hduser/hbase/hbase-0.94.0.jar:/home/hduser/hbase/hbase-0.94.0-tests.jar:/home/hduser/hbase/lib/activation-1.1.jar:/home/hduser/hbase/lib/asm-3.1.jar:/home/hduser/hbase/lib/avro-1.5.3.jar:/home/hduser/hbase/lib/avro-ipc-1.5.3.jar:/home/hduser/hbase/lib/commons-beanutils-1.7.0.jar:/home/hduser/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/hduser/hbase/lib/commons-cli-1.2.jar:/home/hduser/hbase/lib/commons-codec-1.4.jar:/home/hduser/hbase/lib/commons-collections-3.2.1.jar:/home/hduser/hbase/lib/commons-configuration-1.6.jar:/home/hduser/hbase/lib/commons-digester-1.8.jar:/home/hduser/hbase/lib/commons-el-1.0.jar:/home/hduser/hbase/lib/commons-httpclient-3.1.jar:/home/hduser/hbase/lib/commons-io-2.1.jar:/home/hduser/hbase/lib/commons-lang-2.5.jar:/home/hduser/hbase/lib/commons-logging-1.1.1.jar:/home/hduser/hbase/lib/commons-math-2.1.jar:/home/hduser/hbase/lib/commons-net-1.4.1.jar:/home/hduser/hbase/lib/core-3.1.1.jar:/home/hduser/hbase/lib/guava-r09.jar:/home/hduser/hbase/lib/hadoop-core-0.20.205.0.jar:/home/hduser/hbase/lib/high-scale-lib-1.1.1.jar:/home/hduser/hbase/lib/httpclient-4.1.2.jar:/home/hduser/hbase/lib/httpcore-4.1.3.jar:/home/hduser/hbase/lib/jackson-core-asl-1.5.5.jar:/home/hduser/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/hduser/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/hduser/hbase/lib/jackson-xc-1.5.5.jar:/home/hduser/hbase/lib/jamon-runtime-2.3.1.jar:/home/hduser/hbase/lib/jasper-compiler-5.5.23.jar:/home/hduser/hbase/lib/jasper-runtime-5.5.23.jar:/home/hduser/hbase/lib/jaxb-api-2.1.jar:/home/hduser/hbase/lib/jaxb-impl-2.1.12.jar:/home/hduser/hbase/lib/jersey-core-1.4.jar:/home/hduser/hbase/lib/jersey-json-1.4.jar:/home/hduser/hbase/lib/jersey-server-1.4.jar:/home/hduser/hbase/lib/jettison-1.1.jar:/home/hduser/hbase/lib/jetty-6.1.26.jar:/home/hduser/hbase/lib/jetty-util-6.1.26.jar:/home/hduser/hbase/lib/jruby-complete-1.6.5.jar:/home/hduser/hbase/lib/jsp-2.1-6.1.14.jar:/home/hduser/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/hduser/hbase/lib/libthrift-0.8.0.jar:/home/hduser/hbase/lib/log4j-1.2.16.jar:/home/hduser/hbase/lib/netty-3.2.4.Final.jar:/home/hduser/hbase/lib/protobuf-java-2.4.0a.jar:/home/hduser/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/hduser/hbase/lib/slf4j-api-1.5.8.jar:/home/hduser/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/hduser/hbase/lib/snappy-java-1.0.3.2.jar:/home/hduser/hbase/lib/stax-api-1.0.1.jar:/home/hduser/hbase/lib/velocity-1.7.jar:/home/hduser/hbase/lib/xmlenc-0.52.jar:/home/hduser/hbase/lib/zookeeper-3.3.5.jar:/home/hduser/hadoop-0.20.205.0/conf:/home/hduser/hadoop-0.20.205.0/libexec/../conf:/usr/java/jre1.6.0_33/lib/tools.jar:/home/hduser/hadoop-0.20.205.0/libexec/../share/hadoop:/home/hduser/hadoop-0.20.205.0/libexec/../share/hadoop/hadoop-core-0.20.205.0.jar:/home/hduser/hadoop-0.20.205.0/libexec/../share/hadoop/lib/asm-3.2.jar:/home/hduser/hadoop-0.20.205.0/libexec/../share/hadoop/lib/aspectjrt-1.6.5.jar:/home/hduser/hadoop-0.20

[jira] [Created] (HBASE-6770) Allow scanner setCaching to specify size instead of number of rows

2012-09-12 Thread Karthik Ranganathan (JIRA)
Karthik Ranganathan created HBASE-6770:
--

 Summary: Allow scanner setCaching to specify size instead of 
number of rows
 Key: HBASE-6770
 URL: https://issues.apache.org/jira/browse/HBASE-6770
 Project: HBase
  Issue Type: Bug
  Components: client, regionserver
Reporter: Karthik Ranganathan


Currently, we have the following api's to customize the behavior of scans:
setCaching() - how many rows to cache on client to speed up scans
setBatch() - max columns per row to return per row to prevent a very large 
response.

Ideally, we should be able to specify a memory buffer size because:
1. that would take care of both of these use cases.
2. it does not need any knowledge of the size of the rows or cells, as the 
final thing we are worried about is the available memory.
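
Editorial aside: a minimal sketch of the two existing knobs referred to above, with 
invented table and family names; neither knob is expressed in bytes, which is the gap this 
issue describes.

{code:title=ScanTuningSketch.java|borderStyle=solid}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

// Editorial sketch of the existing row/column-count knobs; neither is
// expressed in bytes, which is the gap this issue is about.
public class ScanTuningSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "t1");
    Scan scan = new Scan();
    scan.setCaching(100); // rows fetched per RPC and cached on the client
    scan.setBatch(50);    // max columns per Result; a wide row may be split
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result r : scanner) {
        // With setBatch() in play, consecutive Results can belong to the
        // same row, which is the partial-row caveat raised in the comments.
      }
    } finally {
      scanner.close();
      table.close();
    }
  }
}
{code}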

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6765) 'Take a snapshot' interface

2012-09-12 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-6765:
---

Attachment: hbase-6765-v0.patch

Attaching the patch; also posted to RB: https://reviews.apache.org/r/7072/

> 'Take a snapshot' interface
> ---
>
> Key: HBASE-6765
> URL: https://issues.apache.org/jira/browse/HBASE-6765
> Project: HBase
>  Issue Type: Bug
>  Components: client, master
>Affects Versions: 0.96.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 0.96.0
>
> Attachments: hbase-6765-v0.patch
>
>
> Add interfaces taking a snapshot. This is in hopes of cutting down on the 
> overhead involved in reviewing snapshots.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6178) LoadTest tool no longer packaged after the modularization

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454527#comment-13454527
 ] 

Lars Hofhansl commented on HBASE-6178:
--

Test failures must be unrelated.

> LoadTest tool no longer packaged after the modularization
> -
>
> Key: HBASE-6178
> URL: https://issues.apache.org/jira/browse/HBASE-6178
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Jesse Yates
> Attachments: hbase-6178-v0.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6178) LoadTest tool no longer packaged after the modularization

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454524#comment-13454524
 ] 

Hadoop QA commented on HBASE-6178:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12544899/hbase-6178-v0.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The patch appears to cause mvn compile goal to fail.

-1 findbugs.  The patch appears to cause Findbugs (version 1.3.9) to fail.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestFromClientSide
  org.apache.hadoop.hbase.replication.TestReplication

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2852//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2852//console

This message is automatically generated.

> LoadTest tool no longer packaged after the modularization
> -
>
> Key: HBASE-6178
> URL: https://issues.apache.org/jira/browse/HBASE-6178
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Jesse Yates
> Attachments: hbase-6178-v0.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6518) Bytes.toBytesBinary() incorrect trailing backslash escape

2012-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454507#comment-13454507
 ] 

Hudson commented on HBASE-6518:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #170 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/170/])
HBASE-6518 Bytes.toBytesBinary() incorrect trailing backslash escape 
(Revision 1384103)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBytes.java


> Bytes.toBytesBinary() incorrect trailing backslash escape
> -
>
> Key: HBASE-6518
> URL: https://issues.apache.org/jira/browse/HBASE-6518
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Reporter: Tudor Scurtu
>Assignee: Tudor Scurtu
>Priority: Trivial
>  Labels: patch
> Fix For: 0.96.0
>
> Attachments: HBASE-6518.patch
>
>
> Bytes.toBytesBinary() converts escaped strings to byte arrays. When 
> encountering a '\' character, it looks at the next one to see if it is an 
> 'x', without checking if it exists.
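
Editorial aside: a minimal sketch of the kind of bounds check the fix needs, with invented 
names; the real change lives in Bytes.toBytesBinary(), and hex validity is not checked in 
this sketch.

{code:title=ToBytesBinaryBoundsSketch.java|borderStyle=solid}
import java.io.ByteArrayOutputStream;

// Editorial sketch only; the real fix lives in Bytes.toBytesBinary().
public class ToBytesBinaryBoundsSketch {
  public static byte[] toBytesBinarySketch(String in) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    int i = 0;
    while (i < in.length()) {
      char ch = in.charAt(i);
      // Only treat '\' as an escape when a full "\xHH" sequence fits, so a
      // trailing backslash is copied through instead of indexing past the end.
      if (ch == '\\' && i + 3 < in.length() && in.charAt(i + 1) == 'x') {
        int hi = Character.digit(in.charAt(i + 2), 16);
        int lo = Character.digit(in.charAt(i + 3), 16);
        out.write((hi << 4) | lo);
        i += 4;
      } else {
        out.write(ch);
        i++;
      }
    }
    return out.toByteArray();
  }
}
{code}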

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6766) Remove the Thread Dump link on Info pages

2012-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454506#comment-13454506
 ] 

Hudson commented on HBASE-6766:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #170 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/170/])
HBASE-6766 Remove the Thread Dump link on Info pages (Revision 1384136)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon
* 
/hbase/trunk/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/RSStatusTmpl.jamon
* /hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
* 
/hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/tablesDetailed.jsp
* /hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/zk.jsp


> Remove the Thread Dump link on Info pages
> -
>
> Key: HBASE-6766
> URL: https://issues.apache.org/jira/browse/HBASE-6766
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: noob
> Fix For: 0.96.0
>
> Attachments: HBASE-6766-0.patch
>
>
> The Debug Dump page has the thread dump.  Fewer links on the page would make 
> things a little clearer for new users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5021) Enforce upper bound on timestamp

2012-09-12 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454503#comment-13454503
 ] 

Jean-Daniel Cryans commented on HBASE-5021:
---

Since this commit we don't report NoSuchColumnFamilyException on multi() calls; 
I opened HBASE-6769 for it.

> Enforce upper bound on timestamp
> 
>
> Key: HBASE-5021
> URL: https://issues.apache.org/jira/browse/HBASE-5021
> Project: HBase
>  Issue Type: Improvement
>Reporter: Nicolas Spiegelberg
>Assignee: Nicolas Spiegelberg
>Priority: Critical
> Fix For: 0.94.0
>
> Attachments: 5021-addendum.txt, 
> ASF.LICENSE.NOT.GRANTED--D849.1.patch, ASF.LICENSE.NOT.GRANTED--D849.2.patch, 
> ASF.LICENSE.NOT.GRANTED--D849.3.patch, HBASE-5021-trunk.patch
>
>
> We have been getting hit with performance problems on our time-series 
> database due to invalid timestamps being inserted by the timestamp.  We are 
> working on adding proper checks to app server, but production performance 
> could be severely impacted with significant recovery time if something slips 
> past.  Since timestamps are considered a fundamental part of the HBase schema 
> & multiple optimizations use timestamp information, we should allow the 
> option to sanity check the upper bound on the server-side in HBase.
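
Editorial aside: a minimal sketch of the server-side check described above. The property 
name is quoted from memory and should be verified against hbase-default.xml; the logic is 
simplified and is not the actual HRegion code.

{code:title=TimestampSlopSketch.java|borderStyle=solid}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Editorial sketch: cap how far in the future a client-supplied timestamp
// may be before the server rejects the write.
public class TimestampSlopSketch {
  public static boolean withinSlop(Configuration conf, long kvTimestamp) {
    // Property name recalled from memory; verify before relying on it.
    long slop = conf.getLong("hbase.hregion.keyvalue.timestamp.slop.millisecs",
        Long.MAX_VALUE);
    if (slop == Long.MAX_VALUE) {
      return true; // the default: no upper bound enforced
    }
    // Reject timestamps further in the future than the configured slop.
    return kvTimestamp - System.currentTimeMillis() <= slop;
  }

  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    System.out.println(withinSlop(conf, System.currentTimeMillis()));
  }
}
{code}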

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-6769) HRS.multi eats NoSuchColumnFamilyException since HBASE-5021

2012-09-12 Thread Jean-Daniel Cryans (JIRA)
Jean-Daniel Cryans created HBASE-6769:
-

 Summary: HRS.multi eats NoSuchColumnFamilyException since 
HBASE-5021
 Key: HBASE-6769
 URL: https://issues.apache.org/jira/browse/HBASE-6769
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.1, 0.94.0
Reporter: Jean-Daniel Cryans
Assignee: Elliott Clark
Priority: Critical
 Fix For: 0.96.0, 0.94.2


I think this is a pretty major usability regression, since HBASE-5021 this is 
what you get in the client when using a wrong family:

{noformat}
2012-09-11 09:45:29,634 WARN 
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
action: DoNotRetryIOException: 1 time, servers with issues: sfor3s44:10304, 
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1601)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1377)
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:772)
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:747)
{noformat}

Then you have to log on the server to understand what failed.

Since everything is now a multi call, even single puts in the shell fail like 
this.

This is present since 0.94.0

Assigning to Elliott because he asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6710) 0.92/0.94 compatibility issues due to HBASE-5206

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454502#comment-13454502
 ] 

Lars Hofhansl commented on HBASE-6710:
--

+1 for last patch on RB (v3). This is a great patch with thoughtful testing. 
Thanks for working this out Gregory.


> 0.92/0.94 compatibility issues due to HBASE-5206
> 
>
> Key: HBASE-6710
> URL: https://issues.apache.org/jira/browse/HBASE-6710
> Project: HBase
>  Issue Type: Bug
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Critical
> Fix For: 0.94.2
>
>
> HBASE-5206 introduces some compatibility issues between {0.94,0.94.1} and
> {0.92.0,0.92.1}.  The release notes of HBASE-5155 describes the issue 
> (HBASE-5206 is a backport of HBASE-5155).
> I think we can make 0.94.2 compatible with both {0.94.0,0.94.1} and 
> {0.92.0,0.92.1}, although one of those sets will require configuration 
> changes.
> The basic problem is that there is a znode for each table 
> "zookeeper.znode.tableEnableDisable" that is handled differently.
> On 0.92.0 and 0.92.1 the states for this table are:
> [ disabled, disabling, enabling ] or deleted if the table is enabled
> On 0.94.1 and 0.94.2 the states for this table are:
> [ disabled, disabling, enabling, enabled ]
> What saves us is that the location of this znode is configurable.  So the 
> basic idea is to have the 0.94.2 master write two different znodes, 
> "zookeeper.znode.tableEnableDisabled92" and 
> "zookeeper.znode.tableEnableDisabled94" where the 92 node is in 92 format, 
> the 94 node is in 94 format.  And internally, the master would only use the 
> 94 format in order to solve the original bug HBASE-5155 solves.
> We can of course make one of these the same default as exists now, so we 
> don't need to make config changes for one of 0.92 or 0.94 clients.  I argue 
> that 0.92 clients shouldn't have to make config changes for the same reason I 
> argued above.  But that is debatable.
> Then, I think the only question left is the question of how to bring along 
> the {0.94.0, 0.94.1} crew.  A {0.94.0, 0.94.1} client would work against a 
> 0.94.2 cluster by just configuring "zookeeper.znode.tableEnableDisable" in 
> the client to be whatever "zookeeper.znode.tableEnableDisabled94" is in the 
> cluster.  A 0.94.2 client would work against both a {0.94.0, 0.94.1} and 
> {0.92.0, 0.92.1} cluster if it had HBASE-6268 applied.  About rolling upgrade 
> from {0.94.0, 0.94.1} to 0.94.2 -- I'd have to think about that.  Do the 
> regionservers ever read the tableEnableDisabled znode?
> On the mailing list, Lars H suggested the following:
> "The only input I'd have is that format we'll use going forward will not have 
> a version attached to it.
> So maybe the 92 version would still be called 
> "zookeeper.znode.tableEnableDisable" and the new node could have a different 
> name "zookeeper.znode.tableEnableDisableNew" (or something)."

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6178) LoadTest tool no longer packaged after the modularization

2012-09-12 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454491#comment-13454491
 ] 

Jesse Yates commented on HBASE-6178:


@Lars- agree that it probably should just be in the regular jar, but let's do it 
in another issue :)

> LoadTest tool no longer packaged after the modularization
> -
>
> Key: HBASE-6178
> URL: https://issues.apache.org/jira/browse/HBASE-6178
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Jesse Yates
> Attachments: hbase-6178-v0.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6178) LoadTest tool no longer packaged after the modularization

2012-09-12 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454490#comment-13454490
 ] 

Elliott Clark commented on HBASE-6178:
--

I'm +1 then as long as we re-visit if a test util ever starts relying on a 
hadoop compat test-jar.

> LoadTest tool no longer packaged after the modularization
> -
>
> Key: HBASE-6178
> URL: https://issues.apache.org/jira/browse/HBASE-6178
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Jesse Yates
> Attachments: hbase-6178-v0.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6178) LoadTest tool no longer packaged after the modularization

2012-09-12 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454488#comment-13454488
 ] 

Jesse Yates commented on HBASE-6178:


@Elliott - I don't think it's worth adding them right now. We only release the 
server-tests jar since it has the mini-cluster, which is useful for people 
testing stuff on hbase who need a minicluster. *-compat and -it don't actually 
have anything useful in them. That will probably change for -it, but let's deal 
with that when we get there.

> LoadTest tool no longer packaged after the modularization
> -
>
> Key: HBASE-6178
> URL: https://issues.apache.org/jira/browse/HBASE-6178
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Jesse Yates
> Attachments: hbase-6178-v0.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6766) Remove the Thread Dump link on Info pages

2012-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454489#comment-13454489
 ] 

Hudson commented on HBASE-6766:
---

Integrated in HBase-TRUNK #3325 (See 
[https://builds.apache.org/job/HBase-TRUNK/3325/])
HBASE-6766 Remove the Thread Dump link on Info pages (Revision 1384136)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon
* 
/hbase/trunk/hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/RSStatusTmpl.jamon
* /hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
* 
/hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/tablesDetailed.jsp
* /hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/zk.jsp


> Remove the Thread Dump link on Info pages
> -
>
> Key: HBASE-6766
> URL: https://issues.apache.org/jira/browse/HBASE-6766
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: noob
> Fix For: 0.96.0
>
> Attachments: HBASE-6766-0.patch
>
>
> The Debug Dump page has the thread dump.  Fewer links on the page would make 
> things a little clearer for new users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6178) LoadTest tool no longer packaged after the modularization

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454487#comment-13454487
 ] 

Lars Hofhansl commented on HBASE-6178:
--

In fact, why is PE in the "test" jar anyway? It seems like something that should 
be included in the HBase jar proper.

> LoadTest tool no longer packaged after the modularization
> -
>
> Key: HBASE-6178
> URL: https://issues.apache.org/jira/browse/HBASE-6178
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Jesse Yates
> Attachments: hbase-6178-v0.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6178) LoadTest tool no longer packaged after the modularization

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454486#comment-13454486
 ] 

Lars Hofhansl commented on HBASE-6178:
--

Heh... We were just debating that. Currently there is nothing interesting in 
those.

> LoadTest tool no longer packaged after the modularization
> -
>
> Key: HBASE-6178
> URL: https://issues.apache.org/jira/browse/HBASE-6178
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Jesse Yates
> Attachments: hbase-6178-v0.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6178) LoadTest tool no longer packaged after the modularization

2012-09-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-6178:
--

Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

> LoadTest tool no longer packaged after the modularization
> -
>
> Key: HBASE-6178
> URL: https://issues.apache.org/jira/browse/HBASE-6178
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Jesse Yates
> Attachments: hbase-6178-v0.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6178) LoadTest tool no longer packaged after the modularization

2012-09-12 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454477#comment-13454477
 ] 

Elliott Clark commented on HBASE-6178:
--

Tested it locally and everything works well.
Should we include the hbase-hadoop-compat and hbase-hadoop{1|2}-compat 
test-jar's as well?

> LoadTest tool no longer packaged after the modularization
> -
>
> Key: HBASE-6178
> URL: https://issues.apache.org/jira/browse/HBASE-6178
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Jesse Yates
> Attachments: hbase-6178-v0.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6178) LoadTest tool no longer packaged after the modularization

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454476#comment-13454476
 ] 

Lars Hofhansl commented on HBASE-6178:
--

Tested compilation locally, verified the server-tests jar is included (which 
has PE in it).
+1

Will commit soon unless there are objections.

> LoadTest tool no longer packaged after the modularization
> -
>
> Key: HBASE-6178
> URL: https://issues.apache.org/jira/browse/HBASE-6178
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Jesse Yates
> Attachments: hbase-6178-v0.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6178) LoadTest tool no longer packaged after the modularization

2012-09-12 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-6178:
---

Attachment: hbase-6178-v0.patch

Attaching patch - works locally.

> LoadTest tool no longer packaged after the modularization
> -
>
> Key: HBASE-6178
> URL: https://issues.apache.org/jira/browse/HBASE-6178
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Jesse Yates
> Attachments: hbase-6178-v0.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6178) LoadTest tool no longer packaged after the modularization

2012-09-12 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454462#comment-13454462
 ] 

Jesse Yates commented on HBASE-6178:


Worked out the issue, and also got the fix along with some cleanup to the 
descriptors. Patch coming soon.

> LoadTest tool no longer packaged after the modularization
> -
>
> Key: HBASE-6178
> URL: https://issues.apache.org/jira/browse/HBASE-6178
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Jesse Yates
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HBASE-6178) LoadTest tool no longer packaged after the modularization

2012-09-12 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates reassigned HBASE-6178:
--

Assignee: Jesse Yates

> LoadTest tool no longer packaged after the modularization
> -
>
> Key: HBASE-6178
> URL: https://issues.apache.org/jira/browse/HBASE-6178
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Jesse Yates
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-6768) HBase Rest server crashes if client tries to retrieve data size > 5 MB

2012-09-12 Thread Mubarak Seyed (JIRA)
Mubarak Seyed created HBASE-6768:


 Summary: HBase Rest server crashes if client tries to retrieve 
data size > 5 MB
 Key: HBASE-6768
 URL: https://issues.apache.org/jira/browse/HBASE-6768
 Project: HBase
  Issue Type: Bug
  Components: rest
Affects Versions: 0.90.5
Reporter: Mubarak Seyed


I have a CF with one qualifier whose data size is > 5 MB. When I try to read the 
raw binary data as an octet-stream using curl, the REST server crashes and curl 
throws an exception:

{code}
 curl -v -H "Accept: application/octet-stream" 
http://abcdefgh-hbase003.test1.test.com:9090/table1/row_key1/cf:qualifer1 > 
/tmp/out

* About to connect() to abcdefgh-hbase003.test1.test.com port 9090
*   Trying xx.xx.xx.xxx... connected
* Connected to abcdefgh-hbase003.test1.test.com (xx.xxx.xx.xxx) port 9090
> GET /table1/row_key1/cf:qualifer1 HTTP/1.1
> User-Agent: curl/7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 
> OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
> Host: abcdefgh-hbase003.test1.test.com:9090
> Accept: application/octet-stream
> 
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
  0 00 00 0  0  0 --:--:--  0:00:02 --:--:-- 0< 
HTTP/1.1 200 OK
< Content-Length: 5129836
< X-Timestamp: 1347338813129
< Content-Type: application/octet-stream
  0 5009k0 162720 0   7460  0  0:11:27  0:00:02  0:11:25 
13872transfer closed with 1148524 bytes remaining to read
 77 5009k   77 3888k0 0  1765k  0  0:00:02  0:00:02 --:--:-- 3253k* 
Closing connection #0

curl: (18) transfer closed with 1148524 bytes remaining to read

{code}

I couldn't find an exception in the REST server log, and there is no core dump 
either. This issue is consistently reproducible. I also tried the HBase REST 
client (RemoteHTable) and could recreate the issue when the data size is > 10 MB 
(even with the MIME_PROTOBUF accept header).
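
For anyone trying to reproduce this from Java, a minimal sketch using the REST 
client that ships with HBase (assuming the org.apache.hadoop.hbase.rest.client 
classes; host, table and column names are taken from the curl example above):

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.RemoteHTable;
import org.apache.hadoop.hbase.util.Bytes;

public class RestLargeCellFetch {
  public static void main(String[] args) throws IOException {
    Cluster cluster = new Cluster();
    cluster.add("abcdefgh-hbase003.test1.test.com", 9090);
    Client client = new Client(cluster);
    RemoteHTable table = new RemoteHTable(client, "table1");
    try {
      Get get = new Get(Bytes.toBytes("row_key1"));
      get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("qualifer1"));
      Result result = table.get(get);
      byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("qualifer1"));
      System.out.println("fetched " + (value == null ? 0 : value.length) + " bytes");
    } finally {
      table.close();
    }
  }
}
{code}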


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6766) Remove the Thread Dump link on Info pages

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454403#comment-13454403
 ] 

Hadoop QA commented on HBASE-6766:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12544871/HBASE-6766-0.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 hadoop2.0.  The patch compiles against the hadoop 2.0 profile.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The patch appears to cause mvn compile goal to fail.

-1 findbugs.  The patch appears to cause Findbugs (version 1.3.9) to fail.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

 -1 core tests.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestFromClientSide
  
org.apache.hadoop.hbase.security.access.TestZKPermissionsWatcher

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2851//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2851//console

This message is automatically generated.

> Remove the Thread Dump link on Info pages
> -
>
> Key: HBASE-6766
> URL: https://issues.apache.org/jira/browse/HBASE-6766
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: noob
> Fix For: 0.96.0
>
> Attachments: HBASE-6766-0.patch
>
>
> The Debug Dump page has the thread dump.  Fewer links on the page would make 
> things a little clearer for new users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5448) Support for dynamic coprocessor endpoints with PB-based RPC

2012-09-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454382#comment-13454382
 ] 

stack commented on HBASE-5448:
--

[~ghelmling] Those tests commonly fail on trunk.  Doubt it's your patch.  I'd be 
+1 on commit, Gary.  Can we address stuff like removing all Writable references 
in new JIRAs?  (Could you address the Ted comments on commit?)

> Support for dynamic coprocessor endpoints with PB-based RPC
> ---
>
> Key: HBASE-5448
> URL: https://issues.apache.org/jira/browse/HBASE-5448
> Project: HBase
>  Issue Type: Sub-task
>  Components: ipc, master, migration, regionserver
>Reporter: Todd Lipcon
>Assignee: Gary Helmling
> Fix For: 0.96.0
>
> Attachments: HBASE-5448_2.patch, HBASE-5448.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6668) disable in shell may make confused to user

2012-09-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454378#comment-13454378
 ] 

stack commented on HBASE-6668:
--

I don't think the current behavior is that bad. It could be more pleasant, but 
I'd suggest there are better places to dig, where the yields could be much 
larger, if you would like to improve the 'user experience'.

> disable  in shell may make confused to user
> ---
>
> Key: HBASE-6668
> URL: https://issues.apache.org/jira/browse/HBASE-6668
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 0.94.1
>Reporter: Zhou wenjian
>Assignee: Zhou wenjian
>
> hbase(main):002:0> disable 'logTable'
> 0 row(s) in 2.0910 seconds
> hbase(main):003:0> disable 'logTable'
> 0 row(s) in 0.0260 seconds
> and the log shows the table being disabled when the first disable runs,
> but when I disable it again the client just returns as if it succeeded, and 
> I cannot find anything about it in the log.
> look into the admin.rb, find below
> 
> #--
> # Disables a table
> def disable(table_name)
>   tableExists(table_name)
>   return if disabled?(table_name)
>   @admin.disableTable(table_name)
> end
> that is confusing: the table was already disabled, but the call returns without saying so.
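
For comparison, a minimal Java sketch of what the shell's "return if disabled?" 
guard is hiding: calling the admin API directly on an already-disabled table 
reports the condition (a TableNotEnabledException on recent versions) instead of 
silently returning. The table name is taken from the example above:

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableNotEnabledException;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class DisableTwice {
  public static void main(String[] args) throws IOException {
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    try {
      admin.disableTable("logTable");
    } catch (TableNotEnabledException e) {
      // Raised when disableTable is called on a table that is not enabled.
      System.err.println("'logTable' is already disabled: " + e.getMessage());
    } finally {
      admin.close();
    }
  }
}
{code}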

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6766) Remove the Thread Dump link on Info pages

2012-09-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6766:
-

Release Note: Remove the  thread dump link from the top of the UI pages; 
use the dump link instead.  It includes a thread dump among other stats.

> Remove the Thread Dump link on Info pages
> -
>
> Key: HBASE-6766
> URL: https://issues.apache.org/jira/browse/HBASE-6766
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: noob
> Fix For: 0.96.0
>
> Attachments: HBASE-6766-0.patch
>
>
> The Debug Dump page has the thread dump.  Fewer links on the page would make 
> things a little clearer for new users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6766) Remove the Thread Dump link on Info pages

2012-09-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6766:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to the trunk.  Thanks for the clean up Elliott.

> Remove the Thread Dump link on Info pages
> -
>
> Key: HBASE-6766
> URL: https://issues.apache.org/jira/browse/HBASE-6766
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: noob
> Fix For: 0.96.0
>
> Attachments: HBASE-6766-0.patch
>
>
> The Debug Dump page has the thread dump.  Fewer links on the page would make 
> things a little clearer for new users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6518) Bytes.toBytesBinary() incorrect trailing backslash escape

2012-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454356#comment-13454356
 ] 

Hudson commented on HBASE-6518:
---

Integrated in HBase-TRUNK #3324 (See 
[https://builds.apache.org/job/HBase-TRUNK/3324/])
HBASE-6518 Bytes.toBytesBinary() incorrect trailing backslash escape 
(Revision 1384103)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
* 
/hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBytes.java


> Bytes.toBytesBinary() incorrect trailing backslash escape
> -
>
> Key: HBASE-6518
> URL: https://issues.apache.org/jira/browse/HBASE-6518
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Reporter: Tudor Scurtu
>Assignee: Tudor Scurtu
>Priority: Trivial
>  Labels: patch
> Fix For: 0.96.0
>
> Attachments: HBASE-6518.patch
>
>
> Bytes.toBytesBinary() converts escaped strings to byte arrays. When 
> encountering a '\' character, it looks at the next one to see if it is an 
> 'x', without checking if it exists.
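
For reference, a minimal sketch of the escape handling in question. The first 
call shows the normal "\x" round trip; the second is the reported edge case, 
where the string ends with a lone backslash and the pre-fix lookahead runs past 
the end of the string:

{code}
import org.apache.hadoop.hbase.util.Bytes;

public class ToBytesBinaryEdgeCase {
  public static void main(String[] args) {
    // Normal case: the "\x00" escape decodes to a single zero byte.
    byte[] decoded = Bytes.toBytesBinary("row\\x00key");
    System.out.println(Bytes.toStringBinary(decoded));  // prints row\x00key

    // Reported edge case: a trailing '\' leaves nothing for the parser to peek at.
    byte[] trailing = Bytes.toBytesBinary("trailing\\");
    System.out.println(trailing.length);
  }
}
{code}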

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6649) [0.92 UNIT TESTS] TestReplication.queueFailover occasionally fails [Part-1]

2012-09-12 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454347#comment-13454347
 ] 

Devaraj Das commented on HBASE-6649:


This log file belongs to a crashed RS, and yes, it seems like the last record 
wasn't completely written to the file before the RS crashed. That should be 
fine, i.e., no data loss should happen - in the queueFailover test, the client 
would have got exceptions from the flushCommits call, it would have retried the 
batch of 'put's, and the corresponding records would have ended up in another RS.

> [0.92 UNIT TESTS] TestReplication.queueFailover occasionally fails [Part-1]
> ---
>
> Key: HBASE-6649
> URL: https://issues.apache.org/jira/browse/HBASE-6649
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.96.0, 0.92.3, 0.94.2
>
> Attachments: 6649-0.92.patch, 6649-1.patch, 6649-2.txt, 
> 6649-trunk.patch, 6649-trunk.patch, 6649.txt, HBase-0.92 #495 test - 
> queueFailover [Jenkins].html, HBase-0.92 #502 test - queueFailover 
> [Jenkins].html
>
>
> Have seen it twice in the recent past: http://bit.ly/MPCykB & 
> http://bit.ly/O79Dq7 .. 
> Looking briefly at the logs hints at a pattern - in both the failed test 
> instances, there was an RS crash while the test was running.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6766) Remove the Thread Dump link on Info pages

2012-09-12 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-6766:
-

Attachment: HBASE-6766-0.patch

Remove the links from the ui.

> Remove the Thread Dump link on Info pages
> -
>
> Key: HBASE-6766
> URL: https://issues.apache.org/jira/browse/HBASE-6766
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>  Labels: noob
> Fix For: 0.96.0
>
> Attachments: HBASE-6766-0.patch
>
>
> The Debug Dump page has the thread dump.  Fewer links on the page would make 
> things a little clearer for new users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6766) Remove the Thread Dump link on Info pages

2012-09-12 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-6766:
-

Fix Version/s: 0.96.0
   Status: Patch Available  (was: Open)

> Remove the Thread Dump link on Info pages
> -
>
> Key: HBASE-6766
> URL: https://issues.apache.org/jira/browse/HBASE-6766
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>  Labels: noob
> Fix For: 0.96.0
>
> Attachments: HBASE-6766-0.patch
>
>
> The Debug Dump page has the thread dump.  Fewer links on the page would make 
> things a little clearer for new users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HBASE-6766) Remove the Thread Dump link on Info pages

2012-09-12 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark reassigned HBASE-6766:


Assignee: Elliott Clark

> Remove the Thread Dump link on Info pages
> -
>
> Key: HBASE-6766
> URL: https://issues.apache.org/jira/browse/HBASE-6766
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: noob
> Fix For: 0.96.0
>
> Attachments: HBASE-6766-0.patch
>
>
> The Debug Dump page has the thread dump.  Fewer links on the page would make 
> things a little clearer for new users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6518) Bytes.toBytesBinary() incorrect trailing backslash escape

2012-09-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6518:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk.  Thanks for patch Tudor and review Michael.

> Bytes.toBytesBinary() incorrect trailing backslash escape
> -
>
> Key: HBASE-6518
> URL: https://issues.apache.org/jira/browse/HBASE-6518
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Reporter: Tudor Scurtu
>Assignee: Tudor Scurtu
>Priority: Trivial
>  Labels: patch
> Fix For: 0.96.0
>
> Attachments: HBASE-6518.patch
>
>
> Bytes.toBytesBinary() converts escaped strings to byte arrays. When 
> encountering a '\' character, it looks at the next one to see if it is an 
> 'x', without checking if it exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-6527) Make custom filters plugin

2012-09-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-6527.
--

Resolution: Won't Fix

Let's just resolve this and start again when there are more specifics on what is wanted.

> Make custom filters plugin
> --
>
> Key: HBASE-6527
> URL: https://issues.apache.org/jira/browse/HBASE-6527
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> More and more custom Filters are created.
> We should provide plugin mechanism for these custom Filters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-6767) Manually calling split without a midpoint errors our

2012-09-12 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HBASE-6767.
--

Resolution: Cannot Reproduce

> Manually calling split without a midpoint errors our
> 
>
> Key: HBASE-6767
> URL: https://issues.apache.org/jira/browse/HBASE-6767
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.92.2
>Reporter: Elliott Clark
> Fix For: 0.92.3
>
>
> in shell I issued:
> {code}split 'TestTable,,1347475507959.8021113e9c87e1b2f7914ff5b1644cc4.'{code}
> that resulted in the following error:
> {code}2012-09-12 18:45:55,950 DEBUG 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: Region 
> TestTable,,1347475507959.8021113e9c87e1b2f7914ff5b1644cc4. not splittable 
> because midkey=null
> 2012-09-12 18:46:04,267 DEBUG 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: Region 
> TestTable,,1347475507959.8021113e9c87e1b2f7914ff5b1644cc4. not splittable 
> because midkey=null
> 2012-09-12 18:47:31,820 DEBUG 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: Region 
> TestTable,,1347475507959.8021113e9c87e1b2f7914ff5b1644cc4. not splittable 
> because midkey=null
> 2012-09-12 18:47:35,028 DEBUG 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: Region 
> TestTable,,1347475507959.8021113e9c87e1b2f7914ff5b1644cc4. not splittable 
> because midkey=null{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-6767) Manually calling split without a midpoint errors our

2012-09-12 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-6767:


 Summary: Manually calling split without a midpoint errors our
 Key: HBASE-6767
 URL: https://issues.apache.org/jira/browse/HBASE-6767
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.92.2
Reporter: Elliott Clark
 Fix For: 0.92.3


in shell I issued:
{code}split 'TestTable,,1347475507959.8021113e9c87e1b2f7914ff5b1644cc4.'{code}

that resulted in the following error:
{code}2012-09-12 18:45:55,950 DEBUG 
org.apache.hadoop.hbase.regionserver.CompactSplitThread: Region 
TestTable,,1347475507959.8021113e9c87e1b2f7914ff5b1644cc4. not splittable 
because midkey=null
2012-09-12 18:46:04,267 DEBUG 
org.apache.hadoop.hbase.regionserver.CompactSplitThread: Region 
TestTable,,1347475507959.8021113e9c87e1b2f7914ff5b1644cc4. not splittable 
because midkey=null
2012-09-12 18:47:31,820 DEBUG 
org.apache.hadoop.hbase.regionserver.CompactSplitThread: Region 
TestTable,,1347475507959.8021113e9c87e1b2f7914ff5b1644cc4. not splittable 
because midkey=null
2012-09-12 18:47:35,028 DEBUG 
org.apache.hadoop.hbase.regionserver.CompactSplitThread: Region 
TestTable,,1347475507959.8021113e9c87e1b2f7914ff5b1644cc4. not splittable 
because midkey=null{code}
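
For reference, explicitly supplying a split point sidesteps the midkey lookup; a 
minimal sketch, assuming the two-argument HBaseAdmin#split overload that takes a 
split point (the table name and split key below are hypothetical):

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class SplitWithExplicitPoint {
  public static void main(String[] args) throws IOException, InterruptedException {
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    try {
      // Without an explicit split point the server falls back to the store
      // midkey, which is null here, and the request is quietly dropped.
      admin.split("TestTable", "0000500000");
    } finally {
      admin.close();
    }
  }
}
{code}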

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-6766) Remove the Thread Dump link on Info pages

2012-09-12 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-6766:


 Summary: Remove the Thread Dump link on Info pages
 Key: HBASE-6766
 URL: https://issues.apache.org/jira/browse/HBASE-6766
 Project: HBase
  Issue Type: Improvement
Reporter: Elliott Clark


The Debug Dump page has the thread dump.  Fewer links on the page would make 
things a little clearer for new users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6710) 0.92/0.94 compatibility issues due to HBASE-5206

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454208#comment-13454208
 ] 

Lars Hofhansl commented on HBASE-6710:
--

Some more comments on RB... Let's get this in, so that I can spin 0.94.2RC0 :)

> 0.92/0.94 compatibility issues due to HBASE-5206
> 
>
> Key: HBASE-6710
> URL: https://issues.apache.org/jira/browse/HBASE-6710
> Project: HBase
>  Issue Type: Bug
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Critical
> Fix For: 0.94.2
>
>
> HBASE-5206 introduces some compatibility issues between {0.94,0.94.1} and
> {0.92.0,0.92.1}.  The release notes of HBASE-5155 describe the issue 
> (HBASE-5206 is a backport of HBASE-5155).
> I think we can make 0.94.2 compatible with both {0.94.0,0.94.1} and 
> {0.92.0,0.92.1}, although one of those sets will require configuration 
> changes.
> The basic problem is that there is a znode for each table 
> "zookeeper.znode.tableEnableDisable" that is handled differently.
> On 0.92.0 and 0.92.1 the states for this table are:
> [ disabled, disabling, enabling ] or deleted if the table is enabled
> On 0.94.1 and 0.94.2 the states for this table are:
> [ disabled, disabling, enabling, enabled ]
> What saves us is that the location of this znode is configurable.  So the 
> basic idea is to have the 0.94.2 master write two different znodes, 
> "zookeeper.znode.tableEnableDisabled92" and 
> "zookeeper.znode.tableEnableDisabled94" where the 92 node is in 92 format, 
> the 94 node is in 94 format.  And internally, the master would only use the 
> 94 format in order to solve the original bug HBASE-5155 solves.
> We can of course make one of these the same default as exists now, so we 
> don't need to make config changes for one of 0.92 or 0.94 clients.  I argue 
> that 0.92 clients shouldn't have to make config changes for the same reason I 
> argued above.  But that is debatable.
> Then, I think the only question left is the question of how to bring along 
> the {0.94.0, 0.94.1} crew.  A {0.94.0, 0.94.1} client would work against a 
> 0.94.2 cluster by just configuring "zookeeper.znode.tableEnableDisable" in 
> the client to be whatever "zookeeper.znode.tableEnableDisabled94" is in the 
> cluster.  A 0.94.2 client would work against both a {0.94.0, 0.94.1} and 
> {0.92.0, 0.92.1} cluster if it had HBASE-6268 applied.  About rolling upgrade 
> from {0.94.0, 0.94.1} to 0.94.2 -- I'd have to think about that.  Do the 
> regionservers ever read the tableEnableDisabled znode?
> On the mailing list, Lars H suggested the following:
> "The only input I'd have is that format we'll use going forward will not have 
> a version attached to it.
> So maybe the 92 version would still be called 
> "zookeeper.znode.tableEnableDisable" and the new node could have a different 
> name "zookeeper.znode.tableEnableDisableNew" (or something)."

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HBASE-6439) Ignore .archive directory as a table

2012-09-12 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates reassigned HBASE-6439:
--

Assignee: Sameer Vaishampayan

> Ignore .archive directory as a table
> 
>
> Key: HBASE-6439
> URL: https://issues.apache.org/jira/browse/HBASE-6439
> Project: HBase
>  Issue Type: Bug
>  Components: io, regionserver
>Affects Versions: 0.96.0
>Reporter: Jesse Yates
>Assignee: Sameer Vaishampayan
>  Labels: newbie
>
> From a recent test run:
> {quote}
> 2012-07-22 02:27:30,699 WARN  [IPC Server handler 0 on 47087] 
> util.FSTableDescriptors(168): The following folder is in HBase's root 
> directory and doesn't contain a table descriptor, do consider deleting it: 
> .archive
> {quote}
> With the addition of HBASE-5547, table-level folders are no longer all table 
> folders. FSTableDescriptors then needs to have a 'gold-list' that we can 
> update with directories that aren't tables so we don't have this kind of 
> thing showing up in the logs.
> Currently, we have the following block:
> {quote}
> invocations++;
> if (HTableDescriptor.ROOT_TABLEDESC.getNameAsString().equals(tablename)) {
>   cachehits++;
>   return HTableDescriptor.ROOT_TABLEDESC;
> }
> if (HTableDescriptor.META_TABLEDESC.getNameAsString().equals(tablename)) {
>   cachehits++;
>   return HTableDescriptor.META_TABLEDESC;
> }
> {quote}
> to handle special cases, but that's a bit clunky and not clean in terms of 
> table-level directories that need to be ignored.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6649) [0.92 UNIT TESTS] TestReplication.queueFailover occasionally fails [Part-1]

2012-09-12 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454189#comment-13454189
 ] 

Jean-Daniel Cryans commented on HBASE-6649:
---

What I meant is that the reader gets this 10 times:

{noformat}
java.io.EOFException: 
hdfs://localhost:60044/user/hudson/hbase/.oldlogs/vesta.apache.org%2C57779%2C1345217521341.1345217601487,
 entryStart=40929, pos=40960, end=40960, edit=3
{noformat}

So if I'm reading this correctly it's able to read the file and got 3 edits but 
gets an EOF. Is something half written? Then it gives up on the file:

{noformat}
2012-08-17 15:33:50,099 INFO  
[ReplicationExecutor-0.replicationSource,2-vesta.apache.org,57779,1345217521341]
 regionserver.ReplicationSourceManager(352): Done with the recovered queue 
2-vesta.apache.org,57779,1345217521341
{noformat}

And there's data loss.

> [0.92 UNIT TESTS] TestReplication.queueFailover occasionally fails [Part-1]
> ---
>
> Key: HBASE-6649
> URL: https://issues.apache.org/jira/browse/HBASE-6649
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.96.0, 0.92.3, 0.94.2
>
> Attachments: 6649-0.92.patch, 6649-1.patch, 6649-2.txt, 
> 6649-trunk.patch, 6649-trunk.patch, 6649.txt, HBase-0.92 #495 test - 
> queueFailover [Jenkins].html, HBase-0.92 #502 test - queueFailover 
> [Jenkins].html
>
>
> Have seen it twice in the recent past: http://bit.ly/MPCykB & 
> http://bit.ly/O79Dq7 .. 
> Looking briefly at the logs hints at a pattern - in both the failed test 
> instances, there was an RS crash while the test was running.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6649) [0.92 UNIT TESTS] TestReplication.queueFailover occasionally fails [Part-1]

2012-09-12 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454181#comment-13454181
 ] 

Devaraj Das commented on HBASE-6649:


bq. Oh I see what you mean. Very good find! I wonder what's that gibberish at 
the end of the file.

Thanks! Are you referring to the log file? I see the following at the end (no 
gibberish):

{noformat}
2012-08-17 15:35:01,161 DEBUG 
[RegionServer:1;vesta.apache.org,40480,1345217521368-EventThread.replicationSource,2]
 regionserver.ReplicationSource(474): Opening log for replication 
vesta.apache.org%2C40480%2C1345217521368.1345217648386 at 258
2012-08-17 15:35:01,164 DEBUG 
[RegionServer:1;vesta.apache.org,40480,1345217521368-EventThread.replicationSource,2]
 regionserver.ReplicationSource(429): currentNbOperations:13022 and 
seenEntries:0 and size: 0
2012-08-17 15:35:01,164 DEBUG 
[RegionServer:1;vesta.apache.org,40480,1345217521368-EventThread.replicationSource,2]
 regionserver.ReplicationSource(549): Nothing to replicate, sleeping 100 times 
10
{noformat}

> [0.92 UNIT TESTS] TestReplication.queueFailover occasionally fails [Part-1]
> ---
>
> Key: HBASE-6649
> URL: https://issues.apache.org/jira/browse/HBASE-6649
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.96.0, 0.92.3, 0.94.2
>
> Attachments: 6649-0.92.patch, 6649-1.patch, 6649-2.txt, 
> 6649-trunk.patch, 6649-trunk.patch, 6649.txt, HBase-0.92 #495 test - 
> queueFailover [Jenkins].html, HBase-0.92 #502 test - queueFailover 
> [Jenkins].html
>
>
> Have seen it twice in the recent past: http://bit.ly/MPCykB & 
> http://bit.ly/O79Dq7 .. 
> Looking briefly at the logs hints at a pattern - in both the failed test 
> instances, there was an RS crash while the test was running.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6055) Snapshots in HBase 0.96

2012-09-12 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454173#comment-13454173
 ] 

Jesse Yates commented on HBASE-6055:


@Jon yeah, that's the pain of RB, but you can just do 'git checkout HEAD~1; git 
diff trunk' to generate that parent patch - not too much overhead.

> Snapshots in HBase 0.96
> ---
>
> Key: HBASE-6055
> URL: https://issues.apache.org/jira/browse/HBASE-6055
> Project: HBase
>  Issue Type: New Feature
>  Components: client, master, regionserver, zookeeper
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 0.96.0
>
> Attachments: Snapshots in HBase.docx
>
>
> Continuation of HBASE-50 for the current trunk. Since the implementation has 
> drastically changed, opening as a new ticket.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6055) Snapshots in HBase 0.96

2012-09-12 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454169#comment-13454169
 ] 

Jonathan Hsieh commented on HBASE-6055:
---

+1 Sounds good to me.  We might have to do the incremental reviews (generating a 
parent patch and then the main patch) to send up to review board, but this 
should work.

> Snapshots in HBase 0.96
> ---
>
> Key: HBASE-6055
> URL: https://issues.apache.org/jira/browse/HBASE-6055
> Project: HBase
>  Issue Type: New Feature
>  Components: client, master, regionserver, zookeeper
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 0.96.0
>
> Attachments: Snapshots in HBase.docx
>
>
> Continuation of HBASE-50 for the current trunk. Since the implementation has 
> drastically changed, opening as a new ticket.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6055) Snapshots in HBase 0.96

2012-09-12 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454168#comment-13454168
 ] 

Jesse Yates commented on HBASE-6055:


Just set up my github repo for a snapshots development branch: 
https://github.com/jyates/hbase/tree/snapshots

We can make it such that any of the future patches for snapshots (HBASE-6765, 
HBASE-6353, HBASE-6571, HBASE-6573) all go into this branch and then we just 
merge the branch into svn with 3 +1's from committers when its ready (as per 
the discussion here: 
http://search-hadoop.com/m/asM982C5FkS1/hbase+branch+git&subj=Thoughts+about+large+feature+dev+branches).

All reviews still go through reviewboard and will receive the same scrutiny, 
but get committed over on github until we want to roll it into trunk.

Thoughts? 

> Snapshots in HBase 0.96
> ---
>
> Key: HBASE-6055
> URL: https://issues.apache.org/jira/browse/HBASE-6055
> Project: HBase
>  Issue Type: New Feature
>  Components: client, master, regionserver, zookeeper
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 0.96.0
>
> Attachments: Snapshots in HBase.docx
>
>
> Continuation of HBASE-50 for the current trunk. Since the implementation has 
> drastically changed, opening as a new ticket.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6649) [0.92 UNIT TESTS] TestReplication.queueFailover occasionally fails [Part-1]

2012-09-12 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454157#comment-13454157
 ] 

Jean-Daniel Cryans commented on HBASE-6649:
---

Oh I see what you mean. Very good find! I wonder what's that gibberish at the 
end of the file.

> [0.92 UNIT TESTS] TestReplication.queueFailover occasionally fails [Part-1]
> ---
>
> Key: HBASE-6649
> URL: https://issues.apache.org/jira/browse/HBASE-6649
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.96.0, 0.92.3, 0.94.2
>
> Attachments: 6649-0.92.patch, 6649-1.patch, 6649-2.txt, 
> 6649-trunk.patch, 6649-trunk.patch, 6649.txt, HBase-0.92 #495 test - 
> queueFailover [Jenkins].html, HBase-0.92 #502 test - queueFailover 
> [Jenkins].html
>
>
> Have seen it twice in the recent past: http://bit.ly/MPCykB & 
> http://bit.ly/O79Dq7 .. 
> Looking briefly at the logs hints at a pattern - in both the failed test 
> instances, there was an RS crash while the test was running.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6528) Raise the wait time for TestSplitLogWorker#testAcquireTaskAtStartup to reduce the failure probability

2012-09-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454147#comment-13454147
 ] 

Lars Hofhansl commented on HBASE-6528:
--

Patch looks fine. Just checked the test results, and this test has not failed 
in the past 20 odd runs.
@ShiXing: Did you see this failing somewhere specifically?


> Raise the wait time for TestSplitLogWorker#testAcquireTaskAtStartup to reduce 
> the failure probability
> -
>
> Key: HBASE-6528
> URL: https://issues.apache.org/jira/browse/HBASE-6528
> Project: HBase
>  Issue Type: Bug
>Reporter: ShiXing
>Assignee: ShiXing
> Attachments: HBASE-6528-trunk-v1.patch
>
>
> In all of TestSplitLogWorker, only testAcquireTaskAtStartup waits 100ms; the 
> other test cases wait 1000ms. The 100ms is too short and sometimes causes 
> testAcquireTaskAtStartup to fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6438) RegionAlreadyInTransitionException needs to give more info to avoid assignment inconsistencies

2012-09-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6438:
-

Fix Version/s: (was: 0.94.2)
   0.94.3

Let's put this in the next point release (which will be soon).

> RegionAlreadyInTransitionException needs to give more info to avoid 
> assignment inconsistencies
> --
>
> Key: HBASE-6438
> URL: https://issues.apache.org/jira/browse/HBASE-6438
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: rajeshbabu
> Fix For: 0.96.0, 0.92.3, 0.94.3
>
> Attachments: HBASE-6438_2.patch, HBASE-6438_94.patch, 
> HBASE-6438_trunk.patch
>
>
> Looking at some of the recent issues in region assignment, 
> RegionAlreadyInTransitionException is one reason after which the region 
> assignment may or may not happen (in the sense that we need to wait for the TM 
> to assign).
> In HBASE-6317 we got one problem due to RegionAlreadyInTransitionException on 
> master restart.
> Consider the following case, due to some reason like master restart or 
> external assign call, we try to assign a region that is already getting 
> opened in a RS.
> Now the next call to assign has already changed the state of the znode, so 
> the current assign that is going on in the RS is affected and it fails.  The 
> second assignment that started also fails with a RAITE exception.  In the end, 
> neither assignment carries on.  The idea is to find out whether any such RAITE 
> exception can be retried or not.
> Here again we have following cases like where
> -> The znode is yet to transitioned from OFFLINE to OPENING in RS
> -> RS may be in the step of openRegion.
> -> RS may be trying to transition OPENING to OPENED.
> -> RS is yet to add to online regions in the RS side.
> Here in openRegion() and updateMeta() any failures we are moving the znode to 
> FAILED_OPEN.  So in these cases getting an RAITE should be ok.  But in other 
> cases the assignment is stopped.
> The idea is to just add the current state of the region assignment in the RIT 
> map in the RS side and using that info we can determine whether the 
> assignment can be retried or not on getting an RAITE.
> Considering the current work going on in the AM, please do share whether this 
> is needed at least in the 0.92/0.94 versions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-6668) disable in shell may make confused to user

2012-09-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-6668.
--

Resolution: Won't Fix

No opinions from the other committers, so I am marking as "Won't fix".
Please reopen if you disagree.

> disable  in shell may make confused to user
> ---
>
> Key: HBASE-6668
> URL: https://issues.apache.org/jira/browse/HBASE-6668
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 0.94.1
>Reporter: Zhou wenjian
>Assignee: Zhou wenjian
> Fix For: 0.94.3
>
>
> hbase(main):002:0> disable 'logTable'
> 0 row(s) in 2.0910 seconds
> hbase(main):003:0> disable 'logTable'
> 0 row(s) in 0.0260 seconds
> and the log shows the table being disabled when the first disable runs,
> but when I disable it again the client just returns as if it succeeded, and 
> I cannot find anything about it in the log.
> look into the admin.rb, find below
> 
> #--
> # Disables a table
> def disable(table_name)
>   tableExists(table_name)
>   return if disabled?(table_name)
>   @admin.disableTable(table_name)
> end
> that is confusing: the table was already disabled, but the call returns without saying so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6668) disable in shell may make confused to user

2012-09-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-6668:
-

Fix Version/s: (was: 0.94.3)

> disable  in shell may make confused to user
> ---
>
> Key: HBASE-6668
> URL: https://issues.apache.org/jira/browse/HBASE-6668
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 0.94.1
>Reporter: Zhou wenjian
>Assignee: Zhou wenjian
>
> hbase(main):002:0> disable 'logTable'
> 0 row(s) in 2.0910 seconds
> hbase(main):003:0> disable 'logTable'
> 0 row(s) in 0.0260 seconds
> and the log shows the table being disabled when the first disable runs,
> but when I disable it again the client just returns as if it succeeded, and 
> I cannot find anything about it in the log.
> look into the admin.rb, find below
> 
> #--
> # Disables a table
> def disable(table_name)
>   tableExists(table_name)
>   return if disabled?(table_name)
>   @admin.disableTable(table_name)
> end
> that is confusing: the table was already disabled, but the call returns without saying so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6611) Forcing region state offline cause double assignment

2012-09-12 Thread Jacques (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454133#comment-13454133
 ] 

Jacques commented on HBASE-6611:


Reminders from the PowWow yesterday... 

JD requested that you verify that force close continues to function despite 
changes.

JD & Andrew both requested that you run some performance tests to ensure that 
region assignment doesn't take substantially longer than 0.94.  Something along 
the lines of bulk assignment of 10,000 regions and also checking to ensure that 
region failover isn't substantially longer.

> Forcing region state offline cause double assignment
> 
>
> Key: HBASE-6611
> URL: https://issues.apache.org/jira/browse/HBASE-6611
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.96.0
>
>
> In assigning a region, the assignment manager forces the region state offline if 
> it is not already. This could cause double assignment: for example, if the region 
> is already assigned and in the Open state, you should not just change its state 
> to Offline and assign it again.
> I think this could be the root cause of all double assignments IF the region 
> state is reliable.
> After this loophole is closed, TestHBaseFsck should come up with a different way 
> to create assignment inconsistencies, for example by calling a region server 
> to open a region directly. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6533) [replication] replication will block if WAL compress set differently in master and slave configuration

2012-09-12 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454128#comment-13454128
 ] 

Jean-Daniel Cryans commented on HBASE-6533:
---

Currently those two features are just incompatible; you use one or the other.

Maybe we should add a check in HBaseConfiguration to make sure both aren't 
enabled; there is no need to throw the exception that deep in the code (and you'd 
have to do it inside WAL compression for replication too).

In any case the real fix is described in HBASE-5778; the rest is just hacks.
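
A minimal sketch of that kind of check is below; the class and method names are illustrative, and "hbase.replication" is assumed here to be the cluster-wide replication switch.

{code}
// Hedged sketch of the configuration sanity check suggested above; class and
// method names are illustrative, and "hbase.replication" is assumed to be the
// cluster-wide replication switch.
import org.apache.hadoop.conf.Configuration;

public final class WalCompressionReplicationCheck {
  private WalCompressionReplicationCheck() {}

  public static void check(Configuration conf) {
    boolean walCompression =
        conf.getBoolean("hbase.regionserver.wal.enablecompression", false);
    boolean replication = conf.getBoolean("hbase.replication", false);
    if (walCompression && replication) {
      throw new IllegalArgumentException(
          "WAL compression and replication cannot be enabled together; "
              + "see HBASE-5778 for the real fix");
    }
  }
}
{code}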

> [replication] replication will block if WAL compress set differently in 
> master and slave configuration
> --
>
> Key: HBASE-6533
> URL: https://issues.apache.org/jira/browse/HBASE-6533
> Project: HBase
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.94.0
>Reporter: terry zhang
>Assignee: terry zhang
>Priority: Critical
> Fix For: 0.94.3
>
> Attachments: hbase-6533.patch
>
>
> As we know, in HBase 0.94.0 we have the configuration below:
> <property>
>   <name>hbase.regionserver.wal.enablecompression</name>
>   <value>true</value>
> </property>
> If we enable it in the master cluster and disable it in the slave cluster, then 
> replication will not work. It will throw unwrapRemoteException again and 
> again in the master cluster.
> 2012-08-09 12:49:55,892 WARN 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Can't 
> replicate because of an error
>  on the remote cluster: 
> java.io.IOException: IPC server unable to read call parameters: Error in 
> readFields
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:635)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:365)
> Caused by: org.apache.hadoop.ipc.RemoteException: IPC server unable to read 
> call parameters: Error in readFields
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:921)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:151)
> at $Proxy13.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:616)
> ... 1 more 
> This is because the slave cluster cannot parse the HLog entry.
> 2012-08-09 14:46:05,891 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
> read call parameters for client 10.232.98.89
> java.io.IOException: Error in readFields
> at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:685)
> at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:586)
> at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:635)
> at 
> org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1292)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1207)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:735)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:524)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:499)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.EOFException
> at java.io.DataInputStream.readFully(DataInputStream.java:180)
> at org.apache.hadoop.hbase.KeyValue.readFields(KeyValue.java:2254)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALEdit.readFields(WALEdit.java:146)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLog$Entry.readFields(HLog.java:1767)
> at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObj

[jira] [Commented] (HBASE-5448) Support for dynamic coprocessor endpoints with PB-based RPC

2012-09-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454099#comment-13454099
 ] 

Ted Yu commented on HBASE-5448:
---

I left a few minor comments on:
https://reviews.apache.org/r/7000/

> Support for dynamic coprocessor endpoints with PB-based RPC
> ---
>
> Key: HBASE-5448
> URL: https://issues.apache.org/jira/browse/HBASE-5448
> Project: HBase
>  Issue Type: Sub-task
>  Components: ipc, master, migration, regionserver
>Reporter: Todd Lipcon
>Assignee: Gary Helmling
> Fix For: 0.96.0
>
> Attachments: HBASE-5448_2.patch, HBASE-5448.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6381) AssignmentManager should use the same logic for clean startup and failover

2012-09-12 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-6381:
---

Status: Open  (was: Patch Available)

Uploaded the new rebased patch + minor changes to RB: 
https://reviews.apache.org/r/6535/

> AssignmentManager should use the same logic for clean startup and failover
> --
>
> Key: HBASE-6381
> URL: https://issues.apache.org/jira/browse/HBASE-6381
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-6381-notes.pdf, hbase-6381.pdf, trunk-6381_v5.patch
>
>
> Currently AssignmentManager handles clean startup and failover very 
> differently. Different logic is mingled together, so it is hard to tell which 
> is for which. We should clean it up and share the same logic so that 
> AssignmentManager handles both cases the same way. This way, the code will be 
> much easier to understand and maintain.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6527) Make custom filters plugin

2012-09-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453992#comment-13453992
 ] 

Ted Yu commented on HBASE-6527:
---

@Jon:
Can you add some details to this JIRA?

> Make custom filters plugin
> --
>
> Key: HBASE-6527
> URL: https://issues.apache.org/jira/browse/HBASE-6527
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> More and more custom Filters are being created.
> We should provide a plugin mechanism for these custom Filters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6528) Raise the wait time for TestSplitLogWorker#testAcquireTaskAtStartup to reduce the failure probability

2012-09-12 Thread Michael Drzal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453981#comment-13453981
 ] 

Michael Drzal commented on HBASE-6528:
--

[~lhofhansl] I don't know enough about the test infrastructure to comment on 
this.  Do you have any thoughts or know who would be the best person to look at 
this?

> Raise the wait time for TestSplitLogWorker#testAcquireTaskAtStartup to reduce 
> the failure probability
> -
>
> Key: HBASE-6528
> URL: https://issues.apache.org/jira/browse/HBASE-6528
> Project: HBase
>  Issue Type: Bug
>Reporter: ShiXing
>Assignee: ShiXing
> Attachments: HBASE-6528-trunk-v1.patch
>
>
> Of all the test cases in TestSplitLogWorker, only testAcquireTaskAtStartup 
> waits 100ms; the other test cases wait 1000ms. The 100ms wait is short and 
> sometimes causes testAcquireTaskAtStartup to fail.
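
For context, the wait these tests perform is essentially a polling loop with a timeout, roughly as sketched below; the helper name and counter type are illustrative, not the actual TestSplitLogWorker code, and the patch simply passes 1000 rather than 100.

{code}
// Illustrative sketch only, not the actual TestSplitLogWorker helper: poll a
// counter until it reaches the expected value or the timeout expires. The
// patch's point is to call this with 1000ms instead of 100ms.
import java.util.concurrent.atomic.AtomicLong;

public class WaitHelper {
  public static void waitForCounterAtLeast(AtomicLong counter, long expected,
      long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (counter.get() >= expected) {
        return;
      }
      Thread.sleep(10);
    }
    throw new AssertionError(
        "Timed out waiting for counter to reach " + expected);
  }
}
{code}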

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6527) Make custom filters plugin

2012-09-12 Thread Michael Drzal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453980#comment-13453980
 ] 

Michael Drzal commented on HBASE-6527:
--

[~zhi...@ebaysf.com] Can you or Jonathan add more detail around this or should 
we just close it out like [~saint@gmail.com] requests?

> Make custom filters plugin
> --
>
> Key: HBASE-6527
> URL: https://issues.apache.org/jira/browse/HBASE-6527
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> More and more custom Filters are being created.
> We should provide a plugin mechanism for these custom Filters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6518) Bytes.toBytesBinary() incorrect trailing backslash escape

2012-09-12 Thread Michael Drzal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453977#comment-13453977
 ] 

Michael Drzal commented on HBASE-6518:
--

+1 on the patch

> Bytes.toBytesBinary() incorrect trailing backslash escape
> -
>
> Key: HBASE-6518
> URL: https://issues.apache.org/jira/browse/HBASE-6518
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Reporter: Tudor Scurtu
>Assignee: Tudor Scurtu
>Priority: Trivial
>  Labels: patch
> Attachments: HBASE-6518.patch
>
>
> Bytes.toBytesBinary() converts escaped strings to byte arrays. When 
> encountering a '\' character, it looks at the next one to see if it is an 
> 'x', without checking if it exists.
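
To make the failure mode concrete, a bounds-checked version of the escape handling could look like the sketch below; it illustrates the fix idea and is not the actual Bytes.toBytesBinary() code.

{code}
// Illustrative sketch, not the actual Bytes.toBytesBinary() implementation: a
// trailing '\' (or an incomplete "\x" escape) is copied through instead of
// indexing past the end of the string. Assumes well-formed hex digits follow
// a complete "\x" escape.
import java.io.ByteArrayOutputStream;

public class BytesBinarySketch {
  public static byte[] toBytesBinary(String in) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (int i = 0; i < in.length(); i++) {
      char ch = in.charAt(i);
      if (ch == '\\' && i + 3 < in.length() && in.charAt(i + 1) == 'x') {
        // Two characters are guaranteed to exist after the 'x' here.
        out.write(Integer.parseInt(in.substring(i + 2, i + 4), 16));
        i += 3;
      } else {
        out.write((byte) ch);
      }
    }
    return out.toByteArray();
  }
}
{code}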

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6517) Print thread dump when a test times out

2012-09-12 Thread Michael Drzal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453976#comment-13453976
 ] 

Michael Drzal commented on HBASE-6517:
--

Added a link to the Hadoop JIRA so we can see when that gets committed.

> Print thread dump when a test times out
> ---
>
> Key: HBASE-6517
> URL: https://issues.apache.org/jira/browse/HBASE-6517
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.96.0
>Reporter: Andrew Purtell
>Priority: Minor
>  Labels: noob
>
> Hadoop common is adding a JUnit run listener which prints a full thread dump 
> to System.err when a test fails due to a timeout. See HDFS-3762.
> Suggest pulling in their {{TestTimedOutListener}} once it is committed.
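
A rough sketch of what such a listener could look like is below; it is not Hadoop's TestTimedOutListener from HDFS-3762, just an illustration built on the public JUnit 4 RunListener hook and Thread.getAllStackTraces().

{code}
// Rough illustration only, not Hadoop's TestTimedOutListener: dump all thread
// stacks to System.err when a test failure looks like a timeout.
import java.util.Map;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

public class ThreadDumpOnTimeoutListener extends RunListener {
  @Override
  public void testFailure(Failure failure) {
    Throwable t = failure.getException();
    boolean looksLikeTimeout =
        t instanceof java.util.concurrent.TimeoutException
            || (t != null && String.valueOf(t.getMessage()).contains("timed out"));
    if (!looksLikeTimeout) {
      return;
    }
    for (Map.Entry<Thread, StackTraceElement[]> entry
        : Thread.getAllStackTraces().entrySet()) {
      System.err.println("Thread: " + entry.getKey().getName());
      for (StackTraceElement frame : entry.getValue()) {
        System.err.println("    at " + frame);
      }
    }
  }
}
{code}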

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6504) Adding GC details prevents HBase from starting in non-distributed mode

2012-09-12 Thread Michael Drzal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453971#comment-13453971
 ] 

Michael Drzal commented on HBASE-6504:
--

I can pick this up in the next day or two.  This should be a quick fix.

> Adding GC details prevents HBase from starting in non-distributed mode
> --
>
> Key: HBASE-6504
> URL: https://issues.apache.org/jira/browse/HBASE-6504
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.0
>Reporter: Benoit Sigoure
>Assignee: Michael Drzal
>Priority: Trivial
>  Labels: noob
>
> The {{conf/hbase-env.sh}} that ships with HBase contains a few commented out 
> examples of variables that could be useful, such as adding 
> {{-XX:+PrintGCDetails -XX:+PrintGCDateStamps}} to {{HBASE_OPTS}}.  This has 
> the annoying side effect that the JVM prints a summary of memory usage when 
> it exits, and it does so on stdout:
> {code}
> $ ./bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool 
> hbase.cluster.distributed
> false
> Heap
>  par new generation   total 19136K, used 4908K [0x00073a20, 
> 0x00073b6c, 0x00075186)
>   eden space 17024K,  28% used [0x00073a20, 0x00073a6cb0a8, 
> 0x00073b2a)
>   from space 2112K,   0% used [0x00073b2a, 0x00073b2a, 
> 0x00073b4b)
>   to   space 2112K,   0% used [0x00073b4b, 0x00073b4b, 
> 0x00073b6c)
>  concurrent mark-sweep generation total 63872K, used 0K [0x00075186, 
> 0x0007556c, 0x0007f5a0)
>  concurrent-mark-sweep perm gen total 21248K, used 6994K [0x0007f5a0, 
> 0x0007f6ec, 0x0008)
> $ ./bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool 
> hbase.cluster.distributed >/dev/null
> (nothing printed)
> {code}
> And this confuses {{bin/start-hbase.sh}} when it does
> {{distMode=`$bin/hbase --config "$HBASE_CONF_DIR" 
> org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed`}}, 
> because then the {{distMode}} variable is not just set to {{false}}, it also 
> contains all this JVM spam.
> If you don't pay enough attention and realize that 3 processes are getting 
> started (ZK, HM, RS) instead of just one (HM), then you end up with this 
> confusing error message:
> {{Could not start ZK at requested port of 2181.  ZK was started at port: 
> 2182.  Aborting as clients (e.g. shell) will not be able to find this ZK 
> quorum.}}, which is even more puzzling because when you run {{netstat}} to 
> see who owns that port, then you won't find any rogue process other than the 
> one you just started.
> I'm wondering if the fix is not to just change the {{if [ "$distMode" == 
> 'false' ]}} to a {{switch $distMode case (false*)}} type of test, to work 
> around this annoying JVM misfeature that pollutes stdout.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

