[jira] [Commented] (HBASE-11723) Document all options of bin/hbase command

2014-08-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093849#comment-14093849
 ] 

Hadoop QA commented on HBASE-11723:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12661143/HBASE-11723.patch
  against trunk revision .
  ATTACHMENT ID: 12661143

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10397//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10397//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10397//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10397//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10397//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10397//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10397//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10397//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10397//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10397//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10397//console

This message is automatically generated.

> Document all options of bin/hbase command
> -
>
> Key: HBASE-11723
> URL: https://issues.apache.org/jira/browse/HBASE-11723
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-11723.patch
>
>
> The bin/hbase command is not documented fully in the Ref Guide: 
> http://hbase.apache.org/book.html#tools
> Specifically a few new options were added in HBASE-11649 and need to be 
> documented. Also the generic usage instructions need to be there.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11655) Document how to use Bash with HBase Shell

2014-08-12 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093850#comment-14093850
 ] 

Matteo Bertozzi commented on HBASE-11655:
-

Sounds good to me.
There is a "succewss" typo to fix, but it can be done on commit.

> Document how to use Bash with HBase Shell
> -
>
> Key: HBASE-11655
> URL: https://issues.apache.org/jira/browse/HBASE-11655
> Project: HBase
>  Issue Type: Task
>  Components: documentation, shell
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-11655.patch, HBASE-11655.patch, HBASE-11655.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11685) Incr/decr on the reference count of HConnectionImplementation need be atomic

2014-08-12 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-11685:


Attachment: HBASE-11685-trunk-v6.diff

Fixed the failed test TestMetaTableAccessorNoCluster.
The other failed test, TestZKSecretWatcher, seems unrelated to this patch.

> Incr/decr on the reference count of HConnectionImplementation need be atomic 
> -
>
> Key: HBASE-11685
> URL: https://issues.apache.org/jira/browse/HBASE-11685
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-11685-trunk-v1.diff, HBASE-11685-trunk-v2.diff, 
> HBASE-11685-trunk-v3.diff, HBASE-11685-trunk-v4.diff, 
> HBASE-11685-trunk-v5.diff, HBASE-11685-trunk-v6.diff
>
>
> Currently, the incr/decr operations on the ref count of 
> HConnectionImplementation are not atomic. This may cause the ref count to 
> stay larger than 0 forever, so the connection is never closed.
> {code}
> /**
>  * Increment this client's reference count.
>  */
> void incCount() {
>   ++refCount;
> }
> /**
>  * Decrement this client's reference count.
>  */
> void decCount() {
>   if (refCount > 0) {
> --refCount;
>   }
> }
> {code}
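
A minimal sketch of the atomic approach (illustrative only, not the attached patch), using an AtomicInteger so increments and decrements are safe across threads and the count never drops below zero; class and method names here are assumptions:

{code}
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: an atomic reference count in the spirit of the
// incCount/decCount methods quoted above.
class RefCountSketch {
  private final AtomicInteger refCount = new AtomicInteger(0);

  /** Increment this client's reference count. */
  void incCount() {
    refCount.incrementAndGet();
  }

  /** Decrement this client's reference count, never going below zero. */
  void decCount() {
    int current;
    do {
      current = refCount.get();
      if (current == 0) {
        return;                      // already zero, nothing to decrement
      }
    } while (!refCount.compareAndSet(current, current - 1));
  }

  /** @return true when no clients reference the connection any more. */
  boolean isZeroReference() {
    return refCount.get() == 0;
  }
}
{code}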



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11656) Document how to script snapshots

2014-08-12 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-11656:


Attachment: HBASE-11656.patch

Fixed the mistake pointed out by [~busbey]

> Document how to script snapshots
> 
>
> Key: HBASE-11656
> URL: https://issues.apache.org/jira/browse/HBASE-11656
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, shell
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-11656.patch, HBASE-11656.patch, HBASE-11656.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11655) Document how to use Bash with HBase Shell

2014-08-12 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093856#comment-14093856
 ] 

Misty Stanley-Jones commented on HBASE-11655:
-

Sorry about the typo. I can regenerate the patch but if it's just as easy to 
fix on commit, that's fine with me.

> Document how to use Bash with HBase Shell
> -
>
> Key: HBASE-11655
> URL: https://issues.apache.org/jira/browse/HBASE-11655
> Project: HBase
>  Issue Type: Task
>  Components: documentation, shell
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-11655.patch, HBASE-11655.patch, HBASE-11655.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11656) Document how to script snapshots

2014-08-12 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093860#comment-14093860
 ] 

Matteo Bertozzi commented on HBASE-11656:
-

+1, looks good to me.

I still don't see the point of having this specific snapshot case; to me it 
seems more like a way to confuse the user: "oh wait, is this different from 
the normal scripting described in the shell doc?"

> Document how to script snapshots
> 
>
> Key: HBASE-11656
> URL: https://issues.apache.org/jira/browse/HBASE-11656
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, shell
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-11656.patch, HBASE-11656.patch, HBASE-11656.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11718) Remove some logs in RpcClient.java

2014-08-12 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-11718:


  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed, thanks for the review!

> Remove some logs in RpcClient.java
> --
>
> Key: HBASE-11718
> URL: https://issues.apache.org/jira/browse/HBASE-11718
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 1.0.0, 2.0.0
>
> Attachments: 11718.v1.patch
>
>
> All the debug level logs there are about connection start/stop, except a few 
> that are per request. As a result, we log too much when we're at the debug 
> log level. It could be changed to "trace", but when you're at the trace level 
> you usually end up with a debugger to have all the info you need, so I think 
> it's better to remove them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11719) Remove some unused paths in AsyncClient

2014-08-12 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-11719:


  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed, thanks for the review!

> Remove some unused paths in AsyncClient
> ---
>
> Key: HBASE-11719
> URL: https://issues.apache.org/jira/browse/HBASE-11719
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 1.0.0, 2.0.0
>
> Attachments: simplifyMultiReplica.patch
>
>
> [~sershe] you're ok with these changes?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11707) Using Map instead of list in FailedServers of RpcClient

2014-08-12 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093885#comment-14093885
 ] 

Nicolas Liochon commented on HBASE-11707:
-

+1 :-)

> Using Map instead of list in FailedServers of RpcClient
> ---
>
> Key: HBASE-11707
> URL: https://issues.apache.org/jira/browse/HBASE-11707
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-11707-trunk-v1.diff, HBASE-11707-trunk-v2.diff
>
>
> Currently, FailedServers uses a list to record the blacklist of servers and 
> iterates the list to check whether a server is in it. This is not efficient 
> when the list is very large, and the list is not thread-safe for the add and 
> iteration operations.
> RpcClient.java#175
> {code}
>   // iterate, looking for the search entry and cleaning expired entries
>   Iterator<Pair<Long, String>> it = failedServers.iterator();
>   while (it.hasNext()) {
>     Pair<Long, String> cur = it.next();
>     if (cur.getFirst() < now) {
>       it.remove();
>     } else {
>       if (lookup.equals(cur.getSecond())) {
>         return true;
>       }
>     }
>   }
> {code}
> A simple change is to change this list to ConcurrentHashMap.
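
A minimal sketch of the map-based approach described above (illustrative only, not the actual patch; class and field names are assumptions), keyed by server address with the entry's expiry time as the value:

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch only: a ConcurrentHashMap gives O(1) lookup and makes
// add/check safe to call from multiple threads without external locking.
class FailedServersSketch {
  private final ConcurrentMap<String, Long> failedServers =
      new ConcurrentHashMap<String, Long>();
  private final long recheckServersTimeoutMs;

  FailedServersSketch(long recheckServersTimeoutMs) {
    this.recheckServersTimeoutMs = recheckServersTimeoutMs;
  }

  /** Record a failure for this server address. */
  void addToFailedServers(String address) {
    failedServers.put(address, System.currentTimeMillis() + recheckServersTimeoutMs);
  }

  /** @return true if the server is still within its failure window. */
  boolean isFailedServer(String address) {
    Long expiry = failedServers.get(address);
    if (expiry == null) {
      return false;
    }
    if (expiry < System.currentTimeMillis()) {
      failedServers.remove(address);  // expired entry, clean it up lazily
      return false;
    }
    return true;
  }
}
{code}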



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-08-12 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11639:
---

Fix Version/s: 2.0.0

> [Visibility controller] Replicate the visibility of Cells as strings
> 
>
> Key: HBASE-11639
> URL: https://issues.apache.org/jira/browse/HBASE-11639
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.98.4
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>  Labels: VisibilityLabels
> Fix For: 0.99.0, 2.0.0, 0.98.6
>
>
> This issue is aimed at persisting the visibility labels as strings in the WAL 
> rather than as label ordinals. This would help in replicating the labels to 
> the replication cluster as strings directly, and, after HBASE-11553, it would 
> also help because the replication cluster could have a string-based 
> visibility label implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-08-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093893#comment-14093893
 ] 

Anoop Sam John commented on HBASE-11639:


We can work on that POC patch (there were some more things to be done) and see 
how we can merge the two patches together. Assigning to Ram. Hope you are OK 
with that, Ram.

> [Visibility controller] Replicate the visibility of Cells as strings
> 
>
> Key: HBASE-11639
> URL: https://issues.apache.org/jira/browse/HBASE-11639
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.98.4
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>  Labels: VisibilityLabels
> Fix For: 0.99.0, 2.0.0, 0.98.6
>
>
> This issue is aimed at persisting the visibility labels as strings in the WAL 
> rather than as label ordinals. This would help in replicating the labels to 
> the replication cluster as strings directly, and, after HBASE-11553, it would 
> also help because the replication cluster could have a string-based 
> visibility label implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-08-12 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11639:
---

Component/s: security

> [Visibility controller] Replicate the visibility of Cells as strings
> 
>
> Key: HBASE-11639
> URL: https://issues.apache.org/jira/browse/HBASE-11639
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.98.4
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>  Labels: VisibilityLabels
> Fix For: 0.99.0, 2.0.0, 0.98.6
>
>
> This issue is aimed at persisting the visibility labels as strings in the WAL 
> rather than as label ordinals. This would help in replicating the labels to 
> the replication cluster as strings directly, and, after HBASE-11553, it would 
> also help because the replication cluster could have a string-based 
> visibility label implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-08-12 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11639:
---

Labels: VisibilityLabels  (was: )

> [Visibility controller] Replicate the visibility of Cells as strings
> 
>
> Key: HBASE-11639
> URL: https://issues.apache.org/jira/browse/HBASE-11639
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.98.4
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>  Labels: VisibilityLabels
> Fix For: 0.99.0, 2.0.0, 0.98.6
>
>
> This issue is aimed at persisting the visibility labels as strings in the WAL 
> rather than as label ordinals. This would help in replicating the labels to 
> the replication cluster as strings directly, and, after HBASE-11553, it would 
> also help because the replication cluster could have a string-based 
> visibility label implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-08-12 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11639:
---

Assignee: ramkrishna.s.vasudevan

> [Visibility controller] Replicate the visibility of Cells as strings
> 
>
> Key: HBASE-11639
> URL: https://issues.apache.org/jira/browse/HBASE-11639
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.98.4
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>  Labels: VisibilityLabels
> Fix For: 0.99.0, 2.0.0, 0.98.6
>
>
> This issue is aimed at persisting the visibility labels as strings in the WAL 
> rather than as label ordinals. This would help in replicating the labels to 
> the replication cluster as strings directly, and, after HBASE-11553, it would 
> also help because the replication cluster could have a string-based 
> visibility label implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11713) Adding hbase shell unit test coverage for visibility labels.

2014-08-12 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093918#comment-14093918
 ] 

Anoop Sam John commented on HBASE-11713:


{code}
+  private static void append(Configuration conf, String property, String value) {
+    conf.set(property, conf.get(property) + "," + value);
+  }
{code}
Add a check before adding ",": only add "," if conf already has a value for the property.
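
A minimal sketch of the suggested check (illustrative only, not the actual patch): prepend "," only when the property already has a non-empty value.

{code}
  // Configuration is org.apache.hadoop.conf.Configuration, as in the snippet above.
  private static void append(Configuration conf, String property, String value) {
    String current = conf.get(property);              // null if the property is unset
    if (current == null || current.isEmpty()) {
      conf.set(property, value);                      // no existing value: no leading ","
    } else {
      conf.set(property, current + "," + value);
    }
  }
{code}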


> Adding hbase shell unit test coverage for visibility labels.
> 
>
> Key: HBASE-11713
> URL: https://issues.apache.org/jira/browse/HBASE-11713
> Project: HBase
>  Issue Type: Test
>  Components: security, shell
>Reporter: Srikanth Srungarapu
>Assignee: Srikanth Srungarapu
>Priority: Minor
> Attachments: HBASE-11713.patch, HBASE-11713_v2.patch
>
>
> Adding test coverage for visibility labels to hbase shell. Also, refactoring 
> existing tests so that all the unit tests related to visibility can be found 
> in one place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6184) HRegionInfo was null or empty in Meta

2014-08-12 Thread Guo Ruijing (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093922#comment-14093922
 ] 

Guo Ruijing commented on HBASE-6184:


The issue also happened in hadoop-2.0.5_alpha + hbase-0.94.8, as follows:

java.io.IOException: HRegionInfo was null or empty in Meta for hbase:namespace, 
row=hbase:namespace,,99
at 
org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:152)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1095)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1155)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1047)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1004)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:325)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:191)
at 
org.apache.hadoop.hbase.master.TableNamespaceManager.isTableAvailableAndInitialized(TableNamespaceManager.java:260)
at 
org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:106)
at 
org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1044)
at 
org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:916)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:603)
at java.lang.Thread.run(Thread.java:744)


> HRegionInfo was null or empty in Meta 
> --
>
> Key: HBASE-6184
> URL: https://issues.apache.org/jira/browse/HBASE-6184
> Project: HBase
>  Issue Type: Bug
>  Components: Client, io
>Affects Versions: 0.94.0
>Reporter: jiafeng.zhang
> Attachments: HBASE-6184.patch
>
>
> insert data
> hadoop-0.23.2 + hbase-0.94.0
> 2012-06-07 13:09:38,573 WARN  
> [org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation] 
> Encountered problems when prefetch META table: 
> java.io.IOException: HRegionInfo was null or empty in Meta for hbase_one_col, 
> row=hbase_one_col,09115303780247449149,99
> at 
> org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:160)
> at 
> org.apache.hadoop.hbase.client.MetaScanner.access$000(MetaScanner.java:48)
> at 
> org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:126)
> at 
> org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:123)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager.execute(HConnectionManager.java:359)
> at 
> org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:123)
> at 
> org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:99)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:894)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:948)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:836)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1482)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1367)
> at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:945)
> at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:801)
> at org.apache.hadoop.hbase.client.HTable.put(HTable.java:776)
> at 
> org.apache.hadoop.hbase.client.HTablePool$PooledHTable.put(HTablePool.java:397)
> at com.dinglicom.hbase.HbaseImport.insertData(HbaseImport.java:177)
> at com.dinglicom.hbase.HbaseImport.run(HbaseImport.java:210)
> at java.lang.Thread.run(Thread.java:662)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11685) Incr/decr on the reference count of HConnectionImplementation need be atomic

2014-08-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093975#comment-14093975
 ] 

Hadoop QA commented on HBASE-11685:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12661161/HBASE-11685-trunk-v6.diff
  against trunk revision .
  ATTACHMENT ID: 12661161

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestRegionRebalancing

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at org.apache.hadoop.hbase.client.TestHCM.testClusterStatus(TestHCM.java:250)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10398//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10398//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10398//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10398//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10398//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10398//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10398//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10398//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10398//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10398//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10398//console

This message is automatically generated.

> Incr/decr on the reference count of HConnectionImplementation need be atomic 
> -
>
> Key: HBASE-11685
> URL: https://issues.apache.org/jira/browse/HBASE-11685
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-11685-trunk-v1.diff, HBASE-11685-trunk-v2.diff, 
> HBASE-11685-trunk-v3.diff, HBASE-11685-trunk-v4.diff, 
> HBASE-11685-trunk-v5.diff, HBASE-11685-trunk-v6.diff
>
>
> Currently, the incr/decr operations on the ref count of 
> HConnectionImplementation are not atomic. This may cause the ref count to 
> stay larger than 0 forever, so the connection is never closed.
> {code}
> /**
>  * Increment this client's reference count.
>  */
> void incCount() {
>   ++refCount;
> }
> /**
>  * Decrement this client's reference count.
>  */
> void decCount() {
>   if (refCount > 0) {
> --refCount;
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11719) Remove some unused paths in AsyncClient

2014-08-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093978#comment-14093978
 ] 

Hudson commented on HBASE-11719:


FAILURE: Integrated in HBase-TRUNK #5391 (See 
[https://builds.apache.org/job/HBase-TRUNK/5391/])
HBASE-11719 Remove some unused paths in AsyncClient (nkeywal: rev 
fadb0900a08b749cac61e48f3ab322dff5525f29)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java


> Remove some unused paths in AsyncClient
> ---
>
> Key: HBASE-11719
> URL: https://issues.apache.org/jira/browse/HBASE-11719
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 1.0.0, 2.0.0
>
> Attachments: simplifyMultiReplica.patch
>
>
> [~sershe] you're ok with these changes?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11718) Remove some logs in RpcClient.java

2014-08-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093977#comment-14093977
 ] 

Hudson commented on HBASE-11718:


FAILURE: Integrated in HBase-TRUNK #5391 (See 
[https://builds.apache.org/job/HBase-TRUNK/5391/])
HBASE-11718 Remove some logs in RpcClient.java (nkeywal: rev 
2c3340c00ad5934d450f5677f2ee7e21a2855793)
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java


> Remove some logs in RpcClient.java
> --
>
> Key: HBASE-11718
> URL: https://issues.apache.org/jira/browse/HBASE-11718
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 1.0.0, 2.0.0
>
> Attachments: 11718.v1.patch
>
>
> All the debug level logs there are about connection start/stop, except a few 
> that are per request. As a result, we log too much when we're at the debug 
> log level. It could be changed to "trace", but when you're at the trace level 
> you usually end up with a debugger to have all the info you need, so I think 
> it's better to remove them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11656) Document how to script snapshots

2014-08-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093991#comment-14093991
 ] 

Hadoop QA commented on HBASE-11656:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12661162/HBASE-11656.patch
  against trunk revision .
  ATTACHMENT ID: 12661162

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestRegionPlacement

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10399//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10399//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10399//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10399//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10399//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10399//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10399//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10399//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10399//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10399//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10399//console

This message is automatically generated.

> Document how to script snapshots
> 
>
> Key: HBASE-11656
> URL: https://issues.apache.org/jira/browse/HBASE-11656
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, shell
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-11656.patch, HBASE-11656.patch, HBASE-11656.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11719) Remove some unused paths in AsyncClient

2014-08-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094048#comment-14094048
 ] 

Hudson commented on HBASE-11719:


FAILURE: Integrated in HBase-1.0 #95 (See 
[https://builds.apache.org/job/HBase-1.0/95/])
HBASE-11719 Remove some unused paths in AsyncClient (nkeywal: rev 
2b9123f9382c367b14ed885a9996c4d8efb873bc)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java


> Remove some unused paths in AsyncClient
> ---
>
> Key: HBASE-11719
> URL: https://issues.apache.org/jira/browse/HBASE-11719
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Trivial
> Fix For: 1.0.0, 2.0.0
>
> Attachments: simplifyMultiReplica.patch
>
>
> [~sershe] you're ok with these changes?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11718) Remove some logs in RpcClient.java

2014-08-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094047#comment-14094047
 ] 

Hudson commented on HBASE-11718:


FAILURE: Integrated in HBase-1.0 #95 (See 
[https://builds.apache.org/job/HBase-1.0/95/])
HBASE-11718 Remove some logs in RpcClient.java (nkeywal: rev 
8b80819a6f1fdb7ae51d9c170d1cb89eab17afc6)
* hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java


> Remove some logs in RpcClient.java
> --
>
> Key: HBASE-11718
> URL: https://issues.apache.org/jira/browse/HBASE-11718
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 1.0.0, 2.0.0
>
> Attachments: 11718.v1.patch
>
>
> All the debug level logs there are about connection start/stop, except a few 
> that are per request. As a result, we log too much when we're at the debug 
> log level. It could be changed to "trace", but when you're at the trace level 
> you usually end up with a debugger to have all the info you need, so I think 
> it's better to remove them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11657) Put HTable region methods in an interface

2014-08-12 Thread Carter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carter updated HBASE-11657:
---

Status: Patch Available  (was: Open)

> Put HTable region methods in an interface
> -
>
> Key: HBASE-11657
> URL: https://issues.apache.org/jira/browse/HBASE-11657
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.99.0
>Reporter: Carter
>Assignee: Carter
> Fix For: 0.99.0
>
> Attachments: HBASE_11657.patch, HBASE_11657_v2.patch, 
> HBASE_11657_v3.patch, HBASE_11657_v3.patch, HBASE_11657_v4.patch
>
>
> Most of the HTable methods are now abstracted by HTableInterface, with the 
> notable exception of the following methods that pertain to region metadata:
> {code}
> HRegionLocation getRegionLocation(final String row)
> HRegionLocation getRegionLocation(final byte [] row)
> HRegionLocation getRegionLocation(final byte [] row, boolean reload)
> byte [][] getStartKeys()
> byte[][] getEndKeys()
> Pair<byte[][], byte[][]> getStartEndKeys()
> void clearRegionCache()
> {code}
> and a default scope method which maybe should be bundled with the others:
> {code}
> List<RegionLocations> listRegionLocations()
> {code}
> Since the consensus seems to be that these would muddy HTableInterface with 
> non-core functionality, where should it go?  MapReduce looks up the region 
> boundaries, so it needs to be exposed somewhere.
> Let me throw out a straw man to start the conversation.  I propose:
> {code}
> org.apache.hadoop.hbase.client.HRegionInterface
> {code}
> Have HTable implement this interface.  Also add these methods to HConnection:
> {code}
> HRegionInterface getTableRegion(TableName tableName)
> HRegionInterface getTableRegion(TableName tableName, ExecutorService pool)
> {code}
> [~stack], [~ndimiduk], [~enis], thoughts?
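
A minimal sketch of what the proposed interface could look like, simply collecting the region methods listed above (illustrative only; the generics and thrown exceptions are assumptions, not a committed API):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.util.Pair;

// Straw-man sketch only: gather HTable's region-metadata methods into one interface.
public interface HRegionInterface {
  HRegionLocation getRegionLocation(String row) throws IOException;
  HRegionLocation getRegionLocation(byte[] row) throws IOException;
  HRegionLocation getRegionLocation(byte[] row, boolean reload) throws IOException;
  byte[][] getStartKeys() throws IOException;
  byte[][] getEndKeys() throws IOException;
  Pair<byte[][], byte[][]> getStartEndKeys() throws IOException;
  void clearRegionCache();
}
{code}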



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11708) RegionSplitter incorrectly calculates splitcount

2014-08-12 Thread louis hust (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

louis hust updated HBASE-11708:
---

Description: 
From discussion on HBASE-11627:

{quote}
And I also find another bug about the calculation of the variable splitCount 
which is caused by the wrong calculation of variable splitCount.
{quote}

  was:
From discussion on HBASE-11627:

{quote}
And I also find another bug about the calculation of the variable splitCount 
which is caused by the wrong calculation of variable finished.
{quote}


> RegionSplitter incorrectly calculates splitcount
> 
>
> Key: HBASE-11708
> URL: https://issues.apache.org/jira/browse/HBASE-11708
> Project: HBase
>  Issue Type: Bug
>  Components: Admin, util
>Affects Versions: 0.96.2, 0.98.1
>Reporter: Sean Busbey
>Assignee: louis hust
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE_11708-v2.patch, HBASE_11708-v3.patch, 
> HBASE_11708.patch
>
>
> From discussion on HBASE-11627:
> {quote}
> And I also find another bug about the calculation of the variable splitCount 
> which is caused by the wrong calculation of variable splitCount.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HBASE-11462) MetaTableAccessor shouldn't use ZooKeeeper

2014-08-12 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-11462 started by Andrey Stepachev.

> MetaTableAccessor shouldn't use ZooKeeeper
> --
>
> Key: HBASE-11462
> URL: https://issues.apache.org/jira/browse/HBASE-11462
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Zookeeper
>Affects Versions: 2.0.0
>Reporter: Mikhail Antonov
>Assignee: Andrey Stepachev
> Fix For: 2.0.0
>
>
> After committing the patch for HBASE-4495, there's a further improvement that 
> can be made (discussed originally on the review board for that jira).
> We have the MetaTableAccessor and MetaTableLocator classes. The first one is 
> used to access information stored in the hbase:meta table. The second one is 
> used to deal with ZooKeeper state to find the region server hosting 
> hbase:meta, wait for it to become available, and so on.
> MetaTableAccessor, in turn, should only operate on the meta table content, so 
> it shouldn't need ZK. The only reason MetaTableAccessor uses ZK is that when 
> callers request assignment information, they can request the location of the 
> meta table itself, which we can't read from meta; in that case 
> MetaTableAccessor relays the call to MetaTableLocator. Maybe the solution 
> here is to declare that clients of MetaTableAccessor shall not use it to work 
> with the meta table itself (as opposed to its content).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11405) Multiple invocations of hbck in parallel disables balancer permanently

2014-08-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094167#comment-14094167
 ] 

Sean Busbey commented on HBASE-11405:
-

{noformat}
busbey2-MBA:hbase busbey$ git status
On branch master
Your branch is up-to-date with 'origin/master'.

nothing to commit, working directory clean
busbey2-MBA:hbase busbey$ git apply --check 
~/Downloads/HBASE-11405-trunk.patch.1 
error: patch failed: 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java:105
error: hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java: 
patch does not apply
error: patch failed: 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java:36
error: 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java: 
patch does not apply
{noformat}

Patch no longer applies to master. [~bharathv] can you rebase?

Could you then also upload to ReviewBoard so it's easier to give review 
feedback?

> Multiple invocations of hbck in parallel disables balancer permanently 
> ---
>
> Key: HBASE-11405
> URL: https://issues.apache.org/jira/browse/HBASE-11405
> Project: HBase
>  Issue Type: Bug
>  Components: Balancer, hbck
>Affects Versions: 0.99.0
>Reporter: bharath v
>Assignee: bharath v
> Attachments: HBASE-11405-trunk.patch, HBASE-11405-trunk.patch.1
>
>
> This is because of the following piece of code in hbck
> {code:borderStyle=solid}
>   boolean oldBalancer = admin.setBalancerRunning(false, true);
>   try {
>     onlineConsistencyRepair();
>   } finally {
>     admin.setBalancerRunning(oldBalancer, false);
>   }
> {code}
> Newer invocations set oldBalancer to false, as the balancer was disabled by 
> previous invocations, and this disables the balancer permanently unless it is 
> manually turned on by the user. It is easy to reproduce: just run hbck 100 
> times in a loop in 2 different sessions and you can see that the balancer is 
> set to false in the HMaster logs.
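
One problematic interleaving of two concurrent hbck runs, spelled out (an illustrative walkthrough of the snippet above, not part of any patch):

{code}
// hbck A: oldBalancer = admin.setBalancerRunning(false, true);  // returns true  (balancer was on)
// hbck B: oldBalancer = admin.setBalancerRunning(false, true);  // returns false (A already disabled it)
// hbck A: admin.setBalancerRunning(true,  false);               // restores A's oldBalancer: balancer on
// hbck B: admin.setBalancerRunning(false, false);               // "restores" B's oldBalancer: balancer off, and it stays off
{code}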



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11724) Add to RWQueueRpcExecutor the ability to split get and scan handlers

2014-08-12 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-11724:
---

 Summary: Add to RWQueueRpcExecutor the ability to split get and 
scan handlers
 Key: HBASE-11724
 URL: https://issues.apache.org/jira/browse/HBASE-11724
 Project: HBase
  Issue Type: New Feature
  Components: IPC/RPC
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 2.0.0
 Attachments: HBASE-11724-v0.patch

RWQueueRpcExecutor has the division between read and write requests, but we 
can also split small-reads and long-reads. This can be useful to force a 
deprioritization of scans on the RS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11724) Add to RWQueueRpcExecutor the ability to split get and scan handlers

2014-08-12 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-11724:


Attachment: HBASE-11724-v0.patch

> Add to RWQueueRpcExecutor the ability to split get and scan handlers
> 
>
> Key: HBASE-11724
> URL: https://issues.apache.org/jira/browse/HBASE-11724
> Project: HBase
>  Issue Type: New Feature
>  Components: IPC/RPC
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-11724-v0.patch
>
>
> RWQueueRpcExecutor has the division between read and write requests, but we 
> can also split small-reads and long-reads. This can be useful to force a 
> deprioritization of scans on the RS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11724) Add to RWQueueRpcExecutor the ability to split get and scan handlers

2014-08-12 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-11724:


Status: Patch Available  (was: Open)

> Add to RWQueueRpcExecutor the ability to split get and scan handlers
> 
>
> Key: HBASE-11724
> URL: https://issues.apache.org/jira/browse/HBASE-11724
> Project: HBase
>  Issue Type: New Feature
>  Components: IPC/RPC
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-11724-v0.patch
>
>
> RWQueueRpcExecutor has the division between read and write requests, but we 
> can also split small-reads and long-reads. This can be useful to force a 
> deprioritization of scans on the RS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11657) Put HTable region methods in an interface

2014-08-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094194#comment-14094194
 ] 

Hadoop QA commented on HBASE-11657:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12661013/HBASE_11657_v4.patch
  against trunk revision .
  ATTACHMENT ID: 12661013

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
  
org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDeletes
  org.apache.hadoop.hbase.TestRegionRebalancing

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10400//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10400//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10400//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10400//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10400//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10400//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10400//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10400//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10400//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10400//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10400//console

This message is automatically generated.

> Put HTable region methods in an interface
> -
>
> Key: HBASE-11657
> URL: https://issues.apache.org/jira/browse/HBASE-11657
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.99.0
>Reporter: Carter
>Assignee: Carter
> Fix For: 0.99.0
>
> Attachments: HBASE_11657.patch, HBASE_11657_v2.patch, 
> HBASE_11657_v3.patch, HBASE_11657_v3.patch, HBASE_11657_v4.patch
>
>
> Most of the HTable methods are now abstracted by HTableInterface, with the 
> notable exception of the following methods that pertain to region metadata:
> {code}
> HRegionLocation getRegionLocation(final String row)
> HRegionLocation getRegionLocation(final byte [] row)
> HRegionLocation getRegionLocation(final byte [] row, boolean reload)
> byte [][] getStartKeys()
> byte[][] getEndKeys()
> Pair<byte[][], byte[][]> getStartEndKeys()
> void clearRegionCache()
> {code}
> and a default scope method which maybe should be bundled with the others:
> {code}
> List<RegionLocations> listRegionLocations()
> {code}
> Since the consensus seems to be that these would muddy HTableInterface with 
> non-core functionality, where should it go?  MapReduce looks up the region 
> boundaries, so it needs to be exposed somewhere.
> Let me throw out a straw man to start the conversation.  I propose:
> {code}
> org.apache.hadoop.hbase.client.HRegionInterface
> {code}
> Have HTable implement this interface.  Also add these methods to HConnection:
> {code}
> HRegionInterface getTableRegion(TableName tableName)
> HRegionInterface getTableRegion(TableName tableName, ExecutorService pool)
> {code}
> [~stack], [~ndimiduk], [~enis], thoughts?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

[jira] [Commented] (HBASE-11604) Disable co-locating meta/master by default

2014-08-12 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094263#comment-14094263
 ] 

Jimmy Xiang commented on HBASE-11604:
-

The problem is that the default RPC port is occupied by the master so the RS 
can't start.

> Disable co-locating meta/master by default
> --
>
> Key: HBASE-11604
> URL: https://issues.apache.org/jira/browse/HBASE-11604
> Project: HBase
>  Issue Type: Task
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 1.0.0
>
> Attachments: hbase-11604.patch, hbase-11604_v2.patch
>
>
> To avoid possible confusion, it's better to keep the original deployment 
> scheme in 1.0. ZK-less region assignment is off by default in 1.0 already. We 
> should, by default, not assign any region to the master or a backup master.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11709) TestMasterShutdown can fail sometime

2014-08-12 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094293#comment-14094293
 ] 

Matteo Bertozzi commented on HBASE-11709:
-

+1

> TestMasterShutdown can fail sometime 
> -
>
> Key: HBASE-11709
> URL: https://issues.apache.org/jira/browse/HBASE-11709
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Attachments: hbase-11709.patch, hbase-11709_v2.patch
>
>
> This applies to 1.0 and master, not previous versions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11072) Abstract WAL splitting from ZK

2014-08-12 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-11072:


Status: Open  (was: Patch Available)

> Abstract WAL splitting from ZK
> --
>
> Key: HBASE-11072
> URL: https://issues.apache.org/jira/browse/HBASE-11072
> Project: HBase
>  Issue Type: Sub-task
>  Components: Consensus, Zookeeper
>Affects Versions: 0.99.0
>Reporter: Mikhail Antonov
>Assignee: Sergey Soldatov
> Attachments: HBASE-11072-1_v2.patch, HBASE-11072-1_v3.patch, 
> HBASE-11072-1_v4.patch, HBASE-11072-2_v2.patch, HBASE-11072-v1.patch, 
> HBASE-11072-v2.patch, HBASE-11072-v3.patch, HBASE-11072-v4.patch, 
> HBASE-11072-v5.patch, HBASE-11072-v6.patch, HBASE_11072-1.patch
>
>
> HM side:
>  - SplitLogManager
> RS side:
>  - SplitLogWorker
>  - HLogSplitter and a few handler classes.
> This jira may need to be split further apart into smaller ones.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11072) Abstract WAL splitting from ZK

2014-08-12 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-11072:


Attachment: HBASE-11072-v7.patch

rebased

> Abstract WAL splitting from ZK
> --
>
> Key: HBASE-11072
> URL: https://issues.apache.org/jira/browse/HBASE-11072
> Project: HBase
>  Issue Type: Sub-task
>  Components: Consensus, Zookeeper
>Affects Versions: 0.99.0
>Reporter: Mikhail Antonov
>Assignee: Sergey Soldatov
> Attachments: HBASE-11072-1_v2.patch, HBASE-11072-1_v3.patch, 
> HBASE-11072-1_v4.patch, HBASE-11072-2_v2.patch, HBASE-11072-v1.patch, 
> HBASE-11072-v2.patch, HBASE-11072-v3.patch, HBASE-11072-v4.patch, 
> HBASE-11072-v5.patch, HBASE-11072-v6.patch, HBASE-11072-v7.patch, 
> HBASE_11072-1.patch
>
>
> HM side:
>  - SplitLogManager
> RS side:
>  - SplitLogWorker
>  - HLogSplitter and a few handler classes.
> This jira may need to be split further apart into smaller ones.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11072) Abstract WAL splitting from ZK

2014-08-12 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-11072:


Status: Patch Available  (was: Open)

> Abstract WAL splitting from ZK
> --
>
> Key: HBASE-11072
> URL: https://issues.apache.org/jira/browse/HBASE-11072
> Project: HBase
>  Issue Type: Sub-task
>  Components: Consensus, Zookeeper
>Affects Versions: 0.99.0
>Reporter: Mikhail Antonov
>Assignee: Sergey Soldatov
> Attachments: HBASE-11072-1_v2.patch, HBASE-11072-1_v3.patch, 
> HBASE-11072-1_v4.patch, HBASE-11072-2_v2.patch, HBASE-11072-v1.patch, 
> HBASE-11072-v2.patch, HBASE-11072-v3.patch, HBASE-11072-v4.patch, 
> HBASE-11072-v5.patch, HBASE-11072-v6.patch, HBASE-11072-v7.patch, 
> HBASE_11072-1.patch
>
>
> HM side:
>  - SplitLogManager
> RS side:
>  - SplitLogWorker
>  - HLogSplitter and a few handler classes.
> This jira may need to be split further apart into smaller ones.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11703) Meta region state could be corrupted

2014-08-12 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094305#comment-14094305
 ] 

Matteo Bertozzi commented on HBASE-11703:
-

You lost the hris.remove(HRegionInfo.FIRST_META_REGIONINFO); is it filtered 
elsewhere?

> Meta region state could be corrupted
> 
>
> Key: HBASE-11703
> URL: https://issues.apache.org/jira/browse/HBASE-11703
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-11703.patch
>
>
> Internal meta region state could be corrupted if the meta is not on master:
> 1. the meta region server (not master) shuts down,
> 2. meta SSH offlines it without updating the dead server's region list,
> 3. meta is transitioned to pending_open and the previous server (the dead 
> server) of meta is lost,
> 4. meta is assigned somewhere else without updating its previous server,
> 5. normal SSH processes the dead server and offlines all of its dead regions, 
> including the meta, so the meta's internal state is corrupted.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11703) Meta region state could be corrupted

2014-08-12 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094324#comment-14094324
 ] 

Jimmy Xiang commented on HBASE-11703:
-

This dead server should not have meta on it. If it did, isCarryMeta() should 
return true before that, and it would be handled by the meta SSH. In region 
states, when meta is offline, it's removed from the server holding map.

> Meta region state could be corrupted
> 
>
> Key: HBASE-11703
> URL: https://issues.apache.org/jira/browse/HBASE-11703
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-11703.patch
>
>
> Internal meta region state could be corrupted if the meta is not on master:
> 1. the meta region server (not master) shuts down,
> 2. meta SSH offlines it without updating the dead server's region list,
> 3. meta is transitioned to pending_open and the previous server (the dead 
> server) of meta is lost,
> 4. meta is assigned somewhere else without updating its previous server,
> 5. normal SSH processes the dead server and offlines all of its dead regions 
> including the meta, so the meta internal state is corrupted



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9531) a command line (hbase shell) interface to retrieve the replication metrics and show replication lag

2014-08-12 Thread Demai Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094323#comment-14094323
 ] 

Demai Ni commented on HBASE-9531:
-

Again, the '-1 lineLengths' warnings are for the generated protobuf code and the 
JRuby script, so they should be OK. 

[~apurtell], [~enis], does the new patch match your suggestions? Thanks... Demai

> a command line (hbase shell) interface to retrieve the replication metrics 
> and show replication lag
> ---
>
> Key: HBASE-9531
> URL: https://issues.apache.org/jira/browse/HBASE-9531
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Affects Versions: 0.99.0
>Reporter: Demai Ni
>Assignee: Demai Ni
> Fix For: 0.99.0, 0.98.6
>
> Attachments: HBASE-9531-master-v1.patch, HBASE-9531-master-v1.patch, 
> HBASE-9531-master-v1.patch, HBASE-9531-master-v2.patch, 
> HBASE-9531-master-v3.patch, HBASE-9531-trunk-v0.patch, 
> HBASE-9531-trunk-v0.patch
>
>
> This jira is to provide a command line (hbase shell) interface to retrieve 
> replication metrics such as ageOfLastShippedOp, timeStampsOfLastShippedOp, 
> sizeOfLogQueue, ageOfLastAppliedOp, and timeStampsOfLastAppliedOp, and also to 
> provide a point-in-time view of the replication lag (source only).
> Understood that HBase uses Hadoop 
> metrics (http://hbase.apache.org/metrics.html), which is the common way to 
> monitor metric info. This jira is meant to serve as a lightweight client 
> interface, compared to a complete (certainly better, but heavier) GUI 
> monitoring package. I have the code working on 0.94.9 now, and would like to 
> use this jira to get opinions on whether the feature is valuable to other 
> users/workshops. If so, I will build a trunk patch. 
> All input is greatly appreciated. Thank you!
> The overall design is to reuse the existing logic behind the hbase shell 
> command 'status' and introduce a new module called ReplicationLoad.  In 
> HRegionServer.buildServerLoad(), the local replication service objects 
> are used to get their loads, which can be wrapped in a ReplicationLoad object 
> and then simply passed to the ServerLoad. In ReplicationSourceMetrics and 
> ReplicationSinkMetrics, a few getters and setters will be created, and 
> Replication will be asked to build a "ReplicationLoad".  (Many thanks to 
> Jean-Daniel for his kind suggestions on the dev email list.)
> The replication lag will be calculated for the source only, using this formula: 
> {code:title=Replication lag|borderStyle=solid}
>   if sizeOfLogQueue != 0 then lag = max(ageOfLastShippedOp, (current time - 
> timeStampsOfLastShippedOp)) // err on the large side
>   else if (current time - timeStampsOfLastShippedOp) < 2 * 
> ageOfLastShippedOp then lag = ageOfLastShippedOp // last shipment happened 
> recently 
>   else lag = 0 // last shipment may have happened last night, so no real lag 
> even though ageOfLastShippedOp is non-zero
> {code}
> The external output will look something like:
> {code:title=status 'replication'|borderStyle=solid}
> hbase(main):001:0> status 'replication'
> version 0.94.9
> 3 live servers
>     hdtest017.svl.ibm.com:
>     SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, 
> timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013
>     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
> 14:48:48 PDT 2013
>     hdtest018.svl.ibm.com:
>     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
> timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
>     SINK  :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 
> 14:50:59 PDT 2013
>     hdtest015.svl.ibm.com:
>     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
> timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
>     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
> 14:48:48 PDT 2013
> hbase(main):002:0> status 'replication','source'
> version 0.94.9
> 3 live servers
>     hdtest017.svl.ibm.com:
>     SOURCE:PeerID=1, ageOfLastShippedOp=14, sizeOfLogQueue=0, 
> timeStampsOfLastShippedOp=Wed Sep 04 14:49:48 PDT 2013
>     hdtest018.svl.ibm.com:
>     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
> timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
>     hdtest015.svl.ibm.com:
>     SOURCE:PeerID=1, ageOfLastShippedOp=0, sizeOfLogQueue=0, 
> timeStampsOfLastShippedOp=Wed Sep 04 14:48:48 PDT 2013
> hbase(main):003:0> status 'replication','sink'
> version 0.94.9
> 3 live servers
>     hdtest017.svl.ibm.com:
>     SINK  :AgeOfLastAppliedOp=0, TimeStampsOfLastAppliedOp=Wed Sep 04 
> 14:48:48 PDT 2013
>     hdtest018.svl.ibm.com:
>     SINK  :AgeOfLastAppliedOp=14, TimeStampsOfLastAppliedOp=Wed Sep 04 
> 14:50:59 PDT 2013
>     hdtest015.svl.ibm.
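
As a hedged, illustrative Java sketch of the lag formula quoted in the 
description above (the class, method, and parameter names are hypothetical and 
are not part of the attached patch):

{code:title=Replication lag formula (illustrative sketch)|borderStyle=solid}
public class ReplicationLagSketch {
  /** Hypothetical helper mirroring the pseudo-code formula quoted above. */
  public static long lagMillis(long sizeOfLogQueue,
                               long ageOfLastShippedOp,
                               long timestampOfLastShippedOp) {
    long now = System.currentTimeMillis();
    if (sizeOfLogQueue != 0) {
      // Queue not drained yet: err on the large side.
      return Math.max(ageOfLastShippedOp, now - timestampOfLastShippedOp);
    }
    if (now - timestampOfLastShippedOp < 2 * ageOfLastShippedOp) {
      // The last shipment happened recently.
      return ageOfLastShippedOp;
    }
    // The last shipment was long ago, so there is no real lag
    // even though ageOfLastShippedOp is non-zero.
    return 0;
  }
}
{code}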

[jira] [Commented] (HBASE-11703) Meta region state could be corrupted

2014-08-12 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094326#comment-14094326
 ] 

Matteo Bertozzi commented on HBASE-11703:
-

ok, +1 

> Meta region state could be corrupted
> 
>
> Key: HBASE-11703
> URL: https://issues.apache.org/jira/browse/HBASE-11703
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-11703.patch
>
>
> Internal meta region state could be corrupted if the meta is not on master:
> 1. the meta region server (not master) shuts down,
> 2. meta SSH offlines it without updating the dead server's region list,
> 3. meta is transitioned to pending_open and the previous server (the dead 
> server) of meta is lost,
> 4. meta is assigned somewhere else without updating its previous server,
> 5. normal SSH processes the dead server and offlines all of its dead regions 
> including the meta, so the meta internal state is corrupted



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-08-12 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094328#comment-14094328
 ] 

Andrew Purtell commented on HBASE-11639:


Thanks. Will circle back to this at RC time

> [Visibility controller] Replicate the visibility of Cells as strings
> 
>
> Key: HBASE-11639
> URL: https://issues.apache.org/jira/browse/HBASE-11639
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.98.4
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>  Labels: VisibilityLabels
> Fix For: 0.99.0, 2.0.0, 0.98.6
>
>
> This issue is aimed at persisting the visibility labels as strings in the WAL 
> rather than as label ordinals.  This would help in replicating the labels to 
> the replication cluster directly as strings, and it would also help after 
> HBASE-11553, because the replication cluster could have a string-based 
> visibility label implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10247) Client promises about timestamps

2014-08-12 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094352#comment-14094352
 ] 

Andrew Purtell commented on HBASE-10247:


bq. I suppose we could allow timestamp that are strictly older than the next 
timestamp the server would hand out...

Right, for deleting or replacing specific versions, if using SERVER_TS, treat 
the timestamp as a logical value (as is its nature). Allow the client to 
specify on mutation ops the specific version(s) that might refer to existing data.

bq. Disallow TTL with no wall clock type TSPOLICY

Just a note: With something like HLC 
(http://www.cse.buffalo.edu/tech-reports/2014-04.pdf) timestamps would be in a 
regime that allows continued use of wall-clock-like TTLs. 



> Client promises about timestamps
> 
>
> Key: HBASE-10247
> URL: https://issues.apache.org/jira/browse/HBASE-10247
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Priority: Minor
> Fix For: 0.99.0, 2.0.0
>
> Attachments: 10247-do-not-try-may-eat-your-first-born-v2.txt, 
> 10247.txt
>
>
> This is to start a discussion about timestamp promises declared per table or 
> CF.
> For example if a client promises only monotonically increasing timestamps (or 
> no custom set timestamps) and VERSIONS=1, we can aggressively and easily 
> remove old versions of the same row/fam/col from the memstore before we 
> flush, just by supplying a comparator that ignores the timestamp (i.e. two KV 
> just differing by TS would be considered equal).
> That would increase the performance of counters significantly.
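
As a hedged illustration of the "comparator that ignores the timestamp" idea in 
the description above, here is a minimal Java sketch. SimpleCell and the class 
names are hypothetical stand-ins, not HBase's actual KeyValue comparator:

{code:title=Comparator ignoring the timestamp (illustrative sketch)|borderStyle=solid}
import java.util.Comparator;

// Hedged sketch only: a simplified cell type, not HBase's KeyValue.
final class SimpleCell {
  final byte[] row, family, qualifier;
  final long timestamp;
  SimpleCell(byte[] row, byte[] family, byte[] qualifier, long timestamp) {
    this.row = row; this.family = family; this.qualifier = qualifier;
    this.timestamp = timestamp;
  }
}

// Two cells that differ only by timestamp compare as equal, so the newer one
// can replace the older one in the memstore before a flush.
final class IgnoreTimestampComparator implements Comparator<SimpleCell> {
  private int compareBytes(byte[] a, byte[] b) {
    int len = Math.min(a.length, b.length);
    for (int i = 0; i < len; i++) {
      int d = (a[i] & 0xff) - (b[i] & 0xff);
      if (d != 0) return d;
    }
    return a.length - b.length;
  }

  @Override
  public int compare(SimpleCell x, SimpleCell y) {
    int d = compareBytes(x.row, y.row);
    if (d != 0) return d;
    d = compareBytes(x.family, y.family);
    if (d != 0) return d;
    return compareBytes(x.qualifier, y.qualifier); // timestamp intentionally ignored
  }
}
{code}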



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11724) Add to RWQueueRpcExecutor the ability to split get and scan handlers

2014-08-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094359#comment-14094359
 ] 

Hadoop QA commented on HBASE-11724:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12661226/HBASE-11724-v0.patch
  against trunk revision .
  ATTACHMENT ID: 12661226

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestTableLockManager

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10401//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10401//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10401//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10401//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10401//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10401//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10401//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10401//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10401//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10401//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10401//console

This message is automatically generated.

> Add to RWQueueRpcExecutor the ability to split get and scan handlers
> 
>
> Key: HBASE-11724
> URL: https://issues.apache.org/jira/browse/HBASE-11724
> Project: HBase
>  Issue Type: New Feature
>  Components: IPC/RPC
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-11724-v0.patch
>
>
> RWQueueRpcExecutor has the division between read and write requests, but we 
> can also split small reads and long reads. This can be useful to force a 
> deprioritization of scans on the RS.
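
The split described above is driven by configuration. As a hedged illustration 
only (the exact property names are assumptions taken from later HBase releases 
and should be verified against the version in use), a Java sketch might look 
like:

{code:title=Call queue split tuning (property names are assumptions)|borderStyle=solid}
import org.apache.hadoop.conf.Configuration;

public class CallQueueTuningSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Fraction of the RPC call queues dedicated to reads (vs. writes).
    conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f);
    // Within the read queues, the fraction dedicated to long scans -- the
    // additional split proposed by this issue.
    conf.setFloat("hbase.ipc.server.callqueue.scan.ratio", 0.5f);
    System.out.println("read.ratio = "
        + conf.getFloat("hbase.ipc.server.callqueue.read.ratio", 0f));
  }
}
{code}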



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10247) Client promises about timestamps

2014-08-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094389#comment-14094389
 ] 

Lars Hofhansl commented on HBASE-10247:
---

Turns out there are a bunch of other problems with this:
# I envisioned CLIENT_MONOTONIC for use cases where we have an external 
transaction id oracle. The idea is that each transaction gets a write TS and 
writes all changes with that TS. That won't work with multiple transactions in 
parallel: we'd be back to getting timestamps out of order and could no longer 
guarantee that the ordering of the HFiles by time implies the ordering of the 
versions of KVs inside (say T2 writes something, then a flush happens, and then 
T1 writes something). So I think there is no point in having this policy 
(unless there is another use case where it is valid), as there is no 
advantage over MIXED.
# More importantly, server-side-only TSs *cannot* work with replication! When 
region servers fail in a source cluster, edits may reach the slave cluster out of 
order. Then we can either (a) have the slave server assign TSs again, in which 
case we'd have a different ordering of the KVs and hence different data, or (b) 
allow using the TSs from the KVs, and we're back to where we were: edits can be 
back- and future-dated.

I do not see a way out of the dilemma in #2.


> Client promises about timestamps
> 
>
> Key: HBASE-10247
> URL: https://issues.apache.org/jira/browse/HBASE-10247
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Priority: Minor
> Fix For: 0.99.0, 2.0.0
>
> Attachments: 10247-do-not-try-may-eat-your-first-born-v2.txt, 
> 10247.txt
>
>
> This is to start a discussion about timestamp promises declared per table or 
> CF.
> For example if a client promises only monotonically increasing timestamps (or 
> no custom set timestamps) and VERSIONS=1, we can aggressively and easily 
> remove old versions of the same row/fam/col from the memstore before we 
> flush, just by supplying a comparator that ignores the timestamp (i.e. two KV 
> just differing by TS would be considered equal).
> That would increase the performance of counters significantly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11709) TestMasterShutdown can fail sometime

2014-08-12 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11709:


   Resolution: Fixed
Fix Version/s: 2.0.0
   1.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Matteo for the review. Integrated into branch 1 and master.

> TestMasterShutdown can fail sometime 
> -
>
> Key: HBASE-11709
> URL: https://issues.apache.org/jira/browse/HBASE-11709
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 1.0.0, 2.0.0
>
> Attachments: hbase-11709.patch, hbase-11709_v2.patch
>
>
> This applies to 1.0 and master, not previous versions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-5534) HBase shell's return value is almost always 0

2014-08-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094412#comment-14094412
 ] 

Sean Busbey commented on HBASE-5534:


[~posix4e] do the changes in HBASE-11658 sufficiently fix this for you?

> HBase shell's return value is almost always 0
> -
>
> Key: HBASE-5534
> URL: https://issues.apache.org/jira/browse/HBASE-5534
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Alex Newman
>
> So I was trying to write some simple scripts to verify client connections to 
> HBase using the shell and I noticed that the HBase shell always returns 0 
> even when it can't connect to an HBase server. I'm not sure if this is the 
> best option. What would be neat is if you had some capability to run commands 
> like
> hbase shell --command='disable table;\ndrop table;' and it would error out if 
> any of the commands fail to succeed. echo "disable table" | hbase shell could 
> continue to work as it does now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-5534) HBase shell's return value is almost always 0

2014-08-12 Thread Alex Newman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Newman resolved HBASE-5534.


Resolution: Fixed

> HBase shell's return value is almost always 0
> -
>
> Key: HBASE-5534
> URL: https://issues.apache.org/jira/browse/HBASE-5534
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Alex Newman
>
> So I was trying to write some simple scripts to verify client connections to 
> HBase using the shell and I noticed that the HBase shell always returns 0 
> even when it can't connect to an HBase server. I'm not sure if this is the 
> best option. What would be neat is if you had some capability to run commands 
> like
> hbase shell --command='disable table;\ndrop table;' and it would error out if 
> any of the commands fail to succeed. echo "disable table" | hbase shell could 
> continue to work as it does now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-5534) HBase shell's return value is almost always 0

2014-08-12 Thread Alex Newman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094462#comment-14094462
 ] 

Alex Newman commented on HBASE-5534:


lgtm

> HBase shell's return value is almost always 0
> -
>
> Key: HBASE-5534
> URL: https://issues.apache.org/jira/browse/HBASE-5534
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Alex Newman
>
> So I was trying to write some simple scripts to verify client connections to 
> HBase using the shell and I noticed that the HBase shell always returns 0 
> even when it can't connect to an HBase server. I'm not sure if this is the 
> best option. What would be neat is if you had some capability to run commands 
> like
> hbase shell --command='disable table;\ndrop table;' and it would error out if 
> any of the commands fail to succeed. echo "disable table" | hbase shell could 
> continue to work as it does now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11072) Abstract WAL splitting from ZK

2014-08-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094477#comment-14094477
 ] 

Hadoop QA commented on HBASE-11072:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12661239/HBASE-11072-v7.patch
  against trunk revision .
  ATTACHMENT ID: 12661239

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 21 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10402//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10402//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10402//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10402//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10402//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10402//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10402//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10402//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10402//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10402//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/10402//console

This message is automatically generated.

> Abstract WAL splitting from ZK
> --
>
> Key: HBASE-11072
> URL: https://issues.apache.org/jira/browse/HBASE-11072
> Project: HBase
>  Issue Type: Sub-task
>  Components: Consensus, Zookeeper
>Affects Versions: 0.99.0
>Reporter: Mikhail Antonov
>Assignee: Sergey Soldatov
> Attachments: HBASE-11072-1_v2.patch, HBASE-11072-1_v3.patch, 
> HBASE-11072-1_v4.patch, HBASE-11072-2_v2.patch, HBASE-11072-v1.patch, 
> HBASE-11072-v2.patch, HBASE-11072-v3.patch, HBASE-11072-v4.patch, 
> HBASE-11072-v5.patch, HBASE-11072-v6.patch, HBASE-11072-v7.patch, 
> HBASE_11072-1.patch
>
>
> HM side:
>  - SplitLogManager
> RS side:
>  - SplitLogWorker
>  - HLogSplitter and a few handler classes.
> This jira may need to be split further apart into smaller ones.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10247) Client promises about timestamps

2014-08-12 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094472#comment-14094472
 ] 

Andrew Purtell commented on HBASE-10247:


bq. More importantly. Server-side-only TSs cannot work with replication! When 
region servers fail in a source cluster edit may reach the slave cluster out of 
order. 

At some point we might have cross-cluster synchronous replication. Discussed 
elsewhere on other JIRAs. (Or if not actively discussed, we should file one.) In 
the meantime we will have this class of problem in many areas if replication is 
active *and* both clusters are hosting active applications. I don't think that 
invalidates the single-cluster use case for server-side-only TSes. It also 
doesn't invalidate, or at least doesn't change the current semantics of, use 
cases where a passive remote cluster collects updates for disaster recovery.

> Client promises about timestamps
> 
>
> Key: HBASE-10247
> URL: https://issues.apache.org/jira/browse/HBASE-10247
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Priority: Minor
> Fix For: 0.99.0, 2.0.0
>
> Attachments: 10247-do-not-try-may-eat-your-first-born-v2.txt, 
> 10247.txt
>
>
> This is to start a discussion about timestamp promises declared per table or 
> CF.
> For example if a client promises only monotonically increasing timestamps (or 
> no custom set timestamps) and VERSIONS=1, we can aggressively and easily 
> remove old versions of the same row/fam/col from the memstore before we 
> flush, just by supplying a comparator that ignores the timestamp (i.e. two KV 
> just differing by TS would be considered equal).
> That would increase the performance of counters significantly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11685) Incr/decr on the reference count of HConnectionImplementation need be atomic

2014-08-12 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094496#comment-14094496
 ] 

Andrew Purtell commented on HBASE-11685:


v6 patch lgtm. Any comment or concerns [~nkeywal] or [~stack]?

> Incr/decr on the reference count of HConnectionImplementation need be atomic 
> -
>
> Key: HBASE-11685
> URL: https://issues.apache.org/jira/browse/HBASE-11685
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-11685-trunk-v1.diff, HBASE-11685-trunk-v2.diff, 
> HBASE-11685-trunk-v3.diff, HBASE-11685-trunk-v4.diff, 
> HBASE-11685-trunk-v5.diff, HBASE-11685-trunk-v6.diff
>
>
> Currently, the incr/decr operations on the ref count of 
> HConnectionImplementation are not atomic. This may cause the ref count to 
> always stay larger than 0, so the connection is never closed.
> {code}
> /**
>  * Increment this client's reference count.
>  */
> void incCount() {
>   ++refCount;
> }
> /**
>  * Decrement this client's reference count.
>  */
> void decCount() {
>   if (refCount > 0) {
> --refCount;
>   }
> }
> {code}
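
As a hedged sketch only (not the attached patch), the same counters could be 
expressed with java.util.concurrent.atomic.AtomicInteger so that concurrent 
increments and decrements cannot race; the class name below is hypothetical:

{code:title=Atomic reference counting (illustrative sketch)|borderStyle=solid}
import java.util.concurrent.atomic.AtomicInteger;

// Hedged sketch: the original guard (never drop below zero) is preserved,
// but the read-modify-write is now a single atomic operation.
class RefCountSketch {
  private final AtomicInteger refCount = new AtomicInteger(0);

  void incCount() {
    refCount.incrementAndGet();
  }

  void decCount() {
    // Decrement but never go below zero, mirroring the original check.
    refCount.updateAndGet(c -> c > 0 ? c - 1 : 0);
  }

  boolean isZeroReference() {
    return refCount.get() == 0;
  }
}
{code}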



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11699) Region servers exclusion list to HMaster.

2014-08-12 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094489#comment-14094489
 ] 

Andrew Purtell commented on HBASE-11699:


The latest patch doesn't address issues in a masterless deploy, given 
local-file-based management of the exclusion list.  Any thoughts on a different 
solution, [~goms]?

> Region servers exclusion list to HMaster.
> -
>
> Key: HBASE-11699
> URL: https://issues.apache.org/jira/browse/HBASE-11699
> Project: HBase
>  Issue Type: New Feature
>  Components: Admin, Client, regionserver, Zookeeper
>Affects Versions: 0.98.3
>Reporter: Gomathivinayagam Muthuvinayagam
>Priority: Minor
>  Labels: patch
> Fix For: 0.99.0, 2.0.0, 0.98.6
>
> Attachments: HBASE_11699.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently HBase does not support adding a set of region servers to an 
> exclusion list, so that administrators could prevent accidentally started 
> region servers from joining the cluster. Some initial work was done and is 
> available in https://issues.apache.org/jira/browse/HBASE-3833, but it was not 
> finished. 
> I am planning to contribute it as a patch, and I would like to make some 
> improvements as well. Instead of storing the exclusion entries in a file, I 
> am planning to store them in ZooKeeper. Can anyone share thoughts on this? 
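
As a hedged sketch of the ZooKeeper-based idea floated above, assuming a 
hypothetical znode layout with one child znode per excluded hostname (this is 
illustration only, not the attached patch):

{code:title=ZooKeeper-backed exclusion check (hypothetical layout)|borderStyle=solid}
import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class ExclusionListSketch {
  // Assumed znode path; any real implementation would pick its own layout.
  private static final String EXCLUSION_ZNODE = "/hbase/excluded-regionservers";

  /** Returns true if the given hostname has a child znode under the list. */
  public static boolean isExcluded(ZooKeeper zk, String hostname)
      throws KeeperException, InterruptedException {
    List<String> excluded = zk.getChildren(EXCLUSION_ZNODE, false);
    return excluded.contains(hostname);
  }
}
{code}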



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11703) Meta region state could be corrupted

2014-08-12 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11703:


   Resolution: Fixed
Fix Version/s: 2.0.0
   1.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Matteo for the review. Integrated into branch 1 and master.

> Meta region state could be corrupted
> 
>
> Key: HBASE-11703
> URL: https://issues.apache.org/jira/browse/HBASE-11703
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 1.0.0, 2.0.0
>
> Attachments: hbase-11703.patch
>
>
> Internal meta region state could be corrupted if the meta is not on master:
> 1. the meta region server (not master) shuts down,
> 2. meta SSH offlines it without updating the dead server's region list,
> 3. meta is transitioned to pending_open and the previous server (the dead 
> server) of meta is lost,
> 4. meta is assigned somewhere else without updating its previous server,
> 5. normal SSH processes the dead server and offlines all of its dead regions 
> including the meta, so the meta internal state is corrupted



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11725) Backport failover checking change to 1.0

2014-08-12 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-11725:
---

 Summary: Backport failover checking change to 1.0
 Key: HBASE-11725
 URL: https://issues.apache.org/jira/browse/HBASE-11725
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang


In HBASE-11611, we fixed a failover checking bug. We need to backport it to 
branch 1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11725) Backport failover checking change to 1.0

2014-08-12 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11725:


Attachment: hbase-11725.patch

> Backport failover checking change to 1.0
> 
>
> Key: HBASE-11725
> URL: https://issues.apache.org/jira/browse/HBASE-11725
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-11725.patch
>
>
> In HBASE-11611, we fixed a failover checking bug. We need to backport it to 
> branch 1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11725) Backport failover checking change to 1.0

2014-08-12 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11725:


Status: Patch Available  (was: Open)

> Backport failover checking change to 1.0
> 
>
> Key: HBASE-11725
> URL: https://issues.apache.org/jira/browse/HBASE-11725
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-11725.patch
>
>
> In HBASE-11611, we fixed a failover checking bug. We need to backport it to 
> branch 1.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-2357) Coprocessors: Add read-only region replicas (slaves) for availability and fast region recovery

2014-08-12 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-2357.
---

Resolution: Won't Fix

This has been effectively superseded by HBASE-10070

> Coprocessors: Add read-only region replicas (slaves) for availability and 
> fast region recovery
> --
>
> Key: HBASE-2357
> URL: https://issues.apache.org/jira/browse/HBASE-2357
> Project: HBase
>  Issue Type: New Feature
>  Components: master, regionserver
>Reporter: Todd Lipcon
>
> I don't plan on working on this in the short term, but the idea is to extend 
> region ownership to have two modes. Each region has one primary region server 
> and N slave region servers. The slaves would follow the master (probably by 
> streaming the relevant HLog entries directly from it) and be able to serve 
> stale reads. The benefit is twofold: (a) provides the ability to spread read 
> load, (b) enables very fast region failover/rebalance since the memstore is 
> already nearly up to date on the slave RS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-3250) Clean up HBaseAdmin APIs

2014-08-12 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-3250.
---

   Resolution: Duplicate
Fix Version/s: (was: 1.0.0)

This has been superseded by more recent issues.

> Clean up HBaseAdmin APIs
> 
>
> Key: HBASE-3250
> URL: https://issues.apache.org/jira/browse/HBASE-3250
> Project: HBase
>  Issue Type: Improvement
>Reporter: Todd Lipcon
>
> Some discussion on IRC about this - HBaseAdmin is a bit of a mess currently, 
> it has a lot of different calls. They tend to fall into these categories:
> - Things that actually affect the data from a user perspective (eg 
> adding/dropping/enable/disable tables, setting up CFs, etc)
> - Things that affect the underlying data storage (eg force split, flush, 
> compact)
> - Cluster status (isMasterRunning, checkHBaseAvailable)
> I'd propose we separate these into different classes. HBaseAdmin would be 
> reserved for "DDL" operations (eg create/alter/drop table). A new class, eg 
> HBaseRegionTools or HBaseStorageAdmin or something, would handle the calls 
> that affect "implementation details" like split/flush/compact. The cluster 
> status stuff should maybe just go in a new HBaseCluster class which would 
> also have getAdmin(), getTable(), etc?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-3131) Coprocessors: Server side embedding of Cascading/Cascalog operators

2014-08-12 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-3131.
---

Resolution: Later

> Coprocessors: Server side embedding of Cascading/Cascalog operators
> ---
>
> Key: HBASE-3131
> URL: https://issues.apache.org/jira/browse/HBASE-3131
> Project: HBase
>  Issue Type: Brainstorming
>  Components: Coprocessors
>Reporter: Andrew Purtell
>
> We had several discussions at Hadoop World NYC 2010 about what sort of 
> distributed computation framework made sense to build into HBase on top of 
> coprocessors. An interesting suggestion was to provide server side support 
> for Cascading (http://www.cascading.org) and possibly also Cascalog 
> (http://github.com/nathanmarz/cascalog) operators in a coprocessor; then to 
> extend the cascade topological scheduler with a new target for executing flow 
> assemblies in process in parallel up on the HBase cluster, with dependencies 
> considered automatically. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-3206) Detect slow GC loops of death

2014-08-12 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-3206.
---

Resolution: Not a Problem

These days we have o.a.h.h.util.JvmPauseMonitor

> Detect slow GC loops of death
> -
>
> Key: HBASE-3206
> URL: https://issues.apache.org/jira/browse/HBASE-3206
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jean-Daniel Cryans
>
> Something that has been bothering me for a while was to understand when a 
> region server was being slow because of frequent and small GC pauses. I 
> usually go into that RS's GC output, watch it going for a while then decide 
> if it's under some kind of memory pressure. Here's an example (grepped "Full" 
> from the GC log):
> {noformat}
> 12:03:42.460-0800: [Full GC [CMS2010-11-08T12:03:43.081-0800: 
> [CMS-concurrent-mark: 4.381/5.819 secs] [Times: user=60.51 sys=2.54, 
> real=5.82 secs] 
> 12:04:06.916-0800: [Full GC [CMS2010-11-08T12:04:07.316-0800: 
> [CMS-concurrent-mark: 4.006/5.080 secs] [Times: user=55.16 sys=2.13, 
> real=5.08 secs] 
> 12:04:32.559-0800: [Full GC [CMS2010-11-08T12:04:33.286-0800: 
> [CMS-concurrent-mark: 4.133/5.303 secs] [Times: user=53.61 sys=2.40, 
> real=5.30 secs] 
> 12:05:24.299-0800: [Full GC [CMS2010-11-08T12:05:25.397-0800: 
> [CMS-concurrent-sweep: 1.325/1.388 secs] [Times: user=4.66 sys=0.15, 
> real=1.38 secs] 
> 12:05:50.069-0800: [Full GC [CMS2010-11-08T12:05:50.240-0800: 
> [CMS-concurrent-mark: 4.831/6.346 secs] [Times: user=69.43 sys=2.76, 
> real=6.35 secs] 
> 12:06:16.146-0800: [Full GC [CMS2010-11-08T12:06:16.631-0800: 
> [CMS-concurrent-mark: 4.942/7.010 secs] [Times: user=69.25 sys=2.69, 
> real=7.01 secs] 
> 12:07:08.899-0800: [Full GC [CMS2010-11-08T12:07:10.033-0800: 
> [CMS-concurrent-sweep: 1.197/1.202 secs] [Times: user=1.96 sys=0.04, 
> real=1.20 secs] 
> 12:08:01.871-0800: [Full GC [CMS2010-11-08T12:08:01.949-0800: 
> [CMS-concurrent-mark: 4.154/5.443 secs] [Times: user=61.11 sys=2.29, 
> real=5.44 secs] 
> 12:08:53.343-0800: [Full GC [CMS2010-11-08T12:08:53.549-0800: 
> [CMS-concurrent-mark: 4.447/5.713 secs] [Times: user=65.19 sys=2.42, 
> real=5.72 secs] 
> 12:09:42.841-0800: [Full GC [CMS2010-11-08T12:09:43.664-0800: 
> [CMS-concurrent-mark: 4.025/5.053 secs] [Times: user=51.40 sys=2.02, 
> real=5.06 secs]
> {noformat}
> In this case, that RS's TT was down so it was getting all the non-local maps 
> at the end of the job at the same time... generating a >1000% CPU usage. With 
> scanner caching set to 10k, it's easy to understand that there's memory 
> pressure since we have all those objects in flight that we don't account for.
> One solution I was thinking of was to have a sleeper thread that sleeps for 1 
> sec all the time and outputs when it sees that it slept for a bit more than 1 
> sec. Then let's say the region server records that it saw a few of those 
> under x minutes and decides to somehow throttle the traffic.
> What I often saw is that if this situation goes unnoticed, we end up GCing 
> more and more, and in some cases I saw a region server go almost zombie for 
> 2 hours before finally getting its lease expired.
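
As a hedged sketch of the sleeper-thread idea described above (all names are 
hypothetical; current releases ship a comparable utility, 
o.a.h.h.util.JvmPauseMonitor, as noted in the resolution):

{code:title=Simple pause detector (illustrative sketch)|borderStyle=solid}
public class PauseDetector implements Runnable {
  private static final long SLEEP_MS = 1000;
  private static final long WARN_THRESHOLD_MS = 500;

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      long start = System.nanoTime();
      try {
        Thread.sleep(SLEEP_MS);
      } catch (InterruptedException e) {
        return;
      }
      long elapsedMs = (System.nanoTime() - start) / 1_000_000;
      long extra = elapsedMs - SLEEP_MS;
      if (extra > WARN_THRESHOLD_MS) {
        // A long oversleep usually means a GC pause or severe CPU pressure.
        System.err.println("Detected pause of approximately " + extra + " ms");
      }
    }
  }

  public static void main(String[] args) {
    new Thread(new PauseDetector(), "pause-detector").start();
  }
}
{code}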



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-3480) Reduce the size of Result serialization

2014-08-12 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-3480.
---

Resolution: Invalid

Didn't pan out

> Reduce the size of Result serialization
> ---
>
> Key: HBASE-3480
> URL: https://issues.apache.org/jira/browse/HBASE-3480
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.90.0
>Reporter: ryan rawson
> Attachments: HBASE-3480-lzf.txt, HBASE-3480.txt
>
>
> When faced with a gigabit ethernet network connection, things are pretty slow 
> actually.  For example, for a 2 MB reply at a 120 MB/sec line rate, 
> we are talking about 16 ms to transfer that data across a GigE line.  
> This is a pretty significant amount of time.
> So this JIRA is about reducing the size of the Result[] serialization.  By 
> exploiting family and qualifier and rowkey duplication, I created a simple 
> encoding scheme to use a dictionary instead of literal strings.  
> In my testing, I am seeing some success with the sizes.  The average 
> serialized size is about 2/3 of the previous size, but the time to serialize 
> on the regionserver side is way up, by a factor of 10x.  This might be due to 
> the simplistic first implementation, however.
> Here is the post change size:
> grep 'Serialized size' * | perl -ne '/Serialized size: (\d+?) in (\d+?) ns/ ; 
> print $1, " ", $2, "\n" if $1 > 1;' | cut -f1 -d' ' | perl -ne '$sum += 
> $_; $count++; END {print $sum/$count, "\n"}'
> 377047.1125
> Here is the pre change size:
> grep 'Serialized size' * | perl -ne '/Serialized size: (\d+?) in (\d+?) ns/ ; 
> print $1, " ", $2, "\n" if $1 > 1;' | cut -f1 -d' ' | perl -ne '$sum += 
> $_; $count++; END {print $sum/$count, "\n"}'
> 601078.505882353
> That is about a 37% reduction in size (the new average is roughly 63% of the old).
> But times are not so good, here are some samples of the old, in (size) (time 
> in ns)
> 3874599 10685836
> 5582725 11525888
> so that is about 11ms to serialize 3-5mb of data.
> In the new implementation:
> 1898788 118504672
> 1630058 91133003
> this is 91-118 ms for serialized sizes of 1.6-1.9 MB.
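
As a hedged, simplified Java sketch of the dictionary idea described above (not 
the attached patch), where repeated row keys, families and qualifiers are 
written once and then referred to by index:

{code:title=Dictionary encoding (illustrative sketch)|borderStyle=solid}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DictionaryEncodingSketch {
  private final Map<String, Integer> dictionary = new HashMap<>();
  private final List<String> entries = new ArrayList<>();

  /** Returns the dictionary index for a string, adding it on first use. */
  public int encode(String value) {
    Integer idx = dictionary.get(value);
    if (idx == null) {
      idx = entries.size();
      dictionary.put(value, idx);
      entries.add(value);
    }
    return idx;
  }

  /** Looks a string back up from its dictionary index. */
  public String decode(int index) {
    return entries.get(index);
  }
}
{code}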



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-3529) Add search to HBase

2014-08-12 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-3529:
--

Resolution: Later
Status: Resolved  (was: Patch Available)

No pressing use case for embedding search. NGDATA's hbase-indexer 
(https://github.com/NGDATA/hbase-indexer) is an interesting option that 
masquerades as a replication endpoint and feeds incoming updates into Solr.

> Add search to HBase
> ---
>
> Key: HBASE-3529
> URL: https://issues.apache.org/jira/browse/HBASE-3529
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.90.0
>Reporter: Jason Rutherglen
> Attachments: HBASE-3529.patch, HDFS-APPEND-0.20-LOCAL-FILE.patch
>
>
> Using the Apache Lucene library we can add freetext search to HBase.  The 
> advantages of this are:
> * HBase is highly scalable and distributed
> * HBase is realtime
> * Lucene is a fast inverted index and will soon be realtime (see LUCENE-2312)
> * Lucene offers many types of queries not currently available in HBase (eg, 
> AND, OR, NOT, phrase, etc)
> * It's easier to build scalable realtime systems on top of already 
> architecturally sound, scalable realtime data system, eg, HBase.
> * Scaling realtime search will be as simple as scaling HBase.
> Phase 1 - Indexing:
> * Integrate Lucene into HBase such that an index mirrors a given region.  
> This means cascading add, update, and deletes between a Lucene index and an 
> HBase region (and vice versa).
> * Define meta-data to mark a region as indexed, and use a Solr schema to 
> allow the user to define the fields and analyzers.
> * Integrate with the HLog to ensure that index recovery can occur properly 
> (eg, on region server failure)
> * Mirror region splits with indexes (use Lucene's IndexSplitter?)
> * When a region is written to HDFS, also write the corresponding Lucene index 
> to HDFS.
> * A row key will be the ID of a given Lucene document.  The Lucene docstore 
> will explicitly not be used because the document/row data is stored in HBase. 
>  We will need to solve what the best data structure for efficiently mapping a 
> docid -> row key is.  It could be a docstore, field cache, column stride 
> fields, or some other mechanism.
> * Write unit tests for the above
> Phase 2 - Queries:
> * Enable distributed Lucene queries
> * Regions that have Lucene indexes are inherently available and may be 
> searched on, meaning there's no need for a separate search related system in 
> Zookeeper.
> * Integrate search with HBase's RPC mechanis



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-3413) DNS Configs may completely break HBase cluster

2014-08-12 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-3413.
---

Resolution: Cannot Reproduce

Stale issue. Consider reopening or filing a new issue with updated findings 
from clusters using a current release

> DNS Configs may completely break HBase cluster
> --
>
> Key: HBASE-3413
> URL: https://issues.apache.org/jira/browse/HBASE-3413
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.90.0
> Environment: all
>Reporter: Mathias Herberts
>
> I recently experienced a cluster malfunction which was caused by a change in 
> DNS config for services co-hosted on the machines running region servers.
> The RS are specified using IP addresses in the 'regionservers' file. Those 
> machines are 1.example.com to N.example.com (there are A RRs for those names 
> to each of the N IP addresses in 'regionservers').
> Until recently, the PTR RRs for the RS IPs were those x.example.com names.
> Then a service was deployed on some of the x.example.com machines, and new A 
> RRs were added for svc.example.com which point to each of the IPs used for 
> the service.
> Jointly new PTR records were added too for the given IPs. Those PTR records 
> have 'svc.example.com' as their PTRDATA, and this is causing the HBase 
> cluster to get completely confused.
> Since it is perfectly legal to have multiple PTR records, it seems important 
> to make the canonicalization of RS more robust to DNS tweaks.
> Maybe generating a UUID when a RS is started would help, this UUID could be 
> used to register the RS in ZK and we would not rely on DNS for obtaining a 
> stable canonical name (which may not even exist...).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-2249) The PerformanceEvaluation read tests don't take the MemStore into account.

2014-08-12 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-2249.
---

Resolution: Later

> The PerformanceEvaluation read tests don't take the MemStore into account.
> --
>
> Key: HBASE-2249
> URL: https://issues.apache.org/jira/browse/HBASE-2249
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.20.3
>Reporter: Dan Washusen
>  Labels: moved_from_0_20_5
> Attachments: 2249-TRUNK.patch, HBASE-2249.patch, HBASE-2249_v2.patch
>
>
> The write tests in the PerformanceEvaluation flush the MemStore after they 
> complete, as a result the read tests don't take the MemStore into account 
> (because it's always empty).  An optional flag could be provided when the 
> performance evaluation starts that dictates if the table should be flushed 
> after a test completes.  This optional flag would allow changes to the 
> MemStore to be performance tested...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-3451) Cluster migration best practices

2014-08-12 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-3451.
---

Resolution: Not a Problem

Stale brainstorming issue, closing

> Cluster migration best practices
> 
>
> Key: HBASE-3451
> URL: https://issues.apache.org/jira/browse/HBASE-3451
> Project: HBase
>  Issue Type: Brainstorming
>Affects Versions: 0.20.6, 0.89.20100924
>Reporter: Daniel Einspanjer
>Priority: Critical
>
> Mozilla is currently in the process of trying to migrate our HBase cluster to 
> a new datacenter.
> We have our existing 25 node cluster in our SJC datacenter.  It is serving 
> production traffic 24/7.  While we can take downtimes, it is very costly and 
> difficult to take them for more than a few hours in the evening.
> We have two new 30 node clusters in our PHX datacenter.  We are wanting to 
> cut production over to one of these this week.
> The old cluster is running 0.20.6.  The new clusters are running CDH3b3 with 
> HBase 0.89.
> We have tried running a pull distcp using hftp URLs.  If HBase is running, 
> this causes SAX XML Parsing exceptions when a directory is removed during the 
> scan.
> If HBase is stopped, it takes hours for the directory compare to finish 
> before it even begins copying data.
> We have tried a custom backup MR job.  This job uses the map phase to 
> evaluate and copy changed files. It can run while HBase is live, but that 
> results in a dirty copy of the data.
> We have tried running the custom backup job while HBase is shut down as well. 
>  When we do this, even on two back to back runs, it still copies over some 
> data and seems to not be an entirely clean copy.
> When we have gotten what we thought was an entire copy onto the new cluster, 
> we ran add_table on it, but the resulting hbase table had holes.  
> Investigating the holes revealed there were directories that were not 
> transfered.
> We had a meeting to brainstorm ideas, and three further suggestions came up:
> 1. Build a file list of files to transfer on the SJC side, transfer that file 
> list to PHX and then run distcp on it.
> 2. Try a full copy instead of incremental, skipping the expensive file 
> compare step
> 3. Evaluate copying from SJC to S3 then from S3 to PHX.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-2902) Improve our default shipping GC config. and doc -- along the way do a bit of GC myth-busting

2014-08-12 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-2902.
---

Resolution: Duplicate

Stale issue. Superseded by recent blockcache / bucket cache related work.

> Improve our default shipping GC config. and doc -- along the way do a bit of 
> GC myth-busting
> 
>
> Key: HBASE-2902
> URL: https://issues.apache.org/jira/browse/HBASE-2902
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Reporter: stack
> Attachments: Fragger.java
>
>
> This issue is about improving the near-term story, working with our current 
> lot, the slowly evolving (?) 1.6x JVMs and CMS (Longer-term, another issue in 
> hbase tracks the G1 story and longer term, Todd is making a bit of traction 
> over on the GC hotspot list).  
> At the moment we ship with CMS and i-CMS enabled by default.   At a minimum, 
> i-cms does not apply on most hw hbase is deployed on -- i-cms is for hw w/ 2 
> or fewer processors -- and it seems as though we do not use multiple threads 
> doing YG collections; i.e. -XX:UseParNewGC "Use parallel threads in the new 
> generation" (Here's what I see...it seems to be off in jdk6 according to 
> http://www.md.pp.ru/~eu/jdk6options.html#UseParNewGC  but then this says it's 
> on by default when using CMS -> 
> http://blogs.sun.com/jonthecollector/category/Java ... but then this says 
> enable it http://www.austinjug.org/presentations/JDK6PerfUpdate_Dec2009.pdf.  
> I see this when it's enabled: [Rescan (parallel) ... so it seems like it's off. 
>  Need to review the src code).
> We should make the above changes or at least doc them.
> We should consider enabling GC logging by default.  Its low cost apparently 
> (citation below).  We'd just need to do something about the log management.  
> Not sure you can roll them -- investigate -- and anyways we should roll on 
> startup at least so we don't lose GC logs across restarts.
> We should play with initiating ratios; maybe starting CMS earlier will push 
> out the fragmented heap that brings on the killer stop-the-world collection.
> I read somewhere recently that invoking System.gc will run a CMS GC if CMS is 
> enabled.  We should investigate.  If it ran the serial collector, we could at 
> least doc. that users could run a defragmenting stop-the-world serial 
> collection on 'off' times or at least make it so the stop-the-world happened 
> when expected instead of at some random time.
> While here, let's do a bit of myth-busting.  Here are a few postulates:
> + Keep the young generation small or at least, cap its size else it grows to 
> occupy a large part of the heap
> The above is a Ryanism.  Doing the above -- along w/ massive heap size -- has 
> put off the fragmentation that others run into at SU at least.
> Interestingly, this document -- 
> http://www.google.com/url?sa=t&source=web&cd=1&ved=0CBcQFjAA&url=http%3A%2F%2Fmediacast.sun.com%2Fusers%2FLudovic%2Fmedia%2FGCTuningPresentationFISL10.pdf&ei=ZPtaTOiLL5bcsAa7gsl1&usg=AFQjCNHP691SIIE-6NSKccM4mZtm1U6Ahw&sig2=2cjvcaeyn1aISL2THEENjQ
>  -- would seem to recommend near the opposite in that it suggests that when 
> using CMS, do all you can to keep stuff in the YG.  Avoid having stuff age up 
> to the tenured heap if you can.  This would seem imply using a larger YG.
> Chatting w/ Ryan, the reason to keep the YG small is so we don't have long 
> pauses doing YG collections.  According to the above citation, it's not big 
> YGs that cause long YG pauses but the copying of data (not sure if its 
> copying of data inside the YG or if it meant copying up to tenured -- 
> chatting w/ Ryan we thought there'd be no difference -- but we should 
> investigate)
> I took a look at a running upload with a small heap admittedly.  What I was 
> seeing was that using our defaults, rare was anything in YG of age > 1 GC; 
> i.e. near everything in YG was being promoted.  This may have been a symptom 
> of my small (default) heap but we should look into this and try and ensure 
> objects are promoted because they are old, not because there is not enough 
> space in YG. 
> + We should write a slab allocator or allocate memory outside of the JVM heap
> Thinking on this, slab allocator, while a lot of work, I can see it helping 
> us w/ block cache, but what if memstore is the fragmented-heap maker?  In 
> this case, slab-allocator is only part of the fix.  It should be easy to see 
> which is the fragmented heap maker since we can turn off the cache easy 
> enough (though it seems like its accessed anyways even if disabled -- need to 
> make sure its not doing allocations to the cache in this case)
> Other things while on this topic.  We need to come up w/ a loading that 
> brings on the CMS fault that comes 

[jira] [Resolved] (HBASE-3107) Breakup HLogSplitTest unit tests.

2014-08-12 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-3107.
---

Resolution: Not a Problem

Stale, closing

> Breakup HLogSplitTest unit tests.
> -
>
> Key: HBASE-3107
> URL: https://issues.apache.org/jira/browse/HBASE-3107
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Alex Newman
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-3558) Warnings if RS times are out of sync

2014-08-12 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-3558.
---

Resolution: Not a Problem

Current releases include logic for logging this warning

> Warnings if RS times are out of sync
> 
>
> Key: HBASE-3558
> URL: https://issues.apache.org/jira/browse/HBASE-3558
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.89.20100924
>Reporter: Sean Sechrist
>Priority: Minor
>
> Last night we ran into a problem with the times on RSs being out of sync by 1 
> minute. The times were being reset by ~70s often because we were getting 
> different responses from pool.ntpd.org.
> This caused lost ZK sessions and problems writing to datanodes,  so all the 
> RSs kept shutting down.
> I think it would be useful to have HBaseFsck check to see if the times on the 
> region servers are out of sync. Or maybe put a warning on the master web ui 
> or something. 
> This seems related to HBASE-3168, but applies when region servers become out 
> of sync once they already joined the cluster (due to NTP issues or something 
> else).





[jira] [Resolved] (HBASE-3087) Metrics are not respecting period configuration

2014-08-12 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-3087.
---

Resolution: Cannot Reproduce
  Assignee: (was: Hairong Kuang)

Didn't go anywhere. Stale on account of the migration to Hadoop 2 metrics

> Metrics are not respecting period configuration
> ---
>
> Key: HBASE-3087
> URL: https://issues.apache.org/jira/browse/HBASE-3087
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.90.0
>Reporter: Jonathan Gray
>
> This was discussed on an email thread and then summarized in a comment from 
> stack over in HBASE-2888 (which is a broader jira):
> {quote}
> Setting hbase.period in hadoop-metrics.properties doesn't seem to have an 
> effect; counts are off. Here's what I noticed digging in code:
> 'hadoop-metrics.properties' gets read up into a metrics attributes map but 
> nothing seems to be done w/ them subsequently. Reading up in hadoop, in 
> branch-0.20/src/core/org/apache/hadoop/metrics/package.html, it seems to 
> imply that we need to getAttribute and set them after we make a metrics 
> Context; i.e. in this case, call setPeriod in RegionServerMetrics, etc.?
> More broadly, need to make sure settings in hadoop-metrics.properties take 
> effect when changed.
> {quote}
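
For reference, the period in question is the per-context {{period}} attribute in hadoop-metrics.properties. A minimal, illustrative configuration (the FileContext and the values below are just an example, not from any patch here):

{code}
# hadoop-metrics.properties -- illustrative only
hbase.class=org.apache.hadoop.metrics.file.FileContext
hbase.period=10
hbase.fileName=/tmp/hbase_metrics.log
{code}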





[jira] [Commented] (HBASE-11725) Backport failover checking change to 1.0

2014-08-12 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094566#comment-14094566
 ] 

Esteban Gutierrez commented on HBASE-11725:
---

+1

> Backport failover checking change to 1.0
> 
>
> Key: HBASE-11725
> URL: https://issues.apache.org/jira/browse/HBASE-11725
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-11725.patch
>
>
> In HBASE-11611, we fixed a failover checking bug. We need to backport it to 
> branch 1.





[jira] [Commented] (HBASE-11699) Region servers exclusion list to HMaster.

2014-08-12 Thread Gomathivinayagam Muthuvinayagam (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094572#comment-14094572
 ] 

Gomathivinayagam Muthuvinayagam commented on HBASE-11699:
-

I am planning to have the information stored in the meta table. I am looking into 
MetaTableAccessor and other relevant classes.
Probably the rowkey will be a combination of [ [prefix-partial-key] 
[allow/deny] [hostname] ], and there will be no value for this rowkey. I will 
scan the included and excluded hosts from these rows. Let me 
find out whether it will work. 
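
A rough sketch of what such keys and a prefix scan over them could look like, using the standard HBase client API; the prefix, separator, and marker column below are hypothetical, not from the attached patch:

{code}
// Hypothetical key layout: the rowkey itself carries all the information.
String hostname = "rs1.example.com";
byte[] row = Bytes.toBytes("exclusion:deny:" + hostname);
Put put = new Put(row);
// empty value; a marker column is still needed so the row exists
put.add(Bytes.toBytes("info"), Bytes.toBytes("d"), HConstants.EMPTY_BYTE_ARRAY);

// read back all denied hosts with a prefix scan
Scan scan = new Scan();
scan.setFilter(new PrefixFilter(Bytes.toBytes("exclusion:deny:")));
{code}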

> Region servers exclusion list to HMaster.
> -
>
> Key: HBASE-11699
> URL: https://issues.apache.org/jira/browse/HBASE-11699
> Project: HBase
>  Issue Type: New Feature
>  Components: Admin, Client, regionserver, Zookeeper
>Affects Versions: 0.98.3
>Reporter: Gomathivinayagam Muthuvinayagam
>Priority: Minor
>  Labels: patch
> Fix For: 0.99.0, 2.0.0, 0.98.6
>
> Attachments: HBASE_11699.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently HBase does not support adding a set of region servers to an 
> exclusion list, so that administrators can prevent accidentally started 
> region servers from joining the cluster. Some initial work was done in 
> https://issues.apache.org/jira/browse/HBASE-3833, but it was not finished. 
> I am planning to contribute it as a patch, and I would like to make some 
> improvements as well. Instead of storing the exclusion entries in a file, I 
> am planning to store them in ZooKeeper. Can anyone share thoughts on this? 





[jira] [Created] (HBASE-11726) Master should fail-safe if starting with a pre 0.96 layout

2014-08-12 Thread Esteban Gutierrez (JIRA)
Esteban Gutierrez created HBASE-11726:
-

 Summary: Master should fail-safe if starting with a pre 0.96 layout
 Key: HBASE-11726
 URL: https://issues.apache.org/jira/browse/HBASE-11726
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.5, 0.96.2, 0.99.0, 2.0.0
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
Priority: Critical


We recently saw this: if a user inadvertently starts the HBase Master after 
deploying new HBase binaries (any version that supports namespaces), the 
HMaster will start migrating the {{hbase.version}} file to PBs per 
HBASE-5453, which writes a new, PB-serialized version file that still carries 
the old version number. Further restarts of the master will then fail because 
the {{hbase.version}} file has been migrated to PBs and there is a version 
mismatch. The right approach should be to fail-safe the master if we find an old 
{{hbase.version}} file, in order to force the user to run the upgrade tool.





[jira] [Commented] (HBASE-11467) New impl of Registry interface not using ZK + new RPCs on master protocol

2014-08-12 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094579#comment-14094579
 ] 

Andrey Stepachev commented on HBASE-11467:
--

[~mantonov], great work!

How about using some ClusterIdProvider which would have a couple of 
implementations? Even so, it is not bad to have ZK for configuration purposes 
if it already exists in the infrastructure. As a result we could build a bunch 
of such providers and address them by some human-readable cluster name.

As a sketch, the URI could look like this:
zk://zk.host1:port,zk.host2:port,zk.host3:port/mycluster
If such a URI is configured on both server and client, the server will be able 
to write all needed data into ZK.

Alternatively, a URI like conf://mycluster could be provided; in that case all 
configuration would be read from the config (it could be some property-name 
scheme to find out masters and such).
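
A minimal sketch of the dispatch this suggests; ClusterIdProvider and everything inside it are hypothetical names from this comment, not existing HBase classes:

{code}
// Purely illustrative: choose a provider implementation from the URI scheme.
interface ClusterIdProvider {
  String getClusterId();
}

final class ClusterIdProviders {
  static ClusterIdProvider forUri(final String uri) {
    if (uri.startsWith("zk://")) {
      // zk://zk.host1:port,zk.host2:port,zk.host3:port/mycluster
      return new ClusterIdProvider() {
        public String getClusterId() {
          return "cluster id read from ZK for " + uri;   // placeholder body
        }
      };
    }
    if (uri.startsWith("conf://")) {
      // conf://mycluster -- resolve everything from the local Configuration
      return new ClusterIdProvider() {
        public String getClusterId() {
          return "cluster id read from the config for " + uri;   // placeholder body
        }
      };
    }
    throw new IllegalArgumentException("Unsupported cluster URI: " + uri);
  }
}
{code}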

> New impl of Registry interface not using ZK + new RPCs on master protocol
> -
>
> Key: HBASE-11467
> URL: https://issues.apache.org/jira/browse/HBASE-11467
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, Consensus, Zookeeper
>Affects Versions: 2.0.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 2.0.0
>
> Attachments: HBASE-11467.patch, HBASE-11467.patch
>
>
> Currently there's only one implementation of the Registry interface, which is 
> using ZK to get info about meta. We need to create an implementation which will 
> use RPC calls to the master the client is connected to.
> Review of an early version of the patch is here: https://reviews.apache.org/r/24296/





[jira] [Created] (HBASE-11727) Assignment wait time error in case of ServerNotRunningYetException

2014-08-12 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-11727:
---

 Summary: Assignment wait time error in case of 
ServerNotRunningYetException
 Key: HBASE-11727
 URL: https://issues.apache.org/jira/browse/HBASE-11727
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang


{quote}
maxWaitTime = this.server.getConfiguration().
  getLong("hbase.regionserver.rpc.startup.waittime", 6);
{quote}

It should add the current time.
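
A minimal sketch of what "add the current time" means here: the deadline should be an absolute timestamp, i.e. "now" plus the configured wait, rather than the configured wait alone. This is an illustration, not the attached patch, and the default value below is a placeholder.

{code}
long configuredWait = this.server.getConfiguration().getLong(
    "hbase.regionserver.rpc.startup.waittime", 60000);   // placeholder default
maxWaitTime = System.currentTimeMillis() + configuredWait;
{code}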





[jira] [Commented] (HBASE-11467) New impl of Registry interface not using ZK + new RPCs on master protocol

2014-08-12 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094583#comment-14094583
 ] 

Andrey Stepachev commented on HBASE-11467:
--

Even http://some.load.balancer:60010/clusterId would work.

> New impl of Registry interface not using ZK + new RPCs on master protocol
> -
>
> Key: HBASE-11467
> URL: https://issues.apache.org/jira/browse/HBASE-11467
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, Consensus, Zookeeper
>Affects Versions: 2.0.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 2.0.0
>
> Attachments: HBASE-11467.patch, HBASE-11467.patch
>
>
> Currently there's only one implementation of the Registry interface, which is 
> using ZK to get info about meta. We need to create an implementation which will 
> use RPC calls to the master the client is connected to.
> Review of an early version of the patch is here: https://reviews.apache.org/r/24296/





[jira] [Updated] (HBASE-11727) Assignment wait time error in case of ServerNotRunningYetException

2014-08-12 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11727:


Status: Patch Available  (was: Open)

> Assignment wait time error in case of ServerNotRunningYetException
> --
>
> Key: HBASE-11727
> URL: https://issues.apache.org/jira/browse/HBASE-11727
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-11727.patch
>
>
> {quote}
> maxWaitTime = this.server.getConfiguration().
>   getLong("hbase.regionserver.rpc.startup.waittime", 6);
> {quote}
> It should add the current time.





[jira] [Updated] (HBASE-11727) Assignment wait time error in case of ServerNotRunningYetException

2014-08-12 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11727:


Attachment: hbase-11727.patch

> Assignment wait time error in case of ServerNotRunningYetException
> --
>
> Key: HBASE-11727
> URL: https://issues.apache.org/jira/browse/HBASE-11727
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-11727.patch
>
>
> {quote}
> maxWaitTime = this.server.getConfiguration().
>   getLong("hbase.regionserver.rpc.startup.waittime", 6);
> {quote}
> It should add the current time.





[jira] [Commented] (HBASE-11727) Assignment wait time error in case of ServerNotRunningYetException

2014-08-12 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094586#comment-14094586
 ] 

Esteban Gutierrez commented on HBASE-11727:
---

+1, it's a trivial bug.

> Assignment wait time error in case of ServerNotRunningYetException
> --
>
> Key: HBASE-11727
> URL: https://issues.apache.org/jira/browse/HBASE-11727
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Attachments: hbase-11727.patch
>
>
> {quote}
> maxWaitTime = this.server.getConfiguration().
>   getLong("hbase.regionserver.rpc.startup.waittime", 6);
> {quote}
> It should add the current time.





[jira] [Commented] (HBASE-11072) Abstract WAL splitting from ZK

2014-08-12 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094595#comment-14094595
 ] 

Mikhail Antonov commented on HBASE-11072:
-

When I look at the results of the last patch QA run, I don't really see where the 
build failed. It reported no test failures.

{code}
Running 
org.apache.hadoop.hbase.replication.regionserver.TestReplicationHLogReaderManager
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 253.88 sec

Results :

Tests run: 1993, Failures: 0, Errors: 0, Skipped: 19

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] HBase . SUCCESS [  2.309 s]
[INFO] HBase - Common  SUCCESS [ 31.927 s]
[INFO] HBase - Protocol .. SUCCESS [  7.404 s]
[INFO] HBase - Client  SUCCESS [ 53.972 s]
[INFO] HBase - Hadoop Compatibility .. SUCCESS [  5.198 s]
[INFO] HBase - Hadoop Two Compatibility .. SUCCESS [  1.634 s]
[INFO] HBase - Prefix Tree ... SUCCESS [  3.124 s]
[INFO] HBase - Server  FAILURE [57:25 min]
[INFO] HBase - Testing Util .. SKIPPED
[INFO] HBase - Thrift  SKIPPED
[INFO] HBase - Shell . SKIPPED
[INFO] HBase - Integration Tests . SKIPPED
[INFO] HBase - Examples .. SKIPPED
[INFO] HBase - Assembly .. SKIPPED
{code}

[~stack] what do you think about this version of the patch?

> Abstract WAL splitting from ZK
> --
>
> Key: HBASE-11072
> URL: https://issues.apache.org/jira/browse/HBASE-11072
> Project: HBase
>  Issue Type: Sub-task
>  Components: Consensus, Zookeeper
>Affects Versions: 0.99.0
>Reporter: Mikhail Antonov
>Assignee: Sergey Soldatov
> Attachments: HBASE-11072-1_v2.patch, HBASE-11072-1_v3.patch, 
> HBASE-11072-1_v4.patch, HBASE-11072-2_v2.patch, HBASE-11072-v1.patch, 
> HBASE-11072-v2.patch, HBASE-11072-v3.patch, HBASE-11072-v4.patch, 
> HBASE-11072-v5.patch, HBASE-11072-v6.patch, HBASE-11072-v7.patch, 
> HBASE_11072-1.patch
>
>
> HM side:
>  - SplitLogManager
> RS side:
>  - SplitLogWorker
>  - HLogSplitter and a few handler classes.
> This jira may need to be split further apart into smaller ones.





[jira] [Updated] (HBASE-11725) Backport failover checking change to 1.0

2014-08-12 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11725:


   Resolution: Fixed
Fix Version/s: 1.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Esteban for the review. Integrated into branch 1.

> Backport failover checking change to 1.0
> 
>
> Key: HBASE-11725
> URL: https://issues.apache.org/jira/browse/HBASE-11725
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 1.0.0
>
> Attachments: hbase-11725.patch
>
>
> In HBASE-11611, we fixed a failover checking bug. We need to backport it to 
> branch 1.





[jira] [Commented] (HBASE-10247) Client promises about timestamps

2014-08-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094614#comment-14094614
 ] 

Lars Hofhansl commented on HBASE-10247:
---

Yep, the current semantics are sound.

Actually the issue is not that both clusters need to be active... We'll have the 
issue above even with a single writer cluster: either we accept out-of-order 
timestamps at the slave (when we use the timestamps of the primary) or we 
accept out-of-order edits at the slave (when we use the slave to assign local 
server TSs). The former goes against what I want to achieve; the latter leads to 
incorrect data.

A way around this would be to strictly order *all* edits as they are replicated, 
which is very difficult as replication is decentralized (which is especially an 
issue when RSs die and others take over their replication work).  Synchronous 
replication would by definition do that.


> Client promises about timestamps
> 
>
> Key: HBASE-10247
> URL: https://issues.apache.org/jira/browse/HBASE-10247
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Priority: Minor
> Fix For: 0.99.0, 2.0.0
>
> Attachments: 10247-do-not-try-may-eat-your-first-born-v2.txt, 
> 10247.txt
>
>
> This is to start a discussion about timestamp promises declared per table or 
> CF.
> For example if a client promises only monotonically increasing timestamps (or 
> no custom set timestamps) and VERSIONS=1, we can aggressively and easily 
> remove old versions of the same row/fam/col from the memstore before we 
> flush, just by supplying a comparator that ignores the timestamp (i.e. two KV 
> just differing by TS would be considered equal).
> That would increase the performance of counters significantly.





[jira] [Commented] (HBASE-10127) support balance table

2014-08-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094622#comment-14094622
 ] 

Sean Busbey commented on HBASE-10127:
-

Should this be closed as wontfix?

> support balance table
> -
>
> Key: HBASE-10127
> URL: https://issues.apache.org/jira/browse/HBASE-10127
> Project: HBase
>  Issue Type: Improvement
>  Components: master, shell
>Affects Versions: 0.94.14
>Reporter: cuijianwei
> Attachments: HBASE-10127-0.94-v1.patch
>
>
> HMaster provides an rpc interface, 'balance()', to balance all the regions 
> among region servers in the cluster. Sometimes, we might want to balance all 
> the regions belonging to a table while keeping the region assignments of 
> other tables. This demand may arise in a shared cluster where we want to 
> balance regions for one application's table without affecting other 
> applications. Therefore, would it be useful to extend the current 
> 'balance()' interface to only balance regions of the same table? 





[jira] [Updated] (HBASE-11711) In some environment HBase MiniCluster fails to load because Master info port clobbering

2014-08-12 Thread Alex Newman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Newman updated HBASE-11711:


Description: 
Currently TestZooKeeper attempts to launch two masters and the info ports 
collide. This disables the info port for this test.


  was:
Currently TestZooKeeper attempts to launch two masters and the info ports 
collide. This disables the info port for this test.

2014-08-11 17:33:36,863 WARN  [main] log.Slf4jLog(76): failed Server@d919544: 
java.net.BindException: Address already in use
2014-08-11 17:33:36,864 ERROR [main] hbase.MiniHBaseCluster(230): Error 
starting cluster
java.lang.RuntimeException: Failed construction of Master: class 
org.apache.hadoop.hbase.master.HMasterAddress already in use
at 
org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:147)
at 
org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:214)
at 
org.apache.hadoop.hbase.LocalHBaseCluster.(LocalHBaseCluster.java:152)
at 
org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:214)
at 
org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:94)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:896)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:865)
at org.apache.hadoop.hbase.TestZooKeeper.setUp(TestZooKeeper.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runners.Suite.runChild(Suite.java:127)
at org.junit.runners.Suite.runChild(Suite.java:26)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:74)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:211)
at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:67)
Caused by: java.io.IOException: Failed to start redirecting jetty server
at 
org.apache.hadoop.hbase.master.HMaster.putUpJettyServer(HMaster.java:330)
at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:304)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at 
org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:142)
... 39 more
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.jav

[jira] [Commented] (HBASE-11711) In some environment HBase MiniCluster fails to load because Master info port clobbering

2014-08-12 Thread Alex Newman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094633#comment-14094633
 ] 

Alex Newman commented on HBASE-11711:
-

[~ted_yu] 
2014-08-11 17:33:36,863 WARN  [main] log.Slf4jLog(76): failed Server@d919544: 
java.net.BindException: Address already in use
2014-08-11 17:33:36,864 ERROR [main] hbase.MiniHBaseCluster(230): Error 
starting cluster
java.lang.RuntimeException: Failed construction of Master: class 
org.apache.hadoop.hbase.master.HMasterAddress already in use
at 
org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:147)
at 
org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:214)
at 
org.apache.hadoop.hbase.LocalHBaseCluster.(LocalHBaseCluster.java:152)
at 
org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:214)
at 
org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:94)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:896)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:865)
at org.apache.hadoop.hbase.TestZooKeeper.setUp(TestZooKeeper.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runners.Suite.runChild(Suite.java:127)
at org.junit.runners.Suite.runChild(Suite.java:26)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:74)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:211)
at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:67)
Caused by: java.io.IOException: Failed to start redirecting jetty server
at 
org.apache.hadoop.hbase.master.HMaster.putUpJettyServer(HMaster.java:330)
at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:304)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at 
org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:142)
... 39 more
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:414)
at sun.nio.ch.Net.bind(Net.java:406)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74

[jira] [Commented] (HBASE-10247) Client promises about timestamps

2014-08-12 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094632#comment-14094632
 ] 

Andrew Purtell commented on HBASE-10247:


bq. Either we accept out of order timestamps at the slave (when we use the 
timestamps of the primary)

Yes, but this can be acceptable for DR use cases. Also, if backups are handled 
through some other means, such as snapshots+copy and replication is not active, 
having an option to restrict timestamps to server generated stamps can still be 
useful. Until we're bridging clusters with Raft etc.

> Client promises about timestamps
> 
>
> Key: HBASE-10247
> URL: https://issues.apache.org/jira/browse/HBASE-10247
> Project: HBase
>  Issue Type: Brainstorming
>Reporter: Lars Hofhansl
>Priority: Minor
> Fix For: 0.99.0, 2.0.0
>
> Attachments: 10247-do-not-try-may-eat-your-first-born-v2.txt, 
> 10247.txt
>
>
> This is to start a discussion about timestamp promises declared per table or 
> CF.
> For example if a client promises only monotonically increasing timestamps (or 
> no custom set timestamps) and VERSIONS=1, we can aggressively and easily 
> remove old versions of the same row/fam/col from the memstore before we 
> flush, just by supplying a comparator that ignores the timestamp (i.e. two KV 
> just differing by TS would be considered equal).
> That would increase the performance of counters significantly.





[jira] [Resolved] (HBASE-5841) hbase shell translate_hbase_exceptions() rely on table name as first argument

2014-08-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-5841.


Resolution: Duplicate

Fixed by HBASE-10533

> hbase shell translate_hbase_exceptions() rely on table name as first argument
> -
>
> Key: HBASE-5841
> URL: https://issues.apache.org/jira/browse/HBASE-5841
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 0.92.1, 0.94.0, 0.95.2
>Reporter: Matteo Bertozzi
>  Labels: shell
>
> shell/commands.rb translate_hbase_exceptions() relies on the fact that the 
> table name is the first argument.
> This is true for many of the commands, but for example:
>  - grant(user, rights, table_name, family=nil, qualifier=nil)
>  - revoke(user, table_name, family=nil, qualifier=nil)
> have user as the first argument, so if you specify a table that doesn't exist, or 
> one where you don't have access, you end up with a message like "Unknown table 
> {username}" and so on...





[jira] [Updated] (HBASE-11727) Assignment wait time error in case of ServerNotRunningYetException

2014-08-12 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11727:


   Resolution: Fixed
Fix Version/s: 0.98.6
   2.0.0
   1.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Esteban for the review. Integrated the fix into 0.98, 1.0, and master.

> Assignment wait time error in case of ServerNotRunningYetException
> --
>
> Key: HBASE-11727
> URL: https://issues.apache.org/jira/browse/HBASE-11727
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 1.0.0, 2.0.0, 0.98.6
>
> Attachments: hbase-11727.patch
>
>
> {quote}
> maxWaitTime = this.server.getConfiguration().
>   getLong("hbase.regionserver.rpc.startup.waittime", 6);
> {quote}
> It should add the current time.





[jira] [Commented] (HBASE-11709) TestMasterShutdown can fail sometime

2014-08-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094653#comment-14094653
 ] 

Hudson commented on HBASE-11709:


FAILURE: Integrated in HBase-TRUNK #5392 (See 
[https://builds.apache.org/job/HBase-TRUNK/5392/])
HBASE-11709 TestMasterShutdown can fail sometime (jxiang: rev 
9abe2da9e80b83ca41f9789bbb0a269631492b6b)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java


> TestMasterShutdown can fail sometime 
> -
>
> Key: HBASE-11709
> URL: https://issues.apache.org/jira/browse/HBASE-11709
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 1.0.0, 2.0.0
>
> Attachments: hbase-11709.patch, hbase-11709_v2.patch
>
>
> This applies to 1.0 and master, not previous versions.





[jira] [Commented] (HBASE-11703) Meta region state could be corrupted

2014-08-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094652#comment-14094652
 ] 

Hudson commented on HBASE-11703:


FAILURE: Integrated in HBase-TRUNK #5392 (See 
[https://builds.apache.org/job/HBase-TRUNK/5392/])
HBASE-11703 Meta region state could be corrupted (jxiang: rev 
1262f1e2d49c731eba866fc0382956bfe3dd33dc)
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java


> Meta region state could be corrupted
> 
>
> Key: HBASE-11703
> URL: https://issues.apache.org/jira/browse/HBASE-11703
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 1.0.0, 2.0.0
>
> Attachments: hbase-11703.patch
>
>
> Internal meta region state could be corrupted if the meta is not on master:
> 1. the meta region server (not master) shuts down,
> 2. meta SSH offlines it without updating the dead server's region list,
> 3. meta is transitioned to pending_open and the previous server (the dead 
> server) of meta is lost,
> 4. meta is assigned somewhere else without updating its previous server,
> 5. normal SSH processes the dead server and offlines all of its dead regions 
> including the meta, so the meta internal state is corrupted





[jira] [Commented] (HBASE-11702) Better introspection of long running compactions

2014-08-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094657#comment-14094657
 ] 

Lars Hofhansl commented on HBASE-11702:
---

Hopefully {{if (bytesWritten > closeCheckInterval) {}} is not true too 
frequently. System.currentTimeMillis() is not free.

+1
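
For context, the pattern being discussed is roughly the following. This is a generic sketch, not the attached patch; LOG, closeCheckInterval, the writer, and the cell list are stand-ins. The point is that the clock is consulted only after every closeCheckInterval bytes, so the progress logging rides on a check that already exists:

{code}
long bytesWrittenProgress = 0;
long lastLogTime = System.currentTimeMillis();
for (byte[] kv : cellsToCompact) {            // stand-in for the compaction scanner loop
  writer.write(kv);                           // stand-in for the store file writer
  bytesWrittenProgress += kv.length;
  if (bytesWrittenProgress > closeCheckInterval) {
    long now = System.currentTimeMillis();    // only reached every closeCheckInterval bytes
    if (now - lastLogTime > 60 * 1000L) {
      LOG.debug("Compaction progress: ...");  // periodic DEBUG progress line
      lastLogTime = now;
    }
    bytesWrittenProgress = 0;
  }
}
{code}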


> Better introspection of long running compactions
> 
>
> Key: HBASE-11702
> URL: https://issues.apache.org/jira/browse/HBASE-11702
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 0.99.0, 2.0.0, 0.98.6
>
> Attachments: HBASE-11702.patch
>
>
> For better introspection of long running compactions, periodically print 
> compaction progress for a file at DEBUG level (thread name, file path, total 
> compacted KVs, total compacted bytes, completion percent, rate).





[jira] [Commented] (HBASE-11702) Better introspection of long running compactions

2014-08-12 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094664#comment-14094664
 ] 

Lars Hofhansl commented on HBASE-11702:
---

bq. Not worth the effort of making RS-wide metrics out of that.
Seems like a very useful metric to me. If we had total bytes/KVs flushed, total 
bytes/KVs minor-compacted, and total bytes/KVs major-compacted per region server, 
those would be very useful metrics.
We would only need to collect the metric locally per flush/compaction and then 
update the RS metric with the total for that flush/compaction.
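
A sketch of that accumulate-locally-then-publish idea; the loop and the AtomicLong counters below are stand-ins for the real region server metrics, made up for illustration only:

{code}
// Illustrative only: totals are accumulated per compaction, then the
// region-server-wide counters (AtomicLongs standing in for real metrics)
// are bumped once when the compaction finishes.
AtomicLong rsCompactedCells = new AtomicLong();
AtomicLong rsCompactedBytes = new AtomicLong();

long compactedCells = 0;
long compactedBytes = 0;
for (byte[] kv : kvsToCompact) {              // stand-in for the compaction loop
  compactedCells++;
  compactedBytes += kv.length;
}
rsCompactedCells.addAndGet(compactedCells);   // publish once per compaction
rsCompactedBytes.addAndGet(compactedBytes);
{code}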

> Better introspection of long running compactions
> 
>
> Key: HBASE-11702
> URL: https://issues.apache.org/jira/browse/HBASE-11702
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 0.99.0, 2.0.0, 0.98.6
>
> Attachments: HBASE-11702.patch
>
>
> For better introspection of long running compactions, periodically print 
> compaction progress for a file at DEBUG level (thread name, file path, total 
> compacted KVs, total compacted bytes, completion percent, rate).





[jira] [Commented] (HBASE-11711) In some environment HBase MiniCluster fails to load because Master info port clobbering

2014-08-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094670#comment-14094670
 ] 

Ted Yu commented on HBASE-11711:


I have not seen the above stack trace before.

I don't think the 'Address already in use' was caused by the same test - 
otherwise Jenkins should have failed.

> In some environment HBase MiniCluster fails to load because Master info port 
> clobbering
> ---
>
> Key: HBASE-11711
> URL: https://issues.apache.org/jira/browse/HBASE-11711
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Newman
>Assignee: Alex Newman
> Fix For: 2.0.0
>
> Attachments: HBASE-11711-v1.patch
>
>
> Currently TestZooKeeper attempts to launch two masters and the info ports 
> collide. This disables the info port for this test.





[jira] [Commented] (HBASE-11702) Better introspection of long running compactions

2014-08-12 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094676#comment-14094676
 ] 

Andrew Purtell commented on HBASE-11702:


bq. Would only need to collect the metric locally per flush/compaction and then 
update the RS metric with the total for the flush/compaction.

Ah, sure I can do that. Easy enough. I was thinking of reporting in progress. 
Back with a new patch shortly.

> Better introspection of long running compactions
> 
>
> Key: HBASE-11702
> URL: https://issues.apache.org/jira/browse/HBASE-11702
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 0.99.0, 2.0.0, 0.98.6
>
> Attachments: HBASE-11702.patch
>
>
> For better introspection of long running compactions, periodically print 
> compaction progress for a file at DEBUG level (thread name, file path, total 
> compacted KVs, total compacted bytes, completion percent, rate).





[jira] [Commented] (HBASE-11711) In some environment HBase MiniCluster fails to load because Master info port clobbering

2014-08-12 Thread Alex Newman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094680#comment-14094680
 ] 

Alex Newman commented on HBASE-11711:
-

By default, I assume the HBase info server isn't launched in the minicluster, or a 
random port is used?
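
For reference, a test can sidestep the collision by turning the info server off, or letting it pick an ephemeral port, before starting the mini cluster. A minimal sketch, assuming the standard {{hbase.master.info.port}} setting; this is not the attached patch:

{code}
Configuration conf = HBaseConfiguration.create();
conf.setInt("hbase.master.info.port", -1);    // -1 disables the master info server
// conf.setInt("hbase.master.info.port", 0);  // alternatively, 0 should bind an ephemeral port
HBaseTestingUtility util = new HBaseTestingUtility(conf);
util.startMiniCluster();
{code}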

> In some environment HBase MiniCluster fails to load because Master info port 
> clobbering
> ---
>
> Key: HBASE-11711
> URL: https://issues.apache.org/jira/browse/HBASE-11711
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Newman
>Assignee: Alex Newman
> Fix For: 2.0.0
>
> Attachments: HBASE-11711-v1.patch
>
>
> Currently TestZooKeeper attempts to launch two masters and the info ports 
> collide. This disables the info port for this test.





[jira] [Updated] (HBASE-11703) Meta region state could be corrupted

2014-08-12 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-11703:
--

Fix Version/s: (was: 1.0.0)
   0.99.0

> Meta region state could be corrupted
> 
>
> Key: HBASE-11703
> URL: https://issues.apache.org/jira/browse/HBASE-11703
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.99.0, 2.0.0
>
> Attachments: hbase-11703.patch
>
>
> Internal meta region state could be corrupted if the meta is not on master:
> 1. the meta region server (not master) shuts down,
> 2. meta SSH offlines it without updating the dead server's region list,
> 3. meta is transitioned to pending_open and the previous server (the dead 
> server) of meta is lost,
> 4. meta is assigned somewhere else without updating its previous server,
> 5. normal SSH processes the dead server and offlines all of its dead regions 
> including the meta, so the meta internal state is corrupted





[jira] [Updated] (HBASE-11725) Backport failover checking change to 1.0

2014-08-12 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-11725:
--

Fix Version/s: (was: 1.0.0)
   0.99.0

> Backport failover checking change to 1.0
> 
>
> Key: HBASE-11725
> URL: https://issues.apache.org/jira/browse/HBASE-11725
> Project: HBase
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 0.99.0
>
> Attachments: hbase-11725.patch
>
>
> In HBASE-11611, we fixed a failover checking bug. We need to backport it to 
> branch 1.





[jira] [Commented] (HBASE-11685) Incr/decr on the reference count of HConnectionImplementation need be atomic

2014-08-12 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094691#comment-14094691
 ] 

Nicolas Liochon commented on HBASE-11685:
-

+  refCount = new AtomicInteger(1);
Could be replaced by a simple set; this would allow refCount to be final. It's 
theoretically better.

I wonder if the "throw new RuntimeException("Negative ref count of connection: 
" + this)" cannot have a race condition with this being set to 1 in the finalize?
It might be simpler to just log a warning?
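
A sketch of the idea under discussion, assuming a final AtomicInteger field and a warning rather than a throw on a negative count; this is an illustration, not the attached patch:

{code}
import java.util.concurrent.atomic.AtomicInteger;

class RefCountedConnection {
  private final AtomicInteger refCount = new AtomicInteger(0);

  void incCount() {
    refCount.incrementAndGet();
  }

  void decCount() {
    if (refCount.decrementAndGet() < 0) {
      // the patch under review throws a RuntimeException here; logging a
      // warning, as suggested above, avoids throwing on a possible race
      System.err.println("Negative ref count of connection: " + this);
    }
  }
}
{code}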


> Incr/decr on the reference count of HConnectionImplementation need be atomic 
> -
>
> Key: HBASE-11685
> URL: https://issues.apache.org/jira/browse/HBASE-11685
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-11685-trunk-v1.diff, HBASE-11685-trunk-v2.diff, 
> HBASE-11685-trunk-v3.diff, HBASE-11685-trunk-v4.diff, 
> HBASE-11685-trunk-v5.diff, HBASE-11685-trunk-v6.diff
>
>
> Currently, the incr/decr operations on the ref count of 
> HConnectionImplementation are not atomic. This may cause the ref count to 
> always stay larger than 0 and the connection to never be closed.
> {code}
> /**
>  * Increment this client's reference count.
>  */
> void incCount() {
>   ++refCount;
> }
> /**
>  * Decrement this client's reference count.
>  */
> void decCount() {
>   if (refCount > 0) {
> --refCount;
>   }
> }
> {code}





[jira] [Updated] (HBASE-11713) Adding hbase shell unit test coverage for visibility labels.

2014-08-12 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-11713:


Attachment: HBASE-11713_v3.patch

Uploaded the new patch as per comments.

> Adding hbase shell unit test coverage for visibility labels.
> 
>
> Key: HBASE-11713
> URL: https://issues.apache.org/jira/browse/HBASE-11713
> Project: HBase
>  Issue Type: Test
>  Components: security, shell
>Reporter: Srikanth Srungarapu
>Assignee: Srikanth Srungarapu
>Priority: Minor
> Attachments: HBASE-11713.patch, HBASE-11713_v2.patch, 
> HBASE-11713_v3.patch
>
>
> Adding test coverage for visibility labels to hbase shell. Also, refactoring 
> existing tests so that all the unit tests related to visibility can be found 
> in one place.





[jira] [Commented] (HBASE-11072) Abstract WAL splitting from ZK

2014-08-12 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094705#comment-14094705
 ] 

Sergey Soldatov commented on HBASE-11072:
-

That's strange. I have a clean build with all tests passing on different boxes 
(Ubuntu 12.04/14.04, CentOS 6.5). 

> Abstract WAL splitting from ZK
> --
>
> Key: HBASE-11072
> URL: https://issues.apache.org/jira/browse/HBASE-11072
> Project: HBase
>  Issue Type: Sub-task
>  Components: Consensus, Zookeeper
>Affects Versions: 0.99.0
>Reporter: Mikhail Antonov
>Assignee: Sergey Soldatov
> Attachments: HBASE-11072-1_v2.patch, HBASE-11072-1_v3.patch, 
> HBASE-11072-1_v4.patch, HBASE-11072-2_v2.patch, HBASE-11072-v1.patch, 
> HBASE-11072-v2.patch, HBASE-11072-v3.patch, HBASE-11072-v4.patch, 
> HBASE-11072-v5.patch, HBASE-11072-v6.patch, HBASE-11072-v7.patch, 
> HBASE_11072-1.patch
>
>
> HM side:
>  - SplitLogManager
> RS side:
>  - SplitLogWorker
>  - HLogSplitter and a few handler classes.
> This jira may need to be split further apart into smaller ones.





[jira] [Commented] (HBASE-11467) New impl of Registry interface not using ZK + new RPCs on master protocol

2014-08-12 Thread ryan rawson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094704#comment-14094704
 ] 

ryan rawson commented on HBASE-11467:
-

Having an extra RPC to figure out who you're talking to isn't horrifyingly bad. 

Thinking more broadly, how does SSH do this?  Could we emulate that?  I believe 
that in SSH you can tell it via config that this key should be used for these 
host(s), and absent that, it walks through the set of keys it has at its disposal.

What is the simplest approach for the user?  Magic strings tend not to be liked 
much.

> New impl of Registry interface not using ZK + new RPCs on master protocol
> -
>
> Key: HBASE-11467
> URL: https://issues.apache.org/jira/browse/HBASE-11467
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, Consensus, Zookeeper
>Affects Versions: 2.0.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 2.0.0
>
> Attachments: HBASE-11467.patch, HBASE-11467.patch
>
>
> Currently there's only one implementation of the Registry interface, which is 
> using ZK to get info about meta. We need to create an implementation which will 
> use RPC calls to the master the client is connected to.
> Review of an early version of the patch is here: https://reviews.apache.org/r/24296/





[jira] [Updated] (HBASE-11462) MetaTableAccessor shouldn't use ZooKeeeper

2014-08-12 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-11462:
-

Assignee: Mikhail Antonov  (was: Andrey Stepachev)

> MetaTableAccessor shouldn't use ZooKeeeper
> --
>
> Key: HBASE-11462
> URL: https://issues.apache.org/jira/browse/HBASE-11462
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Zookeeper
>Affects Versions: 2.0.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 2.0.0
>
>
> After committing the patch for HBASE-4495, there's a further improvement which 
> can be made (discussed originally on the review board for that jira).
> We have the MetaTableAccessor and MetaTableLocator classes. The first one is used to 
> access information stored in the hbase:meta table. The second one is used to deal 
> with ZooKeeper state to find the region server hosting hbase:meta, wait for 
> it to become available and so on.
> MetaTableAccessor, in turn, should only operate on the meta table content, so it 
> shouldn't need ZK. The only reason MetaTableAccessor is using ZK is that when 
> callers request assignment information, they can request the location of the meta 
> table itself, which we can't read from meta, so in that case 
> MetaTableAccessor relays the call to MetaTableLocator.  Maybe the solution 
> here is to declare that clients of MetaTableAccessor shall not use it to work 
> with the meta table itself (as opposed to its content).




