[jira] [Commented] (HBASE-11090) Backport HBASE-11083 ExportSnapshot should provide capability to limit bandwidth consumption

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13990336#comment-13990336
 ] 

Hadoop QA commented on HBASE-11090:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643324/11090-trunk-v2.txt
  against trunk revision .
  ATTACHMENT ID: 12643324

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9464//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9464//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9464//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9464//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9464//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9464//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9464//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9464//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9464//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9464//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9464//console

This message is automatically generated.

 Backport HBASE-11083 ExportSnapshot should provide capability to limit 
 bandwidth consumption
 

 Key: HBASE-11090
 URL: https://issues.apache.org/jira/browse/HBASE-11090
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.99.0, 0.98.3

 Attachments: 11090-0.98-v1.txt, 11090-trunk-v2.txt, 11090-trunk.txt


 HBASE-11083 allows ExportSnapshot to limit bandwidth usage.
 Here is *one* approach for backporting:
 Create the following classes (class name is tentative):
 hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/util/ThrottledInputStream.java
 hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/util/ThrottledInputStream.java
 each of which extends the corresponding ThrottledInputStream in hadoop-1 / 
 hadoop-2
 ExportSnapshot would reference util.ThrottledInputStream, depending on which 
 compatibility module gets bundled.
 ThrottledInputStream.java in hadoop-1 branch was backported through 
 MAPREDUCE-5081 which went into 1.2.0 release.
 We need to decide how hadoop releases earlier than 1.2.0 should be supported.
 A *second* approach for backporting is to make a copy of ThrottledInputStream
 and include it in the hbase codebase.
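 For illustration, here is a minimal, self-contained sketch of what the *second* approach could look like (hypothetical class body, not the actual ThrottledInputStream from MAPREDUCE-5081 or the committed patch): wrap the snapshot file stream and sleep whenever the observed rate exceeds the configured limit.
 {code}
 import java.io.IOException;
 import java.io.InputStream;

 /** Sketch only: throttles reads to roughly maxBytesPerSec. */
 public class ThrottledInputStream extends InputStream {
   private final InputStream in;
   private final long maxBytesPerSec;
   private final long startTime = System.currentTimeMillis();
   private long bytesRead = 0;

   public ThrottledInputStream(InputStream in, long maxBytesPerSec) {
     this.in = in;
     this.maxBytesPerSec = maxBytesPerSec;
   }

   @Override
   public int read() throws IOException {
     throttle();
     int b = in.read();
     if (b >= 0) {
       bytesRead++;
     }
     return b;
   }

   // Sleep until the average rate falls back under the configured limit.
   private void throttle() throws IOException {
     while (getBytesPerSec() > maxBytesPerSec) {
       try {
         Thread.sleep(50);
       } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
         throw new IOException("Interrupted while throttling", e);
       }
     }
   }

   private long getBytesPerSec() {
     long elapsedSec = Math.max(1, (System.currentTimeMillis() - startTime) / 1000);
     return bytesRead / elapsedSec;
   }
 }
 {code}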



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10985) Decouple Split Transaction from Zookeeper

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13990350#comment-13990350
 ] 

Hadoop QA commented on HBASE-10985:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643475/HBASE_10985-v2.patch
  against trunk revision .
  ATTACHMENT ID: 12643475

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 48 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 8 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestTableMapReduceBase.testMultiRegionTable(TestTableMapReduceBase.java:96)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9465//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9465//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9465//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9465//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9465//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9465//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9465//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9465//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9465//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9465//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9465//console

This message is automatically generated.

 Decouple Split Transaction from Zookeeper
 -

 Key: HBASE-10985
 URL: https://issues.apache.org/jira/browse/HBASE-10985
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Reporter: Sergey Soldatov
 Attachments: HBASE-10985.patch, HBASE-10985.patch, HBASE-10985.patch, 
 HBASE_10985-v2.patch


 As part of HBASE-10296, SplitTransaction should be decoupled from Zookeeper.
 This is an initial patch for review. At the moment the consensus provider is
 placed directly in SplitTransaction to minimize the affected code. In the ideal
 world it should be done in HServer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-2502) HBase won't bind to designated interface when more than one network interface is available

2014-05-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13990402#comment-13990402
 ] 

Robert Jäschke commented on HBASE-2502:
---

Is there a workaround or are there any plans to fix this issue? It is really a 
big problem for cluster setups with several network interfaces.

 HBase won't bind to designated interface when more than one network interface 
 is available
 --

 Key: HBASE-2502
 URL: https://issues.apache.org/jira/browse/HBASE-2502
 Project: HBase
  Issue Type: Bug
Reporter: stack

 See this message by Michael Segel up on the list: 
 http://www.mail-archive.com/hbase-user@hadoop.apache.org/msg10042.html
 This comes up from time to time.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9345) Add support for specifying filters in scan

2014-05-06 Thread Virag Kothari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13990413#comment-13990413
 ] 

Virag Kothari commented on HBASE-9345:
--

[~ndimiduk] Can you please trigger the build? It didn't run last time when the status
was changed to Patch Available. Also, can you add me as a contributor? Thanks.

 Add support for specifying filters in scan
 --

 Key: HBASE-9345
 URL: https://issues.apache.org/jira/browse/HBASE-9345
 Project: HBase
  Issue Type: Improvement
  Components: REST
Affects Versions: 0.94.11
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Attachments: HBASE-9345_trunk.patch


 In the implementation of stateless scanner from HBase-9343, the support for 
 specifying filters is missing. This JIRA aims to implement support for filter 
 specification.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11117) [AccessController] checkAndPut/Delete hook should check only Read permission

2014-05-06 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-11117:
--

 Summary: [AccessController] checkAndPut/Delete hook should check 
only Read permission
 Key: HBASE-11117
 URL: https://issues.apache.org/jira/browse/HBASE-11117
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0, 0.98.3


We check for both Read and Write permissions in the checkAndPut/Delete hooks. These
hooks deal with the condition part alone, so checking Read permission alone is enough.
The prePut/Delete hook is called later with the Put/Delete mutation, and there we
properly check for the Write permission.
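
To illustrate the intended split (a self-contained sketch with hypothetical types, not the actual AccessController code): the condition check needs only READ, while the mutation itself is authorized later in prePut/preDelete with WRITE.
{code}
import java.util.EnumSet;
import java.util.Set;

enum Action { READ, WRITE }

final class CheckAndMutatePermissionSketch {
  // checkAndPut/checkAndDelete hook: evaluates the condition only, so READ suffices.
  static void preCheckAndPut(Set<Action> granted) {
    require(granted, Action.READ);
  }
  // prePut/preDelete hook: the actual mutation, so WRITE is required here.
  static void prePut(Set<Action> granted) {
    require(granted, Action.WRITE);
  }
  private static void require(Set<Action> granted, Action needed) {
    if (!granted.contains(needed)) {
      throw new SecurityException("Insufficient permissions; needed " + needed);
    }
  }
  public static void main(String[] args) {
    Set<Action> readOnlyUser = EnumSet.of(Action.READ);
    preCheckAndPut(readOnlyUser);  // passes: the condition read needs READ only
    // prePut(readOnlyUser);       // would throw: the write itself still needs WRITE
  }
}
{code}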




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11117) [AccessController] checkAndPut/Delete hook should check only Read permission

2014-05-06 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11117:
---

Status: Patch Available  (was: Open)

 [AccessController] checkAndPut/Delete hook should check only Read permission
 

 Key: HBASE-11117
 URL: https://issues.apache.org/jira/browse/HBASE-11117
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-11117.patch


 We check for Read and Write permissions in checkAndPut/Delete hooks. Here we 
 check for the condition part alone and so can check for Read permission 
 alone. Later prePut/Delete hook is getting called with Put/Delete mutation 
 and in that we properly check for the Write permission



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11117) [AccessController] checkAndPut/Delete hook should check only Read permission

2014-05-06 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11117:
---

Attachment: HBASE-11117.patch

 [AccessController] checkAndPut/Delete hook should check only Read permission
 

 Key: HBASE-11117
 URL: https://issues.apache.org/jira/browse/HBASE-11117
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-11117.patch


 We check for Read and Write permissions in checkAndPut/Delete hooks. Here we 
 check for the condition part alone and so can check for Read permission 
 alone. Later prePut/Delete hook is getting called with Put/Delete mutation 
 and in that we properly check for the Write permission



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11117) [AccessController] checkAndPut/Delete hook should check only Read permission

2014-05-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990444#comment-13990444
 ] 

Anoop Sam John commented on HBASE-11117:


{code}
-  authResult.setAllowed(checkCoveringPermission(OpType.APPEND, env, increment.getRow(),
+  authResult.setAllowed(checkCoveringPermission(OpType.INCREMENT, env, increment.getRow(),
{code}
I guess this was a copy-paste mistake from a previous commit. Correcting it in this patch.

 [AccessController] checkAndPut/Delete hook should check only Read permission
 

 Key: HBASE-11117
 URL: https://issues.apache.org/jira/browse/HBASE-11117
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-11117.patch


 We check for Read and Write permissions in checkAndPut/Delete hooks. Here we 
 check for the condition part alone and so can check for Read permission 
 alone. Later prePut/Delete hook is getting called with Put/Delete mutation 
 and in that we properly check for the Write permission



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10985) Decouple Split Transaction from Zookeeper

2014-05-06 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-10985:


Attachment: HBASE_10985-v3.patch

fixed javadocs issues.

 Decouple Split Transaction from Zookeeper
 -

 Key: HBASE-10985
 URL: https://issues.apache.org/jira/browse/HBASE-10985
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Reporter: Sergey Soldatov
 Attachments: HBASE-10985.patch, HBASE-10985.patch, HBASE-10985.patch, 
 HBASE_10985-v2.patch, HBASE_10985-v3.patch


 As part of  HBASE-10296 SplitTransaction should be decoupled from Zookeeper. 
 This is an initial patch for review. At the moment the consensus provider  
 placed directly to SplitTransaction to minimize affected code. In the ideal 
 world it should be done in HServer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10985) Decouple Split Transaction from Zookeeper

2014-05-06 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-10985:


Status: Open  (was: Patch Available)

 Decouple Split Transaction from Zookeeper
 -

 Key: HBASE-10985
 URL: https://issues.apache.org/jira/browse/HBASE-10985
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Reporter: Sergey Soldatov
 Attachments: HBASE-10985.patch, HBASE-10985.patch, HBASE-10985.patch, 
 HBASE_10985-v2.patch, HBASE_10985-v3.patch


 As part of  HBASE-10296 SplitTransaction should be decoupled from Zookeeper. 
 This is an initial patch for review. At the moment the consensus provider  
 placed directly to SplitTransaction to minimize affected code. In the ideal 
 world it should be done in HServer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10985) Decouple Split Transaction from Zookeeper

2014-05-06 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-10985:


Status: Patch Available  (was: Open)

 Decouple Split Transaction from Zookeeper
 -

 Key: HBASE-10985
 URL: https://issues.apache.org/jira/browse/HBASE-10985
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Reporter: Sergey Soldatov
 Attachments: HBASE-10985.patch, HBASE-10985.patch, HBASE-10985.patch, 
 HBASE_10985-v2.patch, HBASE_10985-v3.patch


 As part of  HBASE-10296 SplitTransaction should be decoupled from Zookeeper. 
 This is an initial patch for review. At the moment the consensus provider  
 placed directly to SplitTransaction to minimize affected code. In the ideal 
 world it should be done in HServer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10993) Deprioritize long-running scanners

2014-05-06 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-10993:


Attachment: HBASE-10993-v4.patch

 Deprioritize long-running scanners
 --

 Key: HBASE-10993
 URL: https://issues.apache.org/jira/browse/HBASE-10993
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-10993-v0.patch, HBASE-10993-v1.patch, 
 HBASE-10993-v2.patch, HBASE-10993-v3.patch, HBASE-10993-v4.patch, 
 HBASE-10993-v4.patch


 Currently we have a single call queue that serves all the normal user
 requests, and the requests are executed in FIFO order.
 When running map-reduce jobs and user-queries on the same machine, we want to
 prioritize the user-queries.
 Without changing too much code, and without having the user give hints, we can
 add a “vtime” field to the scanner to keep track of how long it has been running.
 And we can replace the callQueue with a priorityQueue. In this way we can
 deprioritize long-running scans: the longer a scan request lives, the less
 priority it gets.
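 As a rough illustration of the idea (hypothetical names, not the actual patch): order queued calls by the scanner's accumulated vtime so that newer, shorter-running scans are served first.
 {code}
 import java.util.Comparator;
 import java.util.concurrent.PriorityBlockingQueue;

 /** Sketch: a queued call tagged with its scanner's accumulated vtime (ms running so far). */
 class ScanCall {
   final long scannerVtime;
   final Runnable work;
   ScanCall(long scannerVtime, Runnable work) {
     this.scannerVtime = scannerVtime;
     this.work = work;
   }
 }

 class DeprioritizingQueueDemo {
   public static void main(String[] args) throws InterruptedException {
     // Lower vtime (newer scans, e.g. interactive user queries) is served first.
     PriorityBlockingQueue<ScanCall> callQueue = new PriorityBlockingQueue<>(
         11, Comparator.comparingLong((ScanCall c) -> c.scannerVtime));

     callQueue.put(new ScanCall(120000, () -> System.out.println("long-running MR scan")));
     callQueue.put(new ScanCall(15, () -> System.out.println("interactive user query")));

     callQueue.take().work.run();  // prints: interactive user query
     callQueue.take().work.run();  // prints: long-running MR scan
   }
 }
 {code}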



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10993) Deprioritize long-running scanners

2014-05-06 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-10993:


Attachment: (was: HBASE-10993-v4.patch)

 Deprioritize long-running scanners
 --

 Key: HBASE-10993
 URL: https://issues.apache.org/jira/browse/HBASE-10993
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-10993-v0.patch, HBASE-10993-v1.patch, 
 HBASE-10993-v2.patch, HBASE-10993-v3.patch, HBASE-10993-v4.patch, 
 HBASE-10993-v4.patch


 Currently we have a single call queue that serves all the normal user  
 requests, and the requests are executed in FIFO.
 When running map-reduce jobs and user-queries on the same machine, we want to 
 prioritize the user-queries.
 Without changing too much code, and not having the user giving hints, we can 
 add a “vtime” field to the scanner, to keep track from how long is running. 
 And we can replace the callQueue with a priorityQueue. In this way we can 
 deprioritize long-running scans, the longer a scan request lives the less 
 priority it gets.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11107) Provide utility method equivalent to 0.92's Result.getBytes().getSize()

2014-05-06 Thread Gustavo Anatoly (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13990648#comment-13990648
 ] 

Gustavo Anatoly commented on HBASE-11107:
-

Hi, [~rekhajoshm]

I would like to know whether I could provide a patch, if you're not already
working on this issue.

Thanks.

 Provide utility method equivalent to 0.92's Result.getBytes().getSize()
 ---

 Key: HBASE-11107
 URL: https://issues.apache.org/jira/browse/HBASE-11107
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Rekha Joshi
Priority: Trivial

 Currently a user has to write code similar to the following as a replacement for
 Result.getBytes().getSize():
 {code}
 +Cell[] cellValues = resultRow.rawCells();
 +
 +long size = 0L;
 +if (null != cellValues) {
 +  for (Cell cellValue : cellValues) {
 +size += KeyValueUtil.ensureKeyValue(cellValue).heapSize();
 +  } 
 +}
 {code}
 In ClientScanner, we have:
 {code}
   for (Cell kv : rs.rawCells()) {
 // TODO make method in Cell or CellUtil
 remainingResultSize -= 
 KeyValueUtil.ensureKeyValue(kv).heapSize();
   }
 {code}
 A utility method should be provided which computes summation of Cell sizes in 
 a Result.
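 A minimal sketch of such a helper (illustrative only; the method name and final location were still to be decided):
 {code}
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.KeyValueUtil;
 import org.apache.hadoop.hbase.client.Result;

 public final class ResultSizeUtil {
   private ResultSizeUtil() {}

   /** Sums the heap size of all Cells in a Result, mirroring the loops shown above. */
   public static long heapSizeOf(Result result) {
     long size = 0L;
     Cell[] cells = result.rawCells();
     if (cells != null) {
       for (Cell cell : cells) {
         size += KeyValueUtil.ensureKeyValue(cell).heapSize();
       }
     }
     return size;
   }
 }
 {code}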



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-10417) index is not incremented in PutSortReducer#reduce()

2014-05-06 Thread Gustavo Anatoly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gustavo Anatoly reassigned HBASE-10417:
---

Assignee: Gustavo Anatoly

 index is not incremented in PutSortReducer#reduce()
 ---

 Key: HBASE-10417
 URL: https://issues.apache.org/jira/browse/HBASE-10417
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Minor

 Starting at line 76:
 {code}
   int index = 0;
   for (KeyValue kv : map) {
 context.write(row, kv);
 if (index > 0 && index % 100 == 0)
   context.setStatus("Wrote " + index);
 {code}
 index is a variable inside the while loop that is never incremented.
 The condition index > 0 cannot be true.
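 A standalone sketch of the likely fix (illustrative, not the committed patch): increment index per emitted record so the progress status can actually be reported.
 {code}
 import java.util.Arrays;
 import java.util.List;

 /** Stand-in for the reducer loop; strings play the role of the sorted KeyValues. */
 public class PutSortReducerIndexDemo {
   public static void main(String[] args) {
     List<String> map = Arrays.asList("kv1", "kv2", "kv3");
     int index = 0;
     for (String kv : map) {
       // context.write(row, kv) would go here in the real reducer
       index++;  // the missing increment
       if (index > 0 && index % 100 == 0) {
         System.out.println("Wrote " + index);  // stand-in for context.setStatus(...)
       }
     }
     System.out.println("Total written: " + index);
   }
 }
 {code}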



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-11076) Update refguide on getting 0.94.x to run on hadoop 2.2.0+

2014-05-06 Thread Gustavo Anatoly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gustavo Anatoly reassigned HBASE-11076:
---

Assignee: Gustavo Anatoly

 Update refguide on getting 0.94.x to run on hadoop 2.2.0+
 -

 Key: HBASE-11076
 URL: https://issues.apache.org/jira/browse/HBASE-11076
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Gustavo Anatoly

 http://hbase.apache.org/book.html#d248e643 contains steps for rebuilding 0.94 
 code base to run on hadoop 2.2.0+
 However, the files under 
 src/main/java/org/apache/hadoop/hbase/protobuf/generated were produced by 
 protoc 2.4.0
 These files need to be regenerated.
 See 
 http://search-hadoop.com/m/DHED4j7Um02/HBase+0.94+on+hadoop+2.2.0subj=Re+HBase+0+94+on+hadoop+2+2+0+2+4+0+
 This issue is to update refguide with this regeneration step.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-11105) Add bandwidth limit documentation to snapshot export section of refguide

2014-05-06 Thread Gustavo Anatoly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gustavo Anatoly reassigned HBASE-11105:
---

Assignee: Gustavo Anatoly

 Add bandwidth limit documentation to snapshot export section of refguide
 

 Key: HBASE-11105
 URL: https://issues.apache.org/jira/browse/HBASE-11105
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Minor

 http://hbase.apache.org/book.html#ops.snapshots.export lists command line 
 arguments for ExportSnapshot
 Parameters for bandwidth limitation (HBASE-11083) should be documented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10965) Automate detection of presence of Filter#filterRow()

2014-05-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10965:
---

Description: 
There is potential inconsistency between the return value of 
Filter#hasFilterRow() and presence of Filter#filterRow().

Filters may override Filter#filterRow() while leaving return value of 
Filter#hasFilterRow() being false (inherited from FilterBase).

Downside to purely depending on hasFilterRow() telling us whether custom filter 
overrides filterRow(List) or filterRow() is that the check below may be 
rendered ineffective:
{code}
  if (nextKv == KV_LIMIT) {
    if (this.filter != null && filter.hasFilterRow()) {
      throw new IncompatibleFilterException(
        "Filter whose hasFilterRow() returns true is incompatible with scan with limit!");
    }
{code}
When user forgets to override hasFilterRow(), the above check becomes not 
useful.

Another limitation is that we cannot optimize FilterList#filterRow() through 
short circuit when FilterList#hasFilterRow() turns false.
See 
https://issues.apache.org/jira/browse/HBASE-11093?focusedCommentId=13985149page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13985149

This JIRA aims to remove the inconsistency by automatically detecting the 
presence of overridden Filter#filterRow(). For FilterBase-derived classes, if 
filterRow() is implemented and not inherited from FilterBase, it is equivalent 
to having hasFilterRow() return true.

Henceforth, 
{code}
  return filter != null && (!filter.hasFilterRow())
    && filter.filterRow();
{code}

  was:
There is potential inconsistency between the return value of 
Filter#hasFilterRow() and presence of Filter#filterRow().

Filters may override Filter#filterRow() while leaving return value of 
Filter#hasFilterRow() being false (inherited from FilterBase).

Downside to purely depending on hasFilterRow() telling us whether custom filter 
overrides filterRow(List) or filterRow() is that the check below may be 
rendered ineffective:
{code}
  if (nextKv == KV_LIMIT) {
    if (this.filter != null && filter.hasFilterRow()) {
      throw new IncompatibleFilterException(
        "Filter whose hasFilterRow() returns true is incompatible with scan with limit!");
    }
{code}
When user forgets to override hasFilterRow(), the above check becomes not 
useful.

Another limitation is that we cannot optimize FilterList#filterRow() through 
short circuit when FilterList#hasFilterRow() turns false.
See 
https://issues.apache.org/jira/browse/HBASE-11093?focusedCommentId=13985149page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13985149

This JIRA aims to remove the inconsistency by automatically detecting the 
presence of overridden Filter#filterRow(). If filterRow() is implemented and 
not inherited from FilterBase, it is equivalent to having hasFilterRow() return 
true.


 Automate detection of presence of Filter#filterRow()
 

 Key: HBASE-10965
 URL: https://issues.apache.org/jira/browse/HBASE-10965
 Project: HBase
  Issue Type: Task
  Components: Filters
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10965-v1.txt, 10965-v2.txt, 10965-v3.txt, 10965-v4.txt, 
 10965-v6.txt, 10965-v7.txt


 There is potential inconsistency between the return value of 
 Filter#hasFilterRow() and presence of Filter#filterRow().
 Filters may override Filter#filterRow() while leaving return value of 
 Filter#hasFilterRow() being false (inherited from FilterBase).
 Downside to purely depending on hasFilterRow() telling us whether custom 
 filter overrides filterRow(List) or filterRow() is that the check below may 
 be rendered ineffective:
 {code}
   if (nextKv == KV_LIMIT) {
     if (this.filter != null && filter.hasFilterRow()) {
       throw new IncompatibleFilterException(
         "Filter whose hasFilterRow() returns true is incompatible with scan with limit!");
     }
 {code}
 When user forgets to override hasFilterRow(), the above check becomes not 
 useful.
 Another limitation is that we cannot optimize FilterList#filterRow() through 
 short circuit when FilterList#hasFilterRow() turns false.
 See 
 https://issues.apache.org/jira/browse/HBASE-11093?focusedCommentId=13985149page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13985149
 This JIRA aims to remove the inconsistency by automatically detecting the 
 presence of overridden Filter#filterRow(). For FilterBase-derived classes, if 
 filterRow() is implemented and not inherited from FilterBase, it is 
 equivalent to having hasFilterRow() return true.
 Henceforth, 
 {code}
   return filter != null && (!filter.hasFilterRow())
     && filter.filterRow();

[jira] [Updated] (HBASE-10965) Automate detection of presence of Filter#filterRow()

2014-05-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10965:
---

Description: 
There is potential inconsistency between the return value of 
Filter#hasFilterRow() and presence of Filter#filterRow().

Filters may override Filter#filterRow() while leaving return value of 
Filter#hasFilterRow() being false (inherited from FilterBase).

Downside to purely depending on hasFilterRow() telling us whether custom filter 
overrides filterRow(List) or filterRow() is that the check below may be 
rendered ineffective:
{code}
  if (nextKv == KV_LIMIT) {
    if (this.filter != null && filter.hasFilterRow()) {
      throw new IncompatibleFilterException(
        "Filter whose hasFilterRow() returns true is incompatible with scan with limit!");
    }
{code}
When user forgets to override hasFilterRow(), the above check becomes not 
useful.

Another limitation is that we cannot optimize FilterList#filterRow() through 
short circuit when FilterList#hasFilterRow() turns false.
See 
https://issues.apache.org/jira/browse/HBASE-11093?focusedCommentId=13985149page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13985149

This JIRA aims to remove the inconsistency by automatically detecting the 
presence of overridden Filter#filterRow(). For FilterBase-derived classes, if 
filterRow() is implemented and not inherited from FilterBase, it is equivalent 
to having hasFilterRow() return true.

With precise detection of presence of Filter#filterRow(), the following code 
from HRegion is no longer needed while backward compatibility is kept.
{code}
  return filter != null && (!filter.hasFilterRow())
    && filter.filterRow();
{code}
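
A minimal sketch of how such detection can work (a hypothetical helper using reflection, not necessarily the committed patch): a filter counts as overriding filterRow() when its own class, rather than FilterBase, declares the method.
{code}
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterBase;

final class FilterRowDetection {
  /** True if the concrete filter class declares filterRow() instead of inheriting it from FilterBase. */
  static boolean overridesFilterRow(Filter filter) {
    try {
      return filter.getClass().getMethod("filterRow").getDeclaringClass() != FilterBase.class;
    } catch (NoSuchMethodException e) {
      return false;  // no public no-arg filterRow() at all
    }
  }
}
{code}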

  was:
There is potential inconsistency between the return value of 
Filter#hasFilterRow() and presence of Filter#filterRow().

Filters may override Filter#filterRow() while leaving return value of 
Filter#hasFilterRow() being false (inherited from FilterBase).

Downside to purely depending on hasFilterRow() telling us whether custom filter 
overrides filterRow(List) or filterRow() is that the check below may be 
rendered ineffective:
{code}
  if (nextKv == KV_LIMIT) {
    if (this.filter != null && filter.hasFilterRow()) {
      throw new IncompatibleFilterException(
        "Filter whose hasFilterRow() returns true is incompatible with scan with limit!");
    }
{code}
When user forgets to override hasFilterRow(), the above check becomes not 
useful.

Another limitation is that we cannot optimize FilterList#filterRow() through 
short circuit when FilterList#hasFilterRow() turns false.
See 
https://issues.apache.org/jira/browse/HBASE-11093?focusedCommentId=13985149page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13985149

This JIRA aims to remove the inconsistency by automatically detecting the 
presence of overridden Filter#filterRow(). For FilterBase-derived classes, if 
filterRow() is implemented and not inherited from FilterBase, it is equivalent 
to having hasFilterRow() return true.

Henceforth, 
{code}
  return filter != null && (!filter.hasFilterRow())
    && filter.filterRow();
{code}


 Automate detection of presence of Filter#filterRow()
 

 Key: HBASE-10965
 URL: https://issues.apache.org/jira/browse/HBASE-10965
 Project: HBase
  Issue Type: Task
  Components: Filters
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10965-v1.txt, 10965-v2.txt, 10965-v3.txt, 10965-v4.txt, 
 10965-v6.txt, 10965-v7.txt


 There is potential inconsistency between the return value of 
 Filter#hasFilterRow() and presence of Filter#filterRow().
 Filters may override Filter#filterRow() while leaving return value of 
 Filter#hasFilterRow() being false (inherited from FilterBase).
 Downside to purely depending on hasFilterRow() telling us whether custom 
 filter overrides filterRow(List) or filterRow() is that the check below may 
 be rendered ineffective:
 {code}
   if (nextKv == KV_LIMIT) {
     if (this.filter != null && filter.hasFilterRow()) {
       throw new IncompatibleFilterException(
         "Filter whose hasFilterRow() returns true is incompatible with scan with limit!");
     }
 {code}
 When user forgets to override hasFilterRow(), the above check becomes not 
 useful.
 Another limitation is that we cannot optimize FilterList#filterRow() through 
 short circuit when FilterList#hasFilterRow() turns false.
 See 
 https://issues.apache.org/jira/browse/HBASE-11093?focusedCommentId=13985149page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13985149
 This JIRA aims to remove the inconsistency by automatically detecting the 
 presence of overridden 

[jira] [Created] (HBASE-11118) non environment variable solution for IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralBy

2014-05-06 Thread JIRA
André Kelpe created HBASE-11118:
---

 Summary: non environment variable solution for 
IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot 
access its superclass com.google.protobuf.LiteralByteString
 Key: HBASE-11118
 URL: https://issues.apache.org/jira/browse/HBASE-11118
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.2
Reporter: André Kelpe


I am running into the problem described in 
https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a newer 
version within cascading.hbase (https://github.com/cascading/cascading.hbase).

One of the features of cascading.hbase is that you can use it from lingual 
(http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. lingual 
has a notion of providers, which are fat jars that we pull down dynamically at 
runtime. Those jars give users the ability to talk to any system or format from 
SQL. They are added to the classpath  programmatically before we submit jobs to 
a hadoop cluster.

Since lingual does not know upfront which providers are going to be used in a
given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really
clunky and breaks the ease of use we had before. No other provider requires
this right now.

It would be great to have a programmatic way to fix this when using fat jars.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11117) [AccessController] checkAndPut/Delete hook should check only Read permission

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990824#comment-13990824
 ] 

Hadoop QA commented on HBASE-11117:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643511/HBASE-11117.patch
  against trunk revision .
  ATTACHMENT ID: 12643511

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9466//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9466//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9466//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9466//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9466//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9466//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9466//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9466//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9466//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9466//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9466//console

This message is automatically generated.

 [AccessController] checkAndPut/Delete hook should check only Read permission
 

 Key: HBASE-11117
 URL: https://issues.apache.org/jira/browse/HBASE-11117
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-11117.patch


 We check for Read and Write permissions in checkAndPut/Delete hooks. Here we 
 check for the condition part alone and so can check for Read permission 
 alone. Later prePut/Delete hook is getting called with Put/Delete mutation 
 and in that we properly check for the Write permission



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11118) non environment variable solution for IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralBy

2014-05-06 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11118:
---

Fix Version/s: 0.98.3

 non environment variable solution for IllegalAccessError: class 
 com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: HBASE-11118
 URL: https://issues.apache.org/jira/browse/HBASE-11118
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.2
Reporter: André Kelpe
 Fix For: 0.98.3


 I am running into the problem described in 
 https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
 newer version within cascading.hbase 
 (https://github.com/cascading/cascading.hbase).
 One of the features of cascading.hbase is that you can use it from lingual 
 (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
 lingual has a notion of providers, which are fat jars that we pull down 
 dynamically at runtime. Those jars give users the ability to talk to any 
 system or format from SQL. They are added to the classpath  programmatically 
 before we submit jobs to a hadoop cluster.
 Since lingual does not know upfront , which providers are going to be used in 
 a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
 clunky and breaks the ease of use we had before. No other provider requires 
 this right now.
 It would be great to have a programmatical way to fix this, when using fat 
 jars.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11118) non environment variable solution for IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-05-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990892#comment-13990892
 ] 

Andrew Purtell commented on HBASE-11118:


Let's look at this again for 0.98.3. [~ndimiduk], [~stack], thoughts?

 non environment variable solution for IllegalAccessError: class 
 com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: HBASE-11118
 URL: https://issues.apache.org/jira/browse/HBASE-11118
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.2
Reporter: André Kelpe
 Fix For: 0.98.3


 I am running into the problem described in 
 https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
 newer version within cascading.hbase 
 (https://github.com/cascading/cascading.hbase).
 One of the features of cascading.hbase is that you can use it from lingual 
 (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
 lingual has a notion of providers, which are fat jars that we pull down 
 dynamically at runtime. Those jars give users the ability to talk to any 
 system or format from SQL. They are added to the classpath  programmatically 
 before we submit jobs to a hadoop cluster.
 Since lingual does not know upfront , which providers are going to be used in 
 a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
 clunky and breaks the ease of use we had before. No other provider requires 
 this right now.
 It would be great to have a programmatical way to fix this, when using fat 
 jars.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11117) [AccessController] checkAndPut/Delete hook should check only Read permission

2014-05-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990895#comment-13990895
 ] 

Ted Yu commented on HBASE-11117:


lgtm

 [AccessController] checkAndPut/Delete hook should check only Read permission
 

 Key: HBASE-11117
 URL: https://issues.apache.org/jira/browse/HBASE-11117
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-11117.patch


 We check for Read and Write permissions in checkAndPut/Delete hooks. Here we 
 check for the condition part alone and so can check for Read permission 
 alone. Later prePut/Delete hook is getting called with Put/Delete mutation 
 and in that we properly check for the Write permission



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-7987) Snapshot Manifest file instead of multiple empty files

2014-05-06 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-7987:
---

Status: Patch Available  (was: Open)

 Snapshot Manifest file instead of multiple empty files
 --

 Key: HBASE-7987
 URL: https://issues.apache.org/jira/browse/HBASE-7987
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.99.0

 Attachments: HBASE-7987-v0.patch, HBASE-7987-v1.patch, 
 HBASE-7987-v2.patch, HBASE-7987-v2.sketch, HBASE-7987-v3.patch, 
 HBASE-7987-v4.patch, HBASE-7987-v5.patch, HBASE-7987-v6.patch, 
 HBASE-7987.sketch


 Currently taking a snapshot means creating one empty file for each file in 
 the source table directory, plus copying the .regioninfo file for each 
 region, the table descriptor file and a snapshotInfo file.
 during the restore or snapshot verification we traverse the filesystem 
 (fs.listStatus()) to find the snapshot files, and we open the .regioninfo 
 files to get the information.
 to avoid hammering the NameNode and having lots of empty files, we can use a 
 manifest file that contains the list of files and information that we need.
 To keep the RS parallelism that we have, each RS can write its own manifest.
 {code}
 message SnapshotDescriptor {
   required string name;
   optional string table;
   optional int64 creationTime;
   optional Type type;
   optional int32 version;
 }
 message SnapshotRegionManifest {
   optional int32 version;
   required RegionInfo regionInfo;
   repeated FamilyFiles familyFiles;
   message StoreFile {
 required string name;
 optional Reference reference;
   }
   message FamilyFiles {
 required bytes familyName;
 repeated StoreFile storeFiles;
   }
 }
 {code}
 {code}
 /hbase/.snapshot/snapshotName
 /hbase/.snapshot/snapshotName/snapshotInfo
 /hbase/.snapshot/snapshotName/tableName
 /hbase/.snapshot/snapshotName/tableName/tableInfo
 /hbase/.snapshot/snapshotName/tableName/regionManifest(.n)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-7987) Snapshot Manifest file instead of multiple empty files

2014-05-06 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-7987:
---

Status: Open  (was: Patch Available)

 Snapshot Manifest file instead of multiple empty files
 --

 Key: HBASE-7987
 URL: https://issues.apache.org/jira/browse/HBASE-7987
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.99.0

 Attachments: HBASE-7987-v0.patch, HBASE-7987-v1.patch, 
 HBASE-7987-v2.patch, HBASE-7987-v2.sketch, HBASE-7987-v3.patch, 
 HBASE-7987-v4.patch, HBASE-7987-v5.patch, HBASE-7987-v6.patch, 
 HBASE-7987.sketch


 Currently taking a snapshot means creating one empty file for each file in 
 the source table directory, plus copying the .regioninfo file for each 
 region, the table descriptor file and a snapshotInfo file.
 during the restore or snapshot verification we traverse the filesystem 
 (fs.listStatus()) to find the snapshot files, and we open the .regioninfo 
 files to get the information.
 to avoid hammering the NameNode and having lots of empty files, we can use a 
 manifest file that contains the list of files and information that we need.
 To keep the RS parallelism that we have, each RS can write its own manifest.
 {code}
 message SnapshotDescriptor {
   required string name;
   optional string table;
   optional int64 creationTime;
   optional Type type;
   optional int32 version;
 }
 message SnapshotRegionManifest {
   optional int32 version;
   required RegionInfo regionInfo;
   repeated FamilyFiles familyFiles;
   message StoreFile {
 required string name;
 optional Reference reference;
   }
   message FamilyFiles {
 required bytes familyName;
 repeated StoreFile storeFiles;
   }
 }
 {code}
 {code}
 /hbase/.snapshot/snapshotName
 /hbase/.snapshot/snapshotName/snapshotInfo
 /hbase/.snapshot/snapshotName/tableName
 /hbase/.snapshot/snapshotName/tableName/tableInfo
 /hbase/.snapshot/snapshotName/tableName/regionManifest(.n)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-11072) Abstract WAL splitting from ZK

2014-05-06 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov reassigned HBASE-11072:
---

Assignee: Konstantin Boudnik  (was: Mikhail Antonov)

 Abstract WAL splitting from ZK
 --

 Key: HBASE-11072
 URL: https://issues.apache.org/jira/browse/HBASE-11072
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Affects Versions: 0.99.0
Reporter: Mikhail Antonov
Assignee: Konstantin Boudnik

 HM side:
  - SplitLogManager
 RS side:
  - SplitLogWorker
  - HLogSplitter and a few handler classes.
 This jira may need to be split further apart into smaller ones.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10357) Failover RPC's for scans

2014-05-06 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-10357:


Attachment: 10357-4.2.txt

Patch with more tests and explanatory comments.

 Failover RPC's for scans
 

 Key: HBASE-10357
 URL: https://issues.apache.org/jira/browse/HBASE-10357
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Enis Soztutar
 Fix For: 0.99.0

 Attachments: 10357-1.txt, 10357-2.txt, 10357-3.2.txt, 10357-3.txt, 
 10357-4.2.txt, 10357-4.txt


 This is an extension of HBASE-10355 to add failover support for scans.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (HBASE-10923) Control where to put meta region

2014-05-06 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang reopened HBASE-10923:
-


Re-open it. We need a way for the master to do the old-style deployment.

 Control where to put meta region
 

 Key: HBASE-10923
 URL: https://issues.apache.org/jira/browse/HBASE-10923
 Project: HBase
  Issue Type: Improvement
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang

 There is a concern about placing meta regions on the master, as in the comments
 of HBASE-10569. I was thinking we should have a configuration for a load
 balancer to decide where to put it. By adjusting this configuration we can
 control whether to put the meta on the master or on another region server.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11087) Eclipse Import Problem

2014-05-06 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13990959#comment-13990959
 ] 

Nick Dimiduk commented on HBASE-11087:
--

The mailing list link doesn't go anywhere. What's the error you're seeing?

I just deleted my workspace and imported from scratch. The only error I see is 
regarding the maven-antrun-plugin, which it says is not covered by lifecycle 
configuration. Is this the error you're seeing?

This patch touches too many pom.xml files (in my testing, only 
hbase-thrift/pom.xml should be necessary). Further, it disables generation of
thrift sources; have a look at the contents of
hbase-thrift/target/generated-sources with and without this patch.

-1

 Eclipse Import Problem
 --

 Key: HBASE-11087
 URL: https://issues.apache.org/jira/browse/HBASE-11087
 Project: HBase
  Issue Type: Bug
 Environment: Eclipse version is 4.3 Build Id : 20140224-0627
 M2E plugin version : 1.4.0
Reporter: Talat UYARER
Assignee: Talat UYARER
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-11087.patch


 If you try to import the project into Eclipse you will get some import errors about
 maven plugins. [1] So, in order to avoid the exceptions in Eclipse, it looks like one
 simply needs to enclose all the plugin tags inside a pluginManagement tag.
 I created a patch for this problem.
 [1] http://mail-archives.apache.org/mod_mbox/hbase-dev/201404.mbox/browser



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11087) Eclipse Import Problem

2014-05-06 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-11087:
-

Status: Open  (was: Patch Available)

 Eclipse Import Problem
 --

 Key: HBASE-11087
 URL: https://issues.apache.org/jira/browse/HBASE-11087
 Project: HBase
  Issue Type: Bug
 Environment: Eclipse version is 4.3 Build Id : 20140224-0627
 M2E plugin version : 1.4.0
Reporter: Talat UYARER
Assignee: Talat UYARER
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-11087.patch


 If you try to import eclipse you will get some import errors about maven 
 plugins. [1] So, in order to avoid the exceptions in Eclipse, looks like one 
 needs to simply enclose all the plugin tags inside a pluginManagement tag. 
 I create a patch for this problem. 
 [1] http://mail-archives.apache.org/mod_mbox/hbase-dev/201404.mbox/browser



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10993) Deprioritize long-running scanners

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13990964#comment-13990964
 ] 

Hadoop QA commented on HBASE-10993:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643572/HBASE-10993-v4.patch
  against trunk revision .
  ATTACHMENT ID: 12643572

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9467//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9467//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9467//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9467//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9467//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9467//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9467//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9467//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9467//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9467//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9467//console

This message is automatically generated.

 Deprioritize long-running scanners
 --

 Key: HBASE-10993
 URL: https://issues.apache.org/jira/browse/HBASE-10993
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-10993-v0.patch, HBASE-10993-v1.patch, 
 HBASE-10993-v2.patch, HBASE-10993-v3.patch, HBASE-10993-v4.patch, 
 HBASE-10993-v4.patch


 Currently we have a single call queue that serves all the normal user 
 requests, and the requests are executed in FIFO order.
 When running map-reduce jobs and user-queries on the same machine, we want to 
 prioritize the user-queries.
 Without changing too much code, and without having the user give hints, we can 
 add a “vtime” field to the scanner to keep track of how long it has been running. 
 And we can replace the callQueue with a priorityQueue. In this way we can 
 deprioritize long-running scans: the longer a scan request lives, the less 
 priority it gets.
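 As a rough illustration of the queueing idea only (all class and field names below 
 are invented for this sketch, not the attached patches): each queued call carries 
 the accumulated vtime of its scanner, and a priority queue serves fresher scanners first.
{code:java}
// Hedged sketch: names are illustrative, not the actual patch.
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

class CallWithVtime {
  final Runnable call;
  final long scannerVtimeMillis; // how long the owning scanner has been running

  CallWithVtime(Runnable call, long scannerVtimeMillis) {
    this.call = call;
    this.scannerVtimeMillis = scannerVtimeMillis;
  }
}

class DeprioritizingCallQueue {
  // Lower vtime sorts first, so long-running scans drift toward the back of the queue.
  private final PriorityBlockingQueue<CallWithVtime> queue = new PriorityBlockingQueue<>(
      64, Comparator.comparingLong((CallWithVtime c) -> c.scannerVtimeMillis));

  void add(CallWithVtime call) {
    queue.add(call);
  }

  CallWithVtime take() throws InterruptedException {
    return queue.take();
  }
}
{code}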



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11118) non environment variable solution for IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-05-06 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13990971#comment-13990971
 ] 

Nick Dimiduk commented on HBASE-11118:
--

Getting this working for everyone will be tricky. Allow me to ask a silly 
question: can we build the provider to work around this issue? I don't know how 
the provider stuff works, but I'm looking at this bit 
https://github.com/Cascading/cascading.hbase/blob/2.2/build.gradle#L116-L120

 non environment variable solution for IllegalAccessError: class 
 com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: HBASE-11118
 URL: https://issues.apache.org/jira/browse/HBASE-11118
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.2
Reporter: André Kelpe
 Fix For: 0.98.3


 I am running into the problem described in 
 https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
 newer version within cascading.hbase 
 (https://github.com/cascading/cascading.hbase).
 One of the features of cascading.hbase is that you can use it from lingual 
 (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
 lingual has a notion of providers, which are fat jars that we pull down 
 dynamically at runtime. Those jars give users the ability to talk to any 
 system or format from SQL. They are added to the classpath programmatically 
 before we submit jobs to a hadoop cluster.
 Since lingual does not know upfront which providers are going to be used in 
 a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
 clunky and breaks the ease of use we had before. No other provider requires 
 this right now.
 It would be great to have a programmatic way to fix this when using fat 
 jars.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11117) [AccessController] checkAndPut/Delete hook should check only Read permission

2014-05-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13990985#comment-13990985
 ] 

Andrew Purtell commented on HBASE-11117:


+1, save a few cycles

 [AccessController] checkAndPut/Delete hook should check only Read permission
 

 Key: HBASE-11117
 URL: https://issues.apache.org/jira/browse/HBASE-11117
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-11117.patch


 We check for Read and Write permissions in the checkAndPut/Delete hooks. Here we 
 check the condition part alone and so can check for Read permission 
 alone. Later the prePut/Delete hook is called with the Put/Delete mutation, 
 and there we properly check for the Write permission.
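 A self-contained sketch of the intent, using hypothetical types and method names 
 rather than the real AccessController hook signatures: the condition check asks for 
 READ alone, while the mutation keeps its WRITE check in the prePut/preDelete hook.
{code:java}
// Hedged sketch: all names are illustrative, not the HBASE-11117 patch.
enum Action { READ, WRITE }

class AccessCheckSketch {
  void preCheckAndPut(String user, String table, byte[] family, byte[] qualifier) {
    // Previously this demanded READ and WRITE; READ is sufficient for the condition part.
    requirePermission(user, table, family, qualifier, Action.READ);
  }

  void prePut(String user, String table, byte[] family, byte[] qualifier) {
    // The mutation itself is still guarded by WRITE in the prePut/preDelete hook.
    requirePermission(user, table, family, qualifier, Action.WRITE);
  }

  private void requirePermission(String user, String table, byte[] family,
      byte[] qualifier, Action action) {
    // Placeholder for the real ACL lookup.
    throw new UnsupportedOperationException("ACL check not implemented in this sketch");
  }
}
{code}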



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11118) non environment variable solution for IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-05-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991026#comment-13991026
 ] 

André Kelpe commented on HBASE-11118:
-

For the provider mechanism:

Lingual is a cascading app that submits itself to the cluster. You can use a 
provider to talk to different data formats/sources. You basically tell lingual 
in the table definition "this table is actually in HBase, use this jar file 
over there to talk to it". lingual itself does not really know about HBase or 
any other format/system, only HDFS and delimited files. We create fat jars for 
those providers to keep the dependency fetching sane. Only lingual uses those 
jars. We could definitely make them shaded jars, but that will not work in this 
case, since protobuf is the mechanism used to talk to the HBase cluster. 

Besides that, my understanding of the problem at hand is that it also breaks 
the classic hadoop jars with lib folders. For those we do not have any control, 
since our users are just going to use cascading.hbase, build a jar and submit 
it to the cluster.

 non environment variable solution for IllegalAccessError: class 
 com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: HBASE-11118
 URL: https://issues.apache.org/jira/browse/HBASE-11118
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.2
Reporter: André Kelpe
 Fix For: 0.98.3


 I am running into the problem described in 
 https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
 newer version within cascading.hbase 
 (https://github.com/cascading/cascading.hbase).
 One of the features of cascading.hbase is that you can use it from lingual 
 (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
 lingual has a notion of providers, which are fat jars that we pull down 
 dynamically at runtime. Those jars give users the ability to talk to any 
 system or format from SQL. They are added to the classpath programmatically 
 before we submit jobs to a hadoop cluster.
 Since lingual does not know upfront which providers are going to be used in 
 a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
 clunky and breaks the ease of use we had before. No other provider requires 
 this right now.
 It would be great to have a programmatic way to fix this when using fat 
 jars.



--
This message was sent by Atlassian JIRA
(v6.2#6252)



[jira] [Updated] (HBASE-10923) Control where to put meta region

2014-05-06 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10923:


Attachment: hbase-10923.patch

 Control where to put meta region
 

 Key: HBASE-10923
 URL: https://issues.apache.org/jira/browse/HBASE-10923
 Project: HBase
  Issue Type: Improvement
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: hbase-10923.patch


 There is a concern about placing meta regions on the master, as in the comments 
 of HBASE-10569. I was thinking we should have a configuration for a load 
 balancer to decide where to put it.  By adjusting this configuration we can 
 control whether to put the meta on the master or on another region server.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10923) Control where to put meta region

2014-05-06 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10923:


Fix Version/s: 0.99.0
   Status: Patch Available  (was: Reopened)

Attached a patch that places regions on the master based on a configuration instead 
of hard-coded settings. It doesn't change the default behavior for trunk. 
However, it makes it possible not to put meta on the master so that we can fall 
back to the old style of master that doesn't host any region.
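As a hedged illustration of the general idea only (the configuration key and class 
below are invented for this sketch and are not necessarily what the attached 
hbase-10923.patch uses), a balancer could consult a single boolean to decide whether 
meta lands on the master:
{code:java}
// Hedged sketch: key name and class are hypothetical, not the attached patch.
import org.apache.hadoop.conf.Configuration;

class MetaPlacementSketch {
  static final String META_ON_MASTER_KEY = "hbase.balancer.meta.on.master"; // hypothetical key
  private final boolean metaOnMaster;

  MetaPlacementSketch(Configuration conf) {
    // Default true keeps the current trunk behavior (meta on the master);
    // set false to fall back to the old style master that hosts no regions.
    this.metaOnMaster = conf.getBoolean(META_ON_MASTER_KEY, true);
  }

  String chooseServerForMeta(String masterServerName, String leastLoadedRegionServer) {
    return metaOnMaster ? masterServerName : leastLoadedRegionServer;
  }
}
{code}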

 Control where to put meta region
 

 Key: HBASE-10923
 URL: https://issues.apache.org/jira/browse/HBASE-10923
 Project: HBase
  Issue Type: Improvement
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.99.0

 Attachments: hbase-10923.patch


 There is a concern about placing meta regions on the master, as in the comments 
 of HBASE-10569. I was thinking we should have a configuration for a load 
 balancer to decide where to put it.  By adjusting this configuration we can 
 control whether to put the meta on the master or on another region server.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10985) Decouple Split Transaction from Zookeeper

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991076#comment-13991076
 ] 

Hadoop QA commented on HBASE-10985:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643534/HBASE_10985-v3.patch
  against trunk revision .
  ATTACHMENT ID: 12643534

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 48 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9468//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9468//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9468//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9468//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9468//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9468//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9468//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9468//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9468//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9468//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9468//console

This message is automatically generated.

 Decouple Split Transaction from Zookeeper
 -

 Key: HBASE-10985
 URL: https://issues.apache.org/jira/browse/HBASE-10985
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Reporter: Sergey Soldatov
 Attachments: HBASE-10985.patch, HBASE-10985.patch, HBASE-10985.patch, 
 HBASE_10985-v2.patch, HBASE_10985-v3.patch


 As part of HBASE-10296, SplitTransaction should be decoupled from Zookeeper. 
 This is an initial patch for review. At the moment the consensus provider is 
 placed directly in SplitTransaction to minimize the affected code. In the ideal 
 world it should be done in HServer.
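 A rough sketch of what such a decoupling could look like, assuming a hypothetical 
 interface (names are illustrative, not the attached HBASE_10985 patches): 
 SplitTransaction would call the abstraction below instead of ZooKeeper directly, 
 so another consensus implementation can be plugged in later.
{code:java}
// Hedged sketch: interface and method names are hypothetical.
import java.io.IOException;

interface SplitConsensusProvider {
  /** Announce that a split of the given parent region is starting. */
  void startSplit(String parentRegionName) throws IOException;

  /** Publish the two daughter regions once the split has completed. */
  void completeSplit(String daughterRegionA, String daughterRegionB) throws IOException;
}

/** A ZooKeeper-backed implementation would hide all znode handling behind the interface. */
class ZkSplitConsensusProvider implements SplitConsensusProvider {
  @Override
  public void startSplit(String parentRegionName) throws IOException {
    // create the transient splitting znode here
  }

  @Override
  public void completeSplit(String daughterRegionA, String daughterRegionB) throws IOException {
    // update/remove the znodes once the daughters are online
  }
}
{code}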



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11087) Eclipse Import Problem

2014-05-06 Thread Talat UYARER (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Talat UYARER updated HBASE-11087:
-

Description: 
If you try to import the project into Eclipse you will get some import errors about 
maven plugins. [1] So, in order to avoid the exceptions in Eclipse, it looks like one 
needs to simply enclose all the plugin tags inside a pluginManagement tag. I 
created a patch for this problem. 

[1] 
http://mail-archives.apache.org/mod_mbox/hbase-dev/201404.mbox/%3CCAEz%2Byv-hMdjpue91TSh%2B1YdGW8oqo5TXPeH09Y4dntvtPro2bQ%40mail.gmail.com%3E

  was:
If you try to import eclipse you will get some import errors about maven 
plugins. [1] So, in order to avoid the exceptions in Eclipse, looks like one 
needs to simply enclose all the plugin tags inside a pluginManagement tag. I 
create a patch for this problem. 

[1] http://mail-archives.apache.org/mod_mbox/hbase-dev/201404.mbox/browser


 Eclipse Import Problem
 --

 Key: HBASE-11087
 URL: https://issues.apache.org/jira/browse/HBASE-11087
 Project: HBase
  Issue Type: Bug
 Environment: Eclipse version is 4.3 Build Id : 20140224-0627
 M2E plugin version : 1.4.0
Reporter: Talat UYARER
Assignee: Talat UYARER
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-11087.patch


 If you try to import the project into Eclipse you will get some import errors about 
 maven plugins. [1] So, in order to avoid the exceptions in Eclipse, it looks like one 
 needs to simply enclose all the plugin tags inside a pluginManagement tag. 
 I created a patch for this problem. 
 [1] 
 http://mail-archives.apache.org/mod_mbox/hbase-dev/201404.mbox/%3CCAEz%2Byv-hMdjpue91TSh%2B1YdGW8oqo5TXPeH09Y4dntvtPro2bQ%40mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11087) Eclipse Import Problem

2014-05-06 Thread Talat UYARER (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1399#comment-1399
 ] 

Talat UYARER commented on HBASE-11087:
--

Hi [~ndimiduk],

Sorry, I attached the wrong permalink; I updated it. When I try the import with the m2e 
plugin in my Eclipse 4.3, it gives an error, as you can see on the mailing list link 
where I shared my screenshot etc. I researched this and read some articles about 
the issue. We just need a pluginManagement tag. The patch doesn't change anything 
else other than adding the pluginManagement tag. 



 Eclipse Import Problem
 --

 Key: HBASE-11087
 URL: https://issues.apache.org/jira/browse/HBASE-11087
 Project: HBase
  Issue Type: Bug
 Environment: Eclipse version is 4.3 Build Id : 20140224-0627
 M2E plugin version : 1.4.0
Reporter: Talat UYARER
Assignee: Talat UYARER
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-11087.patch


 If you try to import the project into Eclipse you will get some import errors about 
 maven plugins. [1] So, in order to avoid the exceptions in Eclipse, it looks like one 
 needs to simply enclose all the plugin tags inside a pluginManagement tag. 
 I created a patch for this problem. 
 [1] 
 http://mail-archives.apache.org/mod_mbox/hbase-dev/201404.mbox/%3CCAEz%2Byv-hMdjpue91TSh%2B1YdGW8oqo5TXPeH09Y4dntvtPro2bQ%40mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10569) Co-locate meta and master

2014-05-06 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10569:


Attachment: master-rs.pdf

Added a doc to clarify some things about master/rs and the deployment impact related to 
this issue.

 Co-locate meta and master
 -

 Key: HBASE-10569
 URL: https://issues.apache.org/jira/browse/HBASE-10569
 Project: HBase
  Issue Type: Improvement
  Components: master, Region Assignment
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.99.0

 Attachments: Co-locateMetaAndMasterHBASE-10569.pdf, 
 hbase-10569_v1.patch, hbase-10569_v2.patch, hbase-10569_v3.1.patch, 
 hbase-10569_v3.patch, master-rs.pdf


 I was thinking simplifying/improving the region assignments. The first step 
 is to co-locate the meta and the master as many people agreed on HBASE-5487.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-11119) Update ExportSnapShot to optionally not use a tmp file on external file system

2014-05-06 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi reassigned HBASE-11119:
---

Assignee: Ted Malaska

 Update ExportSnapShot to optionally not use a tmp file on external file system
 --

 Key: HBASE-11119
 URL: https://issues.apache.org/jira/browse/HBASE-11119
 Project: HBase
  Issue Type: New Feature
Reporter: Ted Malaska
Assignee: Ted Malaska
Priority: Minor

 There are FileSystems like S3 where renaming is extremely expensive.  This 
 patch will add a parameter that says something like
 use.tmp.folder
 It will be defaulted to true.  So the default behavior is the same.  If false is 
 set then the files will land in the final destination with no need for a 
 rename. 
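 A minimal sketch of the proposed behavior (the flag follows the use.tmp.folder wording 
 above; the full property name and helper class are hypothetical): when the flag is 
 false, the exported files go straight to the final destination instead of a temporary 
 directory followed by a rename.
{code:java}
// Hedged sketch: property name and class are illustrative, not the eventual patch.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

class SnapshotOutputDirSketch {
  static Path chooseOutputDir(Configuration conf, Path finalDir) {
    // On stores like S3 a rename is a copy, so skipping the tmp dir avoids paying twice.
    boolean useTmpFolder = conf.getBoolean("snapshot.export.use.tmp.folder", true); // hypothetical key
    return useTmpFolder
        ? new Path(finalDir.getParent(), "." + finalDir.getName() + ".tmp")
        : finalDir;
  }
}
{code}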



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11119) Update ExportSnapShot to optionally not use a tmp file on external file system

2014-05-06 Thread Ted Malaska (JIRA)
Ted Malaska created HBASE-11119:
---

 Summary: Update ExportSnapShot to optionally not use a tmp file on 
external file system
 Key: HBASE-11119
 URL: https://issues.apache.org/jira/browse/HBASE-11119
 Project: HBase
  Issue Type: New Feature
Reporter: Ted Malaska
Priority: Minor


There are FileSystems like S3 where renaming is extremely expensive.  This patch 
will add a parameter that says something like

use.tmp.folder

It will be defaulted to true.  So the default behavior is the same.  If false is 
set then the files will land in the final destination with no need for a 
rename. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10993) Deprioritize long-running scanners

2014-05-06 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991179#comment-13991179
 ] 

Jean-Daniel Cryans commented on HBASE-10993:


I'm +1, there are a few things to fix in the comments but you can do that on 
commit (like mantain).

BTW have you run this on a cluster where clients > handlers? 

 Deprioritize long-running scanners
 --

 Key: HBASE-10993
 URL: https://issues.apache.org/jira/browse/HBASE-10993
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-10993-v0.patch, HBASE-10993-v1.patch, 
 HBASE-10993-v2.patch, HBASE-10993-v3.patch, HBASE-10993-v4.patch, 
 HBASE-10993-v4.patch


 Currently we have a single call queue that serves all the normal user 
 requests, and the requests are executed in FIFO order.
 When running map-reduce jobs and user-queries on the same machine, we want to 
 prioritize the user-queries.
 Without changing too much code, and without having the user give hints, we can 
 add a “vtime” field to the scanner to keep track of how long it has been running. 
 And we can replace the callQueue with a priorityQueue. In this way we can 
 deprioritize long-running scans: the longer a scan request lives, the less 
 priority it gets.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11120) Update documentation about major compaction algorithm

2014-05-06 Thread Misty Stanley-Jones (JIRA)
Misty Stanley-Jones created HBASE-11120:
---

 Summary: Update documentation about major compaction algorithm
 Key: HBASE-11120
 URL: https://issues.apache.org/jira/browse/HBASE-11120
 Project: HBase
  Issue Type: Bug
  Components: Compaction, documentation
Affects Versions: 0.98.2
Reporter: Misty Stanley-Jones


[14:20:38]  jdcryans   seems that there's 
http://hbase.apache.org/book.html#compaction and 
http://hbase.apache.org/book.html#managed.compactions
[14:20:56]  jdcryans   the latter doesn't say much, except that you should 
manage them
[14:21:44]  jdcryans   the former gives a good description of the _old_ 
selection algo

[14:45:25]  jdcryans   this is the new selection algo since C5 / 0.96.0: 
https://issues.apache.org/jira/browse/HBASE-7842





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11121) Guidelines on when / how often to do major compaction, as well as default settings and how to change

2014-05-06 Thread Misty Stanley-Jones (JIRA)
Misty Stanley-Jones created HBASE-11121:
---

 Summary: Guidelines on when / how often to do major compaction, as 
well as default settings and how to change
 Key: HBASE-11121
 URL: https://issues.apache.org/jira/browse/HBASE-11121
 Project: HBase
  Issue Type: Bug
  Components: Compaction, documentation
Affects Versions: 0.98.2
Reporter: Misty Stanley-Jones


[14:53:10]  jdcryans   I'd say 80% of the time you should just let it happen 
naturally
[14:53:26]  jdcryans   the main thing is deciding how often you want to major 
compact
[14:53:38]  jdcryans   some people disable it completely because they don't 
update/delete
[14:53:58]  jdcryans   others may decide to do it weekly because they don't 
update/delete that often
[14:54:18]  jdcryans   and then there's the aggressive default of doing it 
every 24h
[14:55:39]  jdcryans   and it looks like the default is now 1 week in trunk
[14:56:09]  jdcryans   same in C5

[14:58:35]  jdcryans   automatic major* compactions
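For reference, a small sketch of how the interval discussed above is usually tuned; it 
assumes the standard hbase.hregion.majorcompaction key (milliseconds), so double-check 
the key and its default for your release.
{code:java}
// Hedged sketch: assumes the hbase.hregion.majorcompaction key (milliseconds).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MajorCompactionInterval {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Weekly automatic major compactions:
    conf.setLong("hbase.hregion.majorcompaction", 7L * 24 * 60 * 60 * 1000);
    // Or disable them entirely and trigger major compactions from a cron job instead:
    // conf.setLong("hbase.hregion.majorcompaction", 0);
    System.out.println("major compaction period (ms): "
        + conf.getLong("hbase.hregion.majorcompaction", -1));
  }
}
{code}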



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11121) Guidelines on when / how often to do major compaction, as well as default settings and how to change

2014-05-06 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-11121:


Description: 
[14:53:10]  jdcryans   I'd say 80% of the time you should just let it happen 
naturally
[14:53:26]  jdcryans   the main thing is deciding how often you want to major 
compact
[14:53:38]  jdcryans   some people disable it completely because they don't 
update/delete
[14:53:58]  jdcryans   others may decide to do it weekly because they don't 
update/delete that often
[14:54:18]  jdcryans   and then there's the aggressive default of doing it 
every 24h
[14:55:39]  jdcryans   and it looks like the default is now 1 week in trunk

[14:58:35]  jdcryans   automatic major* compactions

  was:
[14:53:10]  jdcryans   I'd say 80% of the time you should just let it happen 
naturally
[14:53:26]  jdcryans   the main thing is deciding how often you want to major 
compact
[14:53:38]  jdcryans   some people disable it completely because they don't 
update/delete
[14:53:58]  jdcryans   others may decide to do it weekly because they don't 
update/delete that often
[14:54:18]  jdcryans   and then there's the aggressive default of doing it 
every 24h
[14:55:39]  jdcryans   and it looks like the default is now 1 week un trunk
[14:56:09]  jdcryans   same in C5

[14:58:35]  jdcryans   automatic major* compactions


 Guidelines on when / how often to do major compaction, as well as default 
 settings and how to change
 

 Key: HBASE-11121
 URL: https://issues.apache.org/jira/browse/HBASE-11121
 Project: HBase
  Issue Type: Bug
  Components: Compaction, documentation
Affects Versions: 0.98.2
Reporter: Misty Stanley-Jones

 [14:53:10]  jdcryans I'd say 80% of the time you should just let it 
 happen naturally
 [14:53:26]  jdcryans the main thing is deciding how often you want 
 to major compact
 [14:53:38]  jdcryans some people disable it completely because they 
 don't update/delete
 [14:53:58]  jdcryans others may decide to do it weekly because they 
 don't update/delete that often
 [14:54:18]  jdcryans and then there's the aggressive default of 
 doing it every 24h
 [14:55:39]  jdcryans and it looks like the default is now 1 week in 
 trunk
 [14:58:35]  jdcryans automatic major* compactions



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-11121) Guidelines on when / how often to do major compaction, as well as default settings and how to change

2014-05-06 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones reassigned HBASE-11121:
---

Assignee: Misty Stanley-Jones

 Guidelines on when / how often to do major compaction, as well as default 
 settings and how to change
 

 Key: HBASE-11121
 URL: https://issues.apache.org/jira/browse/HBASE-11121
 Project: HBase
  Issue Type: Bug
  Components: Compaction, documentation
Affects Versions: 0.98.2
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones

 [14:53:10]  jdcryans I'd say 80% of the time you should just let it 
 happen naturally
 [14:53:26]  jdcryans the main thing is deciding how often you want 
 to major compact
 [14:53:38]  jdcryans some people disable it completely because they 
 don't update/delete
 [14:53:58]  jdcryans others may decide to do it weekly because they 
 don't update/delete that often
 [14:54:18]  jdcryans and then there's the aggressive default of 
 doing it every 24h
 [14:55:39]  jdcryans and it looks like the default is now 1 week in 
 trunk
 [14:58:35]  jdcryans automatic major* compactions



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-11120) Update documentation about major compaction algorithm

2014-05-06 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones reassigned HBASE-11120:
---

Assignee: Misty Stanley-Jones

 Update documentation about major compaction algorithm
 -

 Key: HBASE-11120
 URL: https://issues.apache.org/jira/browse/HBASE-11120
 Project: HBase
  Issue Type: Bug
  Components: Compaction, documentation
Affects Versions: 0.98.2
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones

 [14:20:38]  jdcryans seems that there's 
 http://hbase.apache.org/book.html#compaction and 
 http://hbase.apache.org/book.html#managed.compactions
 [14:20:56]  jdcryans the latter doesn't say much, except that you 
 should manage them
 [14:21:44]  jdcryans the former gives a good description of the 
 _old_ selection algo
 [14:45:25]  jdcryans this is the new selection algo since C5 / 
 0.96.0: https://issues.apache.org/jira/browse/HBASE-7842



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10569) Co-locate meta and master

2014-05-06 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10569:


Attachment: (was: master-rs.pdf)

 Co-locate meta and master
 -

 Key: HBASE-10569
 URL: https://issues.apache.org/jira/browse/HBASE-10569
 Project: HBase
  Issue Type: Improvement
  Components: master, Region Assignment
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.99.0

 Attachments: Co-locateMetaAndMasterHBASE-10569.pdf, 
 hbase-10569_v1.patch, hbase-10569_v2.patch, hbase-10569_v3.1.patch, 
 hbase-10569_v3.patch, master_rs.pdf


 I was thinking simplifying/improving the region assignments. The first step 
 is to co-locate the meta and the master as many people agreed on HBASE-5487.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11118) non environment variable solution for IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralBy

2014-05-06 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11118:
---

Fix Version/s: 0.99.0

 non environment variable solution for IllegalAccessError: class 
 com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: HBASE-11118
 URL: https://issues.apache.org/jira/browse/HBASE-11118
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.2
Reporter: André Kelpe
 Fix For: 0.99.0, 0.98.3


 I am running into the problem described in 
 https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
 newer version within cascading.hbase 
 (https://github.com/cascading/cascading.hbase).
 One of the features of cascading.hbase is that you can use it from lingual 
 (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
 lingual has a notion of providers, which are fat jars that we pull down 
 dynamically at runtime. Those jars give users the ability to talk to any 
 system or format from SQL. They are added to the classpath programmatically 
 before we submit jobs to a hadoop cluster.
 Since lingual does not know upfront which providers are going to be used in 
 a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
 clunky and breaks the ease of use we had before. No other provider requires 
 this right now.
 It would be great to have a programmatic way to fix this when using fat 
 jars.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10569) Co-locate meta and master

2014-05-06 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10569:


Attachment: master_rs.pdf

 Co-locate meta and master
 -

 Key: HBASE-10569
 URL: https://issues.apache.org/jira/browse/HBASE-10569
 Project: HBase
  Issue Type: Improvement
  Components: master, Region Assignment
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.99.0

 Attachments: Co-locateMetaAndMasterHBASE-10569.pdf, 
 hbase-10569_v1.patch, hbase-10569_v2.patch, hbase-10569_v3.1.patch, 
 hbase-10569_v3.patch, master_rs.pdf


 I was thinking simplifying/improving the region assignments. The first step 
 is to co-locate the meta and the master as many people agreed on HBASE-5487.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11118) non environment variable solution for IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-05-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991281#comment-13991281
 ] 

Andrew Purtell commented on HBASE-11118:


Can we either:

1. Relocate the protobuf Java runtime library classes somewhere under 
org.apache.hbase at package time using Maven's shader module 
(http://maven.apache.org/plugins/maven-shade-plugin/examples/class-relocation.html)
 when producing HBase JARs? If my read of the doc is correct all references in 
HBase code will be relocated at the bytecode level. 

2. Fork the BSD-licensed protobuf Java runtime library classes into a package 
under org.apache.hbase.

?


 non environment variable solution for IllegalAccessError: class 
 com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: HBASE-11118
 URL: https://issues.apache.org/jira/browse/HBASE-11118
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.2
Reporter: André Kelpe
 Fix For: 0.99.0, 0.98.3


 I am running into the problem described in 
 https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
 newer version within cascading.hbase 
 (https://github.com/cascading/cascading.hbase).
 One of the features of cascading.hbase is that you can use it from lingual 
 (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
 lingual has a notion of providers, which are fat jars that we pull down 
 dynamically at runtime. Those jars give users the ability to talk to any 
 system or format from SQL. They are added to the classpath programmatically 
 before we submit jobs to a hadoop cluster.
 Since lingual does not know upfront which providers are going to be used in 
 a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
 clunky and breaks the ease of use we had before. No other provider requires 
 this right now.
 It would be great to have a programmatic way to fix this when using fat 
 jars.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11122) Annotate coprocessor APIs

2014-05-06 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-11122:
--

 Summary: Annotate coprocessor APIs
 Key: HBASE-11122
 URL: https://issues.apache.org/jira/browse/HBASE-11122
 Project: HBase
  Issue Type: Task
Affects Versions: 0.99.0, 0.98.3
Reporter: Andrew Purtell


Add annotations to coprocessor APIs for:

- Interface stability

- Whether or not the hook is bypassable

- Whether or not the hook is executed under the row lock
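
Purely as an illustration of what such annotations might look like (the Bypassable and 
HoldsRowLock names below are invented for this sketch; only the idea of annotating each 
hook for stability, bypassability and locking comes from the issue):
{code:java}
// Hedged sketch: annotation names are hypothetical.
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Documented
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.METHOD)
@interface Bypassable {
  /** Whether calling bypass() in this hook has any effect. */
  boolean value() default true;
}

@Documented
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.METHOD)
@interface HoldsRowLock {
  /** Whether the hook runs while the row lock is held. */
  boolean value() default false;
}
{code}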



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6646) Upgrade to 0.96 section in the book

2014-05-06 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991320#comment-13991320
 ] 

Misty Stanley-Jones commented on HBASE-6646:


Is this done? http://hbase.apache.org/book.html#upgrade0.96

There is also http://hbase.apache.org/book.html#upgrade0.98 with a TODO tag. 
Need a new JIRA?

 Upgrade to 0.96 section in the book
 ---

 Key: HBASE-6646
 URL: https://issues.apache.org/jira/browse/HBASE-6646
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.95.2
Reporter: Enis Soztutar
Priority: Blocker

 We should have an upgrade section in the book for 0.96. Raising this as 
 blocker.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6626) Add a chapter on HDFS in the troubleshooting section of the HBase reference guide.

2014-05-06 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991316#comment-13991316
 ] 

Misty Stanley-Jones commented on HBASE-6626:


What's the status of this? It is marked as a blocker and looks like there is a 
lot of info in the comments.

 Add a chapter on HDFS in the troubleshooting section of the HBase reference 
 guide.
 --

 Key: HBASE-6626
 URL: https://issues.apache.org/jira/browse/HBASE-6626
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.95.2
Reporter: Nicolas Liochon
Assignee: Doug Meil
Priority: Blocker
 Attachments: troubleshooting.txt


 I looked mainly at the major failure case, but here is what I have:
 New sub chapter in the existing chapter Troubleshooting and Debugging 
 HBase: HDFS & HBASE
 1) HDFS & HBase
 2) Connection related settings
 2.1) Number of retries
 2.2) Timeouts
 3) Log samples
 1) HDFS & HBase
 HBase uses HDFS to store its HFile, i.e. the core HBase files and the 
 Write-Ahead-Logs, i.e. the files that will be used to restore the data after 
 a crash.
 In both cases, the reliability of HBase comes from the fact that HDFS writes 
 the data to multiple locations. To be efficient, HBase needs the data to be 
 available locally, hence it's highly recommended to have the HDFS datanode on 
 the same machines as the HBase Region Servers.
 Detailed information on how HDFS works can be found at [1].
 Important features are:
  - HBase is a client application of HDFS, i.e. uses the HDFS DFSClient class. 
 This class can appear in HBase logs with other HDFS client related logs.
  - Some HDFS settings are HDFS-server-side, i.e. must be set on the HDFS 
 side, while some others are HDFS-client-side, i.e. must be set in HBase, while 
 some others must be set in both places.
  - the HDFS writes are pipelined from one datanode to another. When writing, 
 there are communications between:
 - HBase and HDFS namenode, through the HDFS client classes.
 - HBase and HDFS datanodes, through the HDFS client classes.
  - HDFS datanodes between themselves: issues on these communications are in 
 HDFS logs, not HBase. HDFS writes are always local when possible. As a 
 consequence, there should not be many write errors in HBase Region Servers: 
 they write to the local datanode. If this datanode can't replicate the 
 blocks, it will appear in its logs, not in the region servers' logs.
  - datanodes can be contacted through the ipc.Client interface (once again 
 this class can show up in HBase logs) and the data transfer interface 
 (usually shows up as the DataNode class in the HBase logs). These are on 
 different ports (defaults being: 50010 and 50020).
  - To understand exactly what's going on, you must look at the HDFS log 
 files as well: HBase logs represent the client side.
  - With the default setting, HDFS needs 630s to mark a datanode as dead. For 
 this reason, this node will still be tried by HBase or by other datanodes 
 when writing and reading until HDFS definitively decides it's dead. This will 
 add some extra lines in the logs. This monitoring is performed by the 
 NameNode.
  - The HDFS clients (i.e. HBase using HDFS client code) don't fully rely on 
 the NameNode, but can temporarily mark a node as dead if they had an error 
 when they tried to use it.
 2) Settings for retries and timeouts
 2.1) Retries
 ipc.client.connect.max.retries
 Default 10
 Indicates the number of retries a client will make to establish a server 
 connection. Not taken into account if the error is a SocketTimeout. In this 
 case the number of retries is 45 (fixed on branch, HADOOP-7932 or in 
 HADOOP-7397). For SASL, the number of retries is hard-coded to 15. Can be 
 increased, especially if the socket timeouts have been lowered.
 ipc.client.connect.max.retries.on.timeouts
 Default 45
 If you have HADOOP-7932, max number of retries on timeout. Counter is 
 different than ipc.client.connect.max.retries so if you mix the socket errors 
 you will get 55 retries with the default values. Could be lowered, once it is 
 available. With HADOOP-7397 ipc.client.connect.max.retries is reused so there 
 would be 10 tries.
 dfs.client.block.write.retries
 Default 3
  Number of tries for the client when writing a block. After a failure, it will 
  connect to the namenode and get a new location, sending the list of the 
 datanodes already tried without success. Could be increased, especially if 
 the socket timeouts have been lowered. See HBASE-6490.
 dfs.client.block.write.locateFollowingBlock.retries
 Default 5
 Number of retries to the namenode when the client got 
 NotReplicatedYetException, i.e. the existing nodes of the files are not yet 
 replicated to 

[jira] [Resolved] (HBASE-6646) Upgrade to 0.96 section in the book

2014-05-06 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-6646.
--

Resolution: Duplicate

Thanks. Yes, this has been done elsewhere. Having an upgrade section from 0.94 to 0.98 
would be good though, in a separate jira. 

 Upgrade to 0.96 section in the book
 ---

 Key: HBASE-6646
 URL: https://issues.apache.org/jira/browse/HBASE-6646
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.95.2
Reporter: Enis Soztutar
Priority: Blocker

 We should have an upgrade section in the book for 0.96. Raising this as 
 blocker.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11123) Upgrade instructions from 0.94 to 0.98

2014-05-06 Thread Misty Stanley-Jones (JIRA)
Misty Stanley-Jones created HBASE-11123:
---

 Summary: Upgrade instructions from 0.94 to 0.98
 Key: HBASE-11123
 URL: https://issues.apache.org/jira/browse/HBASE-11123
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.95.2
Reporter: Misty Stanley-Jones
Priority: Blocker


We should have an upgrade section in the book for 0.96. Raising this as blocker.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11123) Upgrade instructions from 0.94 to 0.98

2014-05-06 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-11123:


Priority: Minor  (was: Blocker)

 Upgrade instructions from 0.94 to 0.98
 --

 Key: HBASE-11123
 URL: https://issues.apache.org/jira/browse/HBASE-11123
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.98.2
Reporter: Misty Stanley-Jones
Priority: Minor

 I cloned this from the 0.96 upgrade docs task. It was suggested that we need 
 upgrade instructions from 0.94 to 0.98. I will need source material to even 
 prioritize this.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11123) Upgrade instructions from 0.94 to 0.98

2014-05-06 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-11123:


Description: I cloned this from the 0.96 upgrade docs task. It was 
suggested that we need upgrade instructions from 0.94 to 0.98. I will need 
source material to even prioritize this. Assuming this is Minor.  (was: I 
cloned this from the 0.96 upgrade docs task. It was suggested that we need 
upgrade instructions from 0.94 to 0.98. I will need source material to even 
prioritize this.)

 Upgrade instructions from 0.94 to 0.98
 --

 Key: HBASE-11123
 URL: https://issues.apache.org/jira/browse/HBASE-11123
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.98.2
Reporter: Misty Stanley-Jones
Priority: Minor

 I cloned this from the 0.96 upgrade docs task. It was suggested that we need 
 upgrade instructions from 0.94 to 0.98. I will need source material to even 
 prioritize this. Assuming this is Minor.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10923) Control where to put meta region

2014-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13991364#comment-13991364
 ] 

Hadoop QA commented on HBASE-10923:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643621/hbase-10923.patch
  against trunk revision .
  ATTACHMENT ID: 12643621

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestMultiParallel
  org.apache.hadoop.hbase.master.TestAssignmentManager

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9469//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9469//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9469//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9469//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9469//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9469//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9469//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9469//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9469//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9469//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9469//console

This message is automatically generated.

 Control where to put meta region
 

 Key: HBASE-10923
 URL: https://issues.apache.org/jira/browse/HBASE-10923
 Project: HBase
  Issue Type: Improvement
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.99.0

 Attachments: hbase-10923.patch


 There is a concern about placing meta regions on the master, as in the comments 
 of HBASE-10569. I was thinking we should have a configuration for a load 
 balancer to decide where to put it.  By adjusting this configuration we can 
 control whether to put the meta on the master or on another region server.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11123) Upgrade instructions from 0.94 to 0.98

2014-05-06 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-11123:


Affects Version/s: (was: 0.95.2)
   0.98.2

 Upgrade instructions from 0.94 to 0.98
 --

 Key: HBASE-11123
 URL: https://issues.apache.org/jira/browse/HBASE-11123
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.98.2
Reporter: Misty Stanley-Jones
Priority: Minor

 I cloned this from the 0.96 upgrade docs task. It was suggested that we need 
 upgrade instructions from 0.94 to 0.98. I will need source material to even 
 prioritize this. Assuming this is Minor.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11123) Upgrade instructions from 0.94 to 0.98

2014-05-06 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-11123:


Description: I cloned this from the 0.96 upgrade docs task. It was 
suggested that we need upgrade instructions from 0.94 to 0.98. I will need 
source material to even prioritize this.  (was: We should have an upgrade 
section in the book for 0.96. Raising this as blocker.)

 Upgrade instructions from 0.94 to 0.98
 --

 Key: HBASE-11123
 URL: https://issues.apache.org/jira/browse/HBASE-11123
 Project: HBase
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.98.2
Reporter: Misty Stanley-Jones
Priority: Blocker

 I cloned this from the 0.96 upgrade docs task. It was suggested that we need 
 upgrade instructions from 0.94 to 0.98. I will need source material to even 
 prioritize this.



--
This message was sent by Atlassian JIRA
(v6.2#6252)