[jira] [Commented] (HBASE-11169) nit: fix incorrect javadoc in OrderedBytes about BlobVar and BlobCopy

2014-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000661#comment-14000661
 ] 

Hudson commented on HBASE-11169:


SUCCESS: Integrated in HBase-TRUNK #5133 (See 
[https://builds.apache.org/job/HBase-TRUNK/5133/])
HBASE-11169 nit: fix incorrect javadoc in OrderedBytes about BlobVar and 
BlobCopy (jmhsieh: rev 1595389)
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/OrderedBytes.java


 nit: fix incorrect javadoc in OrderedBytes about BlobVar and BlobCopy
 -

 Key: HBASE-11169
 URL: https://issues.apache.org/jira/browse/HBASE-11169
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 0.95.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Trivial
 Fix For: 0.99.0, 0.96.3, 0.98.3

 Attachments: HBASE-11169.patch


 Trivial error in javadoc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10003) OnlineMerge should be extended to allow bulk merging

2014-05-17 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000684#comment-14000684
 ] 

Esteban Gutierrez commented on HBASE-10003:
---

[~takeshi.miao] any progress on this? I think this is a really good improvement 
to have.

 OnlineMerge should be extended to allow bulk merging
 

 Key: HBASE-10003
 URL: https://issues.apache.org/jira/browse/HBASE-10003
 Project: HBase
  Issue Type: Improvement
  Components: Admin, Usability
Affects Versions: 0.98.0, 0.94.6
Reporter: Clint Heath
Assignee: takeshi.miao
Priority: Critical
  Labels: noob

 Now that we have Online Merge capabilities, the function of that tool should 
 be extended to make it much easier for HBase operations folks to use.  
 Currently it is a very manual process (one fraught with confusion) to hand 
 pick two regions that are contiguous to each other in the META table such 
 that the admin can manually request those two regions to be merged.
 In the real world, when admins find themselves wanting to merge regions, it's 
 usually because they've greatly increased their hbase.hregion.max.filesize 
 property and they have way too many regions on a table and want to reduce the 
 region count for that entire table quickly and easily.
 Why can't the OnlineMerge command just take a -max argument along with a 
 table name which tells it to go ahead and merge all regions of said table 
 until the resulting regions are all of max size?  This takes the voodoo out 
 of the process and quickly gets the admin what they're looking for.
 As part of this improvement, I also suggest a -regioncount argument for 
 OnlineMerge, which will attempt to reduce the table's region count down to 
 the specified number.
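The pairing logic such a -max option might use can be sketched independently of the Admin API. The following is a hypothetical illustration (class and method names are invented, not part of HBase), assuming region sizes are already known:

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch: pick adjacent region pairs to merge so the
 *  merged result stays under a maximum size. Not the actual OnlineMerge code. */
public class BulkMergePlanner {
    /** Returns indices i such that region i and region i+1 should be merged. */
    public static List<Integer> planMerges(long[] regionSizes, long maxSize) {
        List<Integer> merges = new ArrayList<>();
        int i = 0;
        while (i + 1 < regionSizes.length) {
            if (regionSizes[i] + regionSizes[i + 1] <= maxSize) {
                merges.add(i);  // merge regions i and i+1
                i += 2;         // skip past the merged pair
            } else {
                i += 1;         // pair would exceed max; move on
            }
        }
        return merges;
    }
}
```

A real implementation would issue a merge request for each chosen pair and repeat passes until no adjacent pair fits under the limit.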



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10835) DBE encode path improvements

2014-05-17 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000626#comment-14000626
 ] 

Anoop Sam John commented on HBASE-10835:


{code}
5 warnings
[WARNING] Javadoc Warnings
[WARNING] javadoc: warning - Multiple sources of package comments found for 
package org.apache.hadoop.hbase.io.hfile
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java:47:
 warning - Tag @link: can't find endBlockEncoding(HFileBlockEncodingContext, 
DataOutputStream, byte[]) in 
org.apache.hadoop.hbase.io.hfile.HFileDataBlockEncoder
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/package-info.java:119:
 warning - Tag @link: reference not found: 
CacheConfig#SLAB_CACHE_OFFHEAP_PERCENTAGE_KEY
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/package-info.java:119:
 warning - Tag @link: reference not found: HConstants#HFILE_BLOCK_CACHE_SIZE_KEY
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/package-info.java:119:
 warning - Tag @link: reference not found: 
CacheConfig#BUCKET_CACHE_COMBINED_PERCENTAGE_KEY
[INFO] 
{code}
Out of this only one warn regarding HFileDataBlockEncoder is introduced by this 
patch. I can correct that on commit.

 DBE encode path improvements
 

 Key: HBASE-10835
 URL: https://issues.apache.org/jira/browse/HBASE-10835
 Project: HBase
  Issue Type: Improvement
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0

 Attachments: HBASE-10835.patch, HBASE-10835_V2.patch, 
 HBASE-10835_V3.patch, HBASE-10835_V4.patch


 Currently we first write KVs (Cells) into a buffer, which is then passed to 
 the DBE encoder. The encoder reads the KVs one by one from the buffer, 
 encodes them, and creates a new buffer.
 There is no need for this model now. Previously we had the option of no 
 encoding on disk and encoding only in cache; at that time the read buffer 
 from an HFile block was passed in and encoded.
 So we can now encode cell by cell. Making this change will require a NoOp 
 DBE implementation which just writes a cell as it is, without any encoding.
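As a rough illustration of what a NoOp encoder means (a hypothetical, heavily simplified sketch; the real HBase encoder interface and Cell type are not reproduced here), it simply copies cell bytes through unchanged:

```java
import java.io.DataOutputStream;
import java.io.IOException;

/** Hypothetical no-op "encoder": writes each cell's bytes as-is,
 *  applying no transformation. A simplified stand-in for a NoOp DBE impl. */
public class NoOpEncoder {
    /** Writes a length-prefixed key and value, unencoded. */
    public static void encode(DataOutputStream out, byte[] key, byte[] value)
            throws IOException {
        out.writeInt(key.length);
        out.write(key);
        out.writeInt(value.length);
        out.write(value);
    }
}
```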



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11167) Avoid usage of java.rmi package Exception in MemStore

2014-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000663#comment-14000663
 ] 

Hudson commented on HBASE-11167:


SUCCESS: Integrated in HBase-TRUNK #5133 (See 
[https://builds.apache.org/job/HBase-TRUNK/5133/])
HBASE-11167 Avoid usage of java.rmi package Exception in MemStore. (Anoop) 
(anoopsamjohn: rev 1595100)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultMemStore.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/UnexpectedStateException.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java


 Avoid usage of java.rmi package Exception in MemStore
 -

 Key: HBASE-11167
 URL: https://issues.apache.org/jira/browse/HBASE-11167
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0

 Attachments: HBASE-11167.patch


 This Exception was already in use. While turning MemStore into an interface 
 I did not look closely at it (whether the class came from RMI or not), so 
 it went into the interface as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-10768) hbase/bin/hbase-cleanup.sh has wrong usage string

2014-05-17 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez resolved HBASE-10768.
---

Resolution: Duplicate

Dup of HBASE-10769

 hbase/bin/hbase-cleanup.sh has wrong usage string
 -

 Key: HBASE-10768
 URL: https://issues.apache.org/jira/browse/HBASE-10768
 Project: HBase
  Issue Type: Improvement
  Components: Usability
Affects Versions: 0.96.1, 0.98.1
Reporter: Vamsee Yarlagadda
Priority: Trivial

 Looks like hbase-cleanup.sh has a wrong usage string.
 https://github.com/apache/hbase/blob/trunk/bin/hbase-cleanup.sh#L34
 Current Usage string:
 {code}
 [systest@search-testing-c5-ncm-1 ~]$ echo 
 `/usr/lib/hbase/bin/hbase-cleanup.sh`
 Usage: hbase-cleanup.sh (zk|hdfs|all)
 {code}
 But digging into the logic of hbase-cleanup.sh, it should ideally be 
 modified to
 {code}
 [systest@search-testing-c5-ncm-1 ~]$ echo 
 `/usr/lib/hbase/bin/hbase-cleanup.sh`
 Usage: hbase-cleanup.sh (--cleanZk|--cleanHdfs|--cleanAll)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11108) Split ZKTable into interface and implementation

2014-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000593#comment-14000593
 ] 

Hadoop QA commented on HBASE-11108:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645309/HBASE-11108.patch
  against trunk revision .
  ATTACHMENT ID: 12645309

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 115 
new or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+   * @throws org.apache.hadoop.hbase.CoordinatedStateException if error 
happened in underlying coordination engine
+  public static final String HBASE_COORDINATED_STATE_MANAGER_CLASS = 
"hbase.coordinated.state.manager.class";
+CoordinatedStateManager cp = 
CoordinatedStateManagerFactory.getCoordinatedStateManager(conf);
+  private void handleEnableTable() throws IOException, 
CoordinatedStateException, InterruptedException {
+CoordinatedStateManager cp = 
CoordinatedStateManagerFactory.getCoordinatedStateManager(HTU.getConfiguration());
+CoordinatedStateManager cp = 
CoordinatedStateManagerFactory.getCoordinatedStateManager(HTU.getConfiguration());
+CoordinatedStateManager cp = 
CoordinatedStateManagerFactory.getCoordinatedStateManager(HTU.getConfiguration());

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.util.TestHBaseFsck

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9528//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9528//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9528//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9528//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9528//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9528//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9528//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9528//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9528//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9528//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9528//console

This message is automatically generated.

 Split ZKTable into interface and implementation
 ---

 Key: HBASE-11108
 URL: https://issues.apache.org/jira/browse/HBASE-11108
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Affects Versions: 0.99.0
Reporter: Konstantin Boudnik
Assignee: Mikhail Antonov
 Attachments: HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, 
 HBASE-11108.patch, HBASE-11108.patch


 In HBASE-11071 we are trying to split admin handlers away from ZK. However, 
 a ZKTable instance is being used in multiple places, hence it would be 
 beneficial to hide its implementation behind a well-defined interface.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11189) Subprocedure should be marked as complete upon failure

2014-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000662#comment-14000662
 ] 

Hudson commented on HBASE-11189:


SUCCESS: Integrated in HBase-TRUNK #5133 (See 
[https://builds.apache.org/job/HBase-TRUNK/5133/])
HBASE-11189 Subprocedure should be marked as complete upon failure (tedyu: rev 
1595357)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ProcedureMember.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/Subprocedure.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedure.java


 Subprocedure should be marked as complete upon failure
 --

 Key: HBASE-11189
 URL: https://issues.apache.org/jira/browse/HBASE-11189
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.99.0, 0.98.3

 Attachments: 11189-v1.txt, 11189-v2.txt


 ProcedureMember#submitSubprocedure() uses the following check:
 {code}
   if (!rsub.isComplete()) {
     LOG.error("Subproc '" + procName + "' is already running. Bailing out");
     return false;
   }
 {code}
 If a subprocedure of that name previously ran but failed, its complete field 
 would stay false, leading to early bailout.
 A failed subprocedure should mark itself complete.
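The usual shape of such a fix can be sketched as follows (a hedged illustration with invented names, not the committed patch): completion is marked in a finally block, so a failed run still completes:

```java
/** Hypothetical sketch of the fix: ensure a subprocedure is marked
 *  complete even when its work fails, so a later submit with the
 *  same name is not rejected as "already running". */
public class Subprocedure {
    private volatile boolean complete = false;

    public boolean isComplete() {
        return complete;
    }

    /** Runs the work; the finally block marks completion on success
     *  and on failure alike. */
    public void call(Runnable work) {
        try {
            work.run();
        } finally {
            complete = true;  // mark complete regardless of outcome
        }
    }
}
```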



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11166) TestTimestampEncoder doesn't have category

2014-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000672#comment-14000672
 ] 

Hadoop QA commented on HBASE-11166:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645361/11166-v2.txt
  against trunk revision .
  ATTACHMENT ID: 12645361

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestMultiTableInputFormat.testScan(TestMultiTableInputFormat.java:244)
at 
org.apache.hadoop.hbase.mapreduce.TestMultiTableInputFormat.testScanYZYToEmpty(TestMultiTableInputFormat.java:195)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9530//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9530//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9530//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9530//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9530//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9530//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9530//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9530//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9530//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9530//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9530//console

This message is automatically generated.

 TestTimestampEncoder doesn't have category
 --

 Key: HBASE-11166
 URL: https://issues.apache.org/jira/browse/HBASE-11166
 Project: HBase
  Issue Type: Test
Affects Versions: 0.98.2
Reporter: Ted Yu
Assignee: Rekha Joshi
Priority: Minor
 Attachments: 11166-v2.txt, HBASE-11166.1.patch


 Jeff Bowles discovered that TestTimestampEncoder doesn't have a category.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11137) Add mapred.TableSnapshotInputFormat

2014-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000766#comment-14000766
 ] 

Hudson commented on HBASE-11137:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #291 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/291/])
HBASE-11137 Add mapred.TableSnapshotInputFormat (ndimiduk: rev 1594984)
* 
/hbase/branches/0.98/hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestTableSnapshotInputFormat.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableSnapshotInputFormat.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableSnapshotInputFormat.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatTestBase.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java


 Add mapred.TableSnapshotInputFormat
 ---

 Key: HBASE-11137
 URL: https://issues.apache.org/jira/browse/HBASE-11137
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce, Performance
Affects Versions: 0.98.0, 0.96.2
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-11137.00-0.98.patch, HBASE-11137.00.patch, 
 HBASE-11137.01.patch, HBASE-11137.01_rerun.patch, HBASE-11137.02-0.98.patch, 
 HBASE-11137.02.patch


 We should have feature parity between mapreduce and mapred implementations. 
 This is important for Hive.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11168) [docs] Remove references to RowLocks in post 0.96 docs.

2014-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000771#comment-14000771
 ] 

Hudson commented on HBASE-11168:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #291 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/291/])
HBASE-11168 [docs] Remove references to Rowlocks in post-0.96 docs. (jmhsieh: 
rev 1595393)
* /hbase/branches/0.98/src/main/docbkx/book.xml
* /hbase/branches/0.98/src/main/docbkx/troubleshooting.xml


 [docs] Remove references to RowLocks in post 0.96 docs.
 ---

 Key: HBASE-11168
 URL: https://issues.apache.org/jira/browse/HBASE-11168
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.95.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.99.0, 0.98.3

 Attachments: hbase-11168.patch


 Row locks were removed in 0.95 by HBASE-7315 / HBASE-2332.  There are a few 
 vestiges of them in the docs.   Remove.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6990) Pretty print TTL

2014-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000763#comment-14000763
 ] 

Hudson commented on HBASE-6990:
---

FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #291 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/291/])
HBASE-6990 ADDENDUM (jmhsieh: rev 1595408)
* 
/hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/util/PrettyPrinter.java
HBASE-6990 pretty print TTL (Esteban Gutierrez) (jmhsieh: rev 1595397)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* 
/hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java


 Pretty print TTL
 

 Key: HBASE-6990
 URL: https://issues.apache.org/jira/browse/HBASE-6990
 Project: HBase
  Issue Type: Improvement
  Components: Usability
Reporter: Jean-Daniel Cryans
Assignee: Esteban Gutierrez
Priority: Minor
 Attachments: HBASE-6990.v0.patch, HBASE-6990.v1.patch, 
 HBASE-6990.v2.patch, HBASE-6990.v3.patch, HBASE-6990.v4.patch


 I've seen a lot of users getting confused by the TTL configuration, and I 
 think that if we just pretty printed it, that would solve most of the 
 issues.
 For example, let's say a user wanted to set a TTL of 90 days. That would be 
 7776000 seconds. But let's say that it was typo'd to 77760000 instead: that 
 gives you 900 days!
 So when we print the TTL we could do something like "x days, x hours, x 
 minutes, x seconds (real_ttl_value)". This would also help people when they 
 use ms instead of seconds, as they would see really big values in there.
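A minimal sketch of such pretty-printing (hypothetical class name, plain integer arithmetic over a TTL given in seconds; the actual PrettyPrinter in HBase may format differently):

```java
/** Hypothetical sketch of TTL pretty-printing: break a TTL in seconds
 *  into days/hours/minutes/seconds, keeping the raw value alongside. */
public class TtlPrinter {
    public static String format(long ttlSeconds) {
        long days = ttlSeconds / 86400;          // 86400 seconds per day
        long hours = (ttlSeconds % 86400) / 3600;
        long minutes = (ttlSeconds % 3600) / 60;
        long seconds = ttlSeconds % 60;
        return days + " days, " + hours + " hours, " + minutes
                + " minutes, " + seconds + " seconds (" + ttlSeconds + ")";
    }
}
```

With this rendering, the 90-day TTL from the example above prints as "90 days, ..." while the mistyped value immediately stands out as "900 days, ...".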



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11189) Subprocedure should be marked as complete upon failure

2014-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000769#comment-14000769
 ] 

Hudson commented on HBASE-11189:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #291 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/291/])
HBASE-11189 Subprocedure should be marked as complete upon failure (tedyu: rev 
1595356)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ProcedureMember.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/Subprocedure.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedure.java


 Subprocedure should be marked as complete upon failure
 --

 Key: HBASE-11189
 URL: https://issues.apache.org/jira/browse/HBASE-11189
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.99.0, 0.98.3

 Attachments: 11189-v1.txt, 11189-v2.txt


 ProcedureMember#submitSubprocedure() uses the following check:
 {code}
   if (!rsub.isComplete()) {
     LOG.error("Subproc '" + procName + "' is already running. Bailing out");
     return false;
   }
 {code}
 If a subprocedure of that name previously ran but failed, its complete field 
 would stay false, leading to early bailout.
 A failed subprocedure should mark itself complete.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11169) nit: fix incorrect javadoc in OrderedBytes about BlobVar and BlobCopy

2014-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000764#comment-14000764
 ] 

Hudson commented on HBASE-11169:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #291 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/291/])
HBASE-11169 nit: fix incorrect javadoc in OrderedBytes about BlobVar and 
BlobCopy (jmhsieh: rev 1595390)
* 
/hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/util/OrderedBytes.java


 nit: fix incorrect javadoc in OrderedBytes about BlobVar and BlobCopy
 -

 Key: HBASE-11169
 URL: https://issues.apache.org/jira/browse/HBASE-11169
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 0.95.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Trivial
 Fix For: 0.99.0, 0.96.3, 0.98.3

 Attachments: HBASE-11169.patch


 Trivial error in javadoc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11134) Add a -list-snapshots option to SnapshotInfo

2014-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000770#comment-14000770
 ] 

Hudson commented on HBASE-11134:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #291 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/291/])
HBASE-11134 Add a -list-snapshots option to SnapshotInfo (mbertozzi: rev 
1594855)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java


 Add a -list-snapshots option to SnapshotInfo
 

 Key: HBASE-11134
 URL: https://issues.apache.org/jira/browse/HBASE-11134
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.99.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.99.0, 0.96.3, 0.94.20, 0.98.3

 Attachments: HBASE-11134-v0.patch, HBASE-11134-v1.patch


 Add a -list-snapshots option to SnapshotInfo to show all the available 
 snapshots. Also add a -remote-dir option to simplify the usage of 
 SnapshotInfo in case the snapshot dir is not the one of the current HBase 
 cluster.
 {code}
 $ hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -list-snapshots
 SNAPSHOT | CREATION TIME        | TABLE NAME
 foo      | 2014-05-07T22:40:13  | testtb
 bar      | 2014-05-07T22:40:16  | testtb
 $ hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -remote-dir 
 file:///backup/ -snapshot my_local_snapshot
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10212) New rpc metric: number of active handler

2014-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000765#comment-14000765
 ] 

Hudson commented on HBASE-10212:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #291 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/291/])
Amend HBASE-10561 Forward port: HBASE-10212 New rpc metric: number of active 
handler (liangxie: rev 1594133)
* 
/hbase/branches/0.98/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
Amend HBASE-10561 Forward port: HBASE-10212 New rpc metric: number of active 
handler (liangxie: rev 1594130)
* 
/hbase/branches/0.98/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSource.java
* 
/hbase/branches/0.98/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperStub.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
HBASE-10561 Forward port: HBASE-10212 New rpc metric: number of active handler 
(liangxie: rev 1594118)
* 
/hbase/branches/0.98/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapper.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/FifoRpcScheduler.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperImpl.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcScheduler.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperStub.java


 New rpc metric: number of active handler
 

 Key: HBASE-10212
 URL: https://issues.apache.org/jira/browse/HBASE-10212
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Reporter: Chao Shi
Assignee: Chao Shi
 Fix For: 0.94.17

 Attachments: hbase-10212.patch


 The attached patch adds a new metric: the number of active handler threads. 
 We found this is a good metric for measuring how busy a server is. If this 
 number is too high (compared to the total number of handlers), the server 
 risks filling its call queue.
 We used to monitor # reads or # writes. However, we found this often 
 produces false alerts, because a read touching HDFS creates a much higher 
 workload than a block-cached read.
 The attached patch is based on our internal 0.94 branch, but I think it is 
 pretty easy to rebase to other branches if you think it is useful.
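A minimal sketch of how such a gauge can be kept (hypothetical names; the real patch wires this into the RPC scheduler and metrics sources listed above): each handler bumps a shared counter for the duration of a call, so a sampler can read in-flight "busyness" at any moment:

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Hypothetical sketch of an active-handler gauge: handlers increment
 *  the counter while servicing a call and decrement when done. */
public class ActiveHandlerGauge {
    private final AtomicInteger active = new AtomicInteger();

    /** Current number of handlers servicing calls. */
    public int getActiveCount() {
        return active.get();
    }

    /** Wraps one RPC call so the gauge tracks in-flight handlers. */
    public void runCall(Runnable call) {
        active.incrementAndGet();
        try {
            call.run();
        } finally {
            active.decrementAndGet();  // released even if the call throws
        }
    }
}
```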



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10251) Restore API Compat for PerformanceEvaluation.generateValue()

2014-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000772#comment-14000772
 ] 

Hudson commented on HBASE-10251:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #291 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/291/])
HBASE-10251 Restore API Compat for PerformanceEvaluation.generateValue() (Dima 
Spivak via JD) (jdcryans: rev 1594638)
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java


 Restore API Compat for PerformanceEvaluation.generateValue()
 

 Key: HBASE-10251
 URL: https://issues.apache.org/jira/browse/HBASE-10251
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.98.1, 0.99.0, 0.98.2
Reporter: Aleksandr Shulman
Assignee: Dima Spivak
  Labels: api_compatibility
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-10251-v2.patch, HBASE_10251.patch, 
 HBASE_10251_v3.patch


 Observed:
 A couple of my client tests fail to compile against trunk because the method 
 PerformanceEvaluation.generateValue was removed as part of HBASE-8496.
 This is an issue because it was used in a number of places, including unit 
 tests. Since we did not explicitly label this API as private, it's ambiguous 
 as to whether this could/should have been used by people writing apps against 
 0.96. If they used it, then they would be broken upon upgrade to 0.98 and 
 trunk.
 Potential Solution:
 The method was renamed to generateData, but the logic is still the same. We 
 can reintroduce it in 0.98, deprecated, as a compat shim over generateData. 
 The patch should be a few lines. We may also consider doing so in trunk, but 
 I'd be just as fine with leaving it out.
 More generally, this raises the question about what other code is in this 
 grey-area, where it is public, is used outside of the package, but is not 
 explicitly labeled with an AudienceInterface.
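
The compat-shim pattern described above can be sketched as follows. The signatures are illustrative (the real PerformanceEvaluation methods may differ); the point is that the removed method survives as a deprecated delegate to its renamed replacement.

```java
import java.util.Random;

// Sketch of a compat shim: keep the removed public method compiling by
// delegating to the renamed one. Signatures here are illustrative.
class CompatShimSketch {
    /** New API: returns 'length' bytes of pseudo-random data. */
    public static byte[] generateData(Random r, int length) {
        byte[] b = new byte[length];
        r.nextBytes(b);
        return b;
    }

    /** @deprecated use {@link #generateData(Random, int)} instead. */
    @Deprecated
    public static byte[] generateValue(Random r, int length) {
        return generateData(r, length); // old callers keep working unchanged
    }
}
```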



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11143) Improve replication metrics

2014-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000767#comment-14000767
 ] 

Hudson commented on HBASE-11143:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #291 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/291/])
HBASE-11143 Improve replication metrics. (larsh: rev 1594469)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsSource.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java


 Improve replication metrics
 ---

 Key: HBASE-11143
 URL: https://issues.apache.org/jira/browse/HBASE-11143
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.99.0, 0.94.20, 0.98.3

 Attachments: 11143-0.94-v2.txt, 11143-0.94-v3.txt, 11143-0.94.txt, 
 11143-trunk.txt


 We are trying to report on replication lag and find that there is no good 
 single metric to do that.
 ageOfLastShippedOp is close, but unfortunately it is increased even when 
 there is nothing to ship on a particular RegionServer.
 I would like discuss a few options here:
 Add a new metric: replicationQueueTime (or something) with the above meaning. 
 I.e. if we have something to ship we set the age of that last shipped edit; 
 if we fail we increment that last time (just like we do now). But if there is 
 nothing to replicate we set it to the current time (and hence that metric is 
 reported as close to 0).
 Alternatively we could change the meaning of ageOfLastShippedOp to do exactly 
 that. That might lead to surprises, but the current behavior is clearly weird 
 when there is nothing to replicate.
 Comments? [~jdcryans], [~stack].
 If this approach sounds good, I'll make a patch for all branches.
 Edit: Also adds a new shippedKBs metric to track the amount of data that is 
 shipped via replication.
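
The proposed semantics can be sketched as below (illustrative names, not the actual MetricsSource code): report the age of the last shipped edit while there is a backlog, and ~0 when there is nothing to replicate, so the metric tracks real lag.

```java
// Sketch of the proposed ageOfLastShippedOp semantics from HBASE-11143.
// Names are illustrative, not the HBase replication metrics API.
class ReplicationLagSketch {
    private long ageOfLastShippedOp;

    /** Refresh the metric after each attempt to ship edits. */
    public void refreshAge(long lastShippedEditTs, boolean queueEmpty, long now) {
        if (queueEmpty) {
            ageOfLastShippedOp = 0; // nothing to ship: lag is effectively zero
        } else {
            // On failure 'now' keeps advancing, so the reported age keeps growing.
            ageOfLastShippedOp = now - lastShippedEditTs;
        }
    }

    public long getAgeOfLastShippedOp() {
        return ageOfLastShippedOp;
    }
}
```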



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10561) Forward port: HBASE-10212 New rpc metric: number of active handler

2014-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000768#comment-14000768
 ] 

Hudson commented on HBASE-10561:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #291 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/291/])
Amend HBASE-10561 Forward port: HBASE-10212 New rpc metric: number of active 
handler (liangxie: rev 1594133)
* 
/hbase/branches/0.98/hbase-hadoop1-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
Amend HBASE-10561 Forward port: HBASE-10212 New rpc metric: number of active 
handler (liangxie: rev 1594130)
* 
/hbase/branches/0.98/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSource.java
* 
/hbase/branches/0.98/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperStub.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java
HBASE-10561 Forward port: HBASE-10212 New rpc metric: number of active handler 
(liangxie: rev 1594118)
* 
/hbase/branches/0.98/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapper.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/FifoRpcScheduler.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperImpl.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcScheduler.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperStub.java


 Forward port: HBASE-10212 New rpc metric: number of active handler
 --

 Key: HBASE-10561
 URL: https://issues.apache.org/jira/browse/HBASE-10561
 Project: HBase
  Issue Type: Sub-task
  Components: IPC/RPC
Reporter: Lars Hofhansl
Assignee: Liang Xie
 Fix For: 0.99.0, 0.96.3, 0.98.3

 Attachments: HBASE-10561.txt


 The metrics implementation has changed a lot in 0.96.
 Forward port HBASE-10212 to 0.96 and later.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11126) Add RegionObserver pre hooks that operate under row lock

2014-05-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000791#comment-14000791
 ] 

Andrew Purtell commented on HBASE-11126:


bq. So I think we should not change the CP behaviour. At least in 98. So the 
option would be to come up with new hooks.

That was my first thought too. 

 Add RegionObserver pre hooks that operate under row lock
 

 Key: HBASE-11126
 URL: https://issues.apache.org/jira/browse/HBASE-11126
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.99.0, 0.98.3
Reporter: Andrew Purtell
Assignee: ramkrishna.s.vasudevan

 The coprocessor hooks were placed outside of row locks. This was meant to 
 sidestep performance issues arising from significant work done within hook 
 invocations. However as the security code increases in sophistication we are 
 now running into concurrency issues trying to use them as a result of that 
 early decision. Since the initial introduction of coprocessor upcalls there 
 has been some significant refactoring done around them and concurrency 
 control in core has become more complex. This is potentially an issue for 
 many coprocessor users.
 We should do either:
 - Move all existing RegionObserver pre* hooks to execute under row lock.
 - Introduce a new set of RegionObserver pre* hooks that execute under row 
 lock, named to indicate such.
 The second option is less likely to lead to surprises.
 All RegionObserver hook Javadoc should be updated with advice to the 
 coprocessor implementor not to take their own row locks in the hook. If the 
 current thread happens to already have a row lock and they try to take a lock 
 on another row, there is a deadlock risk.
 As always a drawback of adding hooks is the potential for performance impact. 
 We should benchmark the impact and decide if the second option above is a 
 viable choice or if the first option is required.
 Finally, we should introduce a higher level interface for managing the 
 registration of 'user' code for execution from the low level hooks. I filed 
 HBASE-11125 to discuss this further.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11194) [AccessController] issue with covering permission check in case of concurrent op on same row

2014-05-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000798#comment-14000798
 ] 

Andrew Purtell commented on HBASE-11194:


Both the thread doing the Put and the thread doing the Delete in this scenario 
must have been granted permission already. When deciding if an action is 
allowed we take the union of permissions found in the permission dominance 
hierarchy, which is global -> namespace -> table -> cf -> cell. We start checks 
at the top and work our way down. In this scheme, cell ACLs can grant access if 
no other level grants, but they don't revoke access. (Complicating note: We can 
get the effect of a cell ACL revoking access using the cell-first evaluation 
strategy at Get or Scan time, but this is not relevant for covering 
calculations for mutations.). 

If access is not granted at the CF or table level for the Delete, but the Put 
includes a cell ACL that grants, and the Put is not yet visible to the Delete, 
then the Delete will not be allowed as there is no visible/effective grant.

If access is granted at the CF or table level for the Delete, it doesn't matter 
what kind of cell ACL the Put has, the Delete is still allowed. The described 
interaction between the Put and the Delete is expected behavior for concurrent 
ops executing in different RPC handlers. 

However, certainly it's complicated trying to explain it. It's valid to say 
we don't want cell ACL semantics so difficult to explain or reason about.

HBASE-11126 is a must as this starts as an issue with the coprocessor framework.
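
The top-down union evaluation described above can be sketched as follows (illustrative names, not the AccessController API): a grant at any level of global -> namespace -> table -> cf -> cell allows the action, and no lower level can revoke a grant made higher up.

```java
// Sketch of union permission evaluation as described in the comment above.
// Names are illustrative, not the HBase AccessController API.
class AclUnionSketch {
    public static boolean actionAllowed(boolean globalGrant, boolean nsGrant,
                                        boolean tableGrant, boolean cfGrant,
                                        boolean cellGrant) {
        // Union semantics: any single grant suffices; no level revokes.
        return globalGrant || nsGrant || tableGrant || cfGrant || cellGrant;
    }
}
```

This is why, in the scenario above, a table- or CF-level grant lets the Delete proceed regardless of what cell ACL the concurrent Put carries.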

 [AccessController] issue with covering permission check in case of concurrent 
 op on same row
 

 Key: HBASE-11194
 URL: https://issues.apache.org/jira/browse/HBASE-11194
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0, 0.98.3


 The issue is that in the hook where we do the check, we have not acquired the 
 row lock. Take the case of a delete: we do the check in the preDelete() hook, 
 where we get the covering cells and check against their ACLs. At the point of 
 the preDelete hook, we have not acquired the row lock on the row being deleted.
 Consider 2 parallel threads, one doing a put and the other a delete, both 
 dealing with the same row.
 Thread 1 acquired the row lock, decided the TS (HRS time), and is doing the 
 memstore write and HLog sync, but the mvcc read point is NOT advanced.
 Thread 2 at the same time is doing the delete of the row (say with the latest 
 TS; the intent is to delete the entire row) and is in the preDelete hook. 
 There is no row locking happening at this point. As part of the covering 
 permission check, it does a Get. But as said above, the put is not complete 
 and the mvcc advance has not happened, so the Get won't return the new cell. 
 It will return the old cells, and the check passes for the old cells. Now 
 suppose the new cell's ACL does not match for the deleting user. Because that 
 cell was not read, it was never checked, so the ACL check will allow the user 
 to delete the row. The flow later comes to HRegion#doMiniBatchMutate() and 
 tries to acquire the row lock; by that time the Thread 1 op is over, so it 
 will get the lock and will add the delete tombstone. As a result the cell, for 
 which the deleting user has no ACL right, also gets deleted.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-11194) [AccessController] issue with covering permission check in case of concurrent op on same row

2014-05-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000798#comment-14000798
 ] 

Andrew Purtell edited comment on HBASE-11194 at 5/17/14 2:09 PM:
-

Both the thread doing the Put and the thread doing the Delete in this scenario 
must have been granted permission already. When deciding if an action is 
allowed we take the union of permissions found when evaluating grants in the 
permission dominance hierarchy, which is global -> namespace -> table -> cf -> 
cell. We start checks at the top and work our way down. In this scheme, cell 
ACLs can grant access if no other level grants, but they don't revoke access. 
(Complicating note: We can get the effect of a cell ACL revoking access using 
the cell-first evaluation strategy at Get or Scan time, but this is not 
relevant for covering calculations for mutations.). 

If access is not granted at the CF or table level for the Delete, but the Put 
includes a cell ACL that grants, and the Put is not yet visible to the Delete, 
then the Delete will not be allowed as there is no visible/effective grant.

If access is granted at the CF or table level for the Delete, it doesn't matter 
what kind of cell ACL the Put has, the Delete is still allowed. The described 
interaction between the Put and the Delete is expected behavior for concurrent 
ops executing in different RPC handlers. 

However, certainly it's complicated trying to explain it. It's valid to say 
we don't want cell ACL semantics so difficult to explain or reason about.

HBASE-11126 is a must as this starts as an issue with the coprocessor framework.


was (Author: apurtell):
Both the thread doing the Put and the thread doing the Delete in this scenario 
must have been granted permission already. When deciding if an action is 
allowed we take the union of permissions found in the permission dominance 
hierarchy, which is global -> namespace -> table -> cf -> cell. We start checks 
at the top and work our way down. In this scheme, cell ACLs can grant access if 
no other level grants, but they don't revoke access. (Complicating note: We can 
get the effect of a cell ACL revoking access using the cell-first evaluation 
strategy at Get or Scan time, but this is not relevant for covering 
calculations for mutations.). 

If access is not granted at the CF or table level for the Delete, but the Put 
includes a cell ACL that grants, and the Put is not yet visible to the Delete, 
then the Delete will not be allowed as there is no visible/effective grant.

If access is granted at the CF or table level for the Delete, it doesn't matter 
what kind of cell ACL the Put has, the Delete is still allowed. The described 
interaction between the Put and the Delete is expected behavior for concurrent 
ops executing in different RPC handlers. 

However, certainly it's complicated trying to explain it. It's valid to say 
we don't want cell ACL semantics so difficult to explain or reason about.

HBASE-11126 is a must as this starts as an issue with the coprocessor framework.

 [AccessController] issue with covering permission check in case of concurrent 
 op on same row
 

 Key: HBASE-11194
 URL: https://issues.apache.org/jira/browse/HBASE-11194
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0, 0.98.3


 The issue is that in the hook where we do the check, we have not acquired the 
 row lock. Take the case of a delete: we do the check in the preDelete() hook, 
 where we get the covering cells and check against their ACLs. At the point of 
 the preDelete hook, we have not acquired the row lock on the row being deleted.
 Consider 2 parallel threads, one doing a put and the other a delete, both 
 dealing with the same row.
 Thread 1 acquired the row lock, decided the TS (HRS time), and is doing the 
 memstore write and HLog sync, but the mvcc read point is NOT advanced.
 Thread 2 at the same time is doing the delete of the row (say with the latest 
 TS; the intent is to delete the entire row) and is in the preDelete hook. 
 There is no row locking happening at this point. As part of the covering 
 permission check, it does a Get. But as said above, the put is not complete 
 and the mvcc advance has not happened, so the Get won't return the new cell. 
 It will return the old cells, and the check passes for the old cells. Now 
 suppose the new cell's ACL does not match for the deleting user. Because that 
 cell was not read, it was never checked, so the ACL check will allow the user 
 to delete the row. The flow later comes to HRegion#doMiniBatchMutate() and 
 tries to acquire the row lock, and by that time the Thread 1 op was over. So 
 it will get the lock and 
 

[jira] [Updated] (HBASE-11166) Categorize tests in hbase-prefix-tree module

2014-05-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11166:
---

Description: Jeff Bowles discovered that tests in hbase-prefix-tree module, 
e.g. TestTimestampEncoder, don't have test category  (was: Jeff Bowles 
discovered that TestTimestampEncoder doesn't have category)

 Categorize tests in hbase-prefix-tree module
 

 Key: HBASE-11166
 URL: https://issues.apache.org/jira/browse/HBASE-11166
 Project: HBase
  Issue Type: Test
Affects Versions: 0.98.2
Reporter: Ted Yu
Assignee: Rekha Joshi
Priority: Minor
 Attachments: 11166-v2.txt, HBASE-11166.1.patch


 Jeff Bowles discovered that tests in hbase-prefix-tree module, e.g. 
 TestTimestampEncoder, don't have test category



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11166) Categorize tests in hbase-prefix-tree module

2014-05-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000808#comment-14000808
 ] 

Ted Yu commented on HBASE-11166:


javadoc warning is not related to the patch:
{code}
[WARNING] Javadoc Warnings
[WARNING] javadoc: warning - Multiple sources of package comments found for 
package org.apache.hadoop.hbase.io.hfile
{code}
Test failure is not related to the patch.

 Categorize tests in hbase-prefix-tree module
 

 Key: HBASE-11166
 URL: https://issues.apache.org/jira/browse/HBASE-11166
 Project: HBase
  Issue Type: Test
Affects Versions: 0.98.2
Reporter: Ted Yu
Assignee: Rekha Joshi
Priority: Minor
 Attachments: 11166-v2.txt, HBASE-11166.1.patch


 Jeff Bowles discovered that tests in hbase-prefix-tree module, e.g. 
 TestTimestampEncoder, don't have test category



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11166) Categorize tests in hbase-prefix-tree module

2014-05-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11166:
---

Summary: Categorize tests in hbase-prefix-tree module  (was: 
TestTimestampEncoder doesn't have category)

 Categorize tests in hbase-prefix-tree module
 

 Key: HBASE-11166
 URL: https://issues.apache.org/jira/browse/HBASE-11166
 Project: HBase
  Issue Type: Test
Affects Versions: 0.98.2
Reporter: Ted Yu
Assignee: Rekha Joshi
Priority: Minor
 Attachments: 11166-v2.txt, HBASE-11166.1.patch


 Jeff Bowles discovered that TestTimestampEncoder doesn't have category



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11195) Potentially improve block locality during major compaction for old regions

2014-05-17 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000828#comment-14000828
 ] 

Jean-Marc Spaggiari commented on HBASE-11195:
-

Hi Churro,

Does it apply to trunk too?

 Potentially improve block locality during major compaction for old regions
 --

 Key: HBASE-11195
 URL: https://issues.apache.org/jira/browse/HBASE-11195
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.94.19
Reporter: churro morales
Assignee: churro morales
 Attachments: HBASE-11195-0.94.patch


 This might be a specific use case, but we have some regions which are no 
 longer written to (due to the key).  Those regions have 1 store file and they 
 are very old; they haven't been written to in a while.  We still read from 
 these regions, so locality would be nice.  
 I propose putting a configuration option: something like
 hbase.hstore.min.locality.to.skip.major.compact [between 0 and 1]
 such that you can decide whether or not to skip major compaction for an old 
 region with a single store file.
 I'll attach a patch, let me know what you guys think.
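
The proposed check can be sketched as below (names mirror the description, not the actual patch): skip major compaction only for a store with a single old store file whose locality already meets the configured threshold.

```java
// Sketch of the hbase.hstore.min.locality.to.skip.major.compact idea from the
// description above. Illustrative helper, not the actual compaction policy.
class LocalitySkipSketch {
    public static boolean skipMajorCompaction(int storeFileCount,
                                              double localityIndex,
                                              double minLocalityToSkip) {
        // Only short-circuit single-file stores: they are already compacted,
        // so a major compaction would only rewrite data to regain locality.
        return storeFileCount == 1 && localityIndex >= minLocalityToSkip;
    }
}
```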



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11191) HBase ClusterId File Empty Check Loggic

2014-05-17 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000824#comment-14000824
 ] 

Jean-Marc Spaggiari commented on HBASE-11191:
-

Same as HBASE-11192? The cluster can not start without a correct ID, right? So 
what should be the correct behavior? I guess we should not start, but should we 
display a more user-friendly message? There are already messages in the log 
when the ID is not readable. Don't you have them?

 HBase ClusterId File Empty Check Loggic
 ---

 Key: HBASE-11191
 URL: https://issues.apache.org/jira/browse/HBASE-11191
 Project: HBase
  Issue Type: Bug
 Environment: HBase 0.94+Hadoop2.2.0+Zookeeper3.4.5
Reporter: sunjingtao

 If the clusterid file exists but is empty, then the following check logic in 
 MasterFileSystem.java has no effect:
 if (!FSUtils.checkClusterIdExists(fs, rd, c.getInt(
 HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000))) {
   FSUtils.setClusterId(fs, rd, UUID.randomUUID().toString(), c.getInt(
   HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000));
 }
 clusterId = FSUtils.getClusterId(fs, rd);
 because the checkClusterIdExists method only checks the path:
 Path filePath = new Path(rootdir, HConstants.CLUSTER_ID_FILE_NAME);
 return fs.exists(filePath);
 In my case the file exists but is empty, so the clusterid read back is null, 
 which causes a NullPointerException:
 java.lang.NullPointerException
   at org.apache.hadoop.hbase.util.Bytes.toBytes(Bytes.java:441)
   at 
 org.apache.hadoop.hbase.zookeeper.ClusterId.setClusterId(ClusterId.java:72)
   at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:581)
   at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:433)
   at java.lang.Thread.run(Thread.java:745)
 Is this a bug? Please confirm!
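
A stricter check for this scenario can be sketched as follows (an illustrative helper, not FSUtils itself): treat the cluster id file as missing when it exists but has zero length, so a fresh id is generated and written instead of reading back null.

```java
// Sketch of the fix suggested by the report above: existence alone is not
// enough, because an empty file later yields a null cluster id.
class ClusterIdCheckSketch {
    /** True when a new cluster id must be generated and written. */
    public static boolean needsNewClusterId(boolean fileExists, long fileLength) {
        return !fileExists || fileLength == 0;
    }
}
```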



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11166) Categorize tests in hbase-prefix-tree module

2014-05-17 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000845#comment-14000845
 ] 

Rekha Joshi commented on HBASE-11166:
-

Thanks [~tedyu]. Updated for all tests under prefix-tree.

 Categorize tests in hbase-prefix-tree module
 

 Key: HBASE-11166
 URL: https://issues.apache.org/jira/browse/HBASE-11166
 Project: HBase
  Issue Type: Test
Affects Versions: 0.98.2
Reporter: Ted Yu
Assignee: Rekha Joshi
Priority: Minor
 Attachments: 11166-v2.txt, HBASE-11166.1.patch


 Jeff Bowles discovered that tests in hbase-prefix-tree module, e.g. 
 TestTimestampEncoder, don't have test category



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11166) Categorize tests in hbase-prefix-tree module

2014-05-17 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated HBASE-11166:


Attachment: HBASE-11166.3.patch

Thanks Ted. Updated patch for tests under hbase-prefix.

 Categorize tests in hbase-prefix-tree module
 

 Key: HBASE-11166
 URL: https://issues.apache.org/jira/browse/HBASE-11166
 Project: HBase
  Issue Type: Test
Affects Versions: 0.98.2
Reporter: Ted Yu
Assignee: Rekha Joshi
Priority: Minor
 Attachments: 11166-v2.txt, HBASE-11166.1.patch, HBASE-11166.3.patch


 Jeff Bowles discovered that tests in hbase-prefix-tree module, e.g. 
 TestTimestampEncoder, don't have test category



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11104) IntegrationTestImportTsv#testRunFromOutputCommitter misses credential initialization

2014-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000855#comment-14000855
 ] 

Hadoop QA commented on HBASE-11104:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12645375/HBASE-11104_98_v3.patch
  against trunk revision .
  ATTACHMENT ID: 12645375

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9531//console

This message is automatically generated.

 IntegrationTestImportTsv#testRunFromOutputCommitter misses credential 
 initialization
 

 Key: HBASE-11104
 URL: https://issues.apache.org/jira/browse/HBASE-11104
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Attachments: 11104-v1.txt, HBASE-11104_98_v3.patch, 
 HBASE-11104_trunk.patch


 IntegrationTestImportTsv#testRunFromOutputCommitter runs a parent job that 
 ships the HBase dependencies.
 However, the call to TableMapReduceUtil.initCredentials(job) is missing, 
 making this test fail on a secure cluster.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11166) Categorize tests in hbase-prefix-tree module

2014-05-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11166:
---

Fix Version/s: 0.98.3
   0.99.0

 Categorize tests in hbase-prefix-tree module
 

 Key: HBASE-11166
 URL: https://issues.apache.org/jira/browse/HBASE-11166
 Project: HBase
  Issue Type: Test
Affects Versions: 0.98.2
Reporter: Ted Yu
Assignee: Rekha Joshi
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 11166-v2.txt, HBASE-11166.1.patch, HBASE-11166.3.patch


 Jeff Bowles discovered that tests in hbase-prefix-tree module, e.g. 
 TestTimestampEncoder, don't have test category



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11166) Categorize tests in hbase-prefix-tree module

2014-05-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11166:
---

Hadoop Flags: Reviewed

Thanks Rekha for the patch.

 Categorize tests in hbase-prefix-tree module
 

 Key: HBASE-11166
 URL: https://issues.apache.org/jira/browse/HBASE-11166
 Project: HBase
  Issue Type: Test
Affects Versions: 0.98.2
Reporter: Ted Yu
Assignee: Rekha Joshi
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 11166-v2.txt, HBASE-11166.1.patch, HBASE-11166.3.patch


 Jeff Bowles discovered that tests in hbase-prefix-tree module, e.g. 
 TestTimestampEncoder, don't have test category



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11007) BLOCKCACHE in schema descriptor seems not aptly named

2014-05-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11007:
--

Fix Version/s: 0.99.0
Affects Version/s: (was: 0.94.18)
   0.94.19
   Status: Patch Available  (was: Open)

Making the patch for trunk.  I could make a patch for 0.94 to backport the 
javadoc piece (since it already has a test, a test that is better than what I'm 
adding to trunk).  Do you want it, [~lhofhansl]?

 BLOCKCACHE in schema descriptor seems not aptly named
 -

 Key: HBASE-11007
 URL: https://issues.apache.org/jira/browse/HBASE-11007
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.94.19
Reporter: Varun Sharma
Assignee: Varun Sharma
Priority: Minor
 Fix For: 0.99.0

 Attachments: 11007.txt


 Hi,
 It seems that setting the BLOCKCACHE key to false will prevent data blocks 
 from being cached but will continue to cache bloom and index blocks. The same 
 property seems to be called cacheDataOnRead inside CacheConfig.
 Should this be called CACHE_DATA_ON_READ instead of BLOCKCACHE, similar to 
 the other CACHE_DATA_ON_WRITE/CACHE_INDEX_ON_WRITE? We got quite confused and 
 ended up adding our own property CACHE_DATA_ON_READ - we also added some unit 
 tests for the same.
 What do folks think about this ?
 Thanks
 Varun
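
The caching behavior described above can be sketched as follows (an illustrative helper, not CacheConfig itself): the BLOCKCACHE attribute, effectively cacheDataOnRead, governs DATA blocks only, while INDEX and BLOOM blocks are cached regardless.

```java
// Sketch of the BLOCKCACHE / cacheDataOnRead semantics described above.
// Illustrative, not the actual CacheConfig logic.
class BlockCacheSemanticsSketch {
    enum BlockType { DATA, INDEX, BLOOM }

    public static boolean cacheOnRead(BlockType type, boolean cacheDataOnRead) {
        if (type == BlockType.DATA) {
            return cacheDataOnRead; // only DATA blocks honor the flag
        }
        return true; // index and bloom blocks are always cached
    }
}
```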



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11007) BLOCKCACHE in schema descriptor seems not aptly named

2014-05-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11007:
--

Attachment: 11007.txt

Changed javadoc for BLOCKCACHE and for its methods to explain that a better 
name would have been CACHE_DATA_ON_READ explaining that this attribute enables 
DATA block caching, yes/no.

The TestForceCacheImportantBlocks in trunk was testing nothing (since the 
removal of SchemaMetrics).  Cache stats are opaque on whether a block is DATA 
or META (INDEX/BLOOM).  Let me fix that elsewhere.  Meantime I made 
TestForceCacheImportantBlocks do a very basic verification that when BLOCKCACHE 
is on/off, the miss count at least reflects a difference.

 BLOCKCACHE in schema descriptor seems not aptly named
 -

 Key: HBASE-11007
 URL: https://issues.apache.org/jira/browse/HBASE-11007
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.94.18
Reporter: Varun Sharma
Assignee: Varun Sharma
Priority: Minor
 Attachments: 11007.txt


 Hi,
 It seems that setting the BLOCKCACHE key to false will prevent data blocks 
 from being cached but will continue to cache bloom and index blocks. The same 
 property seems to be called cacheDataOnRead inside CacheConfig.
 Should this be called CACHE_DATA_ON_READ instead of BLOCKCACHE, similar to 
 the other CACHE_DATA_ON_WRITE/CACHE_INDEX_ON_WRITE? We got quite confused and 
 ended up adding our own property CACHE_DATA_ON_READ - we also added some unit 
 tests for the same.
 What do folks think about this ?
 Thanks
 Varun



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11166) Categorize tests in hbase-prefix-tree module

2014-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000890#comment-14000890
 ] 

Hadoop QA commented on HBASE-11166:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645421/HBASE-11166.3.patch
  against trunk revision .
  ATTACHMENT ID: 12645421

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 36 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestMultiParallel

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9532//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9532//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9532//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9532//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9532//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9532//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9532//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9532//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9532//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9532//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9532//console

This message is automatically generated.

 Categorize tests in hbase-prefix-tree module
 

 Key: HBASE-11166
 URL: https://issues.apache.org/jira/browse/HBASE-11166
 Project: HBase
  Issue Type: Test
Affects Versions: 0.98.2
Reporter: Ted Yu
Assignee: Rekha Joshi
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 11166-v2.txt, HBASE-11166.1.patch, HBASE-11166.3.patch


 Jeff Bowles discovered that tests in hbase-prefix-tree module, e.g. 
 TestTimestampEncoder, don't have test category



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11007) BLOCKCACHE in schema descriptor seems not aptly named

2014-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000912#comment-14000912
 ] 

Hadoop QA commented on HBASE-11007:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645430/11007.txt
  against trunk revision .
  ATTACHMENT ID: 12645430

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9533//console

This message is automatically generated.

 BLOCKCACHE in schema descriptor seems not aptly named
 -

 Key: HBASE-11007
 URL: https://issues.apache.org/jira/browse/HBASE-11007
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.94.19
Reporter: Varun Sharma
Assignee: Varun Sharma
Priority: Minor
 Fix For: 0.99.0

 Attachments: 11007.txt


 Hi,
 It seems that setting BLOCKCACHE key to false will disable the Data blocks 
 from being cached but will continue to cache bloom and index blocks. This 
 same property seems to be called cacheDataOnRead inside CacheConfig.
 Should this be called CACHE_DATA_ON_READ instead of BLOCKCACHE similar to the 
 other CACHE_DATA_ON_WRITE/CACHE_INDEX_ON_WRITE. We got quite confused and 
 ended up adding our own property CACHE_DATA_ON_READ - we also added some unit 
 tests for the same.
 What do folks think about this?
 Thanks
 Varun



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11007) BLOCKCACHE in schema descriptor seems not aptly named

2014-05-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11007:
--

Attachment: 11007v2.txt

 BLOCKCACHE in schema descriptor seems not aptly named
 -

 Key: HBASE-11007
 URL: https://issues.apache.org/jira/browse/HBASE-11007
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.94.19
Reporter: Varun Sharma
Assignee: Varun Sharma
Priority: Minor
 Fix For: 0.99.0

 Attachments: 11007.txt, 11007v2.txt


 Hi,
 It seems that setting BLOCKCACHE key to false will disable the Data blocks 
 from being cached but will continue to cache bloom and index blocks. This 
 same property seems to be called cacheDataOnRead inside CacheConfig.
 Should this be called CACHE_DATA_ON_READ instead of BLOCKCACHE similar to the 
 other CACHE_DATA_ON_WRITE/CACHE_INDEX_ON_WRITE. We got quite confused and 
 ended up adding our own property CACHE_DATA_ON_READ - we also added some unit 
 tests for the same.
 What do folks think about this?
 Thanks
 Varun



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10835) DBE encode path improvements

2014-05-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000920#comment-14000920
 ] 

stack commented on HBASE-10835:
---

[~anoop.hbase] The others were introduced by me Anoop (Since fixed).  Yeah to 
fixing above warning on commit.

 DBE encode path improvements
 

 Key: HBASE-10835
 URL: https://issues.apache.org/jira/browse/HBASE-10835
 Project: HBase
  Issue Type: Improvement
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0

 Attachments: HBASE-10835.patch, HBASE-10835_V2.patch, 
 HBASE-10835_V3.patch, HBASE-10835_V4.patch


 Here we first write KVs (Cells) into a buffer, which is then passed to the DBE 
 encoder. The encoder reads the KVs back one by one from the buffer, encodes 
 them, and creates a new buffer.
 There is no need for this model now. Previously we had the option of no 
 encoding on disk and encoding only in the cache; at that time the read buffer 
 from an HFile block was passed in and encoded.
 So we can now encode cell by cell. Making this change will require a NoOp DBE 
 implementation that just writes a cell as-is, without any encoding.
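The shift the description asks for, from buffer-then-re-encode to a single streaming pass, can be pictured with a small sketch (plain Ruby with hypothetical names, not the actual DBE Java API):

```ruby
# Old model: serialize all cells into a buffer, then have the encoder
# re-read that buffer cell by cell to build the encoded output (two passes).
def encode_two_pass(cells, encoder)
  buffer = cells.join("\n")                              # pass 1: write the buffer
  buffer.split("\n").map { |cell| encoder.call(cell) }   # pass 2: re-read and encode
end

# Proposed model: each cell goes straight to the encoder (single pass),
# skipping the intermediate buffer entirely.
def encode_streaming(cells, encoder)
  cells.map { |cell| encoder.call(cell) }
end

# The "NoOp DBE" case the JIRA mentions: an identity encoder that writes
# the cell as-is, with no encoding.
NOOP_ENCODER = ->(cell) { cell }
```

Both paths yield the same encoded output; the streaming form just drops the intermediate buffer, which is the point of the change.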



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6990) Pretty print TTL

2014-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000926#comment-14000926
 ] 

Hadoop QA commented on HBASE-6990:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645173/HBASE-6990.v4.patch
  against trunk revision .
  ATTACHMENT ID: 12645173

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9536//console

This message is automatically generated.

 Pretty print TTL
 

 Key: HBASE-6990
 URL: https://issues.apache.org/jira/browse/HBASE-6990
 Project: HBase
  Issue Type: Improvement
  Components: Usability
Reporter: Jean-Daniel Cryans
Assignee: Esteban Gutierrez
Priority: Minor
 Attachments: HBASE-6990.v0.patch, HBASE-6990.v1.patch, 
 HBASE-6990.v2.patch, HBASE-6990.v3.patch, HBASE-6990.v4.patch


 I've seen a lot of users getting confused by the TTL configuration and I 
 think that if we just pretty printed it it would solve most of the issues. 
 For example, let's say a user wanted to set a TTL of 90 days. That would be 
 7776000. But let's say that it was typo'd to 7776 instead, it gives you 
 900 days!
 So when we print the TTL we could do something like x days, x hours, x 
 minutes, x seconds (real_ttl_value). This would also help people when they 
 use ms instead of seconds as they would see really big values in there.
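A pretty-printer of the kind proposed is easy to sketch (illustrative Ruby, not the attached patch):

```ruby
# Break a TTL given in seconds into days/hours/minutes/seconds and echo the
# raw value, so a typo'd magnitude (an extra zero) is obvious at a glance.
def pretty_ttl(seconds)
  days, rem     = seconds.divmod(86_400)
  hours, rem    = rem.divmod(3_600)
  minutes, secs = rem.divmod(60)
  "#{days} days #{hours} hours #{minutes} minutes #{secs} seconds (#{seconds})"
end
```

With this, the intended 90-day TTL of 7776000 prints as "90 days ...", while the typo'd 77760000 immediately stands out as "900 days ...".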



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11191) HBase ClusterId File Empty Check Loggic

2014-05-17 Thread sunjingtao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000945#comment-14000945
 ] 

sunjingtao commented on HBASE-11191:


Yes, same as HBASE-11192! I don't know why there are two; maybe I submitted
it twice because of some network problems. About this issue, I think we
should change the following code in the checkClusterIdExists method so that
it not only checks whether the file exists, but also whether the file
contains a valid clusterId:


{code}
Path filePath = new Path(rootdir, HConstants.CLUSTER_ID_FILE_NAME);
return fs.exists(filePath);
{code}


Because, in my opinion, the purpose of the following code is just to make sure
the master does not fail to start when the clusterId does not exist, but it
does not achieve that goal:

{code}
if (!FSUtils.checkClusterIdExists(fs, rd, c.getInt(
    HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000))) {
  FSUtils.setClusterId(fs, rd, UUID.randomUUID().toString(), c.getInt(
      HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000));
}
{code}
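The stricter check being proposed, treating an existing-but-empty file the same as a missing one, could look like this (a plain-Ruby stand-in for the HDFS calls in FSUtils; the names are illustrative, not HBase's API):

```ruby
require 'tmpdir'

# Valid only when the cluster-id file exists AND holds a non-blank id.
# An empty file no longer slips past the check, so the master would
# regenerate the id instead of reading back null and hitting an NPE.
def cluster_id_present?(path)
  File.exist?(path) && !File.read(path).strip.empty?
end
```

With this predicate in place of the bare fs.exists(filePath), the empty-hbase.id case reported in this issue would fall into the same setClusterId branch as a missing file.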











 HBase ClusterId File Empty Check Loggic
 ---

 Key: HBASE-11191
 URL: https://issues.apache.org/jira/browse/HBASE-11191
 Project: HBase
  Issue Type: Bug
 Environment: HBase 0.94+Hadoop2.2.0+Zookeeper3.4.5
Reporter: sunjingtao

 if the clusterid file exists but is empty, then the following check logic in 
 MasterFileSystem.java has no effect.
 if (!FSUtils.checkClusterIdExists(fs, rd, c.getInt(
 HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000))) {
   FSUtils.setClusterId(fs, rd, UUID.randomUUID().toString(), c.getInt(
   HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000));
 }
 clusterId = FSUtils.getClusterId(fs, rd);
 because the checkClusterIdExists method only checks the path:
 Path filePath = new Path(rootdir, HConstants.CLUSTER_ID_FILE_NAME);
 return fs.exists(filePath);
 in my case, the file exists but is empty, so the clusterId read back is null, 
 which causes a NullPointerException:
 java.lang.NullPointerException
   at org.apache.hadoop.hbase.util.Bytes.toBytes(Bytes.java:441)
   at 
 org.apache.hadoop.hbase.zookeeper.ClusterId.setClusterId(ClusterId.java:72)
   at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:581)
   at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:433)
   at java.lang.Thread.run(Thread.java:745)
 is this a bug? Please confirm!



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11198) test-patch.sh should handle the case where trunk patch is attached along with patch for 0.98

2014-05-17 Thread Ted Yu (JIRA)
Ted Yu created HBASE-11198:
--

 Summary: test-patch.sh should handle the case where trunk patch is 
attached along with patch for 0.98
 Key: HBASE-11198
 URL: https://issues.apache.org/jira/browse/HBASE-11198
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu


From https://builds.apache.org/job/PreCommit-HBASE-Build/9531//console :
{code}
-1 overall. Here are the results of testing the latest attachment 
http://issues.apache.org/jira/secure/attachment/12645375/HBASE-11104_98_v3.patch
against trunk revision .
ATTACHMENT ID: 12645375
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
-1 patch. The patch command could not apply the patch.
{code}
The cause was that the patch for 0.98 was slightly newer than the trunk patch.

test-patch.sh should handle this case by recognizing the 'could not apply the 
patch' error and retrying with the trunk patch.
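The fallback being asked for can be sketched as a small selection helper (hypothetical Ruby, not the actual bash in test-patch.sh; the "_9x" naming convention for branch patches is an assumption for illustration, not a Jira rule):

```ruby
# Given attachment file names, prefer a trunk patch; after a
# "could not apply the patch" failure, exclude the failed file and
# fall back to whatever remains (e.g. the branch patch, or vice versa).
def pick_patch(attachments, failed: [])
  candidates = attachments - failed
  trunk, branch = candidates.partition { |name| !(name =~ /_9\d/) }
  (trunk + branch).first
end
```

In the scenario above, after HBASE-11104_98_v3.patch fails to apply, the script would retry with HBASE-11104_trunk.patch instead of reporting -1.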



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11166) Categorize tests in hbase-prefix-tree module

2014-05-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11166:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Categorize tests in hbase-prefix-tree module
 

 Key: HBASE-11166
 URL: https://issues.apache.org/jira/browse/HBASE-11166
 Project: HBase
  Issue Type: Test
Affects Versions: 0.98.2
Reporter: Ted Yu
Assignee: Rekha Joshi
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 11166-v2.txt, HBASE-11166.1.patch, HBASE-11166.3.patch


 Jeff Bowles discovered that tests in hbase-prefix-tree module, e.g. 
 TestTimestampEncoder, don't have test category



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11166) Categorize tests in hbase-prefix-tree module

2014-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000954#comment-14000954
 ] 

Hudson commented on HBASE-11166:


SUCCESS: Integrated in HBase-TRUNK #5134 (See 
[https://builds.apache.org/job/HBase-TRUNK/5134/])
HBASE-11166 Categorize tests in hbase-prefix-tree module (Rekha Joshi) (tedyu: 
rev 1595540)
* /hbase/trunk/hbase-prefix-tree/pom.xml
* 
/hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/keyvalue/TestKeyValueTool.java
* 
/hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/blockmeta/TestBlockMeta.java
* 
/hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/builder/TestTokenizer.java
* 
/hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/builder/TestTreeDepth.java
* 
/hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/column/TestColumnBuilder.java
* 
/hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestPrefixTreeSearcher.java
* 
/hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestRowEncoder.java
* 
/hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/timestamp/TestTimestampEncoder.java
* 
/hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/bytes/TestByteRange.java
* 
/hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestFIntTool.java
* 
/hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestVIntTool.java
* 
/hbase/trunk/hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestVLongTool.java


 Categorize tests in hbase-prefix-tree module
 

 Key: HBASE-11166
 URL: https://issues.apache.org/jira/browse/HBASE-11166
 Project: HBase
  Issue Type: Test
Affects Versions: 0.98.2
Reporter: Ted Yu
Assignee: Rekha Joshi
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 11166-v2.txt, HBASE-11166.1.patch, HBASE-11166.3.patch


 Jeff Bowles discovered that tests in hbase-prefix-tree module, e.g. 
 TestTimestampEncoder, don't have test category



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-4909) Detailed Block Cache Metrics

2014-05-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000957#comment-14000957
 ] 

stack commented on HBASE-4909:
--

Here is the commit Nicolas is referring to:


r1181972 | nspiegelberg | 2011-10-11 10:45:00 -0700 (Tue, 11 Oct 2011) | 25 
lines

Refactored and more detailed block read/cache and bloom metrics

Summary: As we keep adding more granular block read and block cache usage
statistics, there is a combinatorial explosion of the number of cases we have to
monitor, especially when we want both per-column family / block type statistics
and aggregate statistics on one or both of these dimensions. I am trying to
unclutter HFile readers, LruBlockCache, StoreFile, etc. by creating a
centralized class that knows how to update all kinds of per-column family/block
type statistics.

Test Plan:
Run all unit tests.
New unit test.
Deploy to one region server in dark launch and compare the new output of
hbaseStats.py to the old one (take a diff of the set of keys).

Reviewers: pritam, liyintang, jgray, kannan

Reviewed By: kannan

CC: , hbase@lists, dist-storage@lists, kannan

Differential Revision: 321147

Looking at svn diff -r1181971:1181972... the commit is all about:

+  BlockCategory blockCategory = dataBlock.getBlockType().getCategory();

...

and

+  cfMetrics.updateOnCacheMiss(blockCategory, isCompaction, delta);

... and this stuff in a class called ColumnFamilyMetrics:

+READ_TIME("Read", true),
+READ_COUNT("BlockReadCnt", true),
+CACHE_HIT("BlockReadCacheHitCnt", true),
+CACHE_MISS("BlockReadCacheMissCnt", true),
+
+CACHE_SIZE("blockCacheSize", false),
+CACHED("blockCacheNumCached", false),
+EVICTED("blockCacheNumEvicted", false);

We have this.  It is differently named, it is CacheStats.  So, we have this 
detail.  It came in with HBASE-4027, the slab cache issue.  We need more but we 
have this much now so resolving as implemented.

 Detailed Block Cache Metrics
 

 Key: HBASE-4909
 URL: https://issues.apache.org/jira/browse/HBASE-4909
 Project: HBase
  Issue Type: Sub-task
  Components: Client, regionserver
Reporter: Nicolas Spiegelberg

 Moving issue w/ no recent movement out of 0.95



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-4909) Detailed Block Cache Metrics

2014-05-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-4909.
--

Resolution: Fixed

Subsumed by HBASE-4027

 Detailed Block Cache Metrics
 

 Key: HBASE-4909
 URL: https://issues.apache.org/jira/browse/HBASE-4909
 Project: HBase
  Issue Type: Sub-task
  Components: Client, regionserver
Reporter: Nicolas Spiegelberg

 Moving issue w/ no recent movement out of 0.95



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-3324) LRU block cache configuration improvements

2014-05-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-3324.
--

Resolution: Later

Part dup of HBASE-3306 and stale.  Resolving.

 LRU block cache configuration improvements
 --

 Key: HBASE-3324
 URL: https://issues.apache.org/jira/browse/HBASE-3324
 Project: HBase
  Issue Type: Improvement
  Components: io, regionserver
Reporter: Jonathan Gray
Assignee: Jonathan Gray

 The block cache has lots of configuration parameters but they aren't using 
 Configuration like they should.
 It would also be nice to have a better way of doing hit ratios, like a 
 rolling window.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-5285) runtime exception -- cached an already cached block -- during compaction

2014-05-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-5285.
--

Resolution: Cannot Reproduce

Haven't seen this in years.  Correct me if I am wrong.

 runtime exception -- cached an already cached block -- during compaction
 

 Key: HBASE-5285
 URL: https://issues.apache.org/jira/browse/HBASE-5285
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.92.0
 Environment: hadoop-1.0 and hbase-0.92
 18 node cluster, dedicated namenode, zookeeper, hbasemaster, and YCSB client 
 machine. 
 latest YCSB
Reporter: Simon Dircks
Priority: Trivial

 #On YCSB client machine:
 /usr/local/bin/java -cp build/ycsb.jar:db/hbase/lib/*:db/hbase/conf/ 
 com.yahoo.ycsb.Client -load -db com.yahoo.ycsb.db.HBaseClient -P 
 workloads/workloada -p columnfamily=family1 -p recordcount=500 -s  
 load.dat
 loaded 5mil records, that created 8 regions. (balanced all onto the same RS)
 /usr/local/bin/java -cp build/ycsb.jar:db/hbase/lib/*:db/hbase/conf/ 
 com.yahoo.ycsb.Client -t -db com.yahoo.ycsb.db.HBaseClient -P 
 workloads/workloada -p columnfamily=family1 -p operationcount=500 
 -threads 10 -s  transaction.dat
 #On RS that was holding the 8 regions above. 
 2012-01-25 23:23:51,556 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
 regionserver:60020-0x134f70a343101a0 Successfully transitioned node 
 162702503c650e551130e5fb588b3ec2 from RS_ZK_REGION_SPLIT to RS_ZK_REGION_SPLIT
 2012-01-25 23:23:51,616 ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer:
 java.lang.RuntimeException: Cached an already cached block
 at 
 org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:268)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:276)
 at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:487)
 at 
 org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekTo(HalfStoreFileReader.java:168)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:181)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:83)
 at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.init(HRegion.java:2861)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1432)
 at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1424)
 at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1400)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3688)
 at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3581)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1771)
 at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
 at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1325)
 2012-01-25 23:23:51,656 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
 regionserver:60020-0x134f70a343101a0 Attempting to transition node 
 162702503c650e551130e5fb588b3ec2 from RS_ZK_REGION_SPLIT to RS_ZK_REGION_SPLIT



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11104) IntegrationTestImportTsv#testRunFromOutputCommitter misses credential initialization

2014-05-17 Thread Vandana Ayyalasomayajula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vandana Ayyalasomayajula updated HBASE-11104:
-

Attachment: HBASE-11104_trunk.patch

Attaching the patch again for hadoopqa!!

 IntegrationTestImportTsv#testRunFromOutputCommitter misses credential 
 initialization
 

 Key: HBASE-11104
 URL: https://issues.apache.org/jira/browse/HBASE-11104
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Attachments: 11104-v1.txt, HBASE-11104_98_v3.patch, 
 HBASE-11104_trunk.patch, HBASE-11104_trunk.patch


 IntegrationTestImportTsv#testRunFromOutputCommitter launches a parent job that 
 ships the HBase dependencies.
 However, the call to TableMapReduceUtil.initCredentials(job) is missing, 
 making this test fail on a secure cluster.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6506) Setting CACHE_BLOCKS to false in an hbase shell scan doesn't work

2014-05-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14000995#comment-14000995
 ] 

stack commented on HBASE-6506:
--

I made a fix and then added a puts after getting the value to see what it is 
(the attached 'fix' patch does not include the puts).

{code}
diff --git a/hbase-shell/src/main/ruby/hbase/table.rb 
b/hbase-shell/src/main/ruby/hbase/table.rb
index 1984177..be946d1 100644
--- a/hbase-shell/src/main/ruby/hbase/table.rb
+++ b/hbase-shell/src/main/ruby/hbase/table.rb
@@ -376,7 +376,9 @@ EOF
 stoprow = args[STOPROW]
 timestamp = args[TIMESTAMP]
 columns = args[COLUMNS] || args[COLUMN] || []
-cache_blocks = args[CACHE_BLOCKS] || true
+# If CACHE_BLOCKS not set, then default 'true'.
+cache_blocks = args[CACHE_BLOCKS].nil? ? true: args[CACHE_BLOCKS]
+puts cache_blocks
 cache = args[CACHE] || 0
 reversed = args[REVERSED] || false
 versions = args[VERSIONS] || 1
{code}

See below, where I set CACHE_BLOCKS each way and the right value is printed in 
each case.

{code}
hbase(main):002:0> scan 't1', {COLUMNS => ['c1'], CACHE_BLOCKS => false}
ROW  COLUMN+CELL
false
0 row(s) in 0.0470 seconds

hbase(main):003:0> scan 't1', {COLUMNS => ['c1'], CACHE_BLOCKS => true}
ROW  COLUMN+CELL
true
0 row(s) in 0.0070 seconds

hbase(main):004:0> scan 't1', {COLUMNS => ['c1']}
ROW  COLUMN+CELL
true
0 row(s) in 0.0050 seconds
{code}
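The root cause is that Ruby's || falls back on both nil and false, so a caller's explicit false is indistinguishable from "not set". A minimal self-contained demonstration of the bug and the fix (string keys here stand in for the shell's CACHE_BLOCKS constant):

```ruby
# Buggy default: `false || true` evaluates to true, so an explicit
# CACHE_BLOCKS => false from the user is silently discarded.
def cache_blocks_old(args)
  args['CACHE_BLOCKS'] || true
end

# Fixed default: only a genuinely missing (nil) value falls back to true.
def cache_blocks_new(args)
  args['CACHE_BLOCKS'].nil? ? true : args['CACHE_BLOCKS']
end
```

The same `|| default` pitfall applies to any boolean option in table.rb; `.nil?` (or `fetch` with a default) is the safe idiom.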

 Setting CACHE_BLOCKS to false in an hbase shell scan doesn't work
 -

 Key: HBASE-6506
 URL: https://issues.apache.org/jira/browse/HBASE-6506
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.94.0
Reporter: Josh Wymer
Priority: Minor
  Labels: cache, ruby, scan, shell
   Original Estimate: 1m
  Remaining Estimate: 1m

 I was attempting to prevent blocks from being cached by setting CACHE_BLOCKS 
 => false in the hbase shell when doing a scan, but I kept seeing tons of 
 evictions when I ran it. After inspecting table.rb I found this line:
 cache = args[CACHE_BLOCKS] || true
 The problem then is that if CACHE_BLOCKS is false then this expression will 
 always return true. Therefore, it's impossible to turn off block caching. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-6990) Pretty print TTL

2014-05-17 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14001004#comment-14001004
 ] 

Lars Hofhansl commented on HBASE-6990:
--

Do you want this in 0.94, [~esteban]? It seems mostly like a nice-to-have. Is 
there a strong reason to have it in 0.94?

 Pretty print TTL
 

 Key: HBASE-6990
 URL: https://issues.apache.org/jira/browse/HBASE-6990
 Project: HBase
  Issue Type: Improvement
  Components: Usability
Reporter: Jean-Daniel Cryans
Assignee: Esteban Gutierrez
Priority: Minor
 Attachments: HBASE-6990.v0.patch, HBASE-6990.v1.patch, 
 HBASE-6990.v2.patch, HBASE-6990.v3.patch, HBASE-6990.v4.patch


 I've seen a lot of users getting confused by the TTL configuration and I 
 think that if we just pretty printed it it would solve most of the issues. 
 For example, let's say a user wanted to set a TTL of 90 days. That would be 
 7776000. But let's say that it was typo'd to 7776 instead, it gives you 
 900 days!
 So when we print the TTL we could do something like x days, x hours, x 
 minutes, x seconds (real_ttl_value). This would also help people when they 
 use ms instead of seconds as they would see really big values in there.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-6506) Setting CACHE_BLOCKS to false in an hbase shell scan doesn't work

2014-05-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6506:
-

Attachment: 6506.txt

Small fix.

 Setting CACHE_BLOCKS to false in an hbase shell scan doesn't work
 -

 Key: HBASE-6506
 URL: https://issues.apache.org/jira/browse/HBASE-6506
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.94.0
Reporter: Josh Wymer
Priority: Minor
  Labels: cache, ruby, scan, shell
 Fix For: 0.99.0

 Attachments: 6506.txt

   Original Estimate: 1m
  Remaining Estimate: 1m

 I was attempting to prevent blocks from being cached by setting CACHE_BLOCKS 
 => false in the hbase shell when doing a scan, but I kept seeing tons of 
 evictions when I ran it. After inspecting table.rb I found this line:
 cache = args[CACHE_BLOCKS] || true
 The problem then is that if CACHE_BLOCKS is false then this expression will 
 always return true. Therefore, it's impossible to turn off block caching. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-6506) Setting CACHE_BLOCKS to false in an hbase shell scan doesn't work

2014-05-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6506:
-

Fix Version/s: 0.99.0
   Status: Patch Available  (was: Open)

 Setting CACHE_BLOCKS to false in an hbase shell scan doesn't work
 -

 Key: HBASE-6506
 URL: https://issues.apache.org/jira/browse/HBASE-6506
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.94.0
Reporter: Josh Wymer
Priority: Minor
  Labels: cache, ruby, scan, shell
 Fix For: 0.99.0

 Attachments: 6506.txt

   Original Estimate: 1m
  Remaining Estimate: 1m

 I was attempting to prevent blocks from being cached by setting CACHE_BLOCKS 
 => false in the hbase shell when doing a scan, but I kept seeing tons of 
 evictions when I ran it. After inspecting table.rb I found this line:
 cache = args[CACHE_BLOCKS] || true
 The problem then is that if CACHE_BLOCKS is false then this expression will 
 always return true. Therefore, it's impossible to turn off block caching. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)