[jira] [Commented] (HBASE-7336) HFileBlock.readAtOffset does not work well with multiple threads

2012-12-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13533726#comment-13533726
 ] 

Hudson commented on HBASE-7336:
---

Integrated in HBase-0.94 #632 (See 
[https://builds.apache.org/job/HBase-0.94/632/])
HBASE-7336 Revert due to OOMs on TestHFileBlock potentially caused by this. 
(Revision 1422767)

 Result = FAILURE
larsh : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java


> HFileBlock.readAtOffset does not work well with multiple threads
> 
>
> Key: HBASE-7336
> URL: https://issues.apache.org/jira/browse/HBASE-7336
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7336-0.94.txt, 7336-0.96.txt
>
>
> HBase grinds to a halt when many threads scan along the same set of blocks 
> and neither read short circuit nor block caching is enabled for the dfs 
> client ... disabling the block cache makes sense on very large scans.
> It turns out that synchronizing on istream in HFileBlock.readAtOffset is the 
> culprit.
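To make the contention concrete, here is a minimal sketch of the pattern at 
issue (an editor's illustration using a generic RandomAccessFile in place of 
the DFS input stream, not the HFileBlock code itself): seek-and-read must be 
atomic on a shared handle, so every scanner thread serializes on the same 
monitor even when reading different offsets.
{code}
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch only: a shared-stream reader in the shape described above.
class SharedStreamReader {
  private final RandomAccessFile istream;   // shared by all scan threads

  SharedStreamReader(RandomAccessFile istream) { this.istream = istream; }

  byte[] readAtOffset(long offset, int size) throws IOException {
    byte[] buf = new byte[size];
    synchronized (istream) {     // one read in flight at a time, per file
      istream.seek(offset);      // seek + read must not interleave with others
      istream.readFully(buf);
    }
    return buf;
  }
}
{code}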

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7294) Check for snapshot file cleaners on start

2012-12-17 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-7294:
---

Attachment: (was: HBASE-7294-v0.patch)

> Check for snapshot file cleaners on start
> -
>
> Key: HBASE-7294
> URL: https://issues.apache.org/jira/browse/HBASE-7294
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Affects Versions: hbase-6055
>Reporter: Jesse Yates
>Assignee: Matteo Bertozzi
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7294-v1.patch
>
>
> Snapshots currently use the SnapshotHFileCleaner and SnapshotHLogCleaner to 
> ensure that any hfiles or hlogs (respectively) that are currently part of a 
> snapshot are not removed from their respective archive directories (.archive 
> and .oldlogs).
> From Matteo Bertozzi:
> {quote}
> Currently the snapshot cleaner is not in hbase-default.xml, and there's no 
> warning/exception on a snapshot/restore operation if it is not enabled.
> Even if we add the cleaner to hbase-default.xml, how do we ensure that the 
> user doesn't remove it?
> Do we want to hardcode the cleaner at master startup?
> Do we want to add a check in snapshot/restore that throws an exception if the 
> cleaner is not enabled?
> {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7294) Check for snapshot file cleaners on start

2012-12-17 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-7294:
---

Attachment: HBASE-7294-v1.patch

Moved the check inside SnapshotManager.start().
The cleaners are started later, so this is fine.
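The shape of such a check, as a rough sketch (the configuration keys are the 
standard cleaner-plugin keys, but the exact class names and error handling in 
the committed patch may differ):
{code}
import org.apache.hadoop.conf.Configuration;

// Sketch of a fail-fast check at SnapshotManager.start(); names are illustrative.
class SnapshotCleanerCheck {
  static void verifySnapshotCleanersConfigured(Configuration conf) {
    String hfileCleaners = conf.get("hbase.master.hfilecleaner.plugins", "");
    if (!hfileCleaners.contains("SnapshotHFileCleaner")) {
      throw new IllegalStateException(
          "Snapshot hfile cleaner is not configured; archived hfiles could be deleted");
    }
    String logCleaners = conf.get("hbase.master.logcleaner.plugins", "");
    if (!logCleaners.contains("SnapshotHLogCleaner")) {
      throw new IllegalStateException(
          "Snapshot hlog cleaner is not configured; archived hlogs could be deleted");
    }
  }
}
{code}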

> Check for snapshot file cleaners on start
> -
>
> Key: HBASE-7294
> URL: https://issues.apache.org/jira/browse/HBASE-7294
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Affects Versions: hbase-6055
>Reporter: Jesse Yates
>Assignee: Matteo Bertozzi
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7294-v1.patch
>
>
> Snapshots currently use the SnapshotHFileCleaner and SnapshotHLogCleaner to 
> ensure that any hfiles or hlogs (respectively) that are currently part of a 
> snapshot are not removed from their respective archive directories (.archive 
> and .oldlogs).
> From Matteo Bertozzi:
> {quote}
> Currently the snapshot cleaner is not in hbase-default.xml, and there's no 
> warning/exception on a snapshot/restore operation if it is not enabled.
> Even if we add the cleaner to hbase-default.xml, how do we ensure that the 
> user doesn't remove it?
> Do we want to hardcode the cleaner at master startup?
> Do we want to add a check in snapshot/restore that throws an exception if the 
> cleaner is not enabled?
> {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7294) Check for snapshot file cleaners on start

2012-12-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13533815#comment-13533815
 ] 

Hadoop QA commented on HBASE-7294:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12561265/HBASE-7294-v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3568//console

This message is automatically generated.

> Check for snapshot file cleaners on start
> -
>
> Key: HBASE-7294
> URL: https://issues.apache.org/jira/browse/HBASE-7294
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Affects Versions: hbase-6055
>Reporter: Jesse Yates
>Assignee: Matteo Bertozzi
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7294-v1.patch
>
>
> Snapshots currently use the SnapshotHFileCleaner and SnapshotHLogCleaner to 
> ensure that any hfiles or hlogs (respectively) that are currently part of a 
> snapshot are not removed from their respective archive directories (.archive 
> and .oldlogs).
> From Matteo Bertozzi:
> {quote}
> Currently the snapshot cleaner is not in hbase-default.xml, and there's no 
> warning/exception on a snapshot/restore operation if it is not enabled.
> Even if we add the cleaner to hbase-default.xml, how do we ensure that the 
> user doesn't remove it?
> Do we want to hardcode the cleaner at master startup?
> Do we want to add a check in snapshot/restore that throws an exception if the 
> cleaner is not enabled?
> {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6423) Writes should not block reads on blocking updates to memstores

2012-12-17 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13533860#comment-13533860
 ] 

binlijin commented on HBASE-6423:
-

{code}
// In HRegion:
public boolean flushcache() throws IOException {
  lock(lock.readLock());
  // ...
}
{code}
HRegion.flushcache() can also be called by a normal cache flush; should we 
convert it to the old style?
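For context, a minimal sketch of the locking style under discussion, using 
plain java.util.concurrent types rather than the actual HRegion code: flush 
takes the region lock's read side so it can run concurrently with other 
operations, while close takes the write side to exclude them all.
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustration only; HRegion's lock() helper also handles interrupts and
// "region closing" errors.
class RegionLockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  boolean flushcache() {
    lock.readLock().lock();      // many flushes/mutations may hold the read lock
    try {
      // ... flush the memstore ...
      return true;
    } finally {
      lock.readLock().unlock();
    }
  }

  void close() {
    lock.writeLock().lock();     // close excludes all read-lock holders
    try {
      // ... close stores ...
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}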

> Writes should not block reads on blocking updates to memstores
> --
>
> Key: HBASE-6423
> URL: https://issues.apache.org/jira/browse/HBASE-6423
> Project: HBase
>  Issue Type: Bug
>Reporter: Karthik Ranganathan
>Assignee: Jimmy Xiang
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 0.94-6423.patch, 0.94-6423_v4.patch, 6423.addendum, 
> trunk-6423.patch, trunk-6423_v2.1.patch, trunk-6423_v2.patch, 
> trunk-6423_v3.2.patch, trunk-6423_v3.3.patch, trunk-6423_v3.4.patch, 
> trunk-6423_v4.patch
>
>
> We have a big data use case where we turn off WAL and have a ton of reads and 
> writes. We found that:
> 1. flushing a memstore takes a while (GZIP compression)
> 2. incoming writes cause the new memstore to grow in an unbounded fashion
> 3. this triggers blocking memstore updates
> 4. in turn, this causes all the RPC handler threads to block on writes to 
> that memstore
> 5. we are not able to read during this time as RPC handlers are blocked
> At a higher level, we should not hold up the RPC threads while blocking 
> updates, and we should build in some sort of rate control.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7272) HFileOutputFormat.configureIncrementalLoad should honor table Max File Size and Columns BlockSize

2012-12-17 Thread Randy Fox (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13533999#comment-13533999
 ] 

Randy Fox commented on HBASE-7272:
--

configureIncrementalLoad takes an HTable as a parameter, so shouldn't it be 
able to make these determinations?
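A hedged sketch of what honoring those settings could look like (the helper and 
the per-family key below are hypothetical, for illustration only; they are not 
the existing HFileOutputFormat code): the HTable passed to 
configureIncrementalLoad exposes the table and column-family descriptors, so 
the max file size and block sizes can be copied into the job configuration.
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HTable;

// Hypothetical helper sketched for illustration.
class IncrementalLoadSettings {
  static void copyTableSettings(HTable table, Configuration conf) throws IOException {
    HTableDescriptor htd = table.getTableDescriptor();
    if (htd.getMaxFileSize() > 0) {
      conf.setLong("hbase.hregion.max.filesize", htd.getMaxFileSize());
    }
    for (HColumnDescriptor hcd : htd.getColumnFamilies()) {
      // "hfile.blocksize.<family>" is a made-up key, used only to show the idea.
      conf.setInt("hfile.blocksize." + hcd.getNameAsString(), hcd.getBlocksize());
    }
  }
}
{code}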

> HFileOutputFormat.configureIncrementalLoad should honor table Max File Size 
> and Columns BlockSize
> -
>
> Key: HBASE-7272
> URL: https://issues.apache.org/jira/browse/HBASE-7272
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile
>Affects Versions: 0.92.1
>Reporter: Randy Fox
>Priority: Minor
>
> HFileOutputFormat.configureIncrementalLoad is used to generate HFiles 
> matching region assignments.  The problem is that it does not use the table's 
> max file size and block size settings.  This can create a lot of files with 
> the wrong block size, which may take a long time to compact.  I think it 
> should honor these settings, which would expedite its use.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7145) ReusableStreamGzipCodec NPE upon reset with IBM JDK

2012-12-17 Thread Renata Ghisloti Duarte de Souza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renata Ghisloti Duarte de Souza updated HBASE-7145:
---

Attachment: HBASE-7145.1.patch.txt

Patch that fixes the "Deflater" bug in IBM Java 6.

> ReusableStreamGzipCodec NPE upon reset with IBM JDK
> ---
>
> Key: HBASE-7145
> URL: https://issues.apache.org/jira/browse/HBASE-7145
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.94.0
>Reporter: Yu Li
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HBASE-7145.1.patch.txt
>
>
> This is the same issue as described in HADOOP-8419; the issue description is 
> repeated here:
> The ReusableStreamGzipCodec will NPE upon reset after finish when the native 
> zlib codec is not loaded. When the native zlib is loaded, the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise, the 
> ReusableStreamGzipCodec uses GZIPOutputStream, which is extended to provide 
> the resetState method. Since IBM JDK 6 SR9 FP2, up to and including the current 
> JDK 6 SR10, GZIPOutputStream#finish releases the underlying deflater (it calls 
> the deflater's end method), which causes an NPE upon reset. This seems to be an 
> IBM JDK quirk, as the Sun JDK and OpenJDK don't have this issue.
> Since in HBASE-5387 the HBase source was refactored not to use Hadoop's 
> GzipCodec during real compression/decompression, a separate patch for HBase is 
> needed for the same issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-7367:
---

Attachment: HBASE-7367-v0.patch

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_-related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-7367:
--

 Summary: Snapshot coprocessor and ACL security
 Key: HBASE-7367
 URL: https://issues.apache.org/jira/browse/HBASE-7367
 Project: HBase
  Issue Type: Sub-task
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: hbase-6055, 0.96.0
 Attachments: HBASE-7367-v0.patch

Currently snapshots don't care about ACLs...
and in the first draft snapshots should be disabled if the ACL coprocessor is 
enabled.

After the first step, we can discuss how to handle snapshot/restore/clone.
Is saving and restoring the _acl_-related rights the right way? Maybe after 3 
months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-7367:
---

Status: Patch Available  (was: Open)

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_-related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534068#comment-13534068
 ] 

Matteo Bertozzi commented on HBASE-7367:


review board: https://reviews.apache.org/r/8638/

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_-related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534084#comment-13534084
 ] 

Hadoop QA commented on HBASE-7367:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12561313/HBASE-7367-v0.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3569//console

This message is automatically generated.

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_-related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7145) ReusableStreamGzipCodec NPE upon reset with IBM JDK

2012-12-17 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534114#comment-13534114
 ] 

Luke Lu commented on HBASE-7145:


It'd be better to use a static boolean hasBrokenFinish instead of having to 
do two String#contains calls in the finish method.
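The suggestion, roughly (a sketch only; the exact vendor/version test that the 
committed patch uses may differ): evaluate the check once in a static 
initializer instead of on every finish() call.
{code}
// Sketch of a once-only JDK check; the property values tested are illustrative.
class BrokenFinishCheck {
  // True when the running JDK's GZIPOutputStream#finish ends the deflater.
  private static final boolean HAS_BROKEN_FINISH;
  static {
    String vendor = System.getProperty("java.vendor", "");
    String version = System.getProperty("java.version", "");
    // The two String#contains calls now run once, not per finish() call.
    HAS_BROKEN_FINISH = vendor.contains("IBM") && version.contains("1.6");
  }

  static boolean hasBrokenFinish() {
    return HAS_BROKEN_FINISH;
  }
}
{code}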

> ReusableStreamGzipCodec NPE upon reset with IBM JDK
> ---
>
> Key: HBASE-7145
> URL: https://issues.apache.org/jira/browse/HBASE-7145
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.94.0
>Reporter: Yu Li
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HBASE-7145.1.patch.txt
>
>
> This is the same issue as described in HADOOP-8419; the issue description is 
> repeated here:
> The ReusableStreamGzipCodec will NPE upon reset after finish when the native 
> zlib codec is not loaded. When the native zlib is loaded, the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise, the 
> ReusableStreamGzipCodec uses GZIPOutputStream, which is extended to provide 
> the resetState method. Since IBM JDK 6 SR9 FP2, up to and including the current 
> JDK 6 SR10, GZIPOutputStream#finish releases the underlying deflater (it calls 
> the deflater's end method), which causes an NPE upon reset. This seems to be an 
> IBM JDK quirk, as the Sun JDK and OpenJDK don't have this issue.
> Since in HBASE-5387 the HBase source was refactored not to use Hadoop's 
> GzipCodec during real compression/decompression, a separate patch for HBase is 
> needed for the same issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7145) ReusableStreamGzipCodec NPE upon reset with IBM JDK

2012-12-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534118#comment-13534118
 ] 

Ted Yu commented on HBASE-7145:
---

@Yu:
Can you attach a patch for trunk so that Hadoop QA can run the test suite?

> ReusableStreamGzipCodec NPE upon reset with IBM JDK
> ---
>
> Key: HBASE-7145
> URL: https://issues.apache.org/jira/browse/HBASE-7145
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.94.0
>Reporter: Yu Li
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HBASE-7145.1.patch.txt
>
>
> This is the same issue as described in HADOOP-8419; the issue description is 
> repeated here:
> The ReusableStreamGzipCodec will NPE upon reset after finish when the native 
> zlib codec is not loaded. When the native zlib is loaded, the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise, the 
> ReusableStreamGzipCodec uses GZIPOutputStream, which is extended to provide 
> the resetState method. Since IBM JDK 6 SR9 FP2, up to and including the current 
> JDK 6 SR10, GZIPOutputStream#finish releases the underlying deflater (it calls 
> the deflater's end method), which causes an NPE upon reset. This seems to be an 
> IBM JDK quirk, as the Sun JDK and OpenJDK don't have this issue.
> Since in HBASE-5387 the HBase source was refactored not to use Hadoop's 
> GzipCodec during real compression/decompression, a separate patch for HBase is 
> needed for the same issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534128#comment-13534128
 ] 

Andrew Purtell commented on HBASE-7367:
---

-1

Throwing our hands up upon snapshot if security is enabled is not good enough. 

In CP API design, we start by adding hooks into the server side RPC handlers 
for various actions; then if that coverage isn't sufficient we hook deeper. 
(For example, for the master we hook the admin ops, and also the async handlers 
for table operations.)

Why not hook where the user requests the snapshot and restore function, and 
only allow it if they have GLOBAL ADMIN privilege, or CREATE privilege on the 
specific table? I think that is what will be minimally viable here.
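A rough sketch of the kind of hook described above (the types below are 
placeholders; the actual patch would wire this through the AccessController and 
the master's coprocessor hooks, whose signatures may differ):
{code}
// Illustration only: gate snapshot/restore on global ADMIN or table CREATE.
class SnapshotAclSketch {
  enum Permission { ADMIN, CREATE }

  interface AclService {
    boolean hasGlobalPermission(String user, Permission p);
    boolean hasTablePermission(String user, String table, Permission p);
  }

  private final AclService acl;

  SnapshotAclSketch(AclService acl) { this.acl = acl; }

  // Invoked where the snapshot (or restore) request enters the master.
  void preSnapshot(String user, String table) {
    boolean allowed = acl.hasGlobalPermission(user, Permission.ADMIN)
        || acl.hasTablePermission(user, table, Permission.CREATE);
    if (!allowed) {
      throw new SecurityException("User " + user
          + " needs global ADMIN or CREATE on " + table + " to snapshot it");
    }
  }
}
{code}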

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_-related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534134#comment-13534134
 ] 

Matteo Bertozzi commented on HBASE-7367:


As I've said in the jira, this is just the first step to have all the code 
ready, and the semantics of the snapshot ACL are still a bit unclear.

Assuming that everyone with GLOBAL ADMIN or CREATE privilege on the table can 
take the snapshot, what should restore/clone look like?
 * Only GLOBAL ADMIN can clone?
 * Only GLOBAL ADMIN and users with CREATE privilege can restore?

What happens to the _acl_ table?
 * Restore the old (snapshotted) rights for the table?
 * Keep the current ones?

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_-related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7361) Fix all javadoc warnings in hbase-server/{,mapreduce}

2012-12-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534137#comment-13534137
 ] 

Nick Dimiduk commented on HBASE-7361:
-

[~enis]

bq. In some places you replaced @link's with @code's. Is this needed because we 
have separated modules?

For the {{autoFlush}} lines, there is no class member to link to. In the case 
of {{TestRowProcessorEndpoint}}, I believe our javadoc module is not configured 
to include the test directories. The {{FileLink}} change happens inside the 
{{FileLink}} class, so no need for an external link.

> Fix all javadoc warnings in hbase-server/{,mapreduce}
> -
>
> Key: HBASE-7361
> URL: https://issues.apache.org/jira/browse/HBASE-7361
> Project: HBase
>  Issue Type: Improvement
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 7361-fix-javadoc-warnings.0.diff
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534139#comment-13534139
 ] 

Andrew Purtell commented on HBASE-7367:
---

bq. as I've said in the jira, this is just the first step to have all the code 
ready.

I saw that, and I don't agree it is appropriate to commit in present form. 
"Security is not compatible with snapshots" = no.

GLOBAL ADMIN is a reasonable first step; it punts all decision-making about 
what to snapshot or restore, and how, to essentially the superuser.

Up to this point IIRC we've been allowing TABLE CREATE to have the same privs 
on their specific tables that the GLOBAL ADMIN has.

Whether or not to restore the _acl_ table is an administrative decision. Allow 
it if GLOBAL ADMIN says so, I'd say.

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_-related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7361) Fix all javadoc warnings in hbase-server/{,mapreduce}

2012-12-17 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-7361:


Attachment: 7361-fix-javadoc-warnings.1.diff

This patch differs from the previous only in that I cleaned up some whitespace 
and line length issues introduced by the previous patch.

Outstanding issues re: the two remaining warnings persist.

> Fix all javadoc warnings in hbase-server/{,mapreduce}
> -
>
> Key: HBASE-7361
> URL: https://issues.apache.org/jira/browse/HBASE-7361
> Project: HBase
>  Issue Type: Improvement
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Attachments: 7361-fix-javadoc-warnings.0.diff, 
> 7361-fix-javadoc-warnings.1.diff
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534146#comment-13534146
 ] 

Andrew Purtell commented on HBASE-7367:
---

Correction, TABLE ADMIN gets the same privs on their specific tables that the 
GLOBAL ADMIN has. TABLE CREATE gets a large subset, the DDL ops.

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_-related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7336) HFileBlock.readAtOffset does not work well with multiple threads

2012-12-17 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534149#comment-13534149
 ] 

Lars Hofhansl commented on HBASE-7336:
--

The 0.94 tests fail with the same OOM even without this patch, so I am going to 
reapply. Sorry for the noise, but I had to make sure.

> HFileBlock.readAtOffset does not work well with multiple threads
> 
>
> Key: HBASE-7336
> URL: https://issues.apache.org/jira/browse/HBASE-7336
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7336-0.94.txt, 7336-0.96.txt
>
>
> HBase grinds to a halt when many threads scan along the same set of blocks 
> and neither read short circuit nor block caching is enabled for the dfs 
> client ... disabling the block cache makes sense on very large scans.
> It turns out that synchronizing on istream in HFileBlock.readAtOffset is the 
> culprit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7353) [shell] have list and list_snapshot return jruby string arrays.

2012-12-17 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534154#comment-13534154
 ] 

Jonathan Hsieh commented on HBASE-7353:
---

Thanks for the reviews.

I'm going to commit this to the snapshot branch (since it has snapshot 
modifications not in trunk yet).  I'll file a follow-on as a sub of HBASE-781 
to add the tricks to the refguide, and try to get this in before the merge.  
(There are more hooks I may add to the shell.)

> [shell] have list and list_snapshot return jruby string arrays.
> ---
>
> Key: HBASE-7353
> URL: https://issues.apache.org/jira/browse/HBASE-7353
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, shell
>Affects Versions: hbase-6055
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Attachments: hbase-7353.patch
>
>
> It is really convenient to allow commands like list and list_snapshots to 
> return a jruby array of values in the hbase shell.
> It allows for nice things like this:
> {code}
> # drop all tables starting with foo
> list("foo.*").map { |t| disable t; drop t }
> {code}
> or 
> {code}
> # clone all tables that start with bar
> list_snapshots("bar.*").map { |s| clone_snapshot s, s + "-table"}
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7368) Add shell tricks documentation to the refguide

2012-12-17 Thread Jonathan Hsieh (JIRA)
Jonathan Hsieh created HBASE-7368:
-

 Summary: Add shell tricks documentation to the refguide
 Key: HBASE-7368
 URL: https://issues.apache.org/jira/browse/HBASE-7368
 Project: HBase
  Issue Type: Sub-task
Reporter: Jonathan Hsieh


bq. Consider adding a sentence to 
http://hbase.apache.org/book.html#shell_tricks on your new fangled assignable 
additions. ...  (Should do same for your count change too). 


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7145) ReusableStreamGzipCodec NPE upon reset with IBM JDK

2012-12-17 Thread Renata Ghisloti Duarte de Souza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renata Ghisloti Duarte de Souza updated HBASE-7145:
---

Attachment: HBASE-7145.2_trunk.patch.txt

Patch made against HBase trunk, incorporating the suggestion made by Luke Lu.

> ReusableStreamGzipCodec NPE upon reset with IBM JDK
> ---
>
> Key: HBASE-7145
> URL: https://issues.apache.org/jira/browse/HBASE-7145
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.94.0
>Reporter: Yu Li
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Attachments: HBASE-7145.1.patch.txt, HBASE-7145.2_trunk.patch.txt
>
>
> This is the same issue as described in HADOOP-8419; the issue description is 
> repeated here:
> The ReusableStreamGzipCodec will NPE upon reset after finish when the native 
> zlib codec is not loaded. When the native zlib is loaded, the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise, the 
> ReusableStreamGzipCodec uses GZIPOutputStream, which is extended to provide 
> the resetState method. Since IBM JDK 6 SR9 FP2, up to and including the current 
> JDK 6 SR10, GZIPOutputStream#finish releases the underlying deflater (it calls 
> the deflater's end method), which causes an NPE upon reset. This seems to be an 
> IBM JDK quirk, as the Sun JDK and OpenJDK don't have this issue.
> Since in HBASE-5387 the HBase source was refactored not to use Hadoop's 
> GzipCodec during real compression/decompression, a separate patch for HBase is 
> needed for the same issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7369) HConnectionManager should remove aborted connections

2012-12-17 Thread Bryan Baugher (JIRA)
Bryan Baugher created HBASE-7369:


 Summary: HConnectionManager should remove aborted connections
 Key: HBASE-7369
 URL: https://issues.apache.org/jira/browse/HBASE-7369
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.94.3
Reporter: Bryan Baugher
Priority: Minor


When an HConnection is abort()'ed (i.e. if numerous services are lost), the 
connection becomes unusable. The HConnectionManager cache of HConnections 
currently does not have any logic for removing aborted connections 
automatically. Currently it is up to the consumer to do so using 
HConnectionManager.deleteStaleConnection(HConnection).
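For reference, the consumer-side workaround mentioned above looks roughly like 
this (a sketch with error handling trimmed; the calls are the ones named in the 
description):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

// What callers currently have to do by hand when a cached connection aborts.
public class StaleConnectionExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HConnection connection = HConnectionManager.getConnection(conf);
    if (connection.isAborted()) {
      // The cached instance is unusable; evict it so the next lookup is fresh.
      HConnectionManager.deleteStaleConnection(connection);
      connection = HConnectionManager.getConnection(conf);
    }
    // ... use the connection ...
  }
}
{code}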

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7145) ReusableStreamGzipCodec NPE upon reset with IBM JDK

2012-12-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7145:
--

Fix Version/s: 0.96.0
   Status: Patch Available  (was: Open)

> ReusableStreamGzipCodec NPE upon reset with IBM JDK
> ---
>
> Key: HBASE-7145
> URL: https://issues.apache.org/jira/browse/HBASE-7145
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.94.0
>Reporter: Yu Li
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Fix For: 0.96.0
>
> Attachments: HBASE-7145.1.patch.txt, HBASE-7145.2_trunk.patch.txt
>
>
> This is the same issue as described in HADOOP-8419; the issue description is 
> repeated here:
> The ReusableStreamGzipCodec will NPE upon reset after finish when the native 
> zlib codec is not loaded. When the native zlib is loaded, the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise, the 
> ReusableStreamGzipCodec uses GZIPOutputStream, which is extended to provide 
> the resetState method. Since IBM JDK 6 SR9 FP2, up to and including the current 
> JDK 6 SR10, GZIPOutputStream#finish releases the underlying deflater (it calls 
> the deflater's end method), which causes an NPE upon reset. This seems to be an 
> IBM JDK quirk, as the Sun JDK and OpenJDK don't have this issue.
> Since in HBASE-5387 the HBase source was refactored not to use Hadoop's 
> GzipCodec during real compression/decompression, a separate patch for HBase is 
> needed for the same issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5416) Improve performance of scans with some kind of filters.

2012-12-17 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534158#comment-13534158
 ] 

Sergey Shelukhin commented on HBASE-5416:
-

bq. The above enum is a tri-state boolean. I think using Boolean should suffice.
Imho the Boolean is not as expressive... I did Boolean first, then replaced it 
with an enum because the null checks looked sketchy.
I guess I can change it back if there's a strong opinion.

Rest fixed.

> Improve performance of scans with some kind of filters.
> ---
>
> Key: HBASE-5416
> URL: https://issues.apache.org/jira/browse/HBASE-5416
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters, Performance, regionserver
>Affects Versions: 0.90.4
>Reporter: Max Lapan
>Assignee: Max Lapan
> Fix For: 0.96.0
>
> Attachments: 5416-Filtered_scans_v6.patch, 5416-v5.txt, 5416-v6.txt, 
> Filtered_scans.patch, Filtered_scans_v2.patch, Filtered_scans_v3.patch, 
> Filtered_scans_v4.patch, Filtered_scans_v5.1.patch, Filtered_scans_v5.patch, 
> Filtered_scans_v7.patch, HBASE-5416-v7-rebased.patch, HBASE-5416-v8.patch
>
>
> When a scan is performed, the whole row is loaded into the result list, and 
> only then is the filter (if any) applied to decide whether the row is needed.
> But when a scan covers several CFs and the filter checks only data from a 
> subset of those CFs, data from the CFs the filter does not check is not needed 
> at the filter stage; it is needed only once we have decided to include the 
> current row. In that case we can significantly reduce the amount of IO 
> performed by a scan by loading only the values the filter actually checks.
> For example, we have two CFs: flags and snap. Flags is quite small (a bunch of 
> megabytes) and is used to filter large entries from snap. Snap is very large 
> (tens of GB) and is quite costly to scan. If we need only rows with some flag 
> set, we use SingleColumnValueFilter to limit the result to a small subset of 
> the region. But the current implementation loads both CFs to perform the scan, 
> when only a small subset is needed.
> The attached patch adds one routine to the Filter interface that lets a filter 
> specify which CFs it needs for its operation. In HRegion, we separate all 
> scanners into two groups: those needed by the filter and the rest (joined). 
> When a new row is considered, only the needed data is loaded and the filter is 
> applied; only if the filter accepts the row is the rest of the data loaded. On 
> our data this speeds up such scans 30-50 times. It also gives us a way to 
> better normalize the data into separate columns by optimizing the scans 
> performed.
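A rough sketch of the two-phase evaluation the description outlines, with 
generic types standing in for HRegion's actual KeyValue scanners (illustration 
only, not the attached patch):
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Sketch of "essential CFs first, joined CFs only for accepted rows".
class TwoPhaseScanSketch {
  interface CfScanner { List<String> nextRow(String rowKey); }

  static List<String> scanRow(String rowKey,
                              Map<String, CfScanner> cfScanners,
                              List<String> filterCfs,
                              Predicate<List<String>> filter) {
    List<String> essential = new ArrayList<>();
    for (String cf : filterCfs) {
      essential.addAll(cfScanners.get(cf).nextRow(rowKey));  // phase 1: filter CFs
    }
    if (!filter.test(essential)) {
      return null;                                 // row rejected: no further IO
    }
    List<String> full = new ArrayList<>(essential);
    for (Map.Entry<String, CfScanner> e : cfScanners.entrySet()) {
      if (!filterCfs.contains(e.getKey())) {
        full.addAll(e.getValue().nextRow(rowKey)); // phase 2: joined CFs
      }
    }
    return full;
  }
}
{code}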

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7369) HConnectionManager should remove aborted connections

2012-12-17 Thread Bryan Baugher (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Baugher updated HBASE-7369:
-

Attachment: patch.diff

> HConnectionManager should remove aborted connections
> 
>
> Key: HBASE-7369
> URL: https://issues.apache.org/jira/browse/HBASE-7369
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 0.94.3
>Reporter: Bryan Baugher
>Priority: Minor
> Attachments: patch.diff
>
>
> When an HConnection is abort()'ed (i.e. if numerous services are lost), the 
> connection becomes unusable. The HConnectionManager cache of HConnections 
> currently does not have any logic for removing aborted connections 
> automatically. Currently it is up to the consumer to do so using 
> HConnectionManager.deleteStaleConnection(HConnection).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7369) HConnectionManager should remove aborted connections

2012-12-17 Thread Bryan Baugher (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Baugher updated HBASE-7369:
-

Status: Patch Available  (was: Open)

Patch provided

> HConnectionManager should remove aborted connections
> 
>
> Key: HBASE-7369
> URL: https://issues.apache.org/jira/browse/HBASE-7369
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 0.94.3
>Reporter: Bryan Baugher
>Priority: Minor
> Attachments: patch.diff
>
>
> When an HConnection is abort()'ed (i.e. if numerous services are lost), the 
> connection becomes unusable. The HConnectionManager cache of HConnections 
> currently does not have any logic for removing aborted connections 
> automatically. Currently it is up to the consumer to do so using 
> HConnectionManager.deleteStaleConnection(HConnection).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5416) Improve performance of scans with some kind of filters.

2012-12-17 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-5416:


Attachment: HBASE-5416-v9.patch

> Improve performance of scans with some kind of filters.
> ---
>
> Key: HBASE-5416
> URL: https://issues.apache.org/jira/browse/HBASE-5416
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters, Performance, regionserver
>Affects Versions: 0.90.4
>Reporter: Max Lapan
>Assignee: Max Lapan
> Fix For: 0.96.0
>
> Attachments: 5416-Filtered_scans_v6.patch, 5416-v5.txt, 5416-v6.txt, 
> Filtered_scans.patch, Filtered_scans_v2.patch, Filtered_scans_v3.patch, 
> Filtered_scans_v4.patch, Filtered_scans_v5.1.patch, Filtered_scans_v5.patch, 
> Filtered_scans_v7.patch, HBASE-5416-v7-rebased.patch, HBASE-5416-v8.patch, 
> HBASE-5416-v9.patch
>
>
> When a scan is performed, the whole row is loaded into the result list, and 
> only then is the filter (if any) applied to decide whether the row is needed.
> But when a scan covers several CFs and the filter checks only data from a 
> subset of those CFs, data from the CFs the filter does not check is not needed 
> at the filter stage; it is needed only once we have decided to include the 
> current row. In that case we can significantly reduce the amount of IO 
> performed by a scan by loading only the values the filter actually checks.
> For example, we have two CFs: flags and snap. Flags is quite small (a bunch of 
> megabytes) and is used to filter large entries from snap. Snap is very large 
> (tens of GB) and is quite costly to scan. If we need only rows with some flag 
> set, we use SingleColumnValueFilter to limit the result to a small subset of 
> the region. But the current implementation loads both CFs to perform the scan, 
> when only a small subset is needed.
> The attached patch adds one routine to the Filter interface that lets a filter 
> specify which CFs it needs for its operation. In HRegion, we separate all 
> scanners into two groups: those needed by the filter and the rest (joined). 
> When a new row is considered, only the needed data is loaded and the filter is 
> applied; only if the filter accepts the row is the rest of the data loaded. On 
> our data this speeds up such scans 30-50 times. It also gives us a way to 
> better normalize the data into separate columns by optimizing the scans 
> performed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7361) Fix all javadoc warnings in hbase-server/{,mapreduce}

2012-12-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7361:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk.  Thanks Nick.

> Fix all javadoc warnings in hbase-server/{,mapreduce}
> -
>
> Key: HBASE-7361
> URL: https://issues.apache.org/jira/browse/HBASE-7361
> Project: HBase
>  Issue Type: Improvement
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.96.0
>
> Attachments: 7361-fix-javadoc-warnings.0.diff, 
> 7361-fix-javadoc-warnings.1.diff
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7318) Add verbose logging option to HConnectionManager

2012-12-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534176#comment-13534176
 ] 

stack commented on HBASE-7318:
--

Patch looks good, Sergey.  Do you have to introduce the new flag?  Could you not 
log the new stuff at TRACE level?  Would that work for you?  Then the config 
would be in the place folks expect to find it: log4j.properties.
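In sketch form, the TRACE-level alternative looks like this (the logger name in 
the log4j.properties comment is only an example):
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Guarded TRACE output, switched on per class via log4j.properties, e.g.
//   log4j.logger.org.apache.hadoop.hbase.client=TRACE
class ConnectionTraceLogging {
  private static final Log LOG = LogFactory.getLog(ConnectionTraceLogging.class);

  void locateRegion(String tableName, String row) {
    if (LOG.isTraceEnabled()) {
      // Built and emitted only when TRACE is enabled for this logger.
      LOG.trace("Locating region for table=" + tableName + " row=" + row);
    }
    // ... actual lookup ...
  }
}
{code}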

> Add verbose logging option to HConnectionManager
> 
>
> Key: HBASE-7318
> URL: https://issues.apache.org/jira/browse/HBASE-7318
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.96.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: HBASE-7318-v0.patch
>
>
> In the course of HBASE-7250 I found that client-side errors (as well as 
> server-side errors, but that's another question) are hard to debug.
> I have some local commits with useful, not-that-hacky HConnectionManager 
> logging added.
> Need to "productionize" it to be off by default but easy-to-enable for 
> debugging.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7349) Jenkins build should compare trunk vs patch for Javadoc warnings

2012-12-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534185#comment-13534185
 ] 

stack commented on HBASE-7349:
--

Should we close this one now that we went a different route (zero javadoc 
warnings)?

> Jenkins build should compare trunk vs patch for Javadoc warnings
> 
>
> Key: HBASE-7349
> URL: https://issues.apache.org/jira/browse/HBASE-7349
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: 7349-build-improve-javadoc-warnings.0.diff
>
>
> The javadoc check should look for an increase in the number of warnings. It 
> can do so by running javadoc against trunk before running it for the patch. 
> This will increase build times.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7342) Split operation without split key incorrectly finds the middle key in off-by-one error

2012-12-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534191#comment-13534191
 ] 

stack commented on HBASE-7342:
--

We should not commit the test for this patch.  It is over-the-top spinning up a 
cluster to check a plain array math problem (but thanks for making the test 
Aleksandr ...)

> Split operation without split key incorrectly finds the middle key in 
> off-by-one error
> --
>
> Key: HBASE-7342
> URL: https://issues.apache.org/jira/browse/HBASE-7342
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, io
>Affects Versions: 0.94.1, 0.94.2, 0.94.3, 0.96.0
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7342-0.94.txt, 7342-trunk-v3.txt, HBASE-7342-v1.patch, 
> HBASE-7342-v2.patch
>
>
> I took a deeper look into the issues I was having with region splitting when 
> specifying a region (but not a key for splitting).
> The midkey calculation is off by one and, when there are 2 rows, will pick the 
> 0th one. This causes the firstkey to be the same as the midkey and the split 
> will fail. Removing the -1 causes it to work correctly, as per the test I've 
> added.
> Looking into the code, here is what goes on:
> 1. Split takes the largest storefile.
> 2. It puts all the keys into a 2-dimensional array called blockKeys[][]. Key 
> i resides as blockKeys[i].
> 3. Getting the middle root-level index should yield the key in the middle of 
> the storefile.
> 4. In step 3, we see that there is a possibly erroneous (-1) to adjust for 
> the 0-offset indexing.
> 5. In a case where there are only 2 blockKeys, this yields the 0th 
> block key. 
> 6. Unfortunately, this is the same block key that 'firstKey' will be.
> 7. This yields the result in HStore.java:1873 ("cannot split because midkey 
> is the same as first or last row").
> 8. Removing the -1 solves the problem (in this case). 
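The arithmetic in question, reduced to a sketch (illustration only; the real 
code walks the root-level index of the largest store file):
{code}
// With two block keys, subtracting one before dividing lands on index 0,
// the same entry as firstKey, so the split is rejected.
class MidKeySketch {
  static int midIndexWithBug(int numBlockKeys) {
    return (numBlockKeys - 1) / 2;   // numBlockKeys == 2  ->  index 0
  }

  static int midIndexFixed(int numBlockKeys) {
    return numBlockKeys / 2;         // numBlockKeys == 2  ->  index 1
  }

  public static void main(String[] args) {
    System.out.println(midIndexWithBug(2)); // 0: "midkey is the same as first row"
    System.out.println(midIndexFixed(2));   // 1: split can proceed
  }
}
{code}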

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7342) Split operation without split key incorrectly finds the middle key in off-by-one error

2012-12-17 Thread Aleksandr Shulman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534194#comment-13534194
 ] 

Aleksandr Shulman commented on HBASE-7342:
--

No worries. I also agree that it's not necessary.

> Split operation without split key incorrectly finds the middle key in 
> off-by-one error
> --
>
> Key: HBASE-7342
> URL: https://issues.apache.org/jira/browse/HBASE-7342
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, io
>Affects Versions: 0.94.1, 0.94.2, 0.94.3, 0.96.0
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7342-0.94.txt, 7342-trunk-v3.txt, HBASE-7342-v1.patch, 
> HBASE-7342-v2.patch
>
>
> I took a deeper look into issues I was having using region splitting when 
> specifying a region (but not a key for splitting).
> The midkey calculation is off by one and when there are 2 rows, will pick the 
> 0th one. This causes the firstkey to be the same as midkey and the split will 
> fail. Removing the -1 causes it work correctly, as per the test I've added.
> Looking into the code here is what goes on:
> 1. Split takes the largest storefile
> 2. It puts all the keys into a 2-dimensional array called blockKeys[][]. Key 
> i resides as blockKeys[i]
> 3. Getting the middle root-level index should yield the key in the middle of 
> the storefile
> 4. In step 3, we see that there is a possible erroneous (-1) to adjust for 
> the 0-offset indexing.
> 5. In a case where there are only 2 blockKeys, this yields the 0th 
> block key. 
> 6. Unfortunately, this is the same block key that 'firstKey' will be.
> 7. This yields the result in HStore.java:1873 ("cannot split because midkey 
> is the same as first or last row")
> 8. Removing the -1 solves the problem (in this case). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7349) Jenkins build should compare trunk vs patch for Javadoc warnings

2012-12-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534197#comment-13534197
 ] 

stack commented on HBASE-7349:
--

At the moment I see the following WARNINGs still:

{code}
  3 50 warnings
  4 [WARNING] Javadoc Warnings
  5 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseClient.java:1351:
 warning - Tag @link: can't find call(RpcRequestBody[], InetSocketAddress[], 
Class, User) in org.apache.hadoop.hbase.ipc.HBaseClient
  6 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java:200:
 warning - Tag @link: can't find call(Class, RpcRequestBody, long, 
MonitoredRPCHandler) in org.apache.hadoop.hbase.ipc.HBaseServer
  7 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/GeneralBulkAssigner.java:44:
 warning - Tag @link: reference not found: SingleServerBulkAssigner
  8 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/TimeToLiveHFileCleaner.java:36:
 warning - TimeToLiveHFileCleaner#DEFAULT_TTL (referenced by @value tag) is an 
unknown reference.
  9 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java:1082:
 warning - @return tag has no arguments.
 10 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java:749:
 warning - @return tag has no arguments.
 11 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java:1057:
 warning - @param argument "tableName" is not a parameter name.
 12 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java:1994:
 warning - Tag @see: can't find put(Pair[]) in 
org.apache.hadoop.hbase.regionserver.HRegion
 13 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:3725:
 warning - @return tag has no arguments.
 14 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:1968:
 warning - Tag @see: can't find getOnlineRegions() in 
org.apache.hadoop.hbase.regionserver.HRegionServer
 15 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:3752:
 warning - @return tag has no arguments.
 16 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:1407:
 warning - @param argument "logdir" is not a parameter name.
 17 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:1407:
 warning - @param argument "oldLogDir" is not a parameter name.
 18 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java:529:
 warning - Tag @link:illegal character: "60" in 
"RegionObserver#preFlush(ObserverContext, HStore, 
KeyValueScanner)"
 19 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java:529:
 warning - Tag @link:illegal character: "62" in 
"RegionObserver#preFlush(ObserverContext, HStore, 
KeyValueScanner)"
 20 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java:529:
 warning - Tag @link: can't find 
preFlush(ObserverContext, HStore, 
KeyValueScanner) in org.apache.hadoop.hbase.coprocessor.RegionObserver
 21 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.java:100:
 warning - @return tag has no arguments.
 22 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RowProcessor.java:127:
 warning - Tag @link: can't find initialize(ByteString) in 
org.apache.hadoop.hbase.regionserver.RowProcessor
 23 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java:225:
 warning - @return tag has no arguments.
 24 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactionRequest.java:94:
 warning - @return tag has no arguments.
 25 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogWriter.java:47:
 warning - Tag @link: reference not found: FSHLog.Writer
 26 [WARNING] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControllerProtocol.java:65:
 warning - @param argument "permission" is not a para

[jira] [Commented] (HBASE-7349) Jenkins build should compare trunk vs patch for Javadoc warnings

2012-12-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534199#comment-13534199
 ] 

Nick Dimiduk commented on HBASE-7349:
-

[~stack] This patch was committed also, so the build daemon will continue to 
run the javadoc target twice and compare results. It only adds about 3 minutes 
per build, so I think it's worth keeping, but you may disagree.

> Jenkins build should compare trunk vs patch for Javadoc warnings
> 
>
> Key: HBASE-7349
> URL: https://issues.apache.org/jira/browse/HBASE-7349
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: 7349-build-improve-javadoc-warnings.0.diff
>
>
> The javadoc check should look for an increase in the number of warnings. It 
> can do so by running javadoc against trunk before running it for the patch. 
> This will increase build times.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7369) HConnectionManager should remove aborted connections

2012-12-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534202#comment-13534202
 ] 

Ted Yu commented on HBASE-7369:
---

{code}
+if(managed)
+HConnectionManager.deleteStaleConnection(this);
{code}
nit: insert space between if and (.
The call to HConnectionManager.deleteStaleConnection() can be placed on the same 
line as the if statement.
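For clarity, the style being suggested would look roughly like this (placement is illustrative, not the actual patch):

{code}
// Suggested formatting for the reviewed snippet (illustrative only):
if (managed) HConnectionManager.deleteStaleConnection(this);
{code}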

> HConnectionManager should remove aborted connections
> 
>
> Key: HBASE-7369
> URL: https://issues.apache.org/jira/browse/HBASE-7369
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 0.94.3
>Reporter: Bryan Baugher
>Priority: Minor
> Attachments: patch.diff
>
>
> When an HConnection is abort()'ed (i.e. if numerous services are lost) the 
> connection becomes unusable. HConnectionManager cache of HConnections 
> currently does not have any logic around removing aborted connections 
> automatically. Currently it is up to the consumer to do so using 
> HConnectionManager.deleteStaleConnection(HConnection).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7349) Jenkins build should compare trunk vs patch for Javadoc warnings

2012-12-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534207#comment-13534207
 ] 

Nick Dimiduk commented on HBASE-7349:
-

[~stack] the first two are known. Those are the two I mentioned in my comment 
as not knowing the correct syntax. Weren't the others handled by Nicolas's 
patch?

> Jenkins build should compare trunk vs patch for Javadoc warnings
> 
>
> Key: HBASE-7349
> URL: https://issues.apache.org/jira/browse/HBASE-7349
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: 7349-build-improve-javadoc-warnings.0.diff
>
>
> The javadoc check should look for an increase in the number of warnings. It 
> can do so by running javadoc against trunk before running it for the patch. 
> This will increase build times.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7342) Split operation without split key incorrectly finds the middle key in off-by-one error

2012-12-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534220#comment-13534220
 ] 

Ted Yu commented on HBASE-7342:
---

Integrated to trunk and 0.94 without the test.

Thanks for the patch, Alex.

Thanks for the reviews, Stack, Ram and Lars

> Split operation without split key incorrectly finds the middle key in 
> off-by-one error
> --
>
> Key: HBASE-7342
> URL: https://issues.apache.org/jira/browse/HBASE-7342
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, io
>Affects Versions: 0.94.1, 0.94.2, 0.94.3, 0.96.0
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7342-0.94.txt, 7342-trunk-v3.txt, HBASE-7342-v1.patch, 
> HBASE-7342-v2.patch
>
>
> I took a deeper look into issues I was having using region splitting when 
> specifying a region (but not a key for splitting).
> The midkey calculation is off by one, and when there are 2 rows it will pick the 
> 0th one. This causes the firstkey to be the same as midkey and the split will 
> fail. Removing the -1 causes it to work correctly, as per the test I've added.
> Looking into the code here is what goes on:
> 1. Split takes the largest storefile
> 2. It puts all the keys into a 2-dimensional array called blockKeys[][]. Key 
> i resides as blockKeys[i]
> 3. Getting the middle root-level index should yield the key in the middle of 
> the storefile
> 4. In step 3, we see that there is a possible erroneous (-1) to adjust for 
> the 0-offset indexing.
> 5. In a case where there are only 2 blockKeys, this yields the 0th 
> block key. 
> 6. Unfortunately, this is the same block key that 'firstKey' will be.
> 7. This yields the result in HStore.java:1873 ("cannot split because midkey 
> is the same as first or last row")
> 8. Removing the -1 solves the problem (in this case). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7342) Split operation without split key incorrectly finds the middle key in off-by-one error

2012-12-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534225#comment-13534225
 ] 

Ted Yu commented on HBASE-7342:
---

I forgot to mention that I ran the tests reported by Hadoop QA below and didn't 
see any hanging tests:
{code}
 BEGIN zombies jstack extract
at 
org.apache.hadoop.hbase.util.TestHBaseFsck.testFixByTable(TestHBaseFsck.java:1188)
at 
org.apache.hadoop.hbase.util.TestHBaseFsck.testLingeringSplitParent(TestHBaseFsck.java:1262)
at 
org.apache.hadoop.hbase.catalog.TestCatalogTracker.testServerNotRunningIOException(TestCatalogTracker.java:250)
 END  zombies jstack extract
{code}

> Split operation without split key incorrectly finds the middle key in 
> off-by-one error
> --
>
> Key: HBASE-7342
> URL: https://issues.apache.org/jira/browse/HBASE-7342
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, io
>Affects Versions: 0.94.1, 0.94.2, 0.94.3, 0.96.0
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7342-0.94.txt, 7342-trunk-v3.txt, HBASE-7342-v1.patch, 
> HBASE-7342-v2.patch
>
>
> I took a deeper look into issues I was having using region splitting when 
> specifying a region (but not a key for splitting).
> The midkey calculation is off by one, and when there are 2 rows it will pick the 
> 0th one. This causes the firstkey to be the same as midkey and the split will 
> fail. Removing the -1 causes it to work correctly, as per the test I've added.
> Looking into the code here is what goes on:
> 1. Split takes the largest storefile
> 2. It puts all the keys into a 2-dimensional array called blockKeys[][]. Key 
> i resides as blockKeys[i]
> 3. Getting the middle root-level index should yield the key in the middle of 
> the storefile
> 4. In step 3, we see that there is a possible erroneous (-1) to adjust for 
> the 0-offset indexing.
> 5. In a case where there are only 2 blockKeys, this yields the 0th 
> block key. 
> 6. Unfortunately, this is the same block key that 'firstKey' will be.
> 7. This yields the result in HStore.java:1873 ("cannot split because midkey 
> is the same as first or last row")
> 8. Removing the -1 solves the problem (in this case). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7369) HConnectionManager should remove aborted connections

2012-12-17 Thread Bryan Baugher (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Baugher updated HBASE-7369:
-

Attachment: patch2.diff

Addressed comment by Ted Yu and also fixed the test in which aborting the 
HConnection caused other tests/infrastructure to fail.

> HConnectionManager should remove aborted connections
> 
>
> Key: HBASE-7369
> URL: https://issues.apache.org/jira/browse/HBASE-7369
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 0.94.3
>Reporter: Bryan Baugher
>Priority: Minor
> Attachments: patch2.diff, patch.diff
>
>
> When an HConnection is abort()'ed (i.e. if numerous services are lost) the 
> connection becomes unusable. HConnectionManager cache of HConnections 
> currently does not have any logic around removing aborted connections 
> automatically. Currently it is up to the consumer to do so using 
> HConnectionManager.deleteStaleConnection(HConnection).
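A minimal sketch of the consumer-side workaround mentioned above, assuming the 0.94 client API (error handling omitted):

{code}
// Illustrative sketch only: today the caller, not the cache, must evict an aborted connection.
Configuration conf = HBaseConfiguration.create();
HConnection connection = HConnectionManager.getConnection(conf);
if (connection.isAborted()) {
  // Without this call the aborted connection stays in the HConnectionManager cache.
  HConnectionManager.deleteStaleConnection(connection);
}
{code}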

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7145) ReusableStreamGzipCodec NPE upon reset with IBM JDK

2012-12-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534231#comment-13534231
 ] 

Hadoop QA commented on HBASE-7145:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12561327/HBASE-7145.2_trunk.patch.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
additional warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 26 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestRollingRestart

 {color:red}-1 core zombie tests{color}.  There are zombie tests. See build 
logs for details.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3571//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3571//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3571//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3571//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3571//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3571//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3571//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3571//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3571//console

This message is automatically generated.

> ReusableStreamGzipCodec NPE upon reset with IBM JDK
> ---
>
> Key: HBASE-7145
> URL: https://issues.apache.org/jira/browse/HBASE-7145
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.94.0
>Reporter: Yu Li
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Fix For: 0.96.0
>
> Attachments: HBASE-7145.1.patch.txt, HBASE-7145.2_trunk.patch.txt
>
>
> This is the same issue as described in HADOOP-8419; the issue description is 
> repeated here:
> The ReusableStreamGzipCodec will NPE upon reset after finish when the native 
> zlib codec is not loaded. When the native zlib is loaded the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise, the 
> ReusableStreamGzipCodec uses GZIPOutputStream, which is extended to provide 
> the resetState method. Since IBM JDK 6 SR9 FP2, up to and including the current 
> JDK 6 SR10, GZIPOutputStream#finish will release the underlying deflater (calls 
> the deflater's end method), which causes an NPE upon reset. This seems to be an 
> IBM JDK quirk, as Sun JDK and OpenJDK don't have this issue.
> Since in HBASE-5387 the HBase source was refactored not to use hadoop's 
> GzipCodec during real compress/decompress, a separate patch is needed for 
> HBase on the same issue.
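A minimal sketch of the pattern the description refers to; this is not the HBase codec itself, just the shape of the problem:

{code}
// Illustrative only: a GZIPOutputStream subclass that resets its deflater for reuse.
class ResettableGzipStream extends java.util.zip.GZIPOutputStream {
  ResettableGzipStream(java.io.OutputStream out) throws java.io.IOException {
    super(out);
  }
  void resetState() {
    // On the affected IBM JDKs, finish() has already ended the deflater,
    // so this reset fails with a NullPointerException.
    def.reset();
  }
}
{code}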

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7361) Fix all javadoc warnings in hbase-server/{,mapreduce}

2012-12-17 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534232#comment-13534232
 ] 

Enis Soztutar commented on HBASE-7361:
--

bq. For the autoFlush lines, there is no class member to link to. In the case 
of TestRowProcessorEndpoint, I believe our javadoc module is not configured to 
include the test directories. The FileLink change happens inside the FileLink 
class, so no need for an external link.
Thanks for the explanation. Makes sense. 
bq. Outstanding issues re: the two remaining warnings persist.
I guess we can do an addendum issue if you want to resolve those as well. 

> Fix all javadoc warnings in hbase-server/{,mapreduce}
> -
>
> Key: HBASE-7361
> URL: https://issues.apache.org/jira/browse/HBASE-7361
> Project: HBase
>  Issue Type: Improvement
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.96.0
>
> Attachments: 7361-fix-javadoc-warnings.0.diff, 
> 7361-fix-javadoc-warnings.1.diff
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7361) Fix all javadoc warnings in hbase-server/{,mapreduce}

2012-12-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534238#comment-13534238
 ] 

Hadoop QA commented on HBASE-7361:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12561325/7361-fix-javadoc-warnings.1.diff
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
additional warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 26 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.catalog.TestMetaReaderEditor
  org.apache.hadoop.hbase.util.TestHBaseFsck

 {color:red}-1 core zombie tests{color}.  There are zombie tests. See build 
logs for details.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3570//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3570//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3570//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3570//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3570//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3570//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3570//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3570//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3570//console

This message is automatically generated.

> Fix all javadoc warnings in hbase-server/{,mapreduce}
> -
>
> Key: HBASE-7361
> URL: https://issues.apache.org/jira/browse/HBASE-7361
> Project: HBase
>  Issue Type: Improvement
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.96.0
>
> Attachments: 7361-fix-javadoc-warnings.0.diff, 
> 7361-fix-javadoc-warnings.1.diff
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7145) ReusableStreamGzipCodec NPE upon reset with IBM JDK

2012-12-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534240#comment-13534240
 ] 

Ted Yu commented on HBASE-7145:
---

I ran the test with the patch and it passed:
{code}
Running org.apache.hadoop.hbase.master.TestRollingRestart
2012-12-17 11:54:38.434 java[1572:db03] Unable to load realm info from 
SCDynamicStore
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.558 sec
{code}
+1 on patch

> ReusableStreamGzipCodec NPE upon reset with IBM JDK
> ---
>
> Key: HBASE-7145
> URL: https://issues.apache.org/jira/browse/HBASE-7145
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.94.0
>Reporter: Yu Li
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Fix For: 0.96.0
>
> Attachments: HBASE-7145.1.patch.txt, HBASE-7145.2_trunk.patch.txt
>
>
> This is the same issue as described in HADOOP-8419; the issue description is 
> repeated here:
> The ReusableStreamGzipCodec will NPE upon reset after finish when the native 
> zlib codec is not loaded. When the native zlib is loaded the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise, the 
> ReusableStreamGzipCodec uses GZIPOutputStream, which is extended to provide 
> the resetState method. Since IBM JDK 6 SR9 FP2, up to and including the current 
> JDK 6 SR10, GZIPOutputStream#finish will release the underlying deflater (calls 
> the deflater's end method), which causes an NPE upon reset. This seems to be an 
> IBM JDK quirk, as Sun JDK and OpenJDK don't have this issue.
> Since in HBASE-5387 the HBase source was refactored not to use hadoop's 
> GzipCodec during real compress/decompress, a separate patch is needed for 
> HBase on the same issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7338) Fix flaky condition for org.apache.hadoop.hbase.TestRegionRebalancing.testRebalanceOnRegionServerNumberChange

2012-12-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7338:
-

Fix Version/s: 0.94.4

Committed to 0.94 w/ -p1

> Fix flaky condition for 
> org.apache.hadoop.hbase.TestRegionRebalancing.testRebalanceOnRegionServerNumberChange
> -
>
> Key: HBASE-7338
> URL: https://issues.apache.org/jira/browse/HBASE-7338
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
> Attachments: HBASE-7338.patch
>
>
> The balancer doesn't run in case a region is in transition. The check to 
> confirm whether all regions are assigned looks for a region count > 22, 
> whereas the total number of regions is 27. This may result in a failure:
> {code}
> java.lang.AssertionError: After 5 attempts, region assignments were not 
> balanced.
>   at org.junit.Assert.fail(Assert.java:93)
>   at 
> org.apache.hadoop.hbase.TestRegionRebalancing.assertRegionsAreBalanced(TestRegionRebalancing.java:203)
>   at 
> org.apache.hadoop.hbase.TestRegionRebalancing.testRebalanceOnRegionServerNumberChange(TestRegionRebalancing.java:123)
> .
> 2012-12-11 13:47:02,231 INFO  [pool-1-thread-1] 
> hbase.TestRegionRebalancing(120): Added fourth 
> server=p0118.mtv.cloudera.com,44414,1355262422083
> 2012-12-11 13:47:02,231 INFO  
> [RegionServer:3;p0118.mtv.cloudera.com,44414,1355262422083] 
> regionserver.HRegionServer(3769): Registered RegionServer MXBean
> 2012-12-11 13:47:02,231 DEBUG [pool-1-thread-1] master.HMaster(987): Not 
> running balancer because 1 region(s) in transition: 
> {c786446fb2542f190e937057cdc79d9d=test,kkk,1355262401365.c786446fb2542f190e937057cdc79d9d.
>  state=OPENING, ts=1355262421037, 
> server=p0118.mtv.cloudera.com,54281,1355262419765}
> 2012-12-11 13:47:02,232 DEBUG [pool-1-thread-1] 
> hbase.TestRegionRebalancing(165): There are 4 servers and 26 regions. Load 
> Average: 13.0 low border: 9, up border: 16; attempt: 0
> 2012-12-11 13:47:02,232 DEBUG [pool-1-thread-1] 
> hbase.TestRegionRebalancing(171): p0118.mtv.cloudera.com,51590,1355262395329 
> Avg: 13.0 actual: 11
> 2012-12-11 13:47:02,232 DEBUG [pool-1-thread-1] 
> hbase.TestRegionRebalancing(171): p0118.mtv.cloudera.com,52987,1355262407916 
> Avg: 13.0 actual: 15
> 2012-12-11 13:47:02,233 DEBUG [pool-1-thread-1] 
> hbase.TestRegionRebalancing(171): p0118.mtv.cloudera.com,48044,1355262421787 
> Avg: 13.0 actual: 0
> 2012-12-11 13:47:02,233 DEBUG [pool-1-thread-1] 
> hbase.TestRegionRebalancing(179): p0118.mtv.cloudera.com,48044,1355262421787 
> Isn't balanced!!! Avg: 13.0 actual: 0 slop: 0.2
> 2012-12-11 13:47:12,233 DEBUG [pool-1-thread-1] master.HMaster(987): Not 
> running balancer because 1 region(s) in transition: 
> {code}
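To make the race concrete, a hedged sketch of the check described above; the constants come from the report, and the helper names are hypothetical, not the actual test code:

{code}
// Hypothetical illustration of the flaky assertion window.
int totalRegions = 27;
int assignedRegions = countOnlineRegions();              // hypothetical helper
boolean regionsInTransition = hasRegionsInTransition();  // hypothetical helper

boolean oldCheck = assignedRegions > 22;  // can pass while a region is still opening
boolean saferCheck = assignedRegions == totalRegions
    && !regionsInTransition;              // balancer will actually run at this point
{code}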

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534258#comment-13534258
 ] 

Andrew Purtell commented on HBASE-7367:
---

Points raised by Matteo and Jon on RB should definitely be discussed here. 

I understand the goal is to punt, initially. Just throwing an ADE is punting 
too early.

Checking for GLOBAL ADMIN privilege and allowing snapshots is more reasonable. 
This means security won't get in the way of snapshots but won't add anything 
either.

It assumes the superuser knows all, and knows that ACLs will have to be 
reconstructed on a restored table. The default policy is deny, so the risk is that 
the restored or cloned table cannot be read by those who should have access, not 
that data will suddenly leak.
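A rough sketch of the policy being discussed, assuming a master-side snapshot hook; the hook signature and helper names here are illustrative, not the actual AccessController API:

{code}
// Hypothetical master-side hook; names are illustrative, not the actual AccessController API.
public void preSnapshot(final ObserverContext<MasterCoprocessorEnvironment> ctx,
    final SnapshotDescription snapshot) throws IOException {
  // Allow the operation only for users holding global ADMIN; everyone else gets an ADE.
  if (!hasGlobalAdmin(getActiveUser())) {  // hypothetical helpers
    throw new AccessDeniedException("Global ADMIN privilege required for snapshot operations");
  }
  // ACLs are not carried over to a restored/cloned table; the admin re-creates them.
}
{code}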

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle the snapshot/restore/clone.
> Is saving and restoring the _acl_-related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534258#comment-13534258
 ] 

Andrew Purtell edited comment on HBASE-7367 at 12/17/12 8:17 PM:
-

Points raised by Matteo and Jon on RB should definitely be discussed here. 

I understand the goal is to punt, initially. Just throwing an ADE is punting 
too early.

Checking for GLOBAL ADMIN privilege and allowing snapshots if the (super)user 
has this priv, otherwise throwing an ADE, is more reasonable. This means 
security won't get in the way of snapshots but won't add anything either.

It assumes the superuser knows all, and knows that ACLs will have to be 
reconstructed on a restored table. The default policy is deny, so the risk is that 
the restored or cloned table cannot be read by those who should have access, not 
that data will suddenly leak.

  was (Author: apurtell):
Points raised by Matteo and Jon on RB should definitely be discussed here. 

I understand the goal is to punt, initially. Just throwing an ADE is punting 
too early.

Checking for GLOBAL ADMIN privilege and allowing snapshots is more reasonable. 
This means security won't get in the way of snapshots but won't add anything 
either.

It assumes the superuser knows all, and knows that ACLs will have to be 
reconstructed on a restored table. The default policy is deny, so the risk is that 
the restored or cloned table cannot be read by those who should have access, not 
that data will suddenly leak.
  
> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle the snapshot/restore/clone.
> Is saving and restoring the _acl_-related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6775) Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix

2012-12-17 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-6775:
--

Attachment: HBASE-6775-v2.patch

Latest patch from reviewboard.

> Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix
> --
>
> Key: HBASE-6775
> URL: https://issues.apache.org/jira/browse/HBASE-6775
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 0.94.2
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 0.94.4
>
> Attachments: HBASE-6775-v2.patch
>
>
> HBASE-6710 fixed a 0.92/0.94 compatibility issue by writing two znodes in 
> different formats.
> If a ZK failure occurs between the writing of the two znodes, strange 
> behavior can result.
> This issue is a reminder to change these two ZK writes to use ZK.multi when 
> we require ZK 3.4+. 
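A hedged sketch of what the reminder is pointing at, using the ZooKeeper 3.4+ multi() API; the znode paths and payload variables below are placeholders, not the real HBASE-6710 paths:

{code}
// Illustrative only: group the two formerly separate writes into one atomic multi().
List<Op> ops = new ArrayList<Op>();
ops.add(Op.setData("/hbase/some-znode-new-format", newFormatBytes, -1)); // placeholder path/data
ops.add(Op.setData("/hbase/some-znode-old-format", oldFormatBytes, -1)); // placeholder path/data
zk.multi(ops);  // both znodes are updated or neither is, closing the failure window
{code}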

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6775) Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix

2012-12-17 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-6775:
--

Status: Patch Available  (was: Open)

> Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix
> --
>
> Key: HBASE-6775
> URL: https://issues.apache.org/jira/browse/HBASE-6775
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 0.94.2
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 0.94.4
>
> Attachments: HBASE-6775-v2.patch
>
>
> HBASE-6710 fixed a 0.92/0.94 compatibility issue by writing two znodes in 
> different formats.
> If a ZK failure occurs between the writing of the two znodes, strange 
> behavior can result.
> This issue is a reminder to change these two ZK writes to use ZK.multi when 
> we require ZK 3.4+. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7145) ReusableStreamGzipCodec NPE upon reset with IBM JDK

2012-12-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534277#comment-13534277
 ] 

stack commented on HBASE-7145:
--

Would suggest that the parts of this patch that probe for a particular JVM 
would fit better in the org.apache.hadoop.hbase.util.JVM class.
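As a hedged illustration of that suggestion (not the actual contents of the util class), the probe could live behind something like:

{code}
// Hypothetical vendor probe of the kind that could live in o.a.h.hbase.util.JVM.
public final class JvmVendorProbe {
  private static final boolean IBM_JDK =
      System.getProperty("java.vendor", "").contains("IBM");

  private JvmVendorProbe() {}

  public static boolean isIbmJdk() {
    return IBM_JDK;
  }
}
{code}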

> ReusableStreamGzipCodec NPE upon reset with IBM JDK
> ---
>
> Key: HBASE-7145
> URL: https://issues.apache.org/jira/browse/HBASE-7145
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.94.0
>Reporter: Yu Li
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Fix For: 0.96.0
>
> Attachments: HBASE-7145.1.patch.txt, HBASE-7145.2_trunk.patch.txt
>
>
> This is the same issue as described in HADOOP-8419; the issue description is 
> repeated here:
> The ReusableStreamGzipCodec will NPE upon reset after finish when the native 
> zlib codec is not loaded. When the native zlib is loaded the codec creates a 
> CompressorOutputStream that doesn't have the problem; otherwise, the 
> ReusableStreamGzipCodec uses GZIPOutputStream, which is extended to provide 
> the resetState method. Since IBM JDK 6 SR9 FP2, up to and including the current 
> JDK 6 SR10, GZIPOutputStream#finish will release the underlying deflater (calls 
> the deflater's end method), which causes an NPE upon reset. This seems to be an 
> IBM JDK quirk, as Sun JDK and OpenJDK don't have this issue.
> Since in HBASE-5387 the HBase source was refactored not to use hadoop's 
> GzipCodec during real compress/decompress, a separate patch is needed for 
> HBase on the same issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6775) Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix

2012-12-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534280#comment-13534280
 ] 

Hadoop QA commented on HBASE-6775:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12561348/HBASE-6775-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3574//console

This message is automatically generated.

> Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix
> --
>
> Key: HBASE-6775
> URL: https://issues.apache.org/jira/browse/HBASE-6775
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 0.94.2
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 0.94.4
>
> Attachments: HBASE-6775-v2.patch
>
>
> HBASE-6710 fixed a 0.92/0.94 compatibility issue by writing two znodes in 
> different formats.
> If a ZK failure occurs between the writing of the two znodes, strange 
> behavior can result.
> This issue is a reminder to change these two ZK writes to use ZK.multi when 
> we require ZK 3.4+. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7370) Remove Writable From ScanMetrics.

2012-12-17 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-7370:


 Summary: Remove Writable From ScanMetrics.
 Key: HBASE-7370
 URL: https://issues.apache.org/jira/browse/HBASE-7370
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Elliott Clark


Right now ScanMetrics uses Writable to be able to set MapReduce counters.  We 
should remove this and use protobuf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7370) Remove Writable From ScanMetrics.

2012-12-17 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-7370:
-

Affects Version/s: 0.96.0

> Remove Writable From ScanMetrics.
> -
>
> Key: HBASE-7370
> URL: https://issues.apache.org/jira/browse/HBASE-7370
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
>
> Right now ScanMetrics uses Writable to be able to set MapReduce counters.  We 
> should remove this and use protobuf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7370) Remove Writable From ScanMetrics.

2012-12-17 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-7370:
-

Priority: Critical  (was: Major)

> Remove Writable From ScanMetrics.
> -
>
> Key: HBASE-7370
> URL: https://issues.apache.org/jira/browse/HBASE-7370
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
>
> Right now ScanMetrics uses Writable to be able to set MapReduce counters.  We 
> should remove this and use protobuf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7370) Remove Writable From ScanMetrics.

2012-12-17 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-7370:
-

Component/s: metrics
 mapreduce

> Remove Writable From ScanMetrics.
> -
>
> Key: HBASE-7370
> URL: https://issues.apache.org/jira/browse/HBASE-7370
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, metrics
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
>
> Right now ScanMetrics uses Writable to be able to set MapReduce counters.  We 
> should remove this and use protobuf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5416) Improve performance of scans with some kind of filters.

2012-12-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534293#comment-13534293
 ] 

Hadoop QA commented on HBASE-5416:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12561329/HBASE-5416-v9.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
additional warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 26 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are zombie tests. See build 
logs for details.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3573//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3573//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3573//console

This message is automatically generated.

> Improve performance of scans with some kind of filters.
> ---
>
> Key: HBASE-5416
> URL: https://issues.apache.org/jira/browse/HBASE-5416
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters, Performance, regionserver
>Affects Versions: 0.90.4
>Reporter: Max Lapan
>Assignee: Max Lapan
> Fix For: 0.96.0
>
> Attachments: 5416-Filtered_scans_v6.patch, 5416-v5.txt, 5416-v6.txt, 
> Filtered_scans.patch, Filtered_scans_v2.patch, Filtered_scans_v3.patch, 
> Filtered_scans_v4.patch, Filtered_scans_v5.1.patch, Filtered_scans_v5.patch, 
> Filtered_scans_v7.patch, HBASE-5416-v7-rebased.patch, HBASE-5416-v8.patch, 
> HBASE-5416-v9.patch
>
>
> When a scan is performed, the whole row is loaded into the result list, and after 
> that the filter (if one exists) is applied to decide whether the row is needed.
> But when a scan is performed over several CFs and the filter checks only data from 
> a subset of these CFs, the data from the CFs not checked by the filter is not needed 
> at the filter stage; it is only needed once we decide to include the current row. In 
> such a case we can significantly reduce the amount of IO performed by a scan by 
> loading only the values actually checked by the filter.
> For example, we have two CFs: flags and snap. Flags is quite small (a bunch of 
> megabytes) and is used to filter large entries from snap. Snap is very large 
> (10s of GB) and it is quite costly to scan. If we need only rows with 
> some flag set, we use SingleColumnValueFilter to limit the result to a 
> small subset of the region. But the current implementation loads both CFs to 
> perform the scan, when only a small subset is needed.
> The attached patch adds one routine to the Filter interface to allow a filter to 
> specify which CFs are needed for its operation. In HRegion, we separate all 
> scanners into two groups: those needed for the filter and the rest (joined). When a 
> new row is considered, only the needed data is loaded and the filter applied; only 
> if the filter accepts the row is the rest of the data loaded. On our data, this 
> speeds up such scans 30-50 times. It also gives us a way to better 
> normalize the data into separate columns by optimizing the scans performed.
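A small sketch of the scan from the example above, using the standard client API; the CF names come from the description, while the qualifier and value are illustrative:

{code}
// Illustrative scan: filter on the small "flags" CF, include the large "snap" CF only for accepted rows.
Scan scan = new Scan();
scan.addFamily(Bytes.toBytes("flags"));
scan.addFamily(Bytes.toBytes("snap"));
scan.setFilter(new SingleColumnValueFilter(
    Bytes.toBytes("flags"),            // family the filter actually needs
    Bytes.toBytes("keep"),             // illustrative qualifier
    CompareFilter.CompareOp.EQUAL,
    Bytes.toBytes("1")));              // illustrative value
// With the patch, only the "flags" scanners are consulted during filtering; the
// "snap" scanners are joined in lazily once a row passes the filter.
{code}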

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (HBASE-7369) HConnectionManager should remove aborted connections

2012-12-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534320#comment-13534320
 ] 

Hadoop QA commented on HBASE-7369:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12561341/patch2.diff
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
additional warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 26 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.client.TestFromClientSideWithCoprocessor

 {color:red}-1 core zombie tests{color}.  There are zombie tests. See build 
logs for details.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3572//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3572//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3572//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3572//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3572//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3572//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3572//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3572//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3572//console

This message is automatically generated.

> HConnectionManager should remove aborted connections
> 
>
> Key: HBASE-7369
> URL: https://issues.apache.org/jira/browse/HBASE-7369
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 0.94.3
>Reporter: Bryan Baugher
>Priority: Minor
> Attachments: patch2.diff, patch.diff
>
>
> When an HConnection is abort()'ed (i.e. if numerous services are lost) the 
> connection becomes unusable. HConnectionManager cache of HConnections 
> currently does not have any logic around removing aborted connections 
> automatically. Currently it is up to the consumer to do so using 
> HConnectionManager.deleteStaleConnection(HConnection).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7349) Jenkins build should compare trunk vs patch for Javadoc warnings

2012-12-17 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534328#comment-13534328
 ] 

nkeywal commented on HBASE-7349:


Yep, I actually used IntelliJ to detect the warnings, it seems maven asks for 
more. I will do an addendum.
About checking on a different branch, I would personally prefer to save the 
3 minutes and keep/drive the javadoc errors to zero. It's a little bit extreme 
(especially as a return without a comment shows up as an error), but in other 
cases a bad javadoc is not nice from an end user point of view.
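For readers less familiar with these warnings, a small hypothetical example of the two most common kinds in the list earlier in this thread (a bad {@link} target and an empty @return tag); the class and method are made up:

{code}
// Hypothetical class showing the warning-producing patterns and their fixes.
public class JavadocWarningExamples {
  /**
   * Bad: {@link #frobnicate(int, String)} -- no such overload, so javadoc warns
   * "Tag @link: can't find frobnicate(int, String)".  Good: {@link #frobnicate(int)}.
   *
   * @return the frobnicated value (leaving "@return" with no text also warns)
   */
  public int frobnicate(int x) {
    return x;
  }
}
{code}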


> Jenkins build should compare trunk vs patch for Javadoc warnings
> 
>
> Key: HBASE-7349
> URL: https://issues.apache.org/jira/browse/HBASE-7349
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: 7349-build-improve-javadoc-warnings.0.diff
>
>
> The javadoc check should look for an increase in the number of warnings. It 
> can do so by running javadoc against trunk before running it for the patch. 
> This will increase build times.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534341#comment-13534341
 ] 

Jonathan Hsieh commented on HBASE-7367:
---

One argument from the RB is that, since this is on the snapshot dev branch, IMO 
[~andrew.purt...@gmail.com]'s concern would be a valid follow-on issue, one that 
would be a blocker to resolve before merging the branch to 
trunk.

The first cut posted on RB basically says snapshots are not supported when 
security is on.  This patch's single concern is adding the cp hooks; we'd come 
up with something better before merging to trunk (maybe remove the sec coproc 
portions).  The expectation here is that there would be a follow-on that 
implements a simple, sane default.

Andrew suggests that we use global admin privs and that only folks with admin 
privs would be able to clone/restore.  The admins also would be responsible for 
adding acls to the clones if they were a concern.  This seems simple and 
sufficient from my point of view.

[~andrew.purt...@gmail.com]: my question now is -- if we add the global admin 
privs checks, would this be sufficient functionality for when attempting to 
merge to trunk?  

If it is, then hooray, I think we've got what we need to merge to trunk.  I 
personally prefer one-patch-one-concern (new snapshot cp hooks minus the 
security cp policy, then another that implements the security+snapshot cp 
policy), but this should be manageable enough to do in one patch with multiple 
concerns.

If it isn't, then we need to discuss what semantics and functionality would 
unblock the eventual merge to trunk.

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle the snapshot/restore/clone.
> Is saving and restoring the _acl_-related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534341#comment-13534341
 ] 

Jonathan Hsieh edited comment on HBASE-7367 at 12/17/12 9:50 PM:
-

One argument from the RB is that since this is on the snapshot dev branch, IMO 
[~andrew.purt...@gmail.com]'s concern would be a valid follow-on issue, one that 
would be a blocker to resolve before merging the branch to trunk.

The first cut posted on RB basically says snapshots are not supported when 
security is on.  This patch's single concern is adding the cp hooks; we'd come 
up with something better before merging to trunk (maybe remove the sec coproc 
portions).  The expectation here is that there would be a follow-on that 
implements a simple, sane default.

Andrew suggests that we use global admin privs and that only folks with admin 
privs would be able to clone/restore.  The admins also would be responsible for 
adding acls to the clones if they were a concern.  This seems simple and 
sufficient from my point of view.

[~andrew.purt...@gmail.com]: my question now is -- if we add the global admin 
privs checks, would this be sufficient functionality for when attempting to 
merge to trunk?  

If it is, then hooray, I think we know what we need to do to merge to trunk. 
 I personally prefer one-patch-one-concern (new snapshot cp hooks minus the 
security cp policy, then another that implements the security+snapshot cp 
policy), but this should be manageable enough to do in one patch with multiple 
concerns.

If it isn't, then we need to discuss what semantics and functionality would 
unblock the eventual merge to trunk.

  was (Author: jmhsieh):
One argument from the RB is that this is on the snapshot dev branch that 
IMO [~andrew.purt...@gmail.com]'s concern is would be a valid follow on issue 
that would  be a blocker that needed to be resolved before merging the branch 
to trunk.

The first cut posted on RB basically says, snapshots not supported when 
security is on.  This patch's single concern is adding the cp hooks we'd come 
up with something better before merging to trunk. (maybe remove the  sec coproc 
portions).  The expectation here is that there would be a follow on that 
implements a simple sane default.

Andrew suggests that we use global admin privs and that only folks with admin 
privs would be able to clone/restore.  The admins also would be responsible for 
adding acls to the clones if they were a concern.  This seems simple and 
sufficient from my point of view.

[~andrew.purt...@gmail.com]: my question now is -- if we add the global admin 
privs checks, would this be sufficient functionality for when attempting to 
merge to trunk?  

If it is, then hooray, I think we've got what we need to merge to trunk.  I 
personally prefer one-patch-one-concern (new snapshot cp hooks minus the 
security cp policy, then another that implements the security+snapshot cp 
policy), but this should be manageable enough to do in one patch with multiple 
concerns.

If it isn't, then we need to discuss what semantics and functionality would 
unblock the eventual merge to trunk.
  
> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_ related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6775) Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix

2012-12-17 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534349#comment-13534349
 ] 

Lars Hofhansl commented on HBASE-6775:
--

My comment from 11/11 should read: we shouldn't sweat a patch. If we have a 
patch, it would certainly be a good improvement for 0.94.

> Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix
> --
>
> Key: HBASE-6775
> URL: https://issues.apache.org/jira/browse/HBASE-6775
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 0.94.2
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 0.94.4
>
> Attachments: HBASE-6775-v2.patch
>
>
> HBASE-6710 fixed a 0.92/0.94 compatibility issue by writing two znodes in 
> different formats.
> If a ZK failure occurs between the writing of the two znodes, strange 
> behavior can result.
> This issue is a reminder to change these two ZK writes to use ZK.multi when 
> we require ZK 3.4+. 
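
For illustration only, a minimal sketch of what the combined write could look 
like with ZooKeeper's multi API (ZK 3.4+); the znode paths and payloads below 
are made-up placeholders, not the actual HBASE-6710 znodes:
{code}
import java.util.Arrays;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Op;
import org.apache.zookeeper.ZooKeeper;

public class MultiZnodeUpdate {
  // Both setData ops succeed or fail together, so a ZK failure can no longer
  // leave only one of the two formats written.
  static void writeBothFormats(ZooKeeper zk, byte[] oldFormatData, byte[] newFormatData)
      throws KeeperException, InterruptedException {
    zk.multi(Arrays.asList(
        Op.setData("/hbase/example-znode-0.92-format", oldFormatData, -1),
        Op.setData("/hbase/example-znode-0.94-format", newFormatData, -1)));
  }
}
{code}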

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7353) [shell] have list and list_snapshot return jruby string arrays.

2012-12-17 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-7353:
--

   Resolution: Fixed
Fix Version/s: hbase-6055
 Release Note: In the HBase shell, the 'list' command and the 
'list_snapshot' commands now return a jruby array of strings that can be bound 
to jruby variables and used for scripting purposes.
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

> [shell] have list and list_snapshot return jruby string arrays.
> ---
>
> Key: HBASE-7353
> URL: https://issues.apache.org/jira/browse/HBASE-7353
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, shell
>Affects Versions: hbase-6055
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Fix For: hbase-6055
>
> Attachments: hbase-7353.patch
>
>
> It is really convenient to allow commands like list and list_snapshots to 
> return a jruby array of values in the hbase shell.
> It allows for nice things like this:
> {code}
> # drop all tables starting with foo
> list("foo.*").map { |t| disable t; drop t }
> {code}
> or 
> {code}
> # clone all tables that start with bar
> list_snapshots("bar.*").map { |s| clone_snapshot s, s + "-table"}
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7371) Blocksize in TestHFileBlock is unintentionally small

2012-12-17 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created HBASE-7371:


 Summary: Blocksize in TestHFileBlock is unintentionally small
 Key: HBASE-7371
 URL: https://issues.apache.org/jira/browse/HBASE-7371
 Project: HBase
  Issue Type: Sub-task
Reporter: Lars Hofhansl
Priority: Minor


Looking at TestHFileBlock.writeBlocks I see this:
{code}
  for (int j = 0; j < rand.nextInt(500); ++j) {
// This might compress well.
dos.writeShort(i + 1);
dos.writeInt(j + 1);
  }
{code}

The result is probably not what the author intended. {{rand.nextInt(500)}} is 
evaluated during each iteration, which leads to very small block sizes, mostly 
between ~100 and 300 bytes or so.
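
As a standalone illustration (not part of the test), re-drawing the bound on 
every check ends the loop as soon as a single draw is <= j, which usually 
happens after a few dozen iterations rather than the intended up-to-500:
{code}
import java.util.Random;

public class LoopBoundDemo {
  public static void main(String[] args) {
    Random rand = new Random();
    int j = 0;
    // Same shape as the test's loop condition: the bound is a fresh random
    // number on every check, so the loop usually terminates very early.
    while (j < rand.nextInt(500)) {
      j++;
    }
    // The test writes 6 bytes (short + int) per iteration, so a few dozen
    // iterations yield blocks of only a few hundred bytes.
    System.out.println("iterations: " + j);
  }
}
{code}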

The author probably intended this:
{code}
  int size = rand.nextInt(500);
  for (int j = 0; j < size; ++j) {
// This might compress well.
dos.writeShort(i + 1);
dos.writeInt(j + 1);
  }
{code}

This leads to more reasonable block sizes, between ~200 and 3000 bytes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-7336) HFileBlock.readAtOffset does not work well with multiple threads

2012-12-17 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-7336.
--

Resolution: Fixed

I reapplied the patch.
(the other anomaly of this test is that all data fits into a single HDFS block, 
so of course seek+read will be favored here)

> HFileBlock.readAtOffset does not work well with multiple threads
> 
>
> Key: HBASE-7336
> URL: https://issues.apache.org/jira/browse/HBASE-7336
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7336-0.94.txt, 7336-0.96.txt
>
>
> HBase grinds to a halt when many threads scan along the same set of blocks 
> and neither read short circuit nor block caching is enabled for the dfs 
> client ... disabling the block cache makes sense on very large scans.
> It turns out that synchronizing on istream in HFileBlock.readAtOffset is the 
> culprit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6775) Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix

2012-12-17 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534386#comment-13534386
 ] 

Gregory Chanan commented on HBASE-6775:
---

Alright, going to check in later today if no objections.

> Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix
> --
>
> Key: HBASE-6775
> URL: https://issues.apache.org/jira/browse/HBASE-6775
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 0.94.2
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>Priority: Minor
> Fix For: 0.94.4
>
> Attachments: HBASE-6775-v2.patch
>
>
> HBASE-6710 fixed a 0.92/0.94 compatibility issue by writing two znodes in 
> different formats.
> If a ZK failure occurs between the writing of the two znodes, strange 
> behavior can result.
> This issue is a reminder to change these two ZK writes to use ZK.multi when 
> we require ZK 3.4+. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534389#comment-13534389
 ] 

Andrew Purtell commented on HBASE-7367:
---

I'm also saying that the global admin check is so trivial it seems strange if 
you would not entertain this feedback even on your branch.

bq. my question now is – if we add the global admin privs checks, would this be 
sufficient functionality for when attempting to merge to trunk?

In my opinion, yes. 

I'd want to dig deeper afterward though as a follow up JIRA. Having ACL 
reconstruction after a clone or restore be a manual task, even if described in 
detail in the docs, would be a pain. I'd want to see how/if the 
AccessController could cooperate to make that easier.

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_ related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7372) Check in the generated website so can point apache infrastructure at what to publish as our hbase.apache.org

2012-12-17 Thread stack (JIRA)
stack created HBASE-7372:


 Summary: Check in the generated website so can point apache 
infrastructure at what to publish as our hbase.apache.org
 Key: HBASE-7372
 URL: https://issues.apache.org/jira/browse/HBASE-7372
 Project: HBase
  Issue Type: Bug
Reporter: stack


January 1st is the deadline for changing how we publish our website.  We may no 
longer rsync out to people.apache.org.  Apache infrastructure supplies two 
options here: http://www.apache.org/dev/project-site.html  We could redo our 
site in apache cms format.  Or we could just use svnpubsub and keep on w/ how 
the site is currently generated and on checkin, have it autopublished.  I'll go 
the latter route unless I hear otherwise.

For svnpubsub, we need to point apache infrastructure at a directory that has 
our checkedin site in it.  I was thinking ${hbasedir}/hbase.apache.org

Let me raise this on the dev list too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7372) Check in the generated website so can point apache infrastructure at what to publish as our hbase.apache.org

2012-12-17 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534399#comment-13534399
 ] 

Nick Dimiduk commented on HBASE-7372:
-

What will the workflow look like? Commit docbook changes into the existing svn 
path, generate the site into the new path, and commit the results? That seems 
tedious. Then again, maybe this is better than maintaining the raw HTML 
manually?

> Check in the generated website so can point apache infrastructure at what to 
> publish as our hbase.apache.org
> 
>
> Key: HBASE-7372
> URL: https://issues.apache.org/jira/browse/HBASE-7372
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>
> January 1st is the deadline for changing how we publish our website.  We may no 
> longer rsync out to people.apache.org.  Apache infrastructure supplies two 
> options here: http://www.apache.org/dev/project-site.html  We could redo our 
> site in apache cms format.  Or we could just use svnpubsub and keep on w/ how 
> the site is currently generated and on checkin, have it autopublished.  I'll 
> go the latter route unless I hear otherwise.
> For svnpubsub, we need to point apache infrastructure at a directory that has 
> our checkedin site in it.  I was thinking ${hbasedir}/hbase.apache.org
> Let me raise this on the dev list too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6423) Writes should not block reads on blocking updates to memstores

2012-12-17 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534398#comment-13534398
 ] 

Jimmy Xiang commented on HBASE-6423:


@binlijin, yes, it is safe to use the old style for this one. Can you file a 
jira and post a patch? Thanks.

> Writes should not block reads on blocking updates to memstores
> --
>
> Key: HBASE-6423
> URL: https://issues.apache.org/jira/browse/HBASE-6423
> Project: HBase
>  Issue Type: Bug
>Reporter: Karthik Ranganathan
>Assignee: Jimmy Xiang
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 0.94-6423.patch, 0.94-6423_v4.patch, 6423.addendum, 
> trunk-6423.patch, trunk-6423_v2.1.patch, trunk-6423_v2.patch, 
> trunk-6423_v3.2.patch, trunk-6423_v3.3.patch, trunk-6423_v3.4.patch, 
> trunk-6423_v4.patch
>
>
> We have a big data use case where we turn off WAL and have a ton of reads and 
> writes. We found that:
> 1. flushing a memstore takes a while (GZIP compression)
> 2. incoming writes cause the new memstore to grow in an unbounded fashion
> 3. this triggers blocking memstore updates
> 4. in turn, this causes all the RPC handler threads to block on writes to 
> that memstore
> 5. we are not able to read during this time as RPC handlers are blocked
> At a higher level, we should not hold up the RPC threads while blocking 
> updates, and we should build in some sort of rate control.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7336) HFileBlock.readAtOffset does not work well with multiple threads

2012-12-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534423#comment-13534423
 ] 

Hudson commented on HBASE-7336:
---

Integrated in HBase-0.94 #635 (See 
[https://builds.apache.org/job/HBase-0.94/635/])
HBASE-7336 Reapply, the OOMs were not caused by this. (Revision 1423084)

 Result = FAILURE
larsh : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java


> HFileBlock.readAtOffset does not work well with multiple threads
> 
>
> Key: HBASE-7336
> URL: https://issues.apache.org/jira/browse/HBASE-7336
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7336-0.94.txt, 7336-0.96.txt
>
>
> HBase grinds to a halt when many threads scan along the same set of blocks 
> and neither read short circuit nor block caching is enabled for the dfs 
> client ... disabling the block cache makes sense on very large scans.
> It turns out that synchronizing on istream in HFileBlock.readAtOffset is the 
> culprit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7338) Fix flaky condition for org.apache.hadoop.hbase.TestRegionRebalancing.testRebalanceOnRegionServerNumberChange

2012-12-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534425#comment-13534425
 ] 

Hudson commented on HBASE-7338:
---

Integrated in HBase-0.94 #635 (See 
[https://builds.apache.org/job/HBase-0.94/635/])
HBASE-7338 Fix flaky condition for 
org.apache.hadoop.hbase.TestRegionRebalancing.testRebalanceOnRegionServerNumberChange
 (Revision 1423115)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java


> Fix flaky condition for 
> org.apache.hadoop.hbase.TestRegionRebalancing.testRebalanceOnRegionServerNumberChange
> -
>
> Key: HBASE-7338
> URL: https://issues.apache.org/jira/browse/HBASE-7338
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.94.3, 0.96.0
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
> Attachments: HBASE-7338.patch
>
>
> The balancer doesn't run if a region is in transition. The check to 
> confirm whether all regions are assigned looks for a region count > 22, while 
> the total number of regions is 27. This may result in a failure:
> {code}
> java.lang.AssertionError: After 5 attempts, region assignments were not 
> balanced.
>   at org.junit.Assert.fail(Assert.java:93)
>   at 
> org.apache.hadoop.hbase.TestRegionRebalancing.assertRegionsAreBalanced(TestRegionRebalancing.java:203)
>   at 
> org.apache.hadoop.hbase.TestRegionRebalancing.testRebalanceOnRegionServerNumberChange(TestRegionRebalancing.java:123)
> .
> 2012-12-11 13:47:02,231 INFO  [pool-1-thread-1] 
> hbase.TestRegionRebalancing(120): Added fourth 
> server=p0118.mtv.cloudera.com,44414,1355262422083
> 2012-12-11 13:47:02,231 INFO  
> [RegionServer:3;p0118.mtv.cloudera.com,44414,1355262422083] 
> regionserver.HRegionServer(3769): Registered RegionServer MXBean
> 2012-12-11 13:47:02,231 DEBUG [pool-1-thread-1] master.HMaster(987): Not 
> running balancer because 1 region(s) in transition: 
> {c786446fb2542f190e937057cdc79d9d=test,kkk,1355262401365.c786446fb2542f190e937057cdc79d9d.
>  state=OPENING, ts=1355262421037, 
> server=p0118.mtv.cloudera.com,54281,1355262419765}
> 2012-12-11 13:47:02,232 DEBUG [pool-1-thread-1] 
> hbase.TestRegionRebalancing(165): There are 4 servers and 26 regions. Load 
> Average: 13.0 low border: 9, up border: 16; attempt: 0
> 2012-12-11 13:47:02,232 DEBUG [pool-1-thread-1] 
> hbase.TestRegionRebalancing(171): p0118.mtv.cloudera.com,51590,1355262395329 
> Avg: 13.0 actual: 11
> 2012-12-11 13:47:02,232 DEBUG [pool-1-thread-1] 
> hbase.TestRegionRebalancing(171): p0118.mtv.cloudera.com,52987,1355262407916 
> Avg: 13.0 actual: 15
> 2012-12-11 13:47:02,233 DEBUG [pool-1-thread-1] 
> hbase.TestRegionRebalancing(171): p0118.mtv.cloudera.com,48044,1355262421787 
> Avg: 13.0 actual: 0
> 2012-12-11 13:47:02,233 DEBUG [pool-1-thread-1] 
> hbase.TestRegionRebalancing(179): p0118.mtv.cloudera.com,48044,1355262421787 
> Isn't balanced!!! Avg: 13.0 actual: 0 slop: 0.2
> 2012-12-11 13:47:12,233 DEBUG [pool-1-thread-1] master.HMaster(987): Not 
> running balancer because 1 region(s) in transition: 
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7342) Split operation without split key incorrectly finds the middle key in off-by-one error

2012-12-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534424#comment-13534424
 ] 

Hudson commented on HBASE-7342:
---

Integrated in HBase-0.94 #635 (See 
[https://builds.apache.org/job/HBase-0.94/635/])
HBASE-7342 Split operation without split key incorrectly finds the middle 
key in off-by-one error (Aleksandr Shulman) (Revision 1423112)

 Result = FAILURE
tedyu : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java


> Split operation without split key incorrectly finds the middle key in 
> off-by-one error
> --
>
> Key: HBASE-7342
> URL: https://issues.apache.org/jira/browse/HBASE-7342
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, io
>Affects Versions: 0.94.1, 0.94.2, 0.94.3, 0.96.0
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7342-0.94.txt, 7342-trunk-v3.txt, HBASE-7342-v1.patch, 
> HBASE-7342-v2.patch
>
>
> I took a deeper look into issues I was having using region splitting when 
> specifying a region (but not a key for splitting).
> The midkey calculation is off by one, and when there are 2 rows it will pick 
> the 0th one. This causes the firstkey to be the same as midkey and the split 
> will fail. Removing the -1 causes it to work correctly, as per the test I've 
> added.
> Looking into the code here is what goes on:
> 1. Split takes the largest storefile
> 2. It puts all the keys into a 2-dimensional array called blockKeys[][]. Key 
> i resides as blockKeys[i]
> 3. Getting the middle root-level index should yield the key in the middle of 
> the storefile
> 4. In step 3, we see that there is a possible erroneous (-1) to adjust for 
> the 0-offset indexing.
> 5. In a case where there are only 2 blockKeys, this yields the 0th 
> block key. 
> 6. Unfortunately, this is the same block key that 'firstKey' will be.
> 7. This yields the result in HStore.java:1873 ("cannot split because midkey 
> is the same as first or last row")
> 8. Removing the -1 solves the problem (in this case); see the sketch below. 
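
A tiny sketch of the arithmetic (simplified, with hypothetical variable names; 
not the actual HFileBlockIndex code): with only two root-level block keys, 
subtracting one before halving lands on index 0, which is the first key.
{code}
public class MidkeyIndexSketch {
  public static void main(String[] args) {
    int numBlockKeys = 2;                          // two root-level index entries
    int midWithMinusOne = (numBlockKeys - 1) / 2;  // = 0 -> same as the first key
    int midWithoutMinusOne = numBlockKeys / 2;     // = 1 -> a usable middle key
    System.out.println(midWithMinusOne + " vs " + midWithoutMinusOne);
  }
}
{code}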

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6389) Modify the conditions to ensure that Master waits for sufficient number of Region Servers before starting region assignments

2012-12-17 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534435#comment-13534435
 ] 

Himanshu Vashishtha commented on HBASE-6389:


[~saint@gmail.com] What is the reason for having the "incompatible change" 
flag?

> Modify the conditions to ensure that Master waits for sufficient number of 
> Region Servers before starting region assignments
> 
>
> Key: HBASE-6389
> URL: https://issues.apache.org/jira/browse/HBASE-6389
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.94.0, 0.96.0
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
>Priority: Critical
> Fix For: 0.94.3, 0.96.0
>
> Attachments: HBASE-6389_0.94.patch, HBASE-6389_trunk.patch, 
> HBASE-6389_trunk.patch, HBASE-6389_trunk.patch, HBASE-6389_trunk_v2.patch, 
> HBASE-6389_trunk_v2.patch, org.apache.hadoop.hbase.TestZooKeeper-output.txt, 
> testReplication.jstack
>
>
> Continuing from HBASE-6375.
> It seems I was mistaken in my assumption that changing the value of 
> "hbase.master.wait.on.regionservers.mintostart" to a sufficient number (from 
> default of 1) can help prevent assignment of all regions to one (or a small 
> number of) region server(s).
> While this was the case in 0.90.x and 0.92.x, the behavior has changed in 
> 0.94.0 onwards to address HBASE-4993.
> From 0.94.0 onwards, Master will proceed immediately after the timeout has 
> lapsed, even if "hbase.master.wait.on.regionservers.mintostart" has not been 
> reached.
> Reading the current conditions of waitForRegionServers() clarifies it
> {code:title=ServerManager.java (trunk rev:1360470)}
> 
> 581 /**
> 582  * Wait for the region servers to report in.
> 583  * We will wait until one of this condition is met:
> 584  *  - the master is stopped
> 585  *  - the 'hbase.master.wait.on.regionservers.timeout' is reached
> 586  *  - the 'hbase.master.wait.on.regionservers.maxtostart' number of
> 587  *region servers is reached
> 588  *  - the 'hbase.master.wait.on.regionservers.mintostart' is reached 
> AND
> 589  *   there have been no new region server in for
> 590  *  'hbase.master.wait.on.regionservers.interval' time
> 591  *
> 592  * @throws InterruptedException
> 593  */
> 594 public void waitForRegionServers(MonitoredTask status)
> 595 throws InterruptedException {
> 
> 
> 612   while (
> 613 !this.master.isStopped() &&
> 614   slept < timeout &&
> 615   count < maxToStart &&
> 616   (lastCountChange+interval > now || count < minToStart)
> 617 ){
> 
> {code}
> So with the current conditions, the wait will end as soon as the timeout is 
> reached, even if a smaller number of RSs have checked in with the Master, and 
> the master will proceed with region assignment among these RSs alone.
> As mentioned in 
> -[HBASE-4993|https://issues.apache.org/jira/browse/HBASE-4993?focusedCommentId=13237196#comment-13237196]-,
>  and I concur, this could have a disastrous effect in a large cluster, 
> especially now that MSLAB is turned on.
> To enforce the required quorum as specified by 
> "hbase.master.wait.on.regionservers.mintostart" irrespective of timeout, 
> these conditions need to be modified as follows
> {code:title=ServerManager.java}
> ..
>   /**
>* Wait for the region servers to report in.
>* We will wait until one of this condition is met:
>*  - the master is stopped
>*  - the 'hbase.master.wait.on.regionservers.maxtostart' number of
>*region servers is reached
>*  - the 'hbase.master.wait.on.regionservers.mintostart' is reached AND
>*   there have been no new region server in for
>*  'hbase.master.wait.on.regionservers.interval' time AND
>*   the 'hbase.master.wait.on.regionservers.timeout' is reached
>*
>* @throws InterruptedException
>*/
>   public void waitForRegionServers(MonitoredTask status)
> ..
> ..
> int minToStart = this.master.getConfiguration().
> getInt("hbase.master.wait.on.regionservers.mintostart", 1);
> int maxToStart = this.master.getConfiguration().
> getInt("hbase.master.wait.on.regionservers.maxtostart", 
> Integer.MAX_VALUE);
> if (maxToStart < minToStart) {
>   maxToStart = minToStart;
> }
> ..
> ..
> while (
>   !this.master.isStopped() &&
> count < maxToStart &&
> (lastCountChange+interval > now || timeout > slept || count < 
> minToStart)
>   ){
> ..
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (HBASE-7370) Remove Writable From ScanMetrics.

2012-12-17 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-7370:
-

Status: Patch Available  (was: Open)

> Remove Writable From ScanMetrics.
> -
>
> Key: HBASE-7370
> URL: https://issues.apache.org/jira/browse/HBASE-7370
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, metrics
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-7370-0.patch
>
>
> Right now ScanMetrics uses Writable to be able to set MapReduce counters.  We 
> should remove this and use protobuf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7370) Remove Writable From ScanMetrics.

2012-12-17 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-7370:
-

Attachment: HBASE-7370-0.patch

Attaching the first cut so that Jenkins can run all of the tests.

TestFromClientSide ran and passed. 

> Remove Writable From ScanMetrics.
> -
>
> Key: HBASE-7370
> URL: https://issues.apache.org/jira/browse/HBASE-7370
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, metrics
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-7370-0.patch
>
>
> Right now ScanMetrics uses Writable to be able to set MapReduce counters.  We 
> should remove this and use protobuf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7370) Remove Writable From ScanMetrics.

2012-12-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534441#comment-13534441
 ] 

stack commented on HBASE-7370:
--

In hbase.proto we have NameStringPair and NameBytesPair... You add 
NameInt64Pair here in your MR.proto. Should it go in hbase.proto instead?  (No 
biggie).

+1 on patch (Elliott said he'd add comments on why the new preference for 
AtomicLong over MetricsTimeVaryingLong).
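
For illustration, a rough sketch (made-up names, not the actual patch) of 
keeping the counters in AtomicLongs and exposing them as plain name/value pairs 
that could then be serialized as repeated NameInt64Pair messages:
{code}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

public class ScanMetricsSketch {
  private final AtomicLong rpcCalls = new AtomicLong();
  private final AtomicLong bytesInResults = new AtomicLong();

  public void countRpcCall() { rpcCalls.incrementAndGet(); }

  public void addBytesInResults(long bytes) { bytesInResults.addAndGet(bytes); }

  /** Snapshot of the counters, ready to be turned into NameInt64Pair messages. */
  public Map<String, Long> snapshot() {
    Map<String, Long> counters = new LinkedHashMap<String, Long>();
    counters.put("RPC_CALLS", rpcCalls.get());
    counters.put("BYTES_IN_RESULTS", bytesInResults.get());
    return counters;
  }
}
{code}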

> Remove Writable From ScanMetrics.
> -
>
> Key: HBASE-7370
> URL: https://issues.apache.org/jira/browse/HBASE-7370
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, metrics
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-7370-0.patch
>
>
> Right now ScanMetrics uses Writable to be able to set MapReduce counters.  We 
> should remove this and use protobuf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534443#comment-13534443
 ] 

Matteo Bertozzi commented on HBASE-7367:


[~andrew.purt...@gmail.com] one question, without thinking about snapshots for 
one moment.

I'm a GLOBAL ADMIN, I create a table. 
The table is enabled by default (every one can now write on it)
meanwhile I set the permission... (too late someone has already polluted the 
table)

is that a problem? how do you solve that?

This is my main concern with the "clone from snapshot". Since I create a new 
table with the snapshot data and no ACLs, while I am still setting the 
permissions someone could read my data that should be protected.

If you have a workaround, or from your experience you think this is not a real 
problem, I'm +1 for the global admin check instead of disabling the feature 
when the ACL coprocessor is enabled.

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_ related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7373) table should not be required in AccessControlService

2012-12-17 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-7373:
--

 Summary: table should not be required in AccessControlService
 Key: HBASE-7373
 URL: https://issues.apache.org/jira/browse/HBASE-7373
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Priority: Minor


We should fix the proto file, add a unit test for this case, and verify it 
works from the hbase shell with the table set to nil.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6887) Convert security-related shell commands to use PB-based AccessControlService

2012-12-17 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1353#comment-1353
 ] 

Jimmy Xiang commented on HBASE-6887:


Also filed HBASE-7373.  Currently, from the hbase shell, user_permission 
doesn't work if a table is not specified.

> Convert security-related shell commands to use PB-based AccessControlService
> 
>
> Key: HBASE-6887
> URL: https://issues.apache.org/jira/browse/HBASE-6887
> Project: HBase
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.96.0
>Reporter: Gary Helmling
>Assignee: Jimmy Xiang
>
> The security-related HBase shell commands (grant, revoke, user_permission) 
> are still using the old CoprocessorProtocol-based AccessControllerProtocol 
> endpoint for dynamic RPC.  These need to be converted to use the protocol 
> buffer based AccessControlService interface added in HBASE-5448.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534445#comment-13534445
 ] 

Andrew Purtell commented on HBASE-7367:
---

bq. I'm a GLOBAL ADMIN, I create a table.  The table is enabled by default 
(every one can now write on it) meanwhile I set the permission... (too late 
someone has already polluted the table)

Unless you grant access to this table, nobody but you as creator or ADMINs can 
write on it.

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_ related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534445#comment-13534445
 ] 

Andrew Purtell edited comment on HBASE-7367 at 12/18/12 12:05 AM:
--

bq. I'm a GLOBAL ADMIN, I create a table.  The table is enabled by default 
(every one can now write on it) meanwhile I set the permission... (too late 
someone has already polluted the table)

Unless you grant access to this table, nobody but you as creator or ADMINs can 
write on it.

They can't read it either.

  was (Author: apurtell):
bq. I'm a GLOBAL ADMIN, I create a table.  The table is enabled by default 
(every one can now write on it) meanwhile I set the permission... (too late 
someone has already polluted the table)

Unless you grant access to this table, nobody but you as creator or ADMINs can 
write on it.
  
> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_ related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6887) Convert security-related shell commands to use PB-based AccessControlService

2012-12-17 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534446#comment-13534446
 ] 

Jimmy Xiang commented on HBASE-6887:


I posted a patch on RB: https://reviews.apache.org/r/8652/

> Convert security-related shell commands to use PB-based AccessControlService
> 
>
> Key: HBASE-6887
> URL: https://issues.apache.org/jira/browse/HBASE-6887
> Project: HBase
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.96.0
>Reporter: Gary Helmling
>Assignee: Jimmy Xiang
>
> The security-related HBase shell commands (grant, revoke, user_permission) 
> are still using the old CoprocessorProtocol-based AccessControllerProtocol 
> endpoint for dynamic RPC.  These need to be converted to use the protocol 
> buffer based AccessControlService interface added in HBASE-5448.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534447#comment-13534447
 ] 

Matteo Bertozzi commented on HBASE-7367:


oh ok, so I've missed a change, I remember something like no rows in _acl_ 
means everyone has access.

So I think it is good with the global admin check.
I'll test it and send the patch with the check.
Thanks!
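
For illustration only, a minimal sketch (hypothetical class and method names, 
not the forthcoming patch) of the kind of check being discussed, where 
snapshot, clone and restore requests are rejected unless the caller holds 
global ADMIN permission:
{code}
public class SnapshotAccessCheck {

  /** Assumed lookup against the ACL data; not a real HBase interface. */
  interface PermissionChecker {
    boolean hasGlobalAdmin(String user);
  }

  private final PermissionChecker checker;

  public SnapshotAccessCheck(PermissionChecker checker) {
    this.checker = checker;
  }

  /** Would be called from the pre-snapshot/pre-clone/pre-restore hooks. */
  public void requireGlobalAdmin(String user, String operation) {
    if (!checker.hasGlobalAdmin(user)) {
      throw new SecurityException("Access denied for user " + user
          + "; global ADMIN permission is required for " + operation);
    }
  }
}
{code}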

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_ related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7367) Snapshot coprocessor and ACL security

2012-12-17 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534449#comment-13534449
 ] 

Andrew Purtell commented on HBASE-7367:
---

bq. I remember something like no rows in _acl_ means everyone has access.

Hmm.. We are supposed to have a default deny policy except for ROOT and META, 
which are special cased. If this is not the case, this is a big problem.

> Snapshot coprocessor and ACL security
> -
>
> Key: HBASE-7367
> URL: https://issues.apache.org/jira/browse/HBASE-7367
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master, regionserver, snapshots, Zookeeper
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: hbase-6055, 0.96.0
>
> Attachments: HBASE-7367-v0.patch
>
>
> Currently snapshots don't care about ACLs...
> and in the first draft snapshots should be disabled if the ACL coprocessor is 
> enabled.
> After the first step, we can discuss how to handle snapshot/restore/clone.
> Is saving and restoring the _acl_ related rights the right way? Maybe after 
> 3 months we don't want to give access to the users listed in the old _acl_...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4791) Allow Secure Zookeeper JAAS configuration to be programmatically set (rather than only by reading JAAS configuration file)

2012-12-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534451#comment-13534451
 ] 

stack commented on HBASE-4791:
--

The two tests pass for me locally.

Calling the below with a null from MasterCommandLine will give us what, 
[~mbertozzi]?

+String principalName = SecurityUtil.getServerPrincipal(principalConfig, 
hostname);

Also, will it be a problem if we call the below in the regionserver constructor:

+ZKUtil.loginClient(this.conf, "hbase.zookeeper.client.keytab.file",
+  "hbase.zookeeper.client.kerberos.principal", this.isa.getHostName());

... and then, for whatever reason, after we report for duty to the master, it 
tells us to use another name... See the code in handleReportForDutyResponse 
around #1156?

Maybe the name the regionserver uses when talking to the master is unrelated 
to the one used here when we are registering a principal?

FYI, you need curly braces even if only one line follows the if clause ... i.e. 
the checks you do in login that are followed by a single line with 'return' on 
it.  I can fix these on commit, np.

> Allow Secure Zookeeper JAAS configuration to be programmatically set (rather 
> than only by reading JAAS configuration file)
> --
>
> Key: HBASE-4791
> URL: https://issues.apache.org/jira/browse/HBASE-4791
> Project: HBase
>  Issue Type: Improvement
>  Components: security, Zookeeper
>Reporter: Eugene Koontz
>Assignee: Matteo Bertozzi
>  Labels: security, zookeeper
> Attachments: DemoConfig.java, HBASE-4791-v1.patch, 
> HBASE-4791-v2.patch, HBASE-4791-v3.patch, HBASE-4791-v4-0.94.patch, 
> HBASE-4791-v4.patch, HBASE-4791-v4.patch
>
>
> In the currently proposed fix for HBASE-2418, there must be a JAAS file 
> specified in System.setProperty("java.security.auth.login.config"). 
> However, it might be preferable to construct a JAAS configuration 
> programmatically, as is done with secure Hadoop (see 
> https://github.com/apache/hadoop-common/blob/a48eceb62c9b5c1a5d71ee2945d9eea2ed62527b/src/java/org/apache/hadoop/security/UserGroupInformation.java#L175).
> This would have the benefit of avoiding a usage of a system property setting, 
> and allow instead an HBase-local configuration setting. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4791) Allow Secure Zookeeper JAAS configuration to be programmatically set (rather than only by reading JAAS configuration file)

2012-12-17 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534459#comment-13534459
 ] 

Matteo Bertozzi commented on HBASE-4791:


hostname null resolves to getLocalHost() inside 
SecurityUtil.getServerPrincipal().

{code}
   * The hostname can differ from the hostname in {@link #isa}
   * but usually doesn't if both servers resolve .
   */
  private ServerName serverNameFromMasterPOV;
{code}
Yes, isa.getHostName() can be a different name from the serverNameFromMasterPOV 
name, and maybe it is not even the one that we expect for the login... but this 
is just a default value if the user doesn't specify anything.
So, since this property is just a mirror of the jaas.conf file, I expect it to 
have the same user@host as the jaas.conf.


> Allow Secure Zookeeper JAAS configuration to be programmatically set (rather 
> than only by reading JAAS configuration file)
> --
>
> Key: HBASE-4791
> URL: https://issues.apache.org/jira/browse/HBASE-4791
> Project: HBase
>  Issue Type: Improvement
>  Components: security, Zookeeper
>Reporter: Eugene Koontz
>Assignee: Matteo Bertozzi
>  Labels: security, zookeeper
> Attachments: DemoConfig.java, HBASE-4791-v1.patch, 
> HBASE-4791-v2.patch, HBASE-4791-v3.patch, HBASE-4791-v4-0.94.patch, 
> HBASE-4791-v4.patch, HBASE-4791-v4.patch
>
>
> In the currently proposed fix for HBASE-2418, there must be a JAAS file 
> specified in System.setProperty("java.security.auth.login.config"). 
> However, it might be preferable to construct a JAAS configuration 
> programmatically, as is done with secure Hadoop (see 
> https://github.com/apache/hadoop-common/blob/a48eceb62c9b5c1a5d71ee2945d9eea2ed62527b/src/java/org/apache/hadoop/security/UserGroupInformation.java#L175).
> This would have the benefit of avoiding a usage of a system property setting, 
> and allow instead an HBase-local configuration setting. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7115) [shell] Provide a way to register custom filters with the Filter Language Parser

2012-12-17 Thread Aditya Kishore (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534460#comment-13534460
 ] 

Aditya Kishore commented on HBASE-7115:
---

Hi [~stack], could you please pull this patch in?

> [shell] Provide a way to register custom filters with the Filter Language 
> Parser
> 
>
> Key: HBASE-7115
> URL: https://issues.apache.org/jira/browse/HBASE-7115
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters, shell
>Affects Versions: 0.96.0
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
> Fix For: 0.96.0
>
> Attachments: HBASE-7115_trunk.patch
>
>
> HBASE-5428 added this capability to the thrift interface but the configuration 
> parameter name is "thrift"-specific.
> This patch introduces a more generic parameter, "hbase.user.filters", with 
> which user-defined custom filters can be specified in the configuration 
> and loaded in any client that needs to use the filter language parser.
> The patch then uses this new parameter to register any user specified filters 
> while invoking the HBase shell.
> Example usage: Let's say I have written a couple of custom filters with class 
> names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
> *{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
> use them from HBase shell using the filter language.
> To do that, I would add the following configuration to {{hbase-site.xml}}
> {panel}{{<property>}}
> {{  <name>hbase.user.filters</name>}}
> {{  <value>}}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}
> {{</value></property>}}{panel}
> Once this is configured, I can launch HBase shell and use these filters in my 
> {{get}} or {{scan}} just the way I would use a built-in filter.
> {code}
> hbase(main):001:0> scan 't', {FILTER => "SuperDuperFilter(true) AND 
> SilverBulletFilter(42)"}
> ROW  COLUMN+CELL
>  status  column=cf:a, 
> timestamp=30438552, value=world_peace
> 1 row(s) in 0. seconds
> {code}
> To use this feature in any client, the client needs to make the following 
> function call as part of its initialization.
> {code}
> ParseFilter.registerUserFilters(configuration);
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6585) Audit log messages should contain info about the higher level operation being executed

2012-12-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534461#comment-13534461
 ] 

stack commented on HBASE-6585:
--

Can you help w/ this Matteo when I try to apply the patch?

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) 
on project hbase-server: Compilation failure: Compilation failure:
[ERROR] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java:[801,8]
 cannot find symbol
[ERROR] symbol  : method 
requirePermission(org.apache.hadoop.hbase.security.access.Permission.Action)
[ERROR] location: class org.apache.hadoop.hbase.security.access.AccessController
[ERROR] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java:[1308,4]
 cannot find symbol
[ERROR] symbol  : method 
requirePermission(org.apache.hadoop.hbase.security.access.Permission.Action)
[ERROR] location: class org.apache.hadoop.hbase.security.access.AccessController
[ERROR] 
/Users/stack/checkouts/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java:[1340,4]
 cannot find symbol
[ERROR] symbol  : method 
requirePermission(org.apache.hadoop.hbase.security.access.Permission.Action)
[ERROR] location: class org.apache.hadoop.hbase.security.access.AccessController

{code}


> Audit log messages should contain info about the higher level operation being 
> executed
> --
>
> Key: HBASE-6585
> URL: https://issues.apache.org/jira/browse/HBASE-6585
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.96.0
>Reporter: Marcelo Vanzin
>Assignee: Matteo Bertozzi
>Priority: Minor
>  Labels: acl
> Fix For: 0.96.0, 0.94.4
>
> Attachments: HBASE-6585-v0.patch, HBASE-6585-v1.patch
>
>
> Currently, audit log messages contains the "action" for which access was 
> checked; this is one of READ, WRITE, CREATE or ADMIN.
> These give very little information to the person digging into the logs about 
> what was done, though. You can't ask "who deleted rows from table x?", 
> because "delete" is translated to a "WRITE" action.
> It would be nice if the audit logs contained the higher-level operation, 
> either replacing or in addition to the RWCA information.
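
As a hedged sketch of what such an entry could look like (illustrative names 
only; this is not the AccessController implementation), the high-level 
operation is logged next to the coarse RWCA action:
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class AuditLogSketch {
  private static final Log AUDIT_LOG = LogFactory.getLog("SecurityLogger");

  static void logAccess(String user, String operation, String action,
      String table, boolean allowed) {
    AUDIT_LOG.info((allowed ? "Access allowed" : "Access denied")
        + " for user " + user + "; operation: " + operation
        + "; action: " + action + "; table: " + table);
  }
}
{code}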

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7361) Fix all javadoc warnings in hbase-server/{,mapreduce}

2012-12-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534465#comment-13534465
 ] 

Hudson commented on HBASE-7361:
---

Integrated in HBase-TRUNK #3629 (See 
[https://builds.apache.org/job/HBase-TRUNK/3629/])
HBASE-7361 Fix all javadoc warnings in hbase-server/{,mapreduce} (Revision 
1423096)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/MasterAdminProtocol.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Action.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/ServerCallable.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRowProcessorEndpoint.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/BinaryPrefixComparator.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/BitComparator.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/ByteArrayComparable.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/Filter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/FilterWrapper.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyValueMatchingQualifiersFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/NullComparator.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/RandomRowFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/SkipFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/SubstringComparator.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/filter/WhileMatchFilter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/FileLink.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HFileLink.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hb

[jira] [Commented] (HBASE-7342) Split operation without split key incorrectly finds the middle key in off-by-one error

2012-12-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534466#comment-13534466
 ] 

Hudson commented on HBASE-7342:
---

Integrated in HBase-TRUNK #3629 (See 
[https://builds.apache.org/job/HBase-TRUNK/3629/])
HBASE-7342 Split operation without split key incorrectly finds the middle 
key in off-by-one error (Aleksandr Shulman) (Revision 1423110)

 Result = FAILURE
tedyu : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java


> Split operation without split key incorrectly finds the middle key in 
> off-by-one error
> --
>
> Key: HBASE-7342
> URL: https://issues.apache.org/jira/browse/HBASE-7342
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, io
>Affects Versions: 0.94.1, 0.94.2, 0.94.3, 0.96.0
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7342-0.94.txt, 7342-trunk-v3.txt, HBASE-7342-v1.patch, 
> HBASE-7342-v2.patch
>
>
> I took a deeper look into issues I was having using region splitting when 
> specifying a region (but not a key for splitting).
> The midkey calculation is off by one and, when there are 2 rows, will pick the 
> 0th one. This causes the firstkey to be the same as midkey and the split will 
> fail. Removing the -1 causes it to work correctly, as per the test I've added.
> Looking into the code here is what goes on:
> 1. Split takes the largest storefile
> 2. It puts all the keys into a 2-dimensional array called blockKeys[][]. Key 
> i resides at blockKeys[i].
> 3. Getting the middle root-level index should yield the key in the middle of 
> the storefile
> 4. In step 3, we see that there is a possible erroneous (-1) to adjust for 
> the 0-offset indexing.
> 5. In a case where there are only 2 blockKeys, this yields the 0th 
> block key. 
> 6. Unfortunately, this is the same block key that 'firstKey' will be.
> 7. This yields the result in HStore.java:1873 ("cannot split because midkey 
> is the same as first or last row")
> 8. Removing the -1 solves the problem (in this case). 
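> For illustration, a minimal sketch of the arithmetic (hypothetical and simplified; 
> the real logic operates on the root-level block index in HFileBlockIndex, not on 
> plain counts):
> {code}
> public class MidKeySketch {
>   // With the erroneous -1, two root entries yield index 0, i.e. the first
>   // key, so midkey == firstKey and the split is rejected.
>   static int midEntryWithOffByOne(int rootEntryCount) {
>     return (rootEntryCount - 1) / 2;   // 2 entries -> 0
>   }
>   static int midEntryFixed(int rootEntryCount) {
>     return rootEntryCount / 2;         // 2 entries -> 1
>   }
>   public static void main(String[] args) {
>     System.out.println(midEntryWithOffByOne(2)); // 0: same as first key
>     System.out.println(midEntryFixed(2));        // 1: a usable midpoint
>   }
> }
> {code}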

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7370) Remove Writable From ScanMetrics.

2012-12-17 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-7370:
-

Attachment: HBASE-7370-1.patch

Patch with comments addressing the questions stack asked.

> Remove Writable From ScanMetrics.
> -
>
> Key: HBASE-7370
> URL: https://issues.apache.org/jira/browse/HBASE-7370
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, metrics
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-7370-0.patch, HBASE-7370-1.patch
>
>
> Right now ScanMetrics uses Writable to be able to set MapReduce counters.  We 
> should remove this and use protobuf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7365) Safer table creation and deletion using .tmp dir

2012-12-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534469#comment-13534469
 ] 

stack commented on HBASE-7365:
--

Makes sense doing stuff in .tmp (which mirrors table .tmp and region .tmp).

In the below... will the archive tool be able to make sense of the deleted table?


+if (fs.exists(tmpdir)) {
+  // Archive table in temp, maybe are failed deletion left over,
+  // if not the cleaner will take care of them.
+  for (Path tabledir: FSUtils.getTableDirs(fs, tmpdir)) {
+for (Path regiondir: FSUtils.getRegionDirs(fs, tabledir)) {
+  HFileArchiver.archiveRegion(fs, this.rootdir, tabledir, regiondir);
+}
+  }
+  fs.delete(tmpdir, true);


Do you want to check the returned value on the above delete?


Why not call checkTempDir rather than do this?

+// Ensure temp exists
+if (!fs.exists(tempdir) && !fs.mkdirs(tempdir)) {
+  throw new IOException("HBase temp directory '" + tempdir + "' creation failure.");


I love it when TODOs get cleaned up...

-// TODO: Currently we make the table descriptor and as side-effect the
-// tableDir is created.  Should we change below method to be createTable
-// where we create table in tmp dir with its table descriptor file and then
-// do rename to move it into place?


Should we remove the old handleCreateTable now we have your fancy new one?  Or 
is it still used?


Patch looks good otherwise Matteo.
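
For reference, a minimal sketch of the delete return-value check suggested above, using the Hadoop FileSystem API (the wrapping class and method are made up for illustration):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative only: FileSystem.delete returns false when the path could not
// be removed, so the result is worth surfacing instead of being dropped.
final class TmpDirCleanupSketch {
  static void deleteTmpDir(FileSystem fs, Path tmpdir) throws IOException {
    if (fs.exists(tmpdir) && !fs.delete(tmpdir, true)) {
      throw new IOException("Failed to delete temp dir " + tmpdir);
    }
  }
}
{code}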

> Safer table creation and deletion using .tmp dir
> 
>
> Key: HBASE-7365
> URL: https://issues.apache.org/jira/browse/HBASE-7365
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 0.96.0
>
> Attachments: HBASE-7365-v0.patch
>
>
> Currently tables are created in the root directory, and the removal works on 
> the root directory.
> Change the code to use a /hbase/.tmp directory to make the creation and 
> removal a bit safer.
> Table Creation steps
>  * Create the table descriptor (table folder, in /hbase/.tmp/)
>  * Create the table regions (always in temp)
>  * Move the table from temp to the root folder
>  * Add the regions to meta
>  * Trigger assignment
>  * Set enable flag in ZooKeeper
> Table Deletion steps
>  * Wait for regions in transition
>  * Remove regions from meta (use bulk delete)
>  * Move the table in /hbase/.tmp
>  * Remove the table from the descriptor cache
>  * Remove table from zookeeper
>  * Archive the table
> The main changes in the current code are:
>  * Writing to /hbase/.tmp and then rename
>  * using bulk delete in DeletionTableHandler
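> A minimal sketch of the write-to-.tmp-then-rename idea described above, against 
> the Hadoop FileSystem API (paths, class, and method names are illustrative, not 
> the patch itself):
> {code}
> import java.io.IOException;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> // Illustrative only: build the table layout under /hbase/.tmp and publish
> // it with a single rename, so a crash mid-creation never leaves a
> // half-built table under the root directory.
> final class CreateViaTmpSketch {
>   static void createTableDir(FileSystem fs, Path rootdir, String table)
>       throws IOException {
>     Path tmpTableDir = new Path(new Path(rootdir, ".tmp"), table);
>     if (!fs.mkdirs(tmpTableDir)) {
>       throw new IOException("Could not create " + tmpTableDir);
>     }
>     // ... write the table descriptor and region dirs under tmpTableDir ...
>     Path tableDir = new Path(rootdir, table);
>     if (!fs.rename(tmpTableDir, tableDir)) {
>       throw new IOException("Could not move " + tmpTableDir + " to " + tableDir);
>     }
>   }
> }
> {code}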

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7370) Remove Writable From ScanMetrics.

2012-12-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534474#comment-13534474
 ] 

stack commented on HBASE-7370:
--

+1

> Remove Writable From ScanMetrics.
> -
>
> Key: HBASE-7370
> URL: https://issues.apache.org/jira/browse/HBASE-7370
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, metrics
>Affects Versions: 0.96.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-7370-0.patch, HBASE-7370-1.patch
>
>
> Right now ScanMetrics uses Writable to be able to set MapReduce counters.  We 
> should remove this and use protobuf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7371) Blocksize in TestHFileBlock is unintentionally small

2012-12-17 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534475#comment-13534475
 ] 

Enis Soztutar commented on HBASE-7371:
--

+1 on the change. 

> Blocksize in TestHFileBlock is unintentionally small
> 
>
> Key: HBASE-7371
> URL: https://issues.apache.org/jira/browse/HBASE-7371
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Minor
>
> Looking at TestHFileBlock.writeBlocks I see this:
> {code}
>   for (int j = 0; j < rand.nextInt(500); ++j) {
> // This might compress well.
> dos.writeShort(i + 1);
> dos.writeInt(j + 1);
>   }
> {code}
> The result is probably not what the author intended. {{rand.nextInt(500)}} is 
> evaluated on each iteration, which leads to very small block sizes, mostly 
> between ~100 and 300 bytes or so.
> The author probably intended this:
> {code}
>   int size = rand.nextInt(500);
>   for (int j = 0; j < size; ++j) {
> // This might compress well.
> dos.writeShort(i + 1);
> dos.writeInt(j + 1);
>   }
> {code}
> This leads to more reasonable block sizes, between ~200 and 3000 bytes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HBASE-7342) Split operation without split key incorrectly finds the middle key in off-by-one error

2012-12-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-7342.
---

Resolution: Fixed

I ran TestScannerTimeout and TestMetaReaderEditor locally - they passed.

> Split operation without split key incorrectly finds the middle key in 
> off-by-one error
> --
>
> Key: HBASE-7342
> URL: https://issues.apache.org/jira/browse/HBASE-7342
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, io
>Affects Versions: 0.94.1, 0.94.2, 0.94.3, 0.96.0
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7342-0.94.txt, 7342-trunk-v3.txt, HBASE-7342-v1.patch, 
> HBASE-7342-v2.patch
>
>
> I took a deeper look into issues I was having using region splitting when 
> specifying a region (but not a key for splitting).
> The midkey calculation is off by one and, when there are 2 rows, will pick the 
> 0th one. This causes the firstkey to be the same as midkey and the split will 
> fail. Removing the -1 causes it to work correctly, as per the test I've added.
> Looking into the code here is what goes on:
> 1. Split takes the largest storefile
> 2. It puts all the keys into a 2-dimensional array called blockKeys[][]. Key 
> i resides at blockKeys[i].
> 3. Getting the middle root-level index should yield the key in the middle of 
> the storefile
> 4. In step 3, we see that there is a possible erroneous (-1) to adjust for 
> the 0-offset indexing.
> 5. In a case where there are only 2 blockKeys, this yields the 0th 
> block key. 
> 6. Unfortunately, this is the same block key that 'firstKey' will be.
> 7. This yields the result in HStore.java:1873 ("cannot split because midkey 
> is the same as first or last row")
> 8. Removing the -1 solves the problem (in this case). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4791) Allow Secure Zookeeper JAAS configuration to be programmatically set (rather than only by reading JAAS configuration file)

2012-12-17 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4791:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for answers.  Thanks for patch Matteo.  Committed to trunk.  
[~lhofhansl] Ok to backport this one?

> Allow Secure Zookeeper JAAS configuration to be programmatically set (rather 
> than only by reading JAAS configuration file)
> --
>
> Key: HBASE-4791
> URL: https://issues.apache.org/jira/browse/HBASE-4791
> Project: HBase
>  Issue Type: Improvement
>  Components: security, Zookeeper
>Reporter: Eugene Koontz
>Assignee: Matteo Bertozzi
>  Labels: security, zookeeper
> Fix For: 0.96.0
>
> Attachments: DemoConfig.java, HBASE-4791-v1.patch, 
> HBASE-4791-v2.patch, HBASE-4791-v3.patch, HBASE-4791-v4-0.94.patch, 
> HBASE-4791-v4.patch, HBASE-4791-v4.patch
>
>
> In the currently proposed fix for HBASE-2418, there must be a JAAS file 
> specified in System.setProperty("java.security.auth.login.config"). 
> However, it might be preferable to construct a JAAS configuration 
> programmatically, as is done with secure Hadoop (see 
> https://github.com/apache/hadoop-common/blob/a48eceb62c9b5c1a5d71ee2945d9eea2ed62527b/src/java/org/apache/hadoop/security/UserGroupInformation.java#L175).
> This would have the benefit of avoiding the use of a system property and 
> would instead allow an HBase-local configuration setting. 
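> A minimal, self-contained sketch of the programmatic approach described above, 
> using the standard javax.security.auth.login API (the class name, option values, 
> and hard-coded Krb5LoginModule are illustrative assumptions, not the attached patch):
> {code}
> import java.util.HashMap;
> import java.util.Map;
> import javax.security.auth.login.AppConfigurationEntry;
> import javax.security.auth.login.Configuration;
>
> // Illustrative only: a JAAS Configuration built in-process from supplied
> // settings, instead of pointing java.security.auth.login.config at a file.
> final class ProgrammaticJaasSketch extends Configuration {
>   private final AppConfigurationEntry[] entries;
>
>   ProgrammaticJaasSketch(String principal, String keytab) {
>     Map<String, String> options = new HashMap<String, String>();
>     options.put("useKeyTab", "true");
>     options.put("keyTab", keytab);
>     options.put("principal", principal);
>     options.put("storeKey", "true");
>     entries = new AppConfigurationEntry[] {
>       new AppConfigurationEntry(
>           "com.sun.security.auth.module.Krb5LoginModule",
>           AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
>           options)
>     };
>   }
>
>   @Override
>   public AppConfigurationEntry[] getAppConfigurationEntry(String appName) {
>     // The ZooKeeper client asks for the "Client" login section by default.
>     return entries;
>   }
> }
>
> // Usage sketch: Configuration.setConfiguration(new ProgrammaticJaasSketch(principal, keytabPath));
> {code}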

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7349) Jenkins build should compare trunk vs patch for Javadoc warnings

2012-12-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13534486#comment-13534486
 ] 

stack commented on HBASE-7349:
--

I'd go w/ @nkeywal on this one; i.e. let's keep javadoc warnings at zero.  Should 
we back out this change then?

> Jenkins build should compare trunk vs patch for Javadoc warnings
> 
>
> Key: HBASE-7349
> URL: https://issues.apache.org/jira/browse/HBASE-7349
> Project: HBase
>  Issue Type: Improvement
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Attachments: 7349-build-improve-javadoc-warnings.0.diff
>
>
> The javadoc check should look for an increase in the number of warnings. It 
> can do so by running javadoc against trunk before running it for the patch. 
> This will increase build times.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


  1   2   >