[ 
https://issues.apache.org/jira/browse/HDFS-10422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15288584#comment-15288584
 ] 

Hadoop QA commented on HDFS-10422:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 49s {color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 54s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHDFSXAttr |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.namenode.TestFileContextXAttr |
|   | hadoop.hdfs.server.namenode.TestNameNodeXAttr |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12804567/HDFS-10422.000.patch |
| JIRA Issue | HDFS-10422 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 91e2a8659da9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8a9ecb7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/15472/artifact/patchprocess/whitespace-eol.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15472/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HDFS-Build/15472/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15472/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15472/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Do not throw AccessControlException for syntax errors of 
> security.hdfs.unreadable.by.superuser
> ----------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10422
>                 URL: https://issues.apache.org/jira/browse/HDFS-10422
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.7.2
>            Reporter: Tianyin Xu
>         Attachments: HDFS-10422.000.patch
>
>
> The xattr {{security.hdfs.unreadable.by.superuser}} has certain syntax rules: 
> "this xattr is also write-once, and cannot be removed once set. This xattr 
> does not allow a value to be set."
> [https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html]
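> To make those rules concrete, here is a minimal client-side sketch (illustrative 
> only; the path is a placeholder and the usual {{org.apache.hadoop.conf}} / 
> {{org.apache.hadoop.fs}} imports are omitted):
> {code:title=Client usage (sketch)|borderStyle=solid}
> FileSystem fs = FileSystem.get(new Configuration());
> Path file = new Path("/some/file");  // hypothetical path
>
> // Allowed: set the xattr once, with no value.
> fs.setXAttr(file, "security.hdfs.unreadable.by.superuser", null);
>
> // Syntax violations per the documented rules; the question in this issue is
> // which exception they should raise.
> fs.setXAttr(file, "security.hdfs.unreadable.by.superuser", new byte[] {1});
> fs.removeXAttr(file, "security.hdfs.unreadable.by.superuser");
> {code}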
> Upon violation, the system should throw {{IllegalArgumentException}} to keep 
> the behaviour consistent with other xattrs such as 
> {{raw.hdfs.crypto.encryption.zone}}. 
> Currently, it always throws {{AccessControlException}}, which should be 
> reserved for actual access-control violations. The only case in which 
> {{AccessControlException}} should be thrown is when the superuser tries to 
> access the file, and *not* anything else. 
> If the user sets any value, that violates the syntax of the xattr (this xattr 
> does not accept a value), which has nothing to do with access control:
> {code:title=XAttrPermissionFilter.java|borderStyle=solid}
>     if (XAttrHelper.getPrefixedName(xAttr).
>         equals(SECURITY_XATTR_UNREADABLE_BY_SUPERUSER)) {
>       if (xAttr.getValue() != null) {
>         throw new AccessControlException("Attempt to set a value for '" +
>             SECURITY_XATTR_UNREADABLE_BY_SUPERUSER +
>             "'. Values are not allowed for this xattr.");
>       }   
>       return;
>     }   
> {code} 
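> A hedged sketch of how the value check could be made consistent (reusing the 
> names from the snippet above; this is only illustrative, not necessarily what 
> the attached patch does, and it assumes Guava's {{Preconditions}} is available 
> in this class): rejecting the value via {{Preconditions.checkArgument}} raises 
> {{IllegalArgumentException}}, matching the other xattrs.
> {code:title=XAttrPermissionFilter.java (sketch)|borderStyle=solid}
>     if (XAttrHelper.getPrefixedName(xAttr).
>         equals(SECURITY_XATTR_UNREADABLE_BY_SUPERUSER)) {
>       // Setting a value is a syntax violation, not an access-control one,
>       // so signal it the same way the other xattrs do.
>       Preconditions.checkArgument(xAttr.getValue() == null,
>           "Attempt to set a value for '"
>               + SECURITY_XATTR_UNREADABLE_BY_SUPERUSER
>               + "'. Values are not allowed for this xattr.");
>       return;
>     }
> {code}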
> \\
> Similarly, if the user tries to remove the xattr, it should throw 
> {{IllegalArgumentException}}, just like the encryption xattr in the following 
> code. Basically, I don't understand why the encryption xattr throws an 
> {{IllegalArgumentException}} (via the {{checkArgument}} call) while the 
> {{security.hdfs.unreadable.by.superuser}} xattr throws an 
> {{AccessControlException}}. The situation is the same: neither of them can be 
> removed.
> {code:title=FSDirXAttrOp.java|borderStyle=solid}
>         Preconditions.checkArgument(
>             !KEYID_XATTR.equalsIgnoreValue(filter),
>             "The encryption zone xattr should never be deleted.");
>         if (UNREADABLE_BY_SUPERUSER_XATTR.equalsIgnoreValue(filter)) {
>           throw new AccessControlException("The xattr '" +
>               SECURITY_XATTR_UNREADABLE_BY_SUPERUSER + "' can not be deleted.");
>         }
> {code}
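> Under the same assumption, a sketch of a consistent removal check, replacing 
> the {{if}}/{{throw}} in the snippet above (illustrative only):
> {code:title=FSDirXAttrOp.java (sketch)|borderStyle=solid}
>         // Removal is likewise a syntax violation, so reject it the same way
>         // as the encryption-zone xattr above.
>         Preconditions.checkArgument(
>             !UNREADABLE_BY_SUPERUSER_XATTR.equalsIgnoreValue(filter),
>             "The xattr '" + SECURITY_XATTR_UNREADABLE_BY_SUPERUSER
>                 + "' can not be deleted.");
> {code}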
> This JIRA is related to HDFS-6891 (which introduced this xattr).


