[jira] [Commented] (HDFS-9348) DFS GetErasureCodingPolicy API on a non-existent file should be handled properly

2015-11-13 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003923#comment-15003923
 ] 

Rakesh R commented on HDFS-9348:


In this jira I've included both the erasure coding and encryption zone changes. I'm 
attaching a separate branch-2 patch to fix the encryption zone. Should we fix the 
encryption zone part in a separate jira to avoid confusion, since someone who sees 
the jira title (it mentions erasure coding, which is available only in trunk now) 
together with a branch-2 fix version might be puzzled?

> DFS GetErasureCodingPolicy API on a non-existent file should be handled 
> properly
> 
>
> Key: HDFS-9348
> URL: https://issues.apache.org/jira/browse/HDFS-9348
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Minor
> Attachments: HDFS-9348-00.patch, HDFS-9348.branch-2.00.patch
>
>
> Presently, calling {{dfs#getErasureCodingPolicy()}} on a non-existent file 
> returns the ErasureCodingPolicy info. As per the 
> [discussion|https://issues.apache.org/jira/browse/HDFS-8777?focusedCommentId=14981077=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14981077]
>  it should validate the path and throw FileNotFoundException.
> Also, the {{dfs#getEncryptionZoneForPath()}} API has the same behavior; we can 
> discuss adding the file existence validation for that case as well.





[jira] [Commented] (HDFS-9348) DFS GetErasureCodingPolicy API on a non-existent file should be handled properly

2015-11-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986075#comment-14986075
 ] 

Andrew Wang commented on HDFS-9348:
---

Nice find here. Yeah, I think this is a bug and we should throw an exception. The 
javadoc in HdfsAdmin for getEncryptionZoneForPath says:

{noformat}
   * Get the path of the encryption zone for a given file or directory.
   *
   * @param path The path to get the ez for.
   *
   * @return The EncryptionZone of the ez, or null if path is not in an ez.
   * @throws IOException if there was a general IO exception
   * @throws AccessControlException if the caller does not have access to path
   * @throws FileNotFoundException  if the path does not exist
{noformat}
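
To make the expected contract concrete, here is a rough test-style sketch of the 
behavior after the fix (this is not from the attached patch; it assumes {{dfs}} is 
a {{DistributedFileSystem}} backed by a running MiniDFSCluster):

{code:java}
import java.io.FileNotFoundException;

import org.apache.hadoop.fs.Path;
import org.junit.Assert;

// Both lookups on a missing path should fail fast with FileNotFoundException
// instead of returning erasure coding policy / encryption zone info.
try {
  dfs.getErasureCodingPolicy(new Path("/nonexistent"));
  Assert.fail("getErasureCodingPolicy should throw FileNotFoundException");
} catch (FileNotFoundException expected) {
  // expected
}

try {
  dfs.getEZForPath(new Path("/nonexistent"));
  Assert.fail("getEZForPath should throw FileNotFoundException");
} catch (FileNotFoundException expected) {
  // expected
}
{code}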



[jira] [Commented] (HDFS-9348) DFS GetErasureCodingPolicy API on a non-existent file should be handled properly

2015-11-02 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986476#comment-14986476
 ] 

Yi Liu commented on HDFS-9348:
--

Thanks Uma for pinging me, and Rakesh for the work. I think it makes sense to 
throw FileNotFoundException for a non-existent file.



[jira] [Commented] (HDFS-9348) DFS GetErasureCodingPolicy API on a non-existent file should be handled properly

2015-11-02 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986611#comment-14986611
 ] 

Rakesh R commented on HDFS-9348:


Thank you [~umamaheswararao], [~andrew.wang], [~hitliuyi] for the discussion 
and comments. Could you please take a look at the patch when you get a chance?



[jira] [Commented] (HDFS-9348) DFS GetErasureCodingPolicy API on a non-existent file should be handled properly

2015-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14984063#comment-14984063
 ] 

Hadoop QA commented on HDFS-9348:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 49s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-hdfs-project (total was 82, now 82). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 45s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 53s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 5s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestNodeCount |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-31 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12769855/HDFS-9348-00.patch |
| JIRA Issue | HDFS-9348 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 821f9b477c10 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-e77b1ce/precommit/personality/hadoop.sh
 |
| git revision | trunk / 7fd6416 |
| Default Java | 1.7.0_79 |
| 

[jira] [Commented] (HDFS-9348) DFS GetErasureCodingPolicy API on a non-existent file should be handled properly

2015-10-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14983713#comment-14983713
 ] 

Hadoop QA commented on HDFS-9348:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
1s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 23s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-hdfs-project (total was 81, now 81). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 59s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 53s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 7s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 2s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 162m 49s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.datanode.TestDeleteBlockPool |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | 

[jira] [Commented] (HDFS-9348) DFS GetErasureCodingPolicy API on a non-existent file should be handled properly

2015-10-30 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14983301#comment-14983301
 ] 

Uma Maheswara Rao G commented on HDFS-9348:
---

Hi Rakesh, unifying the behaviors would be a good idea. Yeah, I think changing the 
behavior of the ECZone related APIs would be an incompatible change. In my opinion, 
operating on a non-existent file is an unexpected situation for users, so letting 
the user know that the file does not exist makes sense to me. Do we also need to 
check the ECPolicy setting and ECZone getting APIs?

[~andrew.wang], [~hitliuyi], could you also share your opinion on this change?





[jira] [Commented] (HDFS-9348) DFS GetErasureCodingPolicy API on a non-existent file should be handled properly

2015-10-30 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14983257#comment-14983257
 ] 

Rakesh R commented on HDFS-9348:


Attached a patch to validate the file existence and throw FileNotFoundException. 
Please review it, thanks!

I've noticed that presently the {{dfs.getEZForPath}} API behaves differently for a 
+non-existent normal file+ and a +non-existent ezone file+ (see the sketch after 
this list). Does the proposed change of throwing FileNotFoundException affect the 
API's backward compatibility?
- If the user passes a non-existent file that is not under any encryption zone, it 
returns a {{null}} value. For example, {{/nonexistentfile}}
- If the user passes a non-existent file that is under an existing encryption zone, 
it returns the parent's encryption zone info. For example, {{/ezone/nonexistentfile}}
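
A minimal sketch of the current (pre-patch) behavior described above, assuming 
{{dfs}} is a {{DistributedFileSystem}} handle and {{/ezone}} is an existing 
encryption zone (the paths are illustrative only):

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.EncryptionZone;

// Case 1: non-existent file outside any encryption zone -> returns null today.
EncryptionZone ez1 = dfs.getEZForPath(new Path("/nonexistentfile"));

// Case 2: non-existent file under an existing encryption zone -> the parent
// zone's info is returned today, even though the file itself does not exist.
EncryptionZone ez2 = dfs.getEZForPath(new Path("/ezone/nonexistentfile"));

// With the proposed validation, both calls would throw FileNotFoundException.
{code}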




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)