[jira] [Commented] (HDFS-11172) Support an erasure coding policy using RS 10 + 4

2016-12-05 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724734#comment-15724734
 ] 

Kai Zheng commented on HDFS-11172:
--

It's good to support an RS(10, 4) policy, as it's widely used in the industry. 
Thanks [~zhouwei] for doing this.
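
For background, an RS(10, 4) policy stripes each block group across 10 data 
cells and 4 parity cells, so any 4 of the 14 storage locations can be lost 
without losing data, at a 1.4x storage overhead (versus 3x for replication). 
A usage sketch once the policy is registered; the flag names here are an 
assumption, not taken from this thread, so check "hdfs erasurecode -help" on 
your build:

{code}
# Hypothetical invocation; verify the exact flags for your version.
hdfs erasurecode -setPolicy -p RS-DEFAULT-10-4-64k /data/warehouse
{code}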

> Support an erasure coding policy using RS 10 + 4
> 
>
> Key: HDFS-11172
> URL: https://issues.apache.org/jira/browse/HDFS-11172
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: Wei Zhou
> Attachments: HDFS-11172-v1.patch, HDFS-11172-v2.patch
>
>
> So far, "hdfs erasurecode" command supports three policies, 
> RS-DEFAULT-3-2-64k, RS-DEFAULT-6-3-64k and RS-LEGACY-6-3-64k. This task is 
> going to add RS-DEFAULT-10-4-64k policy to this command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11172) Support an erasure coding policy using RS 10 + 4

2016-12-05 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-11172:
-
Summary: Support an erasure coding policy using RS 10 + 4  (was: Support an 
erasure coding policy RS-DEFAULT-10-4-64k in HDFS)




[jira] [Created] (HDFS-11212) FilterFileSystem should override rename(.., options) to take effect of Rename options called via FilterFileSystem implementations

2016-12-05 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-11212:


 Summary: FilterFileSystem should override rename(.., options) to 
take effect of Rename options called via FilterFileSystem implementations
 Key: HDFS-11212
 URL: https://issues.apache.org/jira/browse/HDFS-11212
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B


HDFS-8312 added the Rename.TO_TRASH option so that a security check runs before 
a file is moved to trash.

But FilterFileSystem implementations do not override rename(.., options), so 
the call falls through to the default FileSystem implementation, where the 
Rename.TO_TRASH option is not delegated to the NameNode.
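
A minimal sketch of the override this issue asks for, assuming the standard 
FilterFileSystem wrapping field {{fs}} (an illustration, not the committed 
patch):

{code:title=FilterFileSystem.java (sketch)}
@Override
protected void rename(Path src, Path dst, Options.Rename... options)
    throws IOException {
  // Delegate to the wrapped FileSystem so that options such as
  // Rename.TO_TRASH reach the underlying implementation (and, for
  // DistributedFileSystem, ultimately the NameNode).
  fs.rename(src, dst, options);
}
{code}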






[jira] [Created] (HDFS-11213) FilterFileSystem should override rename(.., options) to take effect of Rename options called via FilterFileSystem implementations

2016-12-05 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-11213:


 Summary: FilterFileSystem should override rename(.., options) to 
take effect of Rename options called via FilterFileSystem implementations
 Key: HDFS-11213
 URL: https://issues.apache.org/jira/browse/HDFS-11213
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B


HDFS-8312 added the Rename.TO_TRASH option so that a security check runs before 
a file is moved to trash.

But FilterFileSystem implementations do not override rename(.., options), so 
the call falls through to the default FileSystem implementation, where the 
Rename.TO_TRASH option is not delegated to the NameNode.






[jira] [Commented] (HDFS-11211) Add a time unit to the DataNode client trace format

2016-12-05 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724676#comment-15724676
 ] 

Akira Ajisaka commented on HDFS-11211:
--

"duration(ns): %s" seems fine.

> Add a time unit to the DataNode client trace format 
> 
>
> Key: HDFS-11211
> URL: https://issues.apache.org/jira/browse/HDFS-11211
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie, supportability
>
> {code:title=DataNode.java}
>   public static final String DN_CLIENTTRACE_FORMAT =
> "src: %s" +  // src IP
> ", dest: %s" +   // dst IP
> ", bytes: %s" +  // byte count
> ", op: %s" + // operation
> ", cliID: %s" +  // DFSClient id
> ", offset: %s" + // offset
> ", srvID: %s" +  // DatanodeRegistration
> ", blockid: %s" + // block id
> ", duration: %s";  // duration time
> {code}
> The time unit of the duration is nanoseconds, but it is not documented.






[jira] [Updated] (HDFS-11211) Add a time unit to the DataNode client trace format

2016-12-05 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11211:
-
Labels: newbie supportability  (was: supportability)




[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724671#comment-15724671
 ] 

Hudson commented on HDFS-11156:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10946 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10946/])
Revert "HDFS-11156. Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST (wang: rev 
08a7253bc0eb6c9155457feecb9c5cdc17c3a814)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/GetOpParam.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java


> Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-11156
> URL: https://issues.apache.org/jira/browse/HDFS-11156
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-11156.01.patch, HDFS-11156.02.patch, 
> HDFS-11156.03.patch, HDFS-11156.04.patch, HDFS-11156.05.patch, 
> HDFS-11156.06.patch
>
>
> The following WebHDFS REST API call
> {code}
> http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GET_BLOCK_LOCATIONS&offset=0&length=1
> {code}
> returns a response like
> {code}
> {
>   "LocatedBlocks" : {
>     "fileLength" : 1073741824,
>     "isLastBlockComplete" : true,
>     "isUnderConstruction" : false,
>     "lastLocatedBlock" : { ... },
>     "locatedBlocks" : [ {...} ]
>   }
> }
> {code}
> This represents *o.a.h.h.p.LocatedBlocks*. However, according to the 
> *FileSystem* API,
> {code}
> public BlockLocation[] getFileBlockLocations(Path p, long start, long len)
> {code}
> clients expect an array of BlockLocation. This mismatch should be fixed. 
> Marked as an incompatible change, as it will change the output of the 
> GET_BLOCK_LOCATIONS API.
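
For illustration, a response built around BlockLocation instead could look 
like the following; the JSON field names are assumptions based on 
BlockLocation's getters, not quoted from any patch:

{code}
{
  "BlockLocations" : [ {
    "hosts" : [ "dn1.example.com" ],
    "offset" : 0,
    "length" : 134217728,
    "corrupt" : false
  } ]
}
{code}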






[jira] [Created] (HDFS-11211) Add a time unit to the DataNode client trace format

2016-12-05 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HDFS-11211:


 Summary: Add a time unit to the DataNode client trace format 
 Key: HDFS-11211
 URL: https://issues.apache.org/jira/browse/HDFS-11211
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Akira Ajisaka
Priority: Minor


{code:title=DataNode.java}
  public static final String DN_CLIENTTRACE_FORMAT =
"src: %s" +  // src IP
", dest: %s" +   // dst IP
", bytes: %s" +  // byte count
", op: %s" + // operation
", cliID: %s" +  // DFSClient id
", offset: %s" + // offset
", srvID: %s" +  // DatanodeRegistration
", blockid: %s" + // block id
", duration: %s";  // duration time
{code}
The time unit of the duration is nanoseconds, but it is not documented.






[jira] [Work started] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-11156 started by Weiwei Yang.
--



[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724654#comment-15724654
 ] 

Weiwei Yang commented on HDFS-11156:


Sure, thanks a lot.




[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724650#comment-15724650
 ] 

Mingliang Liu commented on HDFS-11156:
--

Sorry, I missed the point about not changing WebHdfsFileSystem. I think Andrew 
is making a good point. Please take care of the revert. Thanks.




[jira] [Updated] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11156:
---
Fix Version/s: (was: 3.0.0-alpha2)
   (was: 2.8.0)




[jira] [Commented] (HDFS-10685) libhdfs++: return explicit error when non-secured client connects to secured server

2016-12-05 Thread Kai Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724651#comment-15724651
 ] 

Kai Jiang commented on HDFS-10685:
--

Thanks! I am still interested in this one. I will open a pull request on GitHub 
later this week.

> libhdfs++: return explicit error when non-secured client connects to secured 
> server
> ---
>
> Key: HDFS-10685
> URL: https://issues.apache.org/jira/browse/HDFS-10685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>
> When a non-secured client tries to connect to a secured server, the first 
> indication is an error from RpcConnection::HandleRpcResponse complaining 
> about "RPC response with Unknown call id -33".
> We should insert code in HandleRpcResponse to detect whether the unknown call 
> id equals RpcEngine::kCallIdSasl and, if so, return an informative error 
> saying that an unsecured client is connecting to a secured server.






[jira] [Reopened] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HDFS-11156:





[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724646#comment-15724646
 ] 

Andrew Wang commented on HDFS-11156:


Great, thanks [~cheersyang]. I'm going to revert this in the meantime.

Also, could you fold the doc improvements in HDFS-11166 into this patch? 
Normally we commit the code and docs at the same time.




[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724634#comment-15724634
 ] 

Weiwei Yang commented on HDFS-11156:


Hi [~andrew.wang]

You are right. I missed that. Thanks!
A fallback would be nice. It looks like the fallback should only happen when 
the client gets an "UnsupportedOperationException - GETFILEBLOCKLOCATIONS is 
not supported" from an old version of the WebHDFS server. Let me work on this.




[jira] [Comment Edited] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2016-12-05 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724599#comment-15724599
 ] 

Xiao Chen edited comment on HDFS-10899 at 12/6/16 6:39 AM:
---

Oops, I was fixing some last-minute checkstyle issues, which turned out to 
break compilation. Re-uploaded patch 1.

After an offline sync with Andrew, I created HDFS-11210 to handle key rolling 
end-to-end.


was (Author: xiaochen):
Oops, I was fixing some last-minute checkstyle issues, which turned out to 
break compilation.

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10899.01.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.






[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2016-12-05 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10899:
-
Attachment: (was: HDFS-10899.01.patch)




[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2016-12-05 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10899:
-
Attachment: HDFS-10899.01.patch

Oops, I was fixing some last-minute checkstyle issues, which turned out to 
break compilation.




[jira] [Created] (HDFS-11210) Enhance key rolling to be atomic

2016-12-05 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-11210:


 Summary: Enhance key rolling to be atomic
 Key: HDFS-11210
 URL: https://issues.apache.org/jira/browse/HDFS-11210
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: encryption, kms
Affects Versions: 2.6.5
Reporter: Xiao Chen
Assignee: Xiao Chen


To support re-encrypting EDEKs, we need to make sure that after a key is 
rolled, no old-version EDEKs are handed out anymore. This covers the various 
caches involved in generating EDEKs.
This is not guaranteed today, simply because there was no such requirement 
before.

The caches involved are (see the sketch after this list):
- Client provider(s), and the corresponding cache(s).
When LoadBalancingKMSCP is used, we need to clear all KMSCPs.
- KMS server instance(s), and the corresponding cache(s).
When KMS HA is configured with multiple KMS instances, only one receives the 
{{rollNewVersion}} request, so we need to make sure the other instances are 
rolled too.
- The client instance inside the NN(s), and the corresponding cache(s).
When {{hadoop key roll}} succeeds, the client provider inside the NN should be 
drained too.
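
A sketch of the end-to-end sequence this implies; KMSInstance, drainCache, and 
the surrounding plumbing are hypothetical names (no such server-side interface 
exists yet; adding one is part of this discussion):

{code:title=Sketch (hypothetical names)}
void rollKeyForReencryption(KeyProvider provider, String keyName)
    throws IOException, NoSuchAlgorithmException {
  // 1. Roll the key. With KMS HA, only one instance receives this call.
  provider.rollNewVersion(keyName);
  // 2. Hypothetical: ask every KMS instance to drop cached key material
  //    and pre-generated EDEKs for this key, since the instances are not
  //    aware of each other.
  for (KMSInstance kms : allKmsInstances) {
    kms.drainCache(keyName);
  }
  // 3. Drain the client-side EDEK caches last (all KMSCPs under a
  //    LoadBalancingKMSCP, including the provider inside the NameNode).
  clientProvider.drain(keyName);
}
{code}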






[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724552#comment-15724552
 ] 

Andrew Wang commented on HDFS-11156:


Hi [~cheersyang], I think you misunderstand: the new version of 
WebHdfsFileSystem#getFileBlockLocations always calls GETFILEBLOCKLOCATIONS; 
there is no fallback mechanism to call GET_BLOCK_LOCATIONS. I'm basically 
proposing we add a fallback to the old code.

Looking at HDFS-10756, it implements a fallback to the default implementation 
if the call fails:

{code}
} catch (IOException e) {
  LOG.warn("Cannot find trash root of " + path, e);
  // keep the same behavior with dfs
  return super.getTrashRoot(path).makeQualified(getUri(), null);
}
{code}




[jira] [Commented] (HDFS-11198) NN UI should link DN web address using hostnames

2016-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724486#comment-15724486
 ] 

Hadoop QA commented on HDFS-11198:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11198 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841892/HDFS-11198.02.patch |
| Optional Tests |  asflicense  |
| uname | Linux 6e361f8ed188 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b2a3d6c |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17771/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NN UI should link DN web address using hostnames
> 
>
> Key: HDFS-11198
> URL: https://issues.apache.org/jira/browse/HDFS-11198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Kihwal Lee
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: HDFS-11198.01.patch, HDFS-11198.02.patch
>
>
> The new NN UI shows links to DN web pages, but since the link comes from the 
> info address returned by jmx, it is in IP address:port form. This breaks 
> setups where users rely on filters that use cookies.
> Since this is a new feature in 2.8, I didn't mark it as a blocker; i.e., it 
> does not break any existing functions. It just doesn't work properly in 
> certain environments.






[jira] [Updated] (HDFS-11198) NN UI should link DN web address using hostnames

2016-12-05 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11198:
---
Attachment: HDFS-11198.02.patch

Uploaded the patch again to trigger the Jenkins job; the v2 file is just a copy 
of the v1 patch. I tried "Cancel Patch" then "Submit Patch" again, but somehow 
it did not trigger the job.




[jira] [Commented] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724475#comment-15724475
 ] 

Hadoop QA commented on HDFS-10930:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 467 unchanged - 12 fixed = 470 total (was 479) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
52s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 7 
unchanged - 0 fixed = 8 total (was 7) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(File,
 long) may fail to clean up java.io.InputStream on checked exception  
Obligation to clean up resource created at BlockPoolSlice.java:clean up 
java.io.InputStream on checked exception  Obligation to clean up resource 
created at BlockPoolSlice.java:[line 720] is not discharged |
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestEncryptionZones |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10930 |
| GITHUB PR | https://github.com/apache/hadoop/pull/160 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9a1b538bf396 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8e63fa9 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17769/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
ht

[jira] [Comment Edited] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724468#comment-15724468
 ] 

Weiwei Yang edited comment on HDFS-11156 at 12/6/16 6:00 AM:
-

Hi [~andrew.wang]

In that case the client can still call GET_BLOCK_LOCATIONS; that still works. 
Adding a new API should not break compatibility. This one is just like 
HDFS-10756, which added "GETTRASHROOT" to WebHDFS. What do you think?


was (Author: cheersyang):
Hi [~andrew.wang]

In that case the client can still call GET_BLOCK_LOCATIONS; that still works. 
Adding a new API should not break compatibility. This one is now just like 
HDFS-10756, which added "GETTRASHROOT" to WebHDFS.




[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724468#comment-15724468
 ] 

Weiwei Yang commented on HDFS-11156:


Hi [~andrew.wang]

In that case the client can still call GET_BLOCK_LOCATIONS; that still works. 
Adding a new API should not break compatibility. This one is now just like 
HDFS-10756, which added "GETTRASHROOT" to WebHDFS.




[jira] [Updated] (HDFS-11207) Unnecessary incompatible change of NNHAStatusHeartbeat.state in DatanodeProtocolProtos

2016-12-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11207:
---
Affects Version/s: 3.0.0-alpha1
 Target Version/s: 3.0.0-alpha2
 Priority: Critical  (was: Major)
  Component/s: rolling upgrades

> Unnecessary incompatible change of NNHAStatusHeartbeat.state in 
> DatanodeProtocolProtos
> --
>
> Key: HDFS-11207
> URL: https://issues.apache.org/jira/browse/HDFS-11207
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Critical
> Attachments: HDFS-11207.001.patch
>
>
> HDFS-5079 changed the meaning of state in {{NNHAStatusHeartbeat}} when it 
> added the {{INITIALIZING}} state via {{HAServiceStateProto}}.
> Before change:
> {noformat}
> enum State {
>ACTIVE = 0;
>STANDBY = 1;
> }
> {noformat}
> After change:
> {noformat}
> enum HAServiceStateProto {
>   INITIALIZING = 0;
>   ACTIVE = 1;
>   STANDBY = 2;
> }
> {noformat}
> So the new {{INITIALIZING}} state will be interpreted as {{ACTIVE}}, the new 
> {{ACTIVE}} as {{STANDBY}}, and the new {{STANDBY}} as unknown. Any rolling 
> upgrade to 3.0.0 will break, because datanodes that haven't been updated will 
> misinterpret the NN state.
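
For illustration, a wire-compatible way to add the new state would keep the 
existing tag numbers and append the new value; a sketch, not necessarily the 
committed fix:

{noformat}
enum HAServiceStateProto {
  ACTIVE = 0;        // tag unchanged, old DNs still decode it correctly
  STANDBY = 1;       // tag unchanged
  INITIALIZING = 2;  // appended with a new tag
}
{noformat}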






[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2016-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724456#comment-15724456
 ] 

Hadoop QA commented on HDFS-10899:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
36s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  1m 36s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 36s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 47s{color} | {color:orange} root: The patch generated 29 new + 1694 
unchanged - 25 fixed = 1723 total (was 1719) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
16s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
5s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10899 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841699/HDFS-10899.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux f7097d7577cb 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provi

[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2016-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724450#comment-15724450
 ] 

Andrew Wang commented on HDFS-10899:


I think that will work, but do we also need a drain command for the cache in 
the NN's client?




[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1572#comment-1572
 ] 

Andrew Wang commented on HDFS-11156:


To expand on my previous comment, the new client will try to call 
GETFILEBLOCKLOCATIONS in WebHdfsFileSystem, which will not be implemented on an 
old cluster.




[jira] [Commented] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724397#comment-15724397
 ] 

ASF GitHub Bot commented on HDFS-10930:
---

GitHub user xiaoyuyao opened a pull request:

https://github.com/apache/hadoop/pull/170

HDFS-10930

HDFS-10930.10

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xiaoyuyao/hadoop HDFS-10930

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/170.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #170


commit 500f665ab940f174314005eb2eb68388c96babbc
Author: Xiaoyu Yao 
Date:   2016-11-12T15:43:37Z

HDFS-10930.05.patch

commit 08d5f43e54a5b659953ed78921e38eb3c7119bfe
Author: Xiaoyu Yao 
Date:   2016-11-15T02:41:48Z

HDFS-10930.06.patch

commit eacfb8451a5ca64c185ca073e3ab41573c32cd33
Author: Xiaoyu Yao 
Date:   2016-11-29T03:09:36Z

HDFS-10930.07.patch

commit 0df4aff0d15a270a912f2eb10952220c41bb6fde
Author: Xiaoyu Yao 
Date:   2016-12-06T05:03:28Z

HDFS-10930.10.patch




> Refactor: Wrap Datanode IO related operations
> -
>
> Key: HDFS-10930
> URL: https://issues.apache.org/jira/browse/HDFS-10930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10930-branch-2.00.patch, 
> HDFS-10930-branch-2.001.patch, HDFS-10930.01.patch, HDFS-10930.02.patch, 
> HDFS-10930.03.patch, HDFS-10930.04.patch, HDFS-10930.05.patch, 
> HDFS-10930.06.patch, HDFS-10930.07.patch, HDFS-10930.08.patch, 
> HDFS-10930.09.patch, HDFS-10930.10.patch, HDFS-10930.barnch-2.00.patch
>
>
> Datanode IO (disk/network) related operations and instrumentation are 
> currently scattered across many classes such as DataNode.java, 
> BlockReceiver.java, BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, 
> DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, 
> LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc.
> This ticket is opened to consolidate IO-related operations for easier 
> instrumentation, metrics collection, logging, and troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2016-12-05 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724398#comment-15724398
 ] 

Xiao Chen commented on HDFS-10899:
--

Good question. My intention was that after {{hadoop key roll}} returns, an 
admin can safely run {{hdfs crypto reencrypt}}.

KMS historically didn't care about this, and hence not all caches may be 
invalidated. HADOOP-13827 fixes the client side to drain all KMSCPs, but only one 
server would be drained. To drain all the servers, I think we need to add an 
interface to the server to do that explicitly. Given that KMS servers aren't 
aware of each other, this seems to be the only reasonable way. (And the client 
should only be drained after all the servers are drained. The new {{drain}} 
interface can be controlled under the {{MANAGEMENT}} ACL, which currently 
controls {{rollNewVersion}}.)
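
To make that concrete, here is a very rough sketch of what such a server-side 
drain endpoint might look like (all names below are illustrative, not the 
actual KMS API):
{code}
// Hypothetical KMS REST handler; illustrative only, not real KMS code.
@DELETE
@Path("/key/{name}/_cache")
public Response drainKeyCache(@PathParam("name") String keyName)
    throws Exception {
  // Guard with the same ACL that controls rollNewVersion today.
  assertAccess(KMSACLs.Type.MANAGEMENT, getUser(), "DRAIN", keyName);
  // Drop any cached key versions / EDEKs for this key on this server.
  keyProviderCache.invalidate(keyName);
  return Response.ok().build();
}
{code}
An admin tool would then call this on every KMS instance (since they are not 
aware of each other), and drain the client-side KMSCP caches only afterwards.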
Thoughts?

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10899.01.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11172) Support an erasure coding policy RS-DEFAULT-10-4-64k in HDFS

2016-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724296#comment-15724296
 ] 

Hadoop QA commented on HDFS-11172:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
45s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tracing.TestTracing |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11172 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841870/HDFS-11172-v2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 41ccfd2a8ac6 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a2b5d60 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17768/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17768/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U

[jira] [Commented] (HDFS-11115) Remove bytes2Array and string2Bytes

2016-12-05 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724288#comment-15724288
 ] 

Akira Ajisaka commented on HDFS-11115:
--

I doubt that using the string "UTF-8" is an optimization. I did a micro benchmark, 
and the result is that {{new String(bytes, StandardCharsets.UTF_8)}} is faster 
than {{DFSUtilClient.bytes2String(bytes)}}, and 
{{str.getBytes(StandardCharsets.UTF_8)}} is almost as fast as 
{{DFSUtilClient.string2Bytes(str)}}.
* 
https://github.com/aajisaka/hadoop-tools/commit/62c5ea6f459084d5042fe83e9c465e14683f4d18
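
For reference, a minimal JMH-style benchmark along these lines (illustrative; 
class and method names are not copied from that repository):
{code}
import java.nio.charset.StandardCharsets;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class Utf8ConversionBenchmark {
  private final String str = "/user/hadoop/some/reasonably/long/path";
  private final byte[] bytes = str.getBytes(StandardCharsets.UTF_8);

  @Benchmark
  public byte[] string2BytesViaStandardCharsets() {
    // Proposed replacement for DFSUtilClient.string2Bytes(str).
    return str.getBytes(StandardCharsets.UTF_8);
  }

  @Benchmark
  public String bytes2StringViaStandardCharsets() {
    // Proposed replacement for DFSUtilClient.bytes2String(bytes).
    return new String(bytes, StandardCharsets.UTF_8);
  }
}
{code}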

> Remove bytes2Array and string2Bytes
> ---
>
> Key: HDFS-11115
> URL: https://issues.apache.org/jira/browse/HDFS-11115
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Reporter: Sahil Kang
>Priority: Minor
>
> In DFSUtilClient.java we have something like:
> {code: language=java}
> public static byte[] string2Bytes(String str) {
>   try {
> return str.getBytes("UTF-8");
>   } catch (UnsupportedEncodingException e) {
> throw new IllegalArgumentException("UTF8 decoding is not supported", e);
>   }
> }
> static String bytes2String(byte[] bytes, int offset, int length) {
>   try {
> return new String(bytes, offset, length, "UTF-8");
>   } catch (UnsupportedEncodingException e) {
> throw new IllegalArgumentException("UTF8 encoding is not supported", e);
>   }
> }
> {code}
> Using StandardCharsets, these methods become trivial:
> {code: language=java}
> public static byte[] string2Bytes(String str) {
>   return str.getBytes(StandardCharsets.UTF_8);
> }
> static String bytes2String(byte[] bytes, int offset, int length) {
>   return new String(bytes, offset, length, StandardCharsets.UTF_8);
> }
> {code}
> I think we should remove these methods and use StandardCharsets whenever we 
> need to convert between bytes and strings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11164) Mover should avoid unnecessary retries if the block is pinned

2016-12-05 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724275#comment-15724275
 ] 

Uma Maheswara Rao G commented on HDFS-11164:


[~rakeshr] Overall patch looks good to me. +1

[~surendrasingh] since you tested mover scenarios before, do you mind checking 
this patch on your clusters to see whether it affects any of your scenarios?

> Mover should avoid unnecessary retries if the block is pinned
> -
>
> Key: HDFS-11164
> URL: https://issues.apache.org/jira/browse/HDFS-11164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11164-00.patch, HDFS-11164-01.patch, 
> HDFS-11164-02.patch, HDFS-11164-03.patch
>
>
> When the mover tries to move a pinned block to another datanode, it 
> internally hits the following IOException and marks the block movement as a 
> {{failure}}. Since the Mover has the {{dfs.mover.retry.max.attempts}} config, it 
> will keep retrying this block until it reaches {{retryMaxAttempts}}. If the 
> block movement failures are only due to block pinning, then retrying is 
> unnecessary. The idea of this jira is to avoid retry attempts for pinned 
> blocks, as they won't be able to move to a different node. 
> {code}
> 2016-11-22 10:56:10,537 WARN 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher: Failed to move 
> blk_1073741825_1001 with size=52 from 127.0.0.1:19501:DISK to 
> 127.0.0.1:19758:ARCHIVE through 127.0.0.1:19501
> java.io.IOException: Got error, status=ERROR, status message opReplaceBlock 
> BP-1772076264-10.252.146.200-1479792322960:blk_1073741825_1001 received 
> exception java.io.IOException: Got error, status=ERROR, status message Not 
> able to copy block 1073741825 to /127.0.0.1:19826 because it's pinned , copy 
> block BP-1772076264-10.252.146.200-1479792322960:blk_1073741825_1001 from 
> /127.0.0.1:19501, reportedBlock move is failed
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:118)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.receiveResponse(Dispatcher.java:417)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.dispatch(Dispatcher.java:358)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$5(Dispatcher.java:322)
>   at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$1.run(Dispatcher.java:1075)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10581) Hide redundant table on NameNode WebUI when no nodes are decomissioning

2016-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724247#comment-15724247
 ] 

Weiwei Yang commented on HDFS-10581:


Thanks [~djp], that makes sense to me.

> Hide redundant table on NameNode WebUI when no nodes are decomissioning
> ---
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: ui, web-ui
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, 
> HDFS-10581.03.patch, after.2.jpg, after.jpg, before.jpg
>
>
> A minor user experience Improvement on namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-12-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10930:
--
Attachment: HDFS-10930.10.patch

I've tested it locally, and the try-with-resources change finally solves the 
findbugs issue here. Hopefully we will have a clean Jenkins run.
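
For context, the findbugs pattern in question is the classic leak of the first 
stream when a second open throws; a generic sketch of the try-with-resources 
shape (not the actual HDFS-10930 diff):
{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

class TryWithResourcesSketch {
  // try-with-resources closes both streams on every exit path, including
  // when the second open throws, which is what findbugs could not prove
  // for the manual open/close version.
  static long sizeBoth(File blockFile, File metaFile) throws IOException {
    try (FileInputStream blockIn = new FileInputStream(blockFile);
         FileInputStream metaIn = new FileInputStream(metaFile)) {
      return (long) blockIn.available() + metaIn.available();
    }
  }
}
{code}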

> Refactor: Wrap Datanode IO related operations
> -
>
> Key: HDFS-10930
> URL: https://issues.apache.org/jira/browse/HDFS-10930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10930-branch-2.00.patch, 
> HDFS-10930-branch-2.001.patch, HDFS-10930.01.patch, HDFS-10930.02.patch, 
> HDFS-10930.03.patch, HDFS-10930.04.patch, HDFS-10930.05.patch, 
> HDFS-10930.06.patch, HDFS-10930.07.patch, HDFS-10930.08.patch, 
> HDFS-10930.09.patch, HDFS-10930.10.patch, HDFS-10930.barnch-2.00.patch
>
>
> Datanode IO (Disk/Network) related operations and instrumentations are 
> currently spilled in many classes such as DataNode.java, BlockReceiver.java, 
> BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, 
> DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, 
> LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. 
> This ticket is opened to consolidate IO related operations for easy 
> instrumentation, metrics collection, logging and trouble shooting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10581) Hide redundant table on NameNode WebUI when no nodes are decomissioning

2016-12-05 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724206#comment-15724206
 ] 

Junping Du commented on HDFS-10581:
---

The latest patch is indeed a trivial fix, which looks good to me.
Per Andrew's response above, I am not talking about compatibility of UI elements, 
but rather about stability as a mature product. With the 02 patch's approach, 
people/admins could get confused about where the page for checking 
decommissioning nodes went after an upgrade, which is probably even worse than 
the previous redundancy. My 2 cents: user experience is not only a static 
picture but also a historical view. Thoughts?

> Hide redundant table on NameNode WebUI when no nodes are decomissioning
> ---
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: ui, web-ui
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, 
> HDFS-10581.03.patch, after.2.jpg, after.jpg, before.jpg
>
>
> A minor user experience Improvement on namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Hide redundant table on NameNode WebUI when no nodes are decomissioning

2016-12-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-10581:
--
Priority: Trivial  (was: Major)

> Hide redundant table on NameNode WebUI when no nodes are decomissioning
> ---
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: ui, web-ui
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, 
> HDFS-10581.03.patch, after.2.jpg, after.jpg, before.jpg
>
>
> A minor user experience Improvement on namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10581) Hide redundant table on NameNode WebUI when no nodes are decomissioning

2016-12-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724130#comment-15724130
 ] 

Hudson commented on HDFS-10581:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10944 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10944/])
HDFS-10581. Hide redundant table on NameNode WebUI when no nodes are (wang: rev 
8e63fa98eabac55bdb2254306584ad1e759c79eb)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


> Hide redundant table on NameNode WebUI when no nodes are decomissioning
> ---
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ui, web-ui
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, 
> HDFS-10581.03.patch, after.2.jpg, after.jpg, before.jpg
>
>
> A minor user experience Improvement on namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10581) Hide redundant table on NameNode WebUI when no nodes are decomissioning

2016-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724108#comment-15724108
 ] 

Weiwei Yang commented on HDFS-10581:


My pleasure, thank you [~andrew.wang] :).

> Hide redundant table on NameNode WebUI when no nodes are decomissioning
> ---
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ui, web-ui
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, 
> HDFS-10581.03.patch, after.2.jpg, after.jpg, before.jpg
>
>
> A minor user experience Improvement on namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11195) When appending files by webhdfs rest api fails, it returns 200

2016-12-05 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724096#comment-15724096
 ] 

Yuanbo Liu commented on HDFS-11195:
---

I have reproduced this issue on the trunk branch, but haven't come up with a 
good solution yet.
The related code is in {{WebHdfsHandler#onAppend}}:
{code}
  private void onAppend(ChannelHandlerContext ctx) throws IOException {
 
resp = new DefaultHttpResponse(HTTP_1_1, OK);
resp.headers().set(CONTENT_LENGTH, 0);
ctx.pipeline().replace(this, HdfsWriter.class.getSimpleName(),
  new HdfsWriter(dfsClient, out, resp));
  }
{code}
Even when {{HdfsWriter}} fails to write, the response has already been set to 
200 (OK).
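
One possible direction, sketched only and not a verified fix, is to stop 
pre-building the OK response in {{onAppend}} and let {{HdfsWriter}} choose the 
status once the write outcome is known:
{code}
// Illustrative sketch, not a tested patch. A real fix also has to handle
// the case where headers were already flushed before the DFS write fails.
private void onAppend(ChannelHandlerContext ctx) throws IOException {
  // ... existing stream setup elided ...
  // Do not pre-set 200 here; let HdfsWriter respond with 200 on success
  // or an error status on failure.
  ctx.pipeline().replace(this, HdfsWriter.class.getSimpleName(),
      new HdfsWriter(dfsClient, out, /* response built after write */ null));
}
{code}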

> When appending files by webhdfs rest api fails, it returns 200
> --
>
> Key: HDFS-11195
> URL: https://issues.apache.org/jira/browse/HDFS-11195
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>
> Suppose there is a Hadoop cluster that contains only one datanode, and 
> dfs.replication=3. Run:
> {code}
> curl -i -X POST -T <LOCAL_FILE> 
> "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=APPEND"
> {code}
> It returns 200, even though the append operation fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724091#comment-15724091
 ] 

Weiwei Yang commented on HDFS-11156:


Hello [~andrew.wang]

Thank you for raising your concern. I may not fully get your point; I think 
we are maintaining compatibility here. Webhdfs GET_BLOCK_LOCATIONS still 
works as before, see [this comment | 
https://issues.apache.org/jira/browse/HDFS-11156?focusedCommentId=15711204&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15711204],
 and {{WebHdfsFileSystem#getFileBlockLocations}} still returns 
{{BlockLocation[]}} as before; nothing really changes. Can you please give me 
some more detail on what would be broken? I can test the scenario in advance.

Thanks

> Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-11156
> URL: https://issues.apache.org/jira/browse/HDFS-11156
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11156.01.patch, HDFS-11156.02.patch, 
> HDFS-11156.03.patch, HDFS-11156.04.patch, HDFS-11156.05.patch, 
> HDFS-11156.06.patch
>
>
> The following webhdfs REST API
> {code}
> http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GET_BLOCK_LOCATIONS&offset=0&length=1
> {code}
> will get a response like
> {code}
> {
>   "LocatedBlocks" : {
> "fileLength" : 1073741824,
> "isLastBlockComplete" : true,
> "isUnderConstruction" : false,
> "lastLocatedBlock" : { ... },
> "locatedBlocks" : [ {...} ]
>   }
> }
> {code}
> This represents *o.a.h.h.p.LocatedBlocks*. However, according to the 
> *FileSystem* API, 
> {code}
> public BlockLocation[] getFileBlockLocations(Path p, long start, long len)
> {code}
> clients would expect an array of BlockLocation. This mismatch should be 
> fixed. Marked as an incompatible change, as this will change the output of the 
> GET_BLOCK_LOCATIONS API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10885) [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier is on

2016-12-05 Thread Wei Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724086#comment-15724086
 ] 

Wei Zhou commented on HDFS-10885:
-

The test case failures are unrelated to this patch. Thanks!

> [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier 
> is on
> --
>
> Key: HDFS-10885
> URL: https://issues.apache.org/jira/browse/HDFS-10885
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Fix For: HDFS-10285
>
> Attachments: HDFS-10800-HDFS-10885-00.patch, 
> HDFS-10800-HDFS-10885-01.patch, HDFS-10800-HDFS-10885-02.patch, 
> HDFS-10885-HDFS-10285-10.patch, HDFS-10885-HDFS-10285-11.patch, 
> HDFS-10885-HDFS-10285.03.patch, HDFS-10885-HDFS-10285.04.patch, 
> HDFS-10885-HDFS-10285.05.patch, HDFS-10885-HDFS-10285.06.patch, 
> HDFS-10885-HDFS-10285.07.patch, HDFS-10885-HDFS-10285.08.patch, 
> HDFS-10885-HDFS-10285.09.patch
>
>
> These two cannot run at the same time, so as to avoid conflicts and fighting 
> with each other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Hide redundant table on NameNode WebUI when no nodes are decomissioning

2016-12-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10581:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2, branch-2.8. Thanks for the contribution Weiwei 
Yang!

> Hide redundant table on NameNode WebUI when no nodes are decomissioning
> ---
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ui, web-ui
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, 
> HDFS-10581.03.patch, after.2.jpg, after.jpg, before.jpg
>
>
> A minor user experience Improvement on namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Hide redundant table on NameNode WebUI when no nodes are decomissioning

2016-12-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10581:
---
Summary: Hide redundant table on NameNode WebUI when no nodes are 
decomissioning  (was: Hide redundant table on dfshealth WebUI when no nodes are 
decomissioning)

> Hide redundant table on NameNode WebUI when no nodes are decomissioning
> ---
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, 
> HDFS-10581.03.patch, after.2.jpg, after.jpg, before.jpg
>
>
> A minor user experience Improvement on namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Hide redundant table on dfshealth WebUI when no nodes are decomissioning

2016-12-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10581:
---
Summary: Hide redundant table on dfshealth WebUI when no nodes are 
decomissioning  (was: Redundant table on Datanodes page when no nodes under 
decomissioning)

> Hide redundant table on dfshealth WebUI when no nodes are decomissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, 
> HDFS-10581.03.patch, after.2.jpg, after.jpg, before.jpg
>
>
> A minor user experience Improvement on namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning

2016-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724051#comment-15724051
 ] 

Hadoop QA commented on HDFS-10581:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10581 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841871/HDFS-10581.03.patch |
| Optional Tests |  asflicense  |
| uname | Linux 426bf6ed9392 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a2b5d60 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17767/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Redundant table on Datanodes page when no nodes under decomissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, 
> HDFS-10581.03.patch, after.2.jpg, after.jpg, before.jpg
>
>
> A minor user experience Improvement on namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11209) SNN can't checkpoint when rolling upgrade is not finalized

2016-12-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11209:
---
Affects Version/s: 2.8.0
   3.0.0-alpha1
 Target Version/s: 2.8.0, 3.0.0-alpha2
 Priority: Critical  (was: Major)
  Component/s: rolling upgrades

Thanks for the report [~xyao], I'm setting the target/affects versions based on 
HDFS-8432.

Since most users are in an HA setup these days, this might not be a blocker, 
but I think it's at least a critical issue.

> SNN can't checkpoint when rolling upgrade is not finalized
> --
>
> Key: HDFS-11209
> URL: https://issues.apache.org/jira/browse/HDFS-11209
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Critical
>
> A similar problem was fixed by HDFS-7185. A recent change in HDFS-8432 
> brings it back. 
> With HDFS-8432, the primary NN will not update the VERSION file to the new 
> version after running with the "rollingUpgrade" option until the upgrade is 
> finalized. This is to support more downgrade use cases.
> However, the checkpoint on the SNN incorrectly updates the VERSION file 
> when the rollingUpgrade is not finalized yet. As a result, the SNN checkpoints 
> successfully but fails to push the image to the primary NN, because its version 
> is higher than the primary NN's, as shown below.
> {code}
> 2016-12-02 05:25:31,918 ERROR namenode.SecondaryNameNode 
> (SecondaryNameNode.java:doWork(399)) - Exception in doCheckpoint
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage$HttpPutFailedException:
>  Image uploading failed, status: 403, url: 
> http://NN:50070/imagetransfer?txid=345404754&imageFile=IMAGE&File-Le..., 
> message: This namenode has storage info -60:221856466:1444080250181:clusterX 
> but the secondary expected -63:221856466:1444080250181:clusterX
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2016-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724041#comment-15724041
 ] 

Andrew Wang commented on HDFS-10899:


I was just looking at HADOOP-13827 and noticed that it also tries to drain the key 
caches when a key is rolled.

Question: how will admins know when they can start re-encryption after rolling a 
key? This is also complicated because there can be multiple KMSs serving 
requests, only one of them sees the {{rollNewVersion}} call, and we have 
layers of caches.

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10899.01.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11150) [SPS]: Provide persistence when satisfying storage policy.

2016-12-05 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724038#comment-15724038
 ] 

Yuanbo Liu commented on HDFS-11150:
---

[~rakeshr] Thanks a lot for your comment.

> [SPS]: Provide persistence when satisfying storage policy.
> --
>
> Key: HDFS-11150
> URL: https://issues.apache.org/jira/browse/HDFS-11150
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-11150-HDFS-10285.001.patch
>
>
> Provide persistence for SPS in case the Hadoop cluster crashes unexpectedly. 
> Basically, we need to change the EditLog and FsImage here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning

2016-12-05 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15724026#comment-15724026
 ] 

Weiwei Yang edited comment on HDFS-10581 at 12/6/16 2:01 AM:
-

Per [~andrew.wang]'s comment, retained the header; it now displays the message 
"No nodes are decommissioning". Thanks.


was (Author: cheersyang):
According to [~andrew.wang]'s comment, retain the header and displays message 
"No nodes are decommissioning".

> Redundant table on Datanodes page when no nodes under decomissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, 
> HDFS-10581.03.patch, after.2.jpg, after.jpg, before.jpg
>
>
> A minor user experience Improvement on namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning

2016-12-05 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10581:
---
Attachment: HDFS-10581.03.patch

Per [~andrew.wang]'s comment, retained the header; it now displays the message 
"No nodes are decommissioning".

> Redundant table on Datanodes page when no nodes under decomissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, 
> HDFS-10581.03.patch, after.2.jpg, after.jpg, before.jpg
>
>
> A minor user experience Improvement on namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11172) Support an erasure coding policy RS-DEFAULT-10-4-64k in HDFS

2016-12-05 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-11172:

Attachment: HDFS-11172-v2.patch

Thanks [~andrew.wang] for reviewing! The patch is updated, and the other test 
failures are unrelated.

> Support an erasure coding policy RS-DEFAULT-10-4-64k in HDFS
> 
>
> Key: HDFS-11172
> URL: https://issues.apache.org/jira/browse/HDFS-11172
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: Wei Zhou
> Attachments: HDFS-11172-v1.patch, HDFS-11172-v2.patch
>
>
> So far, "hdfs erasurecode" command supports three policies, 
> RS-DEFAULT-3-2-64k, RS-DEFAULT-6-3-64k and RS-LEGACY-6-3-64k. This task is 
> going to add RS-DEFAULT-10-4-64k policy to this command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning

2016-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723989#comment-15723989
 ] 

Andrew Wang commented on HDFS-10581:


Seems like a nice change, thanks for working on this Weiwei Yang!

Re Junping's question, the webui is explicitly excluded by our compatibility 
guidelines, so this is okay to do in branch-2.

I'd actually prefer to keep the Decommissioning header, but can we get a better 
error message than "No data is available"? e.g. "No nodes are decommissioning"? 
Otherwise I'm +1.

> Redundant table on Datanodes page when no nodes under decomissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, after.2.jpg, 
> after.jpg, before.jpg
>
>
> A minor user experience Improvement on namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8411) Add bytes count metrics to datanode for ECWorker

2016-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723974#comment-15723974
 ] 

Andrew Wang commented on HDFS-8411:
---

Thanks for working on this Sammi and Rakesh for reviewing, I had a few 
additional comments/questions:

* In StripedReader, bytesRead is incremented when the future is submitted. 
However, we don't know if the future has succeeded until later on. Seems like 
we should increment only for successful requests?
* Why don't we increment the metrics directly in StripedReader / StripedWriter 
rather than doing it in StripedBlockReconstructor?
* If we aren't going to handle remote vs. local reconstruction targets in this 
JIRA, could you file a new JIRA to track this as a follow-on? It's pretty 
important, especially since on the destination side, it gets lumped together 
with other OP_WRITE calls.
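
On the first point, what I have in mind is roughly the following (the executor 
and counter names are sketched, not taken from the patch):
{code}
// Sketch: account bytesRead only after the read future succeeds, rather
// than at submission time.
Future<Integer> f = readService.submit(readTask);
try {
  int nRead = f.get(timeoutMillis, TimeUnit.MILLISECONDS);
  bytesRead.addAndGet(nRead);   // count only successful reads
} catch (InterruptedException | ExecutionException | TimeoutException e) {
  // Failed or timed-out reads contribute nothing to bytesRead.
}
{code}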

> Add bytes count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8411
> URL: https://issues.apache.org/jira/browse/HDFS-8411
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: SammiChen
> Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch, 
> HDFS-8411-003.patch, HDFS-8411-004.patch, HDFS-8411-005.patch, 
> HDFS-8411-006.patch, HDFS-8411-007.patch, HDFS-8411-008.patch, 
> HDFS-8411-009.patch
>
>
> This is a sub task of HDFS-7674. It calculates the amount of data that is 
> read from local or remote to attend decoding work, and also the amount of 
> data that is written to local or remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11191) In the command “hdfs dfsadmin -report” The Configured Capacity is misleading if the dfs.datanode.data.dir is configured with two directories from the same file system.

2016-12-05 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11191:
---
Status: Open  (was: Patch Available)

> In the command “hdfs dfsadmin -report” The Configured Capacity is misleading 
> if the dfs.datanode.data.dir is configured with two directories from the same 
> file system.
> ---
>
> Key: HDFS-11191
> URL: https://issues.apache.org/jira/browse/HDFS-11191
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.5.0
> Environment: SLES 11SP3
> HDP 2.5.0
>Reporter: Deepak Chander
>Assignee: Weiwei Yang
> Attachments: HDFS-11191.01.patch
>
>
> In the command “hdfs dfsadmin -report” The Configured Capacity is misleading 
> if the dfs.datanode.data.dir is configured with two directories from the same 
> file system.
> hdfs@kimtest1:~> hdfs dfsadmin -report
> Configured Capacity: 239942369274 (223.46 GB)
> Present Capacity: 207894724602 (193.62 GB)
> DFS Remaining: 207894552570 (193.62 GB)
> DFS Used: 172032 (168 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (3):
> Name: 172.26.79.87:50010 (kimtest3)
> Hostname: kimtest3
> Decommission Status : Normal
> Configured Capacity: 79980789758 (74.49 GB)
> DFS Used: 57344 (56 KB)
> Non DFS Used: 9528000512 (8.87 GB)
> DFS Remaining: 70452731902 (65.61 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 88.09%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Tue Nov 29 06:59:02 PST 2016
> Name: 172.26.80.38:50010 (kimtest4)
> Hostname: kimtest4
> Decommission Status : Normal
> Configured Capacity: 79980789758 (74.49 GB)
> DFS Used: 57344 (56 KB)
> Non DFS Used: 13010952192 (12.12 GB)
> DFS Remaining: 66969780222 (62.37 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 83.73%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Tue Nov 29 06:59:02 PST 2016
> Name: 172.26.79.86:50010 (kimtest2)
> Hostname: kimtest2
> Decommission Status : Normal
> Configured Capacity: 79980789758 (74.49 GB)
> DFS Used: 57344 (56 KB)
> Non DFS Used: 9508691968 (8.86 GB)
> DFS Remaining: 70472040446 (65.63 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 88.11%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Tue Nov 29 06:59:02 PST 2016
> If you look at my datanode root file system, its size is only 38GB:
> kimtest3:~ # df -h /
> Filesystem   Size  Used Avail Use% Mounted on
> /dev/mapper/system-root   38G  2.6G   33G   8% /
> kimtest4:~ # df -h /
> Filesystem   Size  Used Avail Use% Mounted on
> /dev/mapper/system-root   38G  4.2G   32G  12% /
> kimtest2:~ # df -h /
> Filesystem   Size  Used Avail Use% Mounted on
> /dev/mapper/system-root   38G  2.6G   33G   8% /
> The below is from hdfs-site.xml file 
> 
> dfs.datanode.data.dir
> file:///grid/hadoop/hdfs/dn, file:///grid1/hadoop/hdfs/dn
>   
> I removed the other directory, grid1, and restarted the datanode process.
>   
> dfs.datanode.data.dir
> file:///grid/hadoop/hdfs/dn
>   
> Now the size is reflected correctly:
> hdfs@kimtest1:/grid> hdfs dfsadmin -report
> Configured Capacity: 119971184637 (111.73 GB)
> Present Capacity: 103947243517 (96.81 GB)
> DFS Remaining: 103947157501 (96.81 GB)
> DFS Used: 86016 (84 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (3):
> Name: 172.26.79.87:50010 (kimtest3)
> Hostname: kimtest3
> Decommission Status : Normal
> Configured Capacity: 39990394879 (37.24 GB)
> DFS Used: 28672 (28 KB)
> Non DFS Used: 4764057600 (4.44 GB)
> DFS Remaining: 35226308607 (32.81 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 88.09%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Tue Nov 29 07:34:02 PST 2016
> Name: 172.26.80.38:50010 (kimtest4)
> Hostname: kimtest4
> Decommission Status : Normal
> Configured Capacity: 39990394879 (37.24 GB)
> DFS Used: 28672 (28 KB)
> Non DFS Used: 6505525248 (6.06 GB)
> DFS Remaining: 33484840959 (31.19 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 83.73%
> Config

[jira] [Updated] (HDFS-11198) NN UI should link DN web address using hostnames

2016-12-05 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11198:
---
Status: Open  (was: Patch Available)

> NN UI should link DN web address using hostnames
> 
>
> Key: HDFS-11198
> URL: https://issues.apache.org/jira/browse/HDFS-11198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Kihwal Lee
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: HDFS-11198.01.patch
>
>
> The new NN UI shows links to DN web pages, but since the link is from the 
> info address returned from jmx, it is in the IP address:port form. This 
> breaks if users are using filters utilizing cookies.
> Since this is a new feature in 2.8, I didn't mark it as a blocker. I.e. it 
> does not break any existing functions. It just doesn't work properly in 
> certain environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11198) NN UI should link DN web address using hostnames

2016-12-05 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11198:
---
Status: Patch Available  (was: Open)

> NN UI should link DN web address using hostnames
> 
>
> Key: HDFS-11198
> URL: https://issues.apache.org/jira/browse/HDFS-11198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Kihwal Lee
>Assignee: Weiwei Yang
>Priority: Critical
> Attachments: HDFS-11198.01.patch
>
>
> The new NN UI shows links to DN web pages, but since the link is from the 
> info address returned from jmx, it is in the IP address:port form. This 
> breaks if users are using filters utilizing cookies.
> Since this is a new feature in 2.8, I didn't mark it as a blocker. I.e. it 
> does not break any existing functions. It just doesn't work properly in 
> certain environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10684) WebHDFS DataNode calls fail without parameter createparent

2016-12-05 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723956#comment-15723956
 ] 

Junping Du commented on HDFS-10684:
---

Thanks for the comments, Andrew. [~jzhuge], have we made any progress so far?

> WebHDFS DataNode calls fail without parameter createparent
> --
>
> Key: HDFS-10684
> URL: https://issues.apache.org/jira/browse/HDFS-10684
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Samuel Low
>Assignee: John Zhuge
>Priority: Blocker
>  Labels: compatibility, webhdfs
> Attachments: HDFS-10684.001-branch-2.patch
>
>
> Optional boolean parameters that are not provided in the URL cause the 
> WebHDFS create file command to fail.
> curl -i -X PUT 
> "http://hadoop-primarynamenode:50070/webhdfs/v1/tmp/test1234?op=CREATE&overwrite=false";
> Response:
> HTTP/1.1 307 TEMPORARY_REDIRECT
> Cache-Control: no-cache
> Expires: Fri, 15 Jul 2016 04:10:13 GMT
> Date: Fri, 15 Jul 2016 04:10:13 GMT
> Pragma: no-cache
> Expires: Fri, 15 Jul 2016 04:10:13 GMT
> Date: Fri, 15 Jul 2016 04:10:13 GMT
> Pragma: no-cache
> Content-Type: application/octet-stream
> Location: 
> http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE&namenoderpcaddress=hadoop-primarynamenode:8020&overwrite=false
> Content-Length: 0
> Server: Jetty(6.1.26)
> Following the redirect:
> curl -i -X PUT -T MYFILE 
> "http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE&namenoderpcaddress=hadoop-primarynamenode:8020&overwrite=false";
> Response:
> HTTP/1.1 100 Continue
> HTTP/1.1 400 Bad Request
> Content-Type: application/json; charset=utf-8
> Content-Length: 162
> Connection: close
> 
> {"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"Failed
>  to parse \"null\" to Boolean."}}
> The problem can be circumvented by providing both "createparent" and 
> "overwrite" parameters.
> However, this is not possible when I have no control over the WebHDFS calls, 
> e.g. Ambari and Hue have errors due to this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10684) WebHDFS DataNode calls fail without parameter createparent

2016-12-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10684:
---
Target Version/s: 2.8.0, 3.0.0-alpha2  (was: 2.8.0)

> WebHDFS DataNode calls fail without parameter createparent
> --
>
> Key: HDFS-10684
> URL: https://issues.apache.org/jira/browse/HDFS-10684
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Samuel Low
>Assignee: John Zhuge
>Priority: Blocker
>  Labels: compatibility, webhdfs
> Attachments: HDFS-10684.001-branch-2.patch
>
>
> Optional boolean parameters that are not provided in the URL cause the 
> WebHDFS create file command to fail.
> curl -i -X PUT 
> "http://hadoop-primarynamenode:50070/webhdfs/v1/tmp/test1234?op=CREATE&overwrite=false";
> Response:
> HTTP/1.1 307 TEMPORARY_REDIRECT
> Cache-Control: no-cache
> Expires: Fri, 15 Jul 2016 04:10:13 GMT
> Date: Fri, 15 Jul 2016 04:10:13 GMT
> Pragma: no-cache
> Expires: Fri, 15 Jul 2016 04:10:13 GMT
> Date: Fri, 15 Jul 2016 04:10:13 GMT
> Pragma: no-cache
> Content-Type: application/octet-stream
> Location: 
> http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE&namenoderpcaddress=hadoop-primarynamenode:8020&overwrite=false
> Content-Length: 0
> Server: Jetty(6.1.26)
> Following the redirect:
> curl -i -X PUT -T MYFILE 
> "http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE&namenoderpcaddress=hadoop-primarynamenode:8020&overwrite=false";
> Response:
> HTTP/1.1 100 Continue
> HTTP/1.1 400 Bad Request
> Content-Type: application/json; charset=utf-8
> Content-Length: 162
> Connection: close
> 
> {"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"Failed
>  to parse \"null\" to Boolean."}}
> The problem can be circumvented by providing both "createparent" and 
> "overwrite" parameters.
> However, this is not possible when I have no control over the WebHDFS calls, 
> e.g. Ambari and Hue have errors due to this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10684) WebHDFS DataNode calls fail without parameter createparent

2016-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723942#comment-15723942
 ] 

Andrew Wang commented on HDFS-10684:


I think this is still a blocker; it was found via downstream testing with Hue 
and Ambari.

> WebHDFS DataNode calls fail without parameter createparent
> --
>
> Key: HDFS-10684
> URL: https://issues.apache.org/jira/browse/HDFS-10684
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Samuel Low
>Assignee: John Zhuge
>Priority: Blocker
>  Labels: compatibility, webhdfs
> Attachments: HDFS-10684.001-branch-2.patch
>
>
> Optional boolean parameters that are not provided in the URL cause the 
> WebHDFS create file command to fail.
> curl -i -X PUT 
> "http://hadoop-primarynamenode:50070/webhdfs/v1/tmp/test1234?op=CREATE&overwrite=false";
> Response:
> HTTP/1.1 307 TEMPORARY_REDIRECT
> Cache-Control: no-cache
> Expires: Fri, 15 Jul 2016 04:10:13 GMT
> Date: Fri, 15 Jul 2016 04:10:13 GMT
> Pragma: no-cache
> Expires: Fri, 15 Jul 2016 04:10:13 GMT
> Date: Fri, 15 Jul 2016 04:10:13 GMT
> Pragma: no-cache
> Content-Type: application/octet-stream
> Location: 
> http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE&namenoderpcaddress=hadoop-primarynamenode:8020&overwrite=false
> Content-Length: 0
> Server: Jetty(6.1.26)
> Following the redirect:
> curl -i -X PUT -T MYFILE 
> "http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE&namenoderpcaddress=hadoop-primarynamenode:8020&overwrite=false";
> Response:
> HTTP/1.1 100 Continue
> HTTP/1.1 400 Bad Request
> Content-Type: application/json; charset=utf-8
> Content-Length: 162
> Connection: close
> 
> {"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"Failed
>  to parse \"null\" to Boolean."}}
> The problem can be circumvented by providing both "createparent" and 
> "overwrite" parameters.
> However, this is not possible when I have no control over the WebHDFS calls, 
> e.g. Ambari and Hue have errors due to this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723936#comment-15723936
 ] 

Andrew Wang commented on HDFS-11156:


Sorry I didn't get a chance to review this earlier, but I have a possible 
compatibility concern. As I described earlier, WebHDFS compatibility is 
important so we can move data from an older to a newer cluster with distcp. 
distcp is also often run in "pull" mode, with a new client on the new cluster 
reading from the old cluster. See these tables, which recommend running on the 
destination cluster:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_Sys_Admin_Guides/content/ref-cfb69f75-d06f-46a2-862f-efeba959b152.1.html
https://www.cloudera.com/documentation/enterprise/5-8-x/topics/cdh_admin_distcp_data_cluster_migrate.html

Since we don't have a fallback to GET_BLOCK_LOCATIONS, getFileBlockLocations 
won't work in the new-client/old-cluster case.

[~cheersyang], [~liuml07], what do you think? I'm wondering if we should 
revert to handle this fallback.
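
For illustration, such a client-side fallback could look roughly like this sketch, under assumed names (runNewOp/runLegacyOp are not actual WebHdfsFileSystem internals):

{code}
import java.io.IOException;

// Sketch: try the new GETFILEBLOCKLOCATIONS op first, and retry with the
// legacy GET_BLOCK_LOCATIONS op when an old server rejects the unknown op.
abstract class BlockLocationFallbackSketch {
  static class UnsupportedOpException extends IOException {}
  static class Location {}

  abstract Location[] runNewOp(String path) throws IOException;    // new op
  abstract Location[] runLegacyOp(String path) throws IOException; // legacy op

  Location[] getFileBlockLocations(String path) throws IOException {
    try {
      return runNewOp(path);      // works against servers with the new op
    } catch (UnsupportedOpException e) {
      return runLegacyOp(path);   // old server: fall back and convert
    }
  }
}
{code}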

> Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-11156
> URL: https://issues.apache.org/jira/browse/HDFS-11156
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11156.01.patch, HDFS-11156.02.patch, 
> HDFS-11156.03.patch, HDFS-11156.04.patch, HDFS-11156.05.patch, 
> HDFS-11156.06.patch
>
>
> Following webhdfs REST API
> {code}
> http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GET_BLOCK_LOCATIONS&offset=0&length=1
> {code}
> will get a response like
> {code}
> {
>   "LocatedBlocks" : {
> "fileLength" : 1073741824,
> "isLastBlockComplete" : true,
> "isUnderConstruction" : false,
> "lastLocatedBlock" : { ... },
> "locatedBlocks" : [ {...} ]
>   }
> }
> {code}
> This represents for *o.a.h.h.p.LocatedBlocks*. However according to 
> *FileSystem* API, 
> {code}
> public BlockLocation[] getFileBlockLocations(Path p, long start, long len)
> {code}
> clients would expect an array of BlockLocation. This mismatch should be 
> fixed. Marked as Incompatible change as this will change the output of the 
> GET_BLOCK_LOCATIONS API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723932#comment-15723932
 ] 

Hadoop QA commented on HDFS-10930:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 468 unchanged - 12 fixed = 471 total (was 480) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
52s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 7 
unchanged - 0 fixed = 8 total (was 7) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(File,
 long) may fail to clean up java.io.InputStream on checked exception  
Obligation to clean up resource created at BlockPoolSlice.java:clean up 
java.io.InputStream on checked exception  Obligation to clean up resource 
created at BlockPoolSlice.java:[line 720] is not discharged |
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10930 |
| GITHUB PR | https://github.com/apache/hadoop/pull/160 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a123de5ab40e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dcedb72 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17766/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17766/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
 |
| javado

[jira] [Commented] (HDFS-11172) Support an erasure coding policy RS-DEFAULT-10-4-64k in HDFS

2016-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723904#comment-15723904
 ] 

Andrew Wang commented on HDFS-11172:


Thanks for working on this, Wei Zhou; it looks good overall. checkstyle caught 
an unnecessary import, which we should fix. We could also use a space before 
the braces here:

{noformat}
extends TestDFSStripedOutputStream{
extends TestDFSStripedOutputStreamWithFailure{
{noformat}

[~Sammi], [~rakeshr] want to take a quick look too? I'm +1 pending.

> Support an erasure coding policy RS-DEFAULT-10-4-64k in HDFS
> 
>
> Key: HDFS-11172
> URL: https://issues.apache.org/jira/browse/HDFS-11172
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: Wei Zhou
> Attachments: HDFS-11172-v1.patch
>
>
> So far, "hdfs erasurecode" command supports three policies, 
> RS-DEFAULT-3-2-64k, RS-DEFAULT-6-3-64k and RS-LEGACY-6-3-64k. This task is 
> going to add RS-DEFAULT-10-4-64k policy to this command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11072) Add ability to unset and change directory EC policy

2016-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723896#comment-15723896
 ] 

Andrew Wang commented on HDFS-11072:


Hi Sammi, thanks for working on this. Some review comments in addition to 
Rakesh's:

* Can we just say "replication" rather than "continuous replicate"? e.g. 
"getReplicationPolicy" instead of "getContinuousReplicatePolicy"
* Note that setting a "replication" EC policy is still different from 
unsetting. Unsetting means the policy will be inherited from an ancestor; 
setting a "replication" policy means the "replication" policy itself will be 
used. Imagine a situation where "/a" has RS 6,3 set and "/a/b" has XOR 2,1 
set. On "/a/b", unsetting vs. setting "replication" will have different 
effects (see the toy resolver below). So we also need an unset API, similar 
to the unset storage policy API.
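
For illustration, a toy resolver (hypothetical code, not FSDirErasureCodingOp) makes the difference concrete:

{code}
import java.util.HashMap;
import java.util.Map;

// Unsetting /a/b lets files inherit RS-6-3 from /a, while setting the special
// "replication" policy on /a/b overrides the ancestor and plain-replicates.
public class PolicyResolutionSketch {
  static final String REPLICATION = "REPLICATION";
  static final Map<String, String> policyByDir = new HashMap<>();

  // Walk up from the path until an explicitly set policy (or the root) is found.
  static String effectivePolicy(String path) {
    for (String p = path; !p.isEmpty(); p = p.substring(0, p.lastIndexOf('/'))) {
      String set = policyByDir.get(p);
      if (set != null) {
        return REPLICATION.equals(set) ? null : set; // null = plain replication
      }
    }
    return null;
  }

  public static void main(String[] args) {
    policyByDir.put("/a", "RS-6-3");
    policyByDir.put("/a/b", "XOR-2-1");
    policyByDir.remove("/a/b");                      // unset
    System.out.println(effectivePolicy("/a/b/f"));   // RS-6-3 (inherited)
    policyByDir.put("/a/b", REPLICATION);            // set "replication"
    System.out.println(effectivePolicy("/a/b/f"));   // null (replicated)
  }
}
{code}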

Comment in ECPolicyManager, recommend rewording like this:
{noformat}
  /*
   * This is a special policy. When it is applied to a directory, its
   * children will be replicated rather than inheriting an erasure coding
   * policy from an ancestor directory.
   *
   * This policy is only used when setting an erasure coding policy. It will
   * not be returned when the erasure coding policy of a path is queried.
   */
{noformat}

* FSDirErasureCodingOp: rename "ecXAttrExisted" to "hasEcXAttr"
* FSDirErasureCodingOp: we should rename createErasureCodingPolicyXAttr to 
setErasureCodingPolicyXAttr, since it can now replace an existing policy
* Why do we hide the replication policy for calls to 
getErasureCodingPolicyForPath on directories? Hiding it makes sense for files, 
since they are just replicated, but directory-level policies act like normal 
EC policies in that they can be inherited.
* Rather than adding the new function getErasureCodingPolicyXAttrForLastINode 
just to set a boolean, it seems like we could call a "hasErasureCodingPolicy" 
method (the current one is also unused). Since this is only for paths that 
exist, it's safe to use FSDirectory.resolveLastINode instead of a for loop 
that skips nulls. We only need that for loop when creating a new path.
* To assist with the above, I feel like we should have a 
{{getErasureCodingPolicy(INode)}} method that does this block in 
getErasureCodingPolicyForPath:

{code}
final XAttrFeature xaf = inode.getXAttrFeature();
if (xaf != null) {
  XAttr xattr = xaf.getXAttr(XATTR_ERASURECODING_POLICY);
  if (xattr != null) {
ByteArrayInputStream bIn = new 
ByteArrayInputStream(xattr.getValue());
DataInputStream dIn = new DataInputStream(bIn);
String ecPolicyName = WritableUtils.readString(dIn);
if (!ecPolicyName.equalsIgnoreCase(ErasureCodingPolicyManager
.getContinuousReplicatePolicy().getName())) {
  return fsd.getFSNamesystem().getErasureCodingPolicyManager().
  getPolicyByName(ecPolicyName);
} else {
  return null;
}
  }
}
{code}

Documentation:
* "Another purpose of this special policy is to unset the erasure coding policy 
of a directory back to the traditional replications.", I don't think we should 
say this, since we also support actually unsetting the EC policy. The 
replication policy is still a policy that overrides policies on ancestor 
directories.
* Do the parameters "1-2-64K" have any meaning? If not, we should explain that 
they are meaningless, or hide the parameters so we don't need to talk about 
them.

Tests:
* It's better to use more specific asserts like {{assertNull}}, 
{{assertNotNull}}, etc. instead of just {{assertTrue}}
* It would be good to create files with different replication factors.

> Add ability to unset and change directory EC policy
> ---
>
> Key: HDFS-11072
> URL: https://issues.apache.org/jira/browse/HDFS-11072
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11072-v1.patch, HDFS-11072-v2.patch, 
> HDFS-11072-v3.patch, HDFS-11072-v4.patch
>
>
> Since the directory-level EC policy simply applies to files at create time, 
> it makes sense to make it more similar to storage policies and allow changing 
> and unsetting the policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11201) Spelling errors in the logging, help, assertions and exception messages

2016-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723897#comment-15723897
 ] 

Hadoop QA commented on HDFS-11201:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
476 unchanged - 1 fixed = 477 total (was 477) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
10s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
4s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestStartup |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11201 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841830/HDFS-11201.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0eb08c47584d 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dcedb72 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17765/artifact/patchprocess/diff-chec

[jira] [Commented] (HDFS-10684) WebHDFS DataNode calls fail without parameter createparent

2016-12-05 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723837#comment-15723837
 ] 

Junping Du commented on HDFS-10684:
---

Hi, this doesn't sound like a blocker for our 2.8 release.
Can someone from HDFS comment on whether this issue is really a blocker, or 
just something that would be nice to fix?

> WebHDFS DataNode calls fail without parameter createparent
> --
>
> Key: HDFS-10684
> URL: https://issues.apache.org/jira/browse/HDFS-10684
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Samuel Low
>Assignee: John Zhuge
>Priority: Blocker
>  Labels: compatibility, webhdfs
> Attachments: HDFS-10684.001-branch-2.patch
>
>
> Optional boolean parameters that are not provided in the URL cause the 
> WebHDFS create file command to fail.
> curl -i -X PUT 
> "http://hadoop-primarynamenode:50070/webhdfs/v1/tmp/test1234?op=CREATE&overwrite=false";
> Response:
> HTTP/1.1 307 TEMPORARY_REDIRECT
> Cache-Control: no-cache
> Expires: Fri, 15 Jul 2016 04:10:13 GMT
> Date: Fri, 15 Jul 2016 04:10:13 GMT
> Pragma: no-cache
> Expires: Fri, 15 Jul 2016 04:10:13 GMT
> Date: Fri, 15 Jul 2016 04:10:13 GMT
> Pragma: no-cache
> Content-Type: application/octet-stream
> Location: 
> http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE&namenoderpcaddress=hadoop-primarynamenode:8020&overwrite=false
> Content-Length: 0
> Server: Jetty(6.1.26)
> Following the redirect:
> curl -i -X PUT -T MYFILE 
> "http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE&namenoderpcaddress=hadoop-primarynamenode:8020&overwrite=false";
> Response:
> HTTP/1.1 100 Continue
> HTTP/1.1 400 Bad Request
> Content-Type: application/json; charset=utf-8
> Content-Length: 162
> Connection: close
> 
> {"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"Failed
>  to parse \"null\" to Boolean."}}
> The problem can be circumvented by providing both "createparent" and 
> "overwrite" parameters.
> However, this is not possible when I have no control over the WebHDFS calls, 
> e.g. Ambari and Hue have errors due to this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8630) WebHDFS : Support get/set/unset StoragePolicy

2016-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723804#comment-15723804
 ] 

Andrew Wang commented on HDFS-8630:
---

A couple tiny nits:

* We need to update WebHDFS's Get All Storage Policies section to say it 
returns a BlockStoragePolicies object rather than a BlockStoragePolicy. We 
also need a newline after this, before the example code block starts. The 
"HTTP1.1 200 OK" line is missing spaces. The starting and ending braces need 
to be indented 8 spaces, not 7.
* The Set Storage Policy curl example needs another space.

Please generate the site docs with "mvn site" and check this page. Otherwise 
I'm +1; thanks again for sticking with this, Surendra!

> WebHDFS : Support get/set/unset StoragePolicy 
> --
>
> Key: HDFS-8630
> URL: https://issues.apache.org/jira/browse/HDFS-8630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: webhdfs
>Reporter: nijel
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8630.001.patch, HDFS-8630.002.patch, 
> HDFS-8630.003.patch, HDFS-8630.004.patch, HDFS-8630.005.patch, 
> HDFS-8630.006.patch, HDFS-8630.007.patch, HDFS-8630.008.patch, 
> HDFS-8630.009.patch, HDFS-8630.010.patch, HDFS-8630.patch
>
>
> User can set and get the storage policy from filesystem object. Same 
> operation can be allowed trough REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11209) SNN can't checkpoint when rolling upgrade is not finalized

2016-12-05 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-11209:
-

 Summary: SNN can't checkpoint when rolling upgrade is not finalized
 Key: HDFS-11209
 URL: https://issues.apache.org/jira/browse/HDFS-11209
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


A similar problem was fixed by HDFS-7185. A recent change in HDFS-8432 brings 
it back.

With HDFS-8432, the primary NN will not update the VERSION file to the new 
version after running with the "rollingUpgrade" option until the upgrade is 
finalized. This is to support more downgrade use cases.

However, the checkpoint on the SNN incorrectly updates the VERSION file while 
the rolling upgrade is not yet finalized. As a result, the SNN checkpoints 
successfully but fails to push the image to the primary NN, because its 
version is higher than the primary NN's, as shown below.

{code}
2016-12-02 05:25:31,918 ERROR namenode.SecondaryNameNode 
(SecondaryNameNode.java:doWork(399)) - Exception in doCheckpoint
org.apache.hadoop.hdfs.server.namenode.TransferFsImage$HttpPutFailedException: 
Image uploading failed, status: 403, url: 
http://NN:50070/imagetransfer?txid=345404754&imageFile=IMAGE&File-Le..., 
message: This namenode has storage info -60:221856466:1444080250181:clusterX 
but the secondary expected -63:221856466:1444080250181:clusterX
{code} 
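
A minimal sketch, assuming hypothetical method names rather than the real SecondaryNameNode code, of the guard this suggests (the -60/-63 values come from the log above):

{code}
// While a rolling upgrade is in progress, keep writing the pre-upgrade
// layout version so the uploaded image matches what the primary NN still
// advertises; only record the new version after finalization.
abstract class CheckpointVersionSketch {
  static final int OLD_LAYOUT_VERSION = -60;
  static final int NEW_LAYOUT_VERSION = -63;

  abstract boolean isRollingUpgradeFinalized();
  abstract void writeVersionFile(int layoutVersion);

  void updateVersionAfterCheckpoint() {
    if (isRollingUpgradeFinalized()) {
      writeVersionFile(NEW_LAYOUT_VERSION);  // safe once finalized
    } else {
      writeVersionFile(OLD_LAYOUT_VERSION);  // stay compatible with primary NN
    }
  }
}
{code}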



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-11205) Fix findbugs issue with BlockPoolSlice#validateIntegrityAndSetLength

2016-12-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDFS-11205.
---
Resolution: Not A Problem

Will track the fix along with the recommit of HDFS-10930, which will fix both 
the commit message and the findbugs issue.

> Fix findbugs issue with BlockPoolSlice#validateIntegrityAndSetLength
> 
>
> Key: HDFS-11205
> URL: https://issues.apache.org/jira/browse/HDFS-11205
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11205.001.patch
>
>
> This ticket is opened to fix the follow up findbugs issue introduced by 
> HDFS-10930. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11205) Fix findbugs issue with BlockPoolSlice#validateIntegrityAndSetLength

2016-12-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11205:
--
Status: Open  (was: Patch Available)

> Fix findbugs issue with BlockPoolSlice#validateIntegrityAndSetLength
> 
>
> Key: HDFS-11205
> URL: https://issues.apache.org/jira/browse/HDFS-11205
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11205.001.patch
>
>
> This ticket is opened to fix the follow up findbugs issue introduced by 
> HDFS-10930. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11188) Change min supported DN and NN versions back to 2.x

2016-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723760#comment-15723760
 ] 

Andrew Wang commented on HDFS-11188:


To further elaborate, I haven't done any rolling upgrade testing yet, but we 
need this patch to unblock it.

> Change min supported DN and NN versions back to 2.x
> ---
>
> Key: HDFS-11188
> URL: https://issues.apache.org/jira/browse/HDFS-11188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HDFS-11188.001.patch
>
>
> This is the inverse of HDFS-10398 and HADOOP-13142. Currently, trunk requires 
> a software DN and NN version of 3.0.0-alpha1. This means we cannot perform a 
> rolling upgrade from 2.x to 3.x.
> The first step towards supporting rolling upgrade is changing these back to a 
> 2.x version. For reference, branch-2 has these versions set to "2.1.0-beta".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-12-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10930:
--
Attachment: HDFS-10930.09.patch

Change to use {{IOUtils.cleanup}}

{code}
  IOUtils.cleanup(null, checksumIn, blockIn, ris);
{code}
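
For context, a self-contained sketch of the pattern (the open* helpers are hypothetical, not the BlockPoolSlice code): the finally block closes every stream on any exit path, which is the obligation findbugs flagged in validateIntegrityAndSetLength.

{code}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.io.IOUtils;

public class CleanupSketch {
  static void validate() throws IOException {
    InputStream checksumIn = null;
    InputStream blockIn = null;
    try {
      checksumIn = openChecksumStream();
      blockIn = openBlockStream();
      // ... read the block and verify its checksums ...
    } finally {
      IOUtils.cleanup(null, checksumIn, blockIn);  // null: skip logging
    }
  }

  static InputStream openChecksumStream() { return new ByteArrayInputStream(new byte[0]); }
  static InputStream openBlockStream() { return new ByteArrayInputStream(new byte[0]); }
}
{code}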

> Refactor: Wrap Datanode IO related operations
> -
>
> Key: HDFS-10930
> URL: https://issues.apache.org/jira/browse/HDFS-10930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10930-branch-2.00.patch, 
> HDFS-10930-branch-2.001.patch, HDFS-10930.01.patch, HDFS-10930.02.patch, 
> HDFS-10930.03.patch, HDFS-10930.04.patch, HDFS-10930.05.patch, 
> HDFS-10930.06.patch, HDFS-10930.07.patch, HDFS-10930.08.patch, 
> HDFS-10930.09.patch, HDFS-10930.barnch-2.00.patch
>
>
> Datanode IO (Disk/Network) related operations and instrumentations are 
> currently spilled in many classes such as DataNode.java, BlockReceiver.java, 
> BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, 
> DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, 
> LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. 
> This ticket is opened to consolidate IO related operations for easy 
> instrumentation, metrics collection, logging and trouble shooting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11188) Change min supported DN and NN versions back to 2.x

2016-12-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723754#comment-15723754
 ] 

Andrew Wang commented on HDFS-11188:


Hi Yongjun, this is just setting the min software version back to the same 
settings as in branch-2. This will allow rolling upgrades from 2.x releases to 
3.x releases.

We could consider setting this to a slightly higher 2.x version like 2.4.0 (I 
think that's when we started supporting rolling upgrade), but it seems okay to 
be more permissive in this case.
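
For illustration, a toy sketch of what such a minimum-version gate does (hypothetical code, not the actual DN-NN handshake check):

{code}
// With the minimum set back to "2.1.0-beta", a 2.x peer passes the check
// against a 3.x node during a rolling upgrade.
public class VersionGateSketch {
  static boolean isAtLeast(String peer, String min) {
    String[] p = peer.split("[.-]");
    String[] m = min.split("[.-]");
    for (int i = 0; i < 3; i++) {            // compare major.minor.patch
      int pv = Integer.parseInt(p[i]);
      int mv = Integer.parseInt(m[i]);
      if (pv != mv) {
        return pv > mv;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(isAtLeast("2.7.3", "2.1.0-beta"));         // true
    System.out.println(isAtLeast("3.0.0-alpha1", "2.1.0-beta"));  // true
    System.out.println(isAtLeast("2.0.5-alpha", "2.1.0-beta"));   // false
  }
}
{code}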

> Change min supported DN and NN versions back to 2.x
> ---
>
> Key: HDFS-11188
> URL: https://issues.apache.org/jira/browse/HDFS-11188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HDFS-11188.001.patch
>
>
> This is the inverse of HDFS-10398 and HADOOP-13142. Currently, trunk requires 
> a software DN and NN version of 3.0.0-alpha1. This means we cannot perform a 
> rolling upgrade from 2.x to 3.x.
> The first step towards supporting rolling upgrade is changing these back to a 
> 2.x version. For reference, branch-2 has these versions set to "2.1.0-beta".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11208) Deadlock in WebHDFS on shutdown

2016-12-05 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-11208:
---
Attachment: HDFS-11208-test-deadlock.patch

I am attaching a patch containing a unit test that demonstrates this issue 
(when applied, it currently times out in the deadlock).

I am open to ideas on how best to solve this deadlock.

> Deadlock in WebHDFS on shutdown
> ---
>
> Key: HDFS-11208
> URL: https://issues.apache.org/jira/browse/HDFS-11208
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-11208-test-deadlock.patch
>
>
> Currently on the client side if the {{DelegationTokenRenewer}} attempts to 
> renew a WebHdfs delegation token while the client system is shutting down 
> (i.e. {{FileSystem.Cache.ClientFinalizer}} is running) a deadlock may occur. 
> This happens because {{ClientFinalizer}} calls 
> {{FileSystem.Cache.closeAll()}} which first takes a lock on the 
> {{FileSystem.Cache}} object and then locks each file system in the cache as 
> it iterates over them. {{DelegationTokenRenewer}} takes a lock on a 
> filesystem object while it is renewing that filesystem's token, but within 
> {{TokenAspect.TokenManager.renew()}} (used for renewal of WebHdfs tokens) 
> {{FileSystem.get}} is called, which in turn takes a lock on the FileSystem 
> cache object, potentially causing deadlock if {{ClientFinalizer}} is 
> currently running.
> See below for example deadlock output:
> {code}
> Found one Java-level deadlock:
> =
> "Thread-8572":
> waiting to lock monitor 0x7eff401f9878 (object 0x00051ec3f930, a
> dali.hdfs.web.WebHdfsFileSystem),
> which is held by "FileSystem-DelegationTokenRenewer"
> "FileSystem-DelegationTokenRenewer":
> waiting to lock monitor 0x7f005c08f5c8 (object 0x00050389c8b8, a
> dali.fs.FileSystem$Cache),
> which is held by "Thread-8572"
> Java stack information for the threads listed above:
> ===
> "Thread-8572":
> at dali.hdfs.web.WebHdfsFileSystem.close(WebHdfsFileSystem.java:864)
>- waiting to lock <0x00051ec3f930> (a
>dali.hdfs.web.WebHdfsFileSystem)
>at dali.fs.FilterFileSystem.close(FilterFileSystem.java:449)
>at dali.fs.FileSystem$Cache.closeAll(FileSystem.java:2407)
>- locked <0x00050389c8b8> (a dali.fs.FileSystem$Cache)
>at dali.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2424)
>- locked <0x00050389c8d0> (a
>dali.fs.FileSystem$Cache$ClientFinalizer)
>at dali.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
>"FileSystem-DelegationTokenRenewer":
>at dali.fs.FileSystem$Cache.getInternal(FileSystem.java:2343)
>- waiting to lock <0x00050389c8b8> (a dali.fs.FileSystem$Cache)
>at dali.fs.FileSystem$Cache.get(FileSystem.java:2332)
>at dali.fs.FileSystem.get(FileSystem.java:369)
>at
>dali.hdfs.web.TokenAspect$TokenManager.getInstance(TokenAspect.java:92)
>at dali.hdfs.web.TokenAspect$TokenManager.renew(TokenAspect.java:72)
>at dali.security.token.Token.renew(Token.java:373)
>at
>
> dali.fs.DelegationTokenRenewer$RenewAction.renew(DelegationTokenRenewer.java:127)
>- locked <0x00051ec3f930> (a dali.hdfs.web.WebHdfsFileSystem)
>at
>
> dali.fs.DelegationTokenRenewer$RenewAction.access$300(DelegationTokenRenewer.java:57)
>at dali.fs.DelegationTokenRenewer.run(DelegationTokenRenewer.java:258)
> Found 1 deadlock.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11207) Unnecessary incompatible change of NNHAStatusHeartbeat.state in DatanodeProtocolProtos

2016-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723664#comment-15723664
 ] 

Hadoop QA commented on HDFS-11207:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 54s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11207 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841827/HDFS-11207.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 769c0ab5c0fc 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dcedb72 |
| Default Java | 1.8.0_111 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17763/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17763/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17763/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Unnecessary incompatible change of NNHAStatusHeartbeat.state in 
> DatanodeProtocolProtos
> --
>
> Key: HDFS-11207
> URL: https://issues.apache.org/jira/browse/HDFS-11207
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11207.001.patch
>
>
> HDFS-5079 changed the meaning of state in {{NNHAStatusHeartbeat}} when it 
> added in the {{INITIALIZING}} state via {{HAServiceStateProto}}.
> Before change:
> {noformat}
> enum State {
>ACTIVE = 0;
>STANDBY = 1;
> }
> {noformat}
> After change:
> {noformat}
> enum HAServiceStateProto {
>   INITIALIZING = 0;
>   ACTIVE = 1;
>   STANDBY = 2;
> }
> {noformat}
> So the new {{INITIALIZING}} state will be interpreted as {{ACTIVE}}, new 
> {{ACTIVE}} interpreted as {{STANDBY}} and new {{STANDBY}} interpreted as 
> unknown. Any rolling upgrade to 3.0.0 will break because the datanodes that 
> haven't been updated will misinterpret the NN state.
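
For illustration, a toy Java sketch (plain enums standing in for the protobuf-generated ones) of why the renumbering is wire-incompatible: protobuf transmits the enum's numeric tag, so an old DN decoding the new NN's ACTIVE (tag 1) sees its own tag 1, which is STANDBY.

{code}
public class EnumSkewSketch {
  enum OldState { ACTIVE, STANDBY }                 // tags 0, 1
  enum NewState { INITIALIZING, ACTIVE, STANDBY }   // tags 0, 1, 2

  public static void main(String[] args) {
    int wireTag = NewState.ACTIVE.ordinal();        // new NN sends 1
    System.out.println(OldState.values()[wireTag]); // old DN reads STANDBY
  }
}
{code}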

[jira] [Created] (HDFS-11208) Deadlock in WebHDFS on shutdown

2016-12-05 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-11208:
--

 Summary: Deadlock in WebHDFS on shutdown
 Key: HDFS-11208
 URL: https://issues.apache.org/jira/browse/HDFS-11208
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0-alpha1, 2.6.5, 2.7.3, 2.8.0
Reporter: Erik Krogen
Assignee: Erik Krogen


Currently on the client side if the {{DelegationTokenRenewer}} attempts to 
renew a WebHdfs delegation token while the client system is shutting down (i.e. 
{{FileSystem.Cache.ClientFinalizer}} is running) a deadlock may occur. This 
happens because {{ClientFinalizer}} calls {{FileSystem.Cache.closeAll()}} which 
first takes a lock on the {{FileSystem.Cache}} object and then locks each file 
system in the cache as it iterates over them. {{DelegationTokenRenewer}} takes 
a lock on a filesystem object while it is renewing that filesystem's token, but 
within {{TokenAspect.TokenManager.renew()}} (used for renewal of WebHdfs 
tokens) {{FileSystem.get}} is called, which in turn takes a lock on the 
FileSystem cache object, potentially causing deadlock if {{ClientFinalizer}} is 
currently running.

See below for example deadlock output:
{code}
Found one Java-level deadlock:
=
"Thread-8572":
waiting to lock monitor 0x7eff401f9878 (object 0x00051ec3f930, a
dali.hdfs.web.WebHdfsFileSystem),
which is held by "FileSystem-DelegationTokenRenewer"
"FileSystem-DelegationTokenRenewer":
waiting to lock monitor 0x7f005c08f5c8 (object 0x00050389c8b8, a
dali.fs.FileSystem$Cache),
which is held by "Thread-8572"

Java stack information for the threads listed above:
===
"Thread-8572":
at dali.hdfs.web.WebHdfsFileSystem.close(WebHdfsFileSystem.java:864)

   - waiting to lock <0x00051ec3f930> (a
   dali.hdfs.web.WebHdfsFileSystem)
   at dali.fs.FilterFileSystem.close(FilterFileSystem.java:449)
   at dali.fs.FileSystem$Cache.closeAll(FileSystem.java:2407)
   - locked <0x00050389c8b8> (a dali.fs.FileSystem$Cache)
   at dali.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2424)
   - locked <0x00050389c8d0> (a
   dali.fs.FileSystem$Cache$ClientFinalizer)
   at dali.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
   "FileSystem-DelegationTokenRenewer":
   at dali.fs.FileSystem$Cache.getInternal(FileSystem.java:2343)
   - waiting to lock <0x00050389c8b8> (a dali.fs.FileSystem$Cache)
   at dali.fs.FileSystem$Cache.get(FileSystem.java:2332)
   at dali.fs.FileSystem.get(FileSystem.java:369)
   at
   dali.hdfs.web.TokenAspect$TokenManager.getInstance(TokenAspect.java:92)
   at dali.hdfs.web.TokenAspect$TokenManager.renew(TokenAspect.java:72)
   at dali.security.token.Token.renew(Token.java:373)
   at

   
dali.fs.DelegationTokenRenewer$RenewAction.renew(DelegationTokenRenewer.java:127)
   - locked <0x00051ec3f930> (a dali.hdfs.web.WebHdfsFileSystem)
   at

   
dali.fs.DelegationTokenRenewer$RenewAction.access$300(DelegationTokenRenewer.java:57)
   at dali.fs.DelegationTokenRenewer.run(DelegationTokenRenewer.java:258)

Found 1 deadlock.
{code}
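
For illustration, a toy reproduction of the lock-order inversion (hypothetical locks, not the real classes): one thread locks cache-then-filesystem, the other filesystem-then-cache, just like ClientFinalizer vs. DelegationTokenRenewer.

{code}
public class LockOrderSketch {
  static final Object cacheLock = new Object(); // stands in for FileSystem.Cache
  static final Object fsLock = new Object();    // stands in for WebHdfsFileSystem

  public static void main(String[] args) throws InterruptedException {
    Thread finalizer = new Thread(() -> {
      synchronized (cacheLock) {                // closeAll(): cache first...
        pause();
        synchronized (fsLock) { }               // ...then each filesystem
      }
    });
    Thread renewer = new Thread(() -> {
      synchronized (fsLock) {                   // renew(): filesystem first...
        pause();
        synchronized (cacheLock) { }            // ...then FileSystem.get() -> cache
      }
    });
    finalizer.start();
    renewer.start();
    finalizer.join();                           // with the pauses, this hangs:
    renewer.join();                             // the two threads deadlock
  }

  static void pause() {
    try { Thread.sleep(100); } catch (InterruptedException ignored) { }
  }
}
{code}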



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11207) Unnecessary incompatible change of NNHAStatusHeartbeat.state in DatanodeProtocolProtos

2016-12-05 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723622#comment-15723622
 ] 

Eric Badger edited comment on HDFS-11207 at 12/5/16 10:54 PM:
--

It looks like {{HAServiceState}} is used in more places than just 
DatanodeProtocolProtos. Because of that, we can't simply change 
{{HAServiceState}}, or else we will have the exact same problem that we're 
trying to fix. Moral of the story: we need two enums that define {{ACTIVE}} 
and {{STANDBY}} differently. Cancelling the patch.


was (Author: ebadger):
It looks like {{HAServiceState}} is used in more places than just 
DatanodeProtocolProtos. Because of that, we can't simply change 
{{HAServiceState}} or else we will have the exact same problem that we're 
trying to fix. Moral of the story, we need 2 enums that will define {{ACTIVE}} 
and {{STANDBY}} differently. 

> Unnecessary incompatible change of NNHAStatusHeartbeat.state in 
> DatanodeProtocolProtos
> --
>
> Key: HDFS-11207
> URL: https://issues.apache.org/jira/browse/HDFS-11207
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11207.001.patch
>
>
> HDFS-5079 changed the meaning of state in {{NNHAStatusHeartbeat}} when it 
> added in the {{INITIALIZING}} state via {{HAServiceStateProto}}.
> Before change:
> {noformat}
> enum State {
>ACTIVE = 0;
>STANDBY = 1;
> }
> {noformat}
> After change:
> {noformat}
> enum HAServiceStateProto {
>   INITIALIZING = 0;
>   ACTIVE = 1;
>   STANDBY = 2;
> }
> {noformat}
> So the new {{INITIALIZING}} state will be interpreted as {{ACTIVE}}, new 
> {{ACTIVE}} interpreted as {{STANDBY}} and new {{STANDBY}} interpreted as 
> unknown. Any rolling upgrade to 3.0.0 will break because the datanodes that 
> haven't been updated will misinterpret the NN state. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11207) Unnecessary incompatible change of NNHAStatusHeartbeat.state in DatanodeProtocolProtos

2016-12-05 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723622#comment-15723622
 ] 

Eric Badger commented on HDFS-11207:


It looks like {{HAServiceState}} is used in more places than just 
DatanodeProtocolProtos. Because of that, we can't simply change 
{{HAServiceState}}, or else we will have the exact same problem that we're 
trying to fix. Moral of the story: we need two enums that define {{ACTIVE}} 
and {{STANDBY}} differently. 

> Unnecessary incompatible change of NNHAStatusHeartbeat.state in 
> DatanodeProtocolProtos
> --
>
> Key: HDFS-11207
> URL: https://issues.apache.org/jira/browse/HDFS-11207
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11207.001.patch
>
>
> HDFS-5079 changed the meaning of state in {{NNHAStatusHeartbeat}} when it 
> added in the {{INITIALIZING}} state via {{HAServiceStateProto}}.
> Before change:
> {noformat}
> enum State {
>ACTIVE = 0;
>STANDBY = 1;
> }
> {noformat}
> After change:
> {noformat}
> enum HAServiceStateProto {
>   INITIALIZING = 0;
>   ACTIVE = 1;
>   STANDBY = 2;
> }
> {noformat}
> So the new {{INITIALIZING}} state will be interpreted as {{ACTIVE}}, new 
> {{ACTIVE}} interpreted as {{STANDBY}} and new {{STANDBY}} interpreted as 
> unknown. Any rolling upgrade to 3.0.0 will break because the datanodes that 
> haven't been updated will misinterpret the NN state. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11207) Unnecessary incompatible change of NNHAStatusHeartbeat.state in DatanodeProtocolProtos

2016-12-05 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HDFS-11207:
---
Status: Open  (was: Patch Available)

> Unnecessary incompatible change of NNHAStatusHeartbeat.state in 
> DatanodeProtocolProtos
> --
>
> Key: HDFS-11207
> URL: https://issues.apache.org/jira/browse/HDFS-11207
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11207.001.patch
>
>
> HDFS-5079 changed the meaning of state in {{NNHAStatusHeartbeat}} when it 
> added in the {{INITIALIZING}} state via {{HAServiceStateProto}}.
> Before change:
> {noformat}
> enum State {
>ACTIVE = 0;
>STANDBY = 1;
> }
> {noformat}
> After change:
> {noformat}
> enum HAServiceStateProto {
>   INITIALIZING = 0;
>   ACTIVE = 1;
>   STANDBY = 2;
> }
> {noformat}
> So the new {{INITIALIZING}} state will be interpreted as {{ACTIVE}}, new 
> {{ACTIVE}} interpreted as {{STANDBY}} and new {{STANDBY}} interpreted as 
> unknown. Any rolling upgrade to 3.0.0 will break because the datanodes that 
> haven't been updated will misinterpret the NN state. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723611#comment-15723611
 ] 

Hadoop QA commented on HDFS-10930:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 467 unchanged - 12 fixed = 470 total (was 479) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 7 
unchanged - 0 fixed = 8 total (was 7) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 63m 
52s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(File,
 long) may fail to clean up java.io.InputStream on checked exception  
Obligation to clean up resource created at BlockPoolSlice.java:clean up 
java.io.InputStream on checked exception  Obligation to clean up resource 
created at BlockPoolSlice.java:[line 720] is not discharged |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10930 |
| GITHUB PR | https://github.com/apache/hadoop/pull/160 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 11a75c09f0fc 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dcedb72 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17762/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17762/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-

[jira] [Updated] (HDFS-11201) Spelling errors in the logging, help, assertions and exception messages

2016-12-05 Thread Grant Sohn (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Sohn updated HDFS-11201:
--
Attachment: HDFS-11201.3.patch

Had to rebase.  Corrected diff.

> Spelling errors in the logging, help, assertions and exception messages
> ---
>
> Key: HDFS-11201
> URL: https://issues.apache.org/jira/browse/HDFS-11201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, diskbalancer, httpfs, namenode, nfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Grant Sohn
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11201.1.patch, HDFS-11201.2.patch, 
> HDFS-11201.3.patch
>
>
> Found a set of spelling errors in the user-facing code.
> Examples are:
> odlest -> oldest
> Illagal -> Illegal
> bounday -> boundary



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8344) NameNode doesn't recover lease for files with missing blocks

2016-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723593#comment-15723593
 ] 

Hadoop QA commented on HDFS-8344:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 344 unchanged - 0 fixed = 351 total (was 344) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-8344 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780439/HDFS-8344.10.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux eb76f9f6694a 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dcedb72 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17764/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17764/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17764/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17764/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17764/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17764/artifact/patchprocess/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
ht

[jira] [Commented] (HDFS-8344) NameNode doesn't recover lease for files with missing blocks

2016-12-05 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723567#comment-15723567
 ] 

Ravi Prakash commented on HDFS-8344:


Hi Wei-Chiu Chuang! FWIW, we have not seen this bug specifically since I merged 
a patch in our code similar to the one I had uploaded here. Although, 
strangely, I did see HDFS-8406 today (lolz, after months of silence.) 

> NameNode doesn't recover lease for files with missing blocks
> 
>
> Key: HDFS-8344
> URL: https://issues.apache.org/jira/browse/HDFS-8344
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-8344.01.patch, HDFS-8344.02.patch, 
> HDFS-8344.03.patch, HDFS-8344.04.patch, HDFS-8344.05.patch, 
> HDFS-8344.06.patch, HDFS-8344.07.patch, HDFS-8344.08.patch, 
> HDFS-8344.09.patch, HDFS-8344.10.patch, TestHadoop.java
>
>
> I found another\(?) instance in which the lease is not recovered. This is 
> reproducible easily on a pseudo-distributed single node cluster
> # Before you start, it helps if you set the following constants. This is not
> necessary, but it simply reduces how long you have to wait:
> {code}
>   public static final long LEASE_SOFTLIMIT_PERIOD = 30 * 1000;
>   public static final long LEASE_HARDLIMIT_PERIOD = 2 * 
> LEASE_SOFTLIMIT_PERIOD;
> {code}
> # Client starts to write a file. (It could be less than 1 block, but it
> hflushed, so some of the data has landed on the datanodes.) (I'm copying the
> client code I am using. I generate a jar and run it using $ hadoop jar
> TestHadoop.jar)
> # Client crashes. (I simulate this by running kill -9 on the $(hadoop jar
> TestHadoop.jar) process after it has printed "Wrote to the bufferedWriter".)
> # Shoot the datanode. (Since I ran on a pseudo-distributed cluster, there was 
> only 1)
> I believe the lease should be recovered and the block should be marked 
> missing. However this is not happening. The lease is never recovered.
> The effect of this bug for us was that nodes could not be decommissioned 
> cleanly. Although we knew that the client had crashed, the Namenode never 
> released the leases (even after restarting the Namenode) (even months 
> afterwards). There are actually several other cases too where we don't 
> consider what happens if ALL the datanodes die while the file is being 
> written, but I am going to punt on that for another time.
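
For readers trying to reproduce this, a minimal client along the lines of the
steps above might look like the sketch below ({{TestHadoop.java}} itself is
attached to the JIRA and may differ; the target path here is made up):

{code}
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TestHadoop {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(new Path("/tmp/lease-test"));
    BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(out));

    writer.write("some data, less than one block");
    writer.flush();   // drain the BufferedWriter into the HDFS stream
    out.hflush();     // push the buffered data to the datanode pipeline
    System.out.println("Wrote to the bufferedWriter");

    // Hold the stream open (and therefore the lease) until the process
    // is killed with kill -9, as in the reproduction steps.
    Thread.sleep(Long.MAX_VALUE);
  }
}
{code}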



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-8344) NameNode doesn't recover lease for files with missing blocks

2016-12-05 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723538#comment-15723538
 ] 

Wei-Chiu Chuang edited comment on HDFS-8344 at 12/5/16 10:23 PM:
-

I believe I saw a variant of this bug.
Symptom: an HBase WAL file's last block went missing. These blocks are 83 bytes
according to the NameNode, and cannot be deleted or recovered via the hdfs
debug recoverLease command.

I am aware of at least one scenario where this can happen, when performing 
volume hot-swap. I suppose other operations such as datanode decommissioning 
might also lead to the same symptom.

When this happens, the NameNode has log like this:
{quote}
2016-10-12 16:39:32,885 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_751860335_1, pendingcreates: 3], 
src=/hbase/WALs/hadoopdev6.example.com,60020,1465915112353-splitting/hadoopdev6.example.com%2C60020%2C1465915112353.null0.1471340470256
2016-10-12 16:39:32,885 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: recoverLease: [Lease.  
Holder: DFSClient_NONMAPREDUCE_751860335_1, pendingcreates: 3], 
src=/hbase/WALs/hadoopdev6.example.com,60020,1465915112353-splitting/hadoopdev6.example.com%2C60020%2C1465915112353.null0.1471340470256
 from client DFSClient_NONMAPREDUCE_751860335_1


2016-10-12 16:39:32,885 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 
0 on 8020: org.apache.hadoop.hdfs.protocol.ClientProtocol.recoverLease from 
192.168.4.86:35276 Call#708331 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-10-12 16:39:32,885 DEBUG org.apache.hadoop.ipc.Server:  got #708331
2016-10-12 16:39:32,886 WARN BlockStateChange: BLOCK* 
BlockInfoUnderConstruction.initLeaseRecovery: No blocks found, lease removed.
2016-10-12 16:39:32,886 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
NameSystem.internalReleaseLease: File 
/hbase/WALs/hadoopdev6.example.com,60020,1465915112353-splitting/hadoopdev6.example.com%2C60020%2C1465915112353.null0.1471340470256
 has not been closed. Lease recovery is in progress. RecoveryId = 1099610544470 
for block blk_1074528903_1099610239882{blockUCState=UNDER_RECOVERY, primary
NodeIndex=-1, replicas=[]}
2016-10-12 16:39:32,887 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 
0 on 8020: responding to 
org.apache.hadoop.hdfs.protocol.ClientProtocol.recoverLease from 
192.168.4.86:35276 Call#708331 Retry#0 Wrote 36 bytes.
2016-10-12 16:39:32,887 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 
0 on 8020: responding to 
org.apache.hadoop.hdfs.protocol.ClientProtocol.recoverLease from 
192.168.4.86:35276 Call#708331 Retry#0
2016-10-12 16:39:32,887 DEBUG org.apache.hadoop.ipc.Server: Served: 
recoverLease queueTime= 0 procesingTime= 2
{quote}

And the lease recovery would repeat every minute, non-stop.


was (Author: jojochuang):
I believe I saw a variant of this bug.
Symptom: HBase WAL file's last block went missing. These blocks are 83 bytes 
according to NameNode, and cannot be deleted, cannot be recovered via hdfs 
debug -recoverLease command.

I am aware at least one scenario where this can happen, when performance volume 
hot-swap. I suppose other operations such as datanode decommissioning might 
also lead to the same symptom.

When this happens, the NameNode has log like this:
{quote}
2016-10-12 16:39:32,885 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_751860335_1, pendingcreates: 3], 
src=/hbase/WALs/hadoopdev6.example.com,60020,1465915112353-splitting/hadoopdev6.example.com%2C60020%2C1465915112353.null0.1471340470256
2016-10-12 16:39:32,885 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: recoverLease: [Lease.  
Holder: DFSClient_NONMAPREDUCE_751860335_1, pendingcreates: 3], 
src=/hbase/WALs/hadoopdev6.example.com,60020,1465915112353-splitting/hadoopdev6.example.com%2C60020%2C1465915112353.null0.1471340470256
 from client DFSClient_NONMAPREDUCE_751860335_1


2016-10-12 16:39:32,885 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 
0 on 8020: org.apache.hadoop.hdfs.protocol.ClientProtocol.recoverLease from 
192.168.4.86:35276 Call#708331 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-10-12 16:39:32,885 DEBUG org.apache.hadoop.ipc.Server:  got #708331
2016-10-12 16:39:32,886 WARN BlockStateChange: BLOCK* 
BlockInfoUnderConstruction.initLeaseRecovery: No blocks found, lease removed.
2016-10-12 16:39:32,886 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
NameSystem.internalReleaseLease: File 
/hbase/WALs/hadoopdev6.example.com,60020,1465915112353-splitting/hadoopdev6.example.com%2C60020%2C1465915112353.null0.1471340470256
 has not been closed. Lease recovery is in progress. RecoveryId = 1099610544470 
for block blk_1074528903_1099610239882{blockUCState=UNDER_RECOVERY, primary
NodeIndex=-1, replicas=[]}
2016-10-12 16:39:32,887 DEBUG org.apache.hadoop.ipc.

[jira] [Commented] (HDFS-8344) NameNode doesn't recover lease for files with missing blocks

2016-12-05 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723538#comment-15723538
 ] 

Wei-Chiu Chuang commented on HDFS-8344:
---

I believe I saw a variant of this bug.
Symptom: an HBase WAL file's last block went missing. These blocks are 83 bytes
according to the NameNode, and cannot be deleted or recovered via the hdfs
debug recoverLease command.

I am aware of at least one scenario where this can happen, when performing
volume hot-swap. I suppose other operations such as datanode decommissioning
might also lead to the same symptom.

When this happens, the NameNode has log like this:
{quote}
2016-10-12 16:39:32,885 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_751860335_1, pendingcreates: 3], 
src=/hbase/WALs/hadoopdev6.example.com,60020,1465915112353-splitting/hadoopdev6.example.com%2C60020%2C1465915112353.null0.1471340470256
2016-10-12 16:39:32,885 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: recoverLease: [Lease.  
Holder: DFSClient_NONMAPREDUCE_751860335_1, pendingcreates: 3], 
src=/hbase/WALs/hadoopdev6.example.com,60020,1465915112353-splitting/hadoopdev6.example.com%2C60020%2C1465915112353.null0.1471340470256
 from client DFSClient_NONMAPREDUCE_751860335_1


2016-10-12 16:39:32,885 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 
0 on 8020: org.apache.hadoop.hdfs.protocol.ClientProtocol.recoverLease from 
192.168.4.86:35276 Call#708331 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2016-10-12 16:39:32,885 DEBUG org.apache.hadoop.ipc.Server:  got #708331
2016-10-12 16:39:32,886 WARN BlockStateChange: BLOCK* 
BlockInfoUnderConstruction.initLeaseRecovery: No blocks found, lease removed.
2016-10-12 16:39:32,886 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
NameSystem.internalReleaseLease: File 
/hbase/WALs/hadoopdev6.example.com,60020,1465915112353-splitting/hadoopdev6.example.com%2C60020%2C1465915112353.null0.1471340470256
 has not been closed. Lease recovery is in progress. RecoveryId = 1099610544470 
for block blk_1074528903_1099610239882{blockUCState=UNDER_RECOVERY, primary
NodeIndex=-1, replicas=[]}
2016-10-12 16:39:32,887 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 
0 on 8020: responding to 
org.apache.hadoop.hdfs.protocol.ClientProtocol.recoverLease from 
192.168.4.86:35276 Call#708331 Retry#0 Wrote 36 bytes.
2016-10-12 16:39:32,887 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 
0 on 8020: responding to 
org.apache.hadoop.hdfs.protocol.ClientProtocol.recoverLease from 
192.168.4.86:35276 Call#708331 Retry#0
2016-10-12 16:39:32,887 DEBUG org.apache.hadoop.ipc.Server: Served: 
recoverLease queueTime= 0 procesingTime= 2
{quote}

And the lease recovery would repeat every minute, non-stop.
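
For reference, the recovery attempt seen in the log can also be driven from
Java; a minimal sketch (class name and argument handling are assumptions, not
from the cluster above):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class RecoverLeaseProbe {
  public static void main(String[] args) throws Exception {
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    // Same API the "hdfs debug recoverLease" command exercises; returns
    // true once the lease is released and the file is closed.
    boolean closed = dfs.recoverLease(new Path(args[0]));
    System.out.println("file closed: " + closed);
  }
}
{code}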

> NameNode doesn't recover lease for files with missing blocks
> 
>
> Key: HDFS-8344
> URL: https://issues.apache.org/jira/browse/HDFS-8344
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-8344.01.patch, HDFS-8344.02.patch, 
> HDFS-8344.03.patch, HDFS-8344.04.patch, HDFS-8344.05.patch, 
> HDFS-8344.06.patch, HDFS-8344.07.patch, HDFS-8344.08.patch, 
> HDFS-8344.09.patch, HDFS-8344.10.patch, TestHadoop.java
>
>
> I found another\(?) instance in which the lease is not recovered. This is 
> reproducible easily on a pseudo-distributed single node cluster
> # Before you start, it helps if you set the following constants. This is not
> necessary, but it simply reduces how long you have to wait:
> {code}
>   public static final long LEASE_SOFTLIMIT_PERIOD = 30 * 1000;
>   public static final long LEASE_HARDLIMIT_PERIOD = 2 * 
> LEASE_SOFTLIMIT_PERIOD;
> {code}
> # Client starts to write a file. (It could be less than 1 block, but it
> hflushed, so some of the data has landed on the datanodes.) (I'm copying the
> client code I am using. I generate a jar and run it using $ hadoop jar
> TestHadoop.jar)
> # Client crashes. (I simulate this by running kill -9 on the $(hadoop jar
> TestHadoop.jar) process after it has printed "Wrote to the bufferedWriter".)
> # Shoot the datanode. (Since I ran on a pseudo-distributed cluster, there was 
> only 1)
> I believe the lease should be recovered and the block should be marked 
> missing. However this is not happening. The lease is never recovered.
> The effect of this bug for us was that nodes could not be decommissioned 
> cleanly. Although we knew that the client had crashed, the Namenode never 
> released the leases (even after restarting the Namenode) (even months 
> afterwards). There are actually several other cases too where we don't 
> consider what happens if ALL the datanodes die while the file is being 
> written, but I am going to punt on that for another time.

[jira] [Commented] (HDFS-9562) libhdfs++ RpcConnectionImpl::Connect should acquire connection state lock

2016-12-05 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723532#comment-15723532
 ] 

James Clampffer commented on HDFS-9562:
---

The underlying fix for this is easy, but it breaks a ton of tests that use Mock
objects, because the mocks don't take locks the same way as the classes they
stand in for.

{code}
@@ -96,6 +96,10 @@ void RpcConnectionImpl::Connect(
 const std::vector<::asio::ip::tcp::endpoint> &server,
 const AuthInfo & auth_info,
 RpcCallback &handler) {
+
+  // Must be called with lock since it's mutating pending_requests_
+  assert(lock_held(connection_state_lock_));
+
   LOG_TRACE(kRPC, << "RpcConnectionImpl::Connect called");
{code}

> libhdfs++ RpcConnectionImpl::Connect should acquire connection state lock
> -
>
> Key: HDFS-9562
> URL: https://issues.apache.org/jira/browse/HDFS-9562
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Priority: Critical
>
> RpcConnectionImpl::Connect calls pending_requests_.push_back() without 
> holding the connection_state_lock_.  This isn't a huge issue at the moment 
> because Connect generally shouldn't be called on the same instance from many
> threads, but it wouldn't hurt to add the lock to prevent future confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9562) libhdfs++ RpcConnectionImpl::Connect should acquire connection state lock

2016-12-05 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9562:
--
Assignee: (was: James Clampffer)

> libhdfs++ RpcConnectionImpl::Connect should acquire connection state lock
> -
>
> Key: HDFS-9562
> URL: https://issues.apache.org/jira/browse/HDFS-9562
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Priority: Critical
>
> RpcConnectionImpl::Connect calls pending_requests_.push_back() without 
> holding the connection_state_lock_.  This isn't a huge issue at the moment 
> because Connect generally shouldn't be called on the same instance from many
> threads, but it wouldn't hurt to add the lock to prevent future confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11207) Unnecessary incompatible change of NNHAStatusHeartbeat.state in DatanodeProtocolProtos

2016-12-05 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HDFS-11207:
---
Status: Patch Available  (was: Open)

> Unnecessary incompatible change of NNHAStatusHeartbeat.state in 
> DatanodeProtocolProtos
> --
>
> Key: HDFS-11207
> URL: https://issues.apache.org/jira/browse/HDFS-11207
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11207.001.patch
>
>
> HDFS-5079 changed the meaning of state in {{NNHAStatusHeartbeat}} when it 
> added in the {{INITIALIZING}} state via {{HAServiceStateProto}}.
> Before change:
> {noformat}
> enum State {
>ACTIVE = 0;
>STANDBY = 1;
> }
> {noformat}
> After change:
> {noformat}
> enum HAServiceStateProto {
>   INITIALIZING = 0;
>   ACTIVE = 1;
>   STANDBY = 2;
> }
> {noformat}
> So the new {{INITIALIZING}} state will be interpreted as {{ACTIVE}}, new 
> {{ACTIVE}} interpreted as {{STANDBY}} and new {{STANDBY}} interpreted as 
> unknown. Any rolling upgrade to 3.0.0 will break because the datanodes that 
> haven't been updated will misinterpret the NN state. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11207) Unnecessary incompatible change of NNHAStatusHeartbeat.state in DatanodeProtocolProtos

2016-12-05 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HDFS-11207:
---
Attachment: HDFS-11207.001.patch

Attaching a patch that adds a new value to the enum without changing the
meaning of the old values (see the sketch below). This will still break
datanodes that are not equipped to handle the {{INITIALIZING}} state.
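
A wire-compatible layout along those lines keeps the original tag numbers and
appends the new state, e.g. (a sketch of the approach, not necessarily what the
attached patch does):

{noformat}
enum State {
   ACTIVE = 0;        // tag unchanged from the pre-HDFS-5079 enum
   STANDBY = 1;       // tag unchanged from the pre-HDFS-5079 enum
   INITIALIZING = 2;  // new state appended with a fresh tag
}
{noformat}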

> Unnecessary incompatible change of NNHAStatusHeartbeat.state in 
> DatanodeProtocolProtos
> --
>
> Key: HDFS-11207
> URL: https://issues.apache.org/jira/browse/HDFS-11207
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HDFS-11207.001.patch
>
>
> HDFS-5079 changed the meaning of state in {{NNHAStatusHeartbeat}} when it 
> added in the {{INITIALIZING}} state via {{HAServiceStateProto}}.
> Before change:
> {noformat}
> enum State {
>ACTIVE = 0;
>STANDBY = 1;
> }
> {noformat}
> After change:
> {noformat}
> enum HAServiceStateProto {
>   INITIALIZING = 0;
>   ACTIVE = 1;
>   STANDBY = 2;
> }
> {noformat}
> So the new {{INITIALIZING}} state will be interpreted as {{ACTIVE}}, new 
> {{ACTIVE}} interpreted as {{STANDBY}} and new {{STANDBY}} interpreted as 
> unknown. Any rolling upgrade to 3.0.0 will break because the datanodes that 
> haven't been updated will misinterpret the NN state. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11207) Unnecessary incompatible change of NNHAStatusHeartbeat.state in DatanodeProtocolProtos

2016-12-05 Thread Eric Badger (JIRA)
Eric Badger created HDFS-11207:
--

 Summary: Unnecessary incompatible change of 
NNHAStatusHeartbeat.state in DatanodeProtocolProtos
 Key: HDFS-11207
 URL: https://issues.apache.org/jira/browse/HDFS-11207
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


HDFS-5079 changed the meaning of state in {{NNHAStatusHeartbeat}} when it added 
in the {{INITIALIZING}} state via {{HAServiceStateProto}}.

Before change:
{noformat}
enum State {
   ACTIVE = 0;
   STANDBY = 1;
}
{noformat}

After change:
{noformat}
enum HAServiceStateProto {
  INITIALIZING = 0;
  ACTIVE = 1;
  STANDBY = 2;
}
{noformat}

So the new {{INITIALIZING}} state will be interpreted as {{ACTIVE}}, new 
{{ACTIVE}} interpreted as {{STANDBY}} and new {{STANDBY}} interpreted as 
unknown. Any rolling upgrade to 3.0.0 will break because the datanodes that 
haven't been updated will misinterpret the NN state. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10685) libhdfs++: return explicit error when non-secured client connects to secured server

2016-12-05 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723485#comment-15723485
 ] 

James Clampffer commented on HDFS-10685:


Looks like Jira got locked down again so I can't assign this bug directly to 
you (or edit my original post to add this comment).

> libhdfs++: return explicit error when non-secured client connects to secured 
> server
> ---
>
> Key: HDFS-10685
> URL: https://issues.apache.org/jira/browse/HDFS-10685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>
> When a non-secured client tries to connect to a secured server, the first 
> indication is an error from RpcConnection::HandleRpcResponse complaining about
> "RPC response with Unknown call id -33".
> We should insert code in HandleRpcResponse to detect if the unknown call id 
> == RpcEngine::kCallIdSasl and return an informative error that you have an 
> unsecured client connecting to a secured server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11201) Spelling errors in the logging, help, assertions and exception messages

2016-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723478#comment-15723478
 ] 

Hadoop QA commented on HDFS-11201:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
476 unchanged - 1 fixed = 477 total (was 477) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
55s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11201 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841795/HDFS-11201.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 68ed45b1f688 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revisi

[jira] [Commented] (HDFS-11205) Fix findbugs issue with BlockPoolSlice#validateIntegrityAndSetLength

2016-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723474#comment-15723474
 ] 

Hadoop QA commented on HDFS-11205:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
7s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
|
|   | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11205 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841800/HDFS-11205.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 90e1926c6ee4 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 15dd1f3 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17761/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17761/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17761/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17761/console |
| Powered by | Apache Yetus 0.4.0-SN

[jira] [Commented] (HDFS-10685) libhdfs++: return explicit error when non-secured client connects to secured server

2016-12-05 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723471#comment-15723471
 ] 

James Clampffer commented on HDFS-10685:


Hey [~vectorijk], sorry I didn't see this for so long.  The patch you posted 
looks good, the only thing I'd change is use Status::AuthenticationFailed 
rather than the more general Status::Error so calling applications can handle 
errors without having to compare strings.  If you're still interested in 
working on this you can just drop a new patch in as a comment and I'll post it 
to get CI to run.  Once it gets a passing run I'll commit.

> libhdfs++: return explicit error when non-secured client connects to secured 
> server
> ---
>
> Key: HDFS-10685
> URL: https://issues.apache.org/jira/browse/HDFS-10685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>
> When a non-secured client tries to connect to a secured server, the first 
> indication is an error from RpcConnection::HandleRpcResponse complaining about
> "RPC response with Unknown call id -33".
> We should insert code in HandleRpcResponse to detect if the unknown call id 
> == RpcEngine::kCallIdSasl and return an informative error that you have an 
> unsecured client connecting to a secured server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11197) Listing encryption zones fails when deleting a EZ that is on a snapshotted directory

2016-12-05 Thread Wellington Chevreuil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723414#comment-15723414
 ] 

Wellington Chevreuil commented on HDFS-11197:
-

Hi [~jojochuang]. Thanks for the comments. Regarding the tests behaviour, 
please see my comments below:

1) Only the 1st test actually exercises the condition that caused {{hdfs
crypto -listZones}} to fail without the fix. The other two cover conditions
that would not cause {{hdfs crypto -listZones}} to fail; they were added to
ensure the proposed change does not break the normal scenario.

2) The reason for the NPE on the 1st test without the fix is that it mocks
{{FSDirectory}} but does not change the behaviour of
{{FSDirectory.getINodesInPath}} when an invalid path is passed. Without the
*if* that checks whether the inode has a parent or is the root inode, this 1st
test makes {{EncryptionZoneManager.listEncryptionZones}} call
{{FSDirectory.getINodesInPath}} passing *second* as the value, but the mocked
FSDirectory does not stub that condition. To emulate it, the test would need to
add {{when(mockedDir.getINodesInPath("second", DirOp.READ_LINK)).thenThrow(new
AssertionError("Absolute path required, but got 'second'"));}} (see the sketch
after this list). Do you think that would be relevant?

3) Sure, I will create some end-to-end tests. Do you think it's ok if I just 
add new tests on 
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCryptoConf.xml and 
TestCryptoAdminCLI, or do you think it's better to create a separate
class/config file pair?
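
Regarding 2), a sketch of the extra stubbing mentioned there, using the
signature quoted in the comment (and assuming {{DirOp}} is
{{FSDirectory.DirOp}}), would be:

{code}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.apache.hadoop.hdfs.server.namenode.FSDirectory;
import org.apache.hadoop.hdfs.server.namenode.FSDirectory.DirOp;

// Make the mocked FSDirectory reject a relative path the same way the
// real one would, so the test exercises the pre-fix failure mode.
FSDirectory mockedDir = mock(FSDirectory.class);
when(mockedDir.getINodesInPath("second", DirOp.READ_LINK))
    .thenThrow(new AssertionError("Absolute path required, but got 'second'"));
{code}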

> Listing encryption zones fails when deleting a EZ that is on a snapshotted 
> directory
> 
>
> Key: HDFS-11197
> URL: https://issues.apache.org/jira/browse/HDFS-11197
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-11197-1.patch, HDFS-11197-2.patch, 
> HDFS-11197-3.patch, HDFS-11197-4.patch, HDFS-11197-5.patch, HDFS-11197-6.patch
>
>
> If an EZ directory is under a snapshottable directory and a snapshot has been
> taken, then permanently deleting this EZ causes the *hdfs crypto -listZones*
> command to fail without showing any of the still-available zones.
> This happens only after the EZ is removed from the Trash folder. For example,
> considering that the */test-snap* folder is snapshottable and there is
> already a snapshot for it:
> {noformat}
> $ hdfs crypto -listZones
> /user/systest   my-key
> /test-snap/EZ-1   my-key
> $ hdfs dfs -rmr /test-snap/EZ-1
> INFO fs.TrashPolicyDefault: Moved: 'hdfs://ns1/test-snap/EZ-1' to trash at: 
> hdfs://ns1/user/hdfs/.Trash/Current/test-snap/EZ-1
> $ hdfs crypto -listZones
> /user/systest   my-key
> /user/hdfs/.Trash/Current/test-snap/EZ-1  my-key 
> $ hdfs dfs -rmr /user/hdfs/.Trash/Current/test-snap/EZ-1
> Deleted /user/hdfs/.Trash/Current/test-snap/EZ-1
> $ hdfs crypto -listZones
> RemoteException: Absolute path required
> {noformat}
> Once this error happens, *hdfs crypto -listZones* only works again if we 
> remove the snapshot:
> {noformat}
> $ hdfs dfs -deleteSnapshot /test-snap snap1
> $ hdfs crypto -listZones
> /user/systest   my-key
> {noformat}
> If we instead delete the EZ using *skipTrash* option, *hdfs crypto 
> -listZones* does not break:
> {noformat}
> $ hdfs crypto -listZones
> /user/systest   my-key
> /test-snap/EZ-2  my-key
> $ hdfs dfs -rmr -skipTrash /test-snap/EZ-2
> Deleted /test-snap/EZ-2
> $ hdfs crypto -listZones
> /user/systest   my-key
> {noformat}
> The different behaviour seems to occur because, when the EZ is removed from
> the trash folder, its related INode is left with no parent INode. This causes
> *EncryptionZoneManager.listEncryptionZones* to throw the error seen above
> when trying to resolve the inodes in the given path.
> I am proposing a patch that fixes this issue by performing an additional
> check in *EncryptionZoneManager.listEncryptionZones* for the case where an
> inode has no parent, so that the zone is skipped in the listing instead of
> being resolved. Feedback on the proposal is appreciated.
>  
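
A sketch of the proposed guard (inside the listing loop; identifier names are
assumed, not taken from the attached patches):

{code}
// Inside EncryptionZoneManager.listEncryptionZones(), while walking the
// zones: skip any zone whose inode can no longer be resolved to a path,
// e.g. because it was purged from the trash of a snapshotted directory.
final INode inode = dir.getInode(ezi.getINodeId());
if (inode == null || (inode.getParent() == null && !inode.isRoot())) {
  continue;  // deleted zone: omit it instead of failing the whole listing
}
{code}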



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11206) libhdfs++: FileSystem doesn't handle directory paths with a trailing "/"

2016-12-05 Thread James Clampffer (JIRA)
James Clampffer created HDFS-11206:
--

 Summary: libhdfs++: FileSystem doesn't handle directory paths with 
a trailing "/"
 Key: HDFS-11206
 URL: https://issues.apache.org/jira/browse/HDFS-11206
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer
Priority: Trivial


FileSystem methods that expect directories return an error when they receive a
path with a trailing slash. The Java Hadoop CLI tool handles this without any
issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-12-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10930:
--
Attachment: HDFS-10930.08.patch

Revert and reattach a new patch to correct the typo in commit message reported 
by [~brahmareddy].
 
The only delta from v07->v08 is the findbugs fix discussed in HDFS-11205. Plan 
to close HDFS-11205 and fix it here once we get a clean Jenkins run.
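
For context, the FindBugs warning being folded in here is the stream-cleanup
obligation in {{BlockPoolSlice#validateIntegrityAndSetLength}} (HDFS-11205).
The usual shape of that kind of fix is try-with-resources; a minimal sketch of
the pattern (method and class names are illustrative, not the actual patch):

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

class StreamCleanupSketch {
  static long readLength(File blockFile) throws IOException {
    // try-with-resources closes the stream on every path, including when
    // a checked exception is thrown mid-read, discharging the FindBugs
    // "obligation to clean up" warning.
    try (InputStream in = new FileInputStream(blockFile)) {
      long total = 0;
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) > 0) {
        total += n;
      }
      return total;
    }
  }
}
{code}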

> Refactor: Wrap Datanode IO related operations
> -
>
> Key: HDFS-10930
> URL: https://issues.apache.org/jira/browse/HDFS-10930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10930-branch-2.00.patch, 
> HDFS-10930-branch-2.001.patch, HDFS-10930.01.patch, HDFS-10930.02.patch, 
> HDFS-10930.03.patch, HDFS-10930.04.patch, HDFS-10930.05.patch, 
> HDFS-10930.06.patch, HDFS-10930.07.patch, HDFS-10930.08.patch, 
> HDFS-10930.barnch-2.00.patch
>
>
> Datanode IO (Disk/Network) related operations and instrumentations are 
> currently spilled in many classes such as DataNode.java, BlockReceiver.java, 
> BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, 
> DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, 
> LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. 
> This ticket is opened to consolidate IO related operations for easy 
> instrumentation, metrics collection, logging and trouble shooting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2016-12-05 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723377#comment-15723377
 ] 

Xiaoyu Yao edited comment on HDFS-10930 at 12/5/16 9:09 PM:


Revert and reattach a new patch to correct the typo in commit message reported 
by [~brahmareddy].
 
The only delta from v07->v08 is the findbugs fix discussed in HDFS-11205. Plan 
to close HDFS-11205 and fix it here once we get a clean Jenkins run.


was (Author: xyao):
Revert and reattach a new patch to correct the typo in commit message reported 
by [~brahmareddy].
 
The only delta from v07->v08 is the findbugs fix discussed in HDFS-11025. Plan 
to close HDFS-11025 and fix it here once we get a clean Jenkins run.

> Refactor: Wrap Datanode IO related operations
> -
>
> Key: HDFS-10930
> URL: https://issues.apache.org/jira/browse/HDFS-10930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10930-branch-2.00.patch, 
> HDFS-10930-branch-2.001.patch, HDFS-10930.01.patch, HDFS-10930.02.patch, 
> HDFS-10930.03.patch, HDFS-10930.04.patch, HDFS-10930.05.patch, 
> HDFS-10930.06.patch, HDFS-10930.07.patch, HDFS-10930.08.patch, 
> HDFS-10930.barnch-2.00.patch
>
>
> Datanode IO (Disk/Network) related operations and instrumentations are 
> currently spilled in many classes such as DataNode.java, BlockReceiver.java, 
> BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, 
> DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, 
> LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. 
> This ticket is opened to consolidate IO related operations for easy 
> instrumentation, metrics collection, logging and trouble shooting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


